Databases

Posted on 30 July, 2024


How to use the watch command

In this Linux tip, we will try out the watch command. It runs another command repeatedly, overwriting its previous output each time, until you stop it with ^c (Ctrl + "c"). It can be used to sit and wait for some change you're expecting to see in a command's output.
By default, a command run through watch runs every two seconds. You can change the interval with the -n option. If you, for example, use the command "watch who", the output will not change except for the date/time in the upper right corner – at least not until someone logs in or out of the system.
Every 2.0s: who                                 fedora: Sat May 25 15:11:22 2024

fedora   seat0   2024-05-25 14:24 (login screen)
fedora   tty2    2024-05-25 14:24 (tty2)
shs      pts/1   2024-05-25 14:25 (192.168.0.11)
Once someone else logs in or logs out, a line will be added to or removed from the list of logged-in users.
Closing: Well, that’s your Linux tip for the watch command. It can be useful when you’re waiting for some change to happen on your Linux system.
If you have questions or would like to suggest a topic, please add a comment below. And don’t forget to subscribe to the InfoWorld channel on YouTube.
If you like this video, please hit the like and share buttons. For more Linux tips, be sure to follow us on Facebook, YouTube and NetworkWorld.com.

Jul 30, 2024 2 mins

Open Source

Posted Under: Database
What’s new in MySQL 9.0

Posted on 4 July, 2024


Oracle celebrated the beginning of July with the general availability of three releases of its open source database, MySQL: MySQL 8.0.38; MySQL 8.4.1, the first update of its long-term support (LTS) version; and MySQL 9.0, the first major version of its 9.x innovation release.

While the v8 releases are bug fixes and security releases only, MySQL 9.0 Innovation is a shiny new version with additional features, as well as some changes that may require attention when upgrading from a previous version.

The new 9.0 versions of the MySQL Clients, Tools, and Connectors are also live, and Oracle recommends using them with MySQL Server 8.0 and 8.4 LTS as well as with 9.0 Innovation.

New features in MySQL 9.0.0

This initial 9.x Innovation release, Oracle says, is preparation for new features in upcoming releases. But it still contains useful things and can be upgraded to from MySQL 8.4 LTS; the MySQL Configurator automatically does the upgrade without user intervention during MSI installations on Windows.

The major changes include:

  • A new VECTOR data type is supported in CREATE and ALTER statements (see the sketch after this list).
  • JavaScript stored programs, which let stored procedures and functions be written in JavaScript, have come to MySQL Enterprise Edition. JavaScript stored programs can call SQL, and SQL can call them.
  • MySQL 9.0 Innovation has moved to newer versions of libraries and compilers: Linux 8 and 9 builds on GCC 13, and Boost 1.85.
  • In the Event Scheduler, the CREATE EVENT, ALTER EVENT, and DROP EVENT statements can now be used as prepared statements.
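The VECTOR type is the change most likely to show up in application code. Here is a minimal sketch of creating and populating a VECTOR column from Python, assuming the mysql-connector-python driver and a local MySQL 9.0 server; the database, table, and credentials are placeholders, not part of the release itself.

# Hedged sketch: storing a small embedding in a MySQL 9.0 VECTOR column.
# Assumes mysql-connector-python and a local 9.0 server; names are placeholders.
import mysql.connector

conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="app_pw", database="demo"
)
cur = conn.cursor()

# VECTOR(3) declares a 3-dimensional float vector column.
cur.execute(
    "CREATE TABLE IF NOT EXISTS doc_embeddings ("
    "  id INT PRIMARY KEY,"
    "  embedding VECTOR(3)"
    ")"
)

# STRING_TO_VECTOR() converts a list-style string literal into the
# binary representation the VECTOR column expects.
cur.execute(
    "INSERT INTO doc_embeddings (id, embedding) "
    "VALUES (%s, STRING_TO_VECTOR(%s))",
    (1, "[0.10, 0.25, 0.31]"),
)
conn.commit()
cur.close()
conn.close()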

What’s going away in MySQL 9.0

The insecure and aging SHA-1 authentication, deprecated in MySQL 8, is gone, and the server now rejects mysql_native_password authentication requests from older client programs that lack the CLIENT_PLUGIN_AUTH capability. Before upgrading to 9.0, Oracle says, user accounts in 8.0 and 8.4 must be altered from mysql_native_password to caching_sha2_password.
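That account change is a one-line ALTER USER per account. A minimal sketch, assuming mysql-connector-python and a placeholder account name and password:

# Hedged sketch: switching an account from mysql_native_password to
# caching_sha2_password before upgrading to MySQL 9.0.
# The account name, host, and password below are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="127.0.0.1", user="root", password="root_pw")
cur = conn.cursor()

# Re-create the account's credentials under the caching_sha2_password plugin.
cur.execute(
    "ALTER USER 'legacy_app'@'localhost' "
    "IDENTIFIED WITH caching_sha2_password BY 'new_password'"
)

# Verify which plugin each account now uses.
cur.execute("SELECT user, host, plugin FROM mysql.user")
for user, host, plugin in cur:
    print(user, host, plugin)

conn.commit()
cur.close()
conn.close()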

In the Optimizer, ER_SUBQUERY_NO_1_ROW has been removed from the list of errors ignored by statements that include the IGNORE keyword. As a result, an UPDATE, DELETE, or INSERT statement that includes the IGNORE keyword can now raise an error if it contains a SELECT with a scalar subquery that produces more than one row.
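The kind of statement affected looks roughly like the hypothetical sketch below; the tables and columns are made up, and the point is only that IGNORE no longer suppresses the more-than-one-row subquery error.

# Hedged sketch: an UPDATE IGNORE whose scalar subquery can return more
# than one row. In MySQL 9.0 this surfaces an error instead of being
# silently ignored. Tables, columns, and credentials are hypothetical.
import mysql.connector
from mysql.connector import Error

conn = mysql.connector.connect(
    host="127.0.0.1", user="app", password="app_pw", database="demo"
)
cur = conn.cursor()

try:
    cur.execute(
        "UPDATE IGNORE orders "
        "SET region = (SELECT region FROM customers WHERE name = 'Acme') "
        "WHERE customer_name = 'Acme'"
    )
    conn.commit()
except Error as err:
    # Before 9.0, IGNORE swallowed this error; now it is raised.
    print("Update failed:", err)
finally:
    cur.close()
    conn.close()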

What’s next after MySQL 9.0

MySQL is now on a three-month release cadence, with major LTS releases every two years. Oracle says that in October we can expect the bug-fix and security releases MySQL 8.4.2 LTS and MySQL 8.0.39, along with the MySQL 9.1 Innovation release, which will bring new features as well as bug and security fixes.


Posted Under: Database
Qdrant unveils vector-based hybrid search for RAG

Posted on 2 July, 2024


Open-source vector database provider Qdrant has launched BM42, a vector-based hybrid search algorithm intended to provide more accurate and efficient retrieval for retrieval-augmented generation (RAG) applications. BM42 combines the best of traditional text-based search and vector-based search to lower the costs for RAG and AI applications, Qdrant said.

Qdrant’s BM42 was announced July 2. Traditional keyword search engines, using algorithms such as BM25, have been around for more than 50 years and are not optimized for the precise retrieval needed in modern applications, according to Qdrant. As a result, they struggle with specific RAG demands, particularly with short segments that need further context for successful search and retrieval. Moving away from keyword-based search to a fully vectorized approach offers a new industry standard, Qdrant said.

“BM42, for short texts which are more prominent in RAG scenarios, provides the efficiency of traditional text search approaches, plus the context of vectors, so is more flexible, precise, and efficient,” Andrey Vasnetsov, Qdrant CTO and co-founder, said. This helps to make vector search more universally applicable, he added.

Unlike traditional keyword-based search suited for long-form content, BM42 integrates sparse and dense vectors to pinpoint relevant information within a document. A sparse vector handles exact term matching, while dense vectors handle semantic relevance and deep meaning, according to the company.
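Qdrant hasn't published its scoring formula in this announcement, but the general shape of a sparse-plus-dense hybrid can be sketched in a few lines of Python. The weights, toy vectors, and blending approach below are illustrative assumptions, not BM42's actual implementation.

# Conceptual sketch of hybrid retrieval: blend an exact-term (sparse) score
# with a semantic (dense) score. Values and weights are toy examples.
import numpy as np

def sparse_score(query_terms, doc_term_weights):
    """Sum the weights of query terms that literally appear in the document."""
    return sum(doc_term_weights.get(t, 0.0) for t in query_terms)

def dense_score(query_vec, doc_vec):
    """Cosine similarity between query and document embeddings."""
    q, d = np.asarray(query_vec), np.asarray(doc_vec)
    return float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d)))

def hybrid_score(query_terms, query_vec, doc, alpha=0.5):
    """Blend the two signals; alpha balances exact matching against semantics."""
    return (alpha * sparse_score(query_terms, doc["terms"])
            + (1 - alpha) * dense_score(query_vec, doc["embedding"]))

doc = {"terms": {"rag": 1.2, "retrieval": 0.8},
       "embedding": [0.1, 0.7, 0.2]}
print(hybrid_score(["rag", "pipeline"], [0.2, 0.6, 0.1], doc))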


Posted Under: Database
SingleStoreDB joins the Apache Iceberg bandwagon

Posted on 26 June, 2024


Buoyed by customer demand, SingleStore, the company behind the relational database SingleStoreDB, has decided to natively integrate Apache Iceberg into its offering to help its enterprise customers make use of data stored in data lakehouses.

“With this new integration, SingleStore aims to transform the dormant data inside lakehouses into a valuable real-time asset for enterprise applications. Apache Iceberg, a popular open standard for data lakehouses, provides CIOs with cost-efficient storage and querying of large datasets,” said Dion Hinchcliffe, senior analyst at The Futurum Group.

Hinchcliffe pointed out that SingleStore’s integration includes updates that help its customers bypass the challenges that they may typically face when adopting traditional methods to make the data in Iceberg tables more immediate.

These challenges include complex, extensive ETL (extract, transform, load) workflows and compute-intensive Spark jobs.

Some of the key features of the integration are low-latency ingestion, bi-directional data flow, and real-time performance at lower costs, the company said.

Explaining how SingleStore achieves low latency across queries and updates, IDC research vice president Carl Olofson said that the company, formerly known as MemSQL after its memory-optimized, high-performance relational database management system, uses memory features as a sort of cache.

“By doing so, the company can dramatically improve the speed with which Iceberg tables can be queried and updated,” Olofson explained, adding that the company might be proactively loading data from Iceberg into their internal memory-optimized format.

Before the Iceberg integration, SingleStore held data in a form or format that is optimized for rapid swapping into memory, where all data processing took place, the analyst said.

Several other database vendors, notably Databricks, have made attempts to adopt the Apache Iceberg table format due to its rising popularity with enterprises.

Earlier this month, Databricks agreed to acquire Tabular, the storage platform vendor led by the creators of Apache Iceberg, in order to promote data interoperability in lakehouses.

Another data lakehouse format, Delta Lake, developed by Databricks and later open sourced via The Linux Foundation, competes with Iceberg tables.

Currently, Databricks is working on another format that allows enterprises to use both Iceberg and Delta tables.

Both Olofson and Hinchcliffe pointed out that several vendors and offerings — such as Google’s BigQuery, Starburst, IBM’s Watsonx.data, SAP’s DataSphere, Teradata, Cloudera, Dremio, Presto, Hive, Impala, StarRocks, and Doris — have integrated Iceberg as an open source analytics table format for very large datasets.

The native integration of Iceberg into SingleStoreDB is currently in public preview.

Updates to search and deployment options

As part of the updates to SingleStoreDB, the company is adding new capabilities to its full-text search feature that improve relevance scoring, phonetic similarity, fuzzy matching, and keyword proximity-based ranking.

The combination of these capabilities allows enterprises to eliminate the need for additional specialty databases to build generative AI-based applications, the company explained.

Additionally, the company has introduced an autoscaling feature in public preview that allows enterprises to manage workloads or applications by scaling compute resources up or down.

It also lets users define thresholds for CPU and memory usage for autoscaling, to avoid any unnecessary consumption.

Further, the company said it is introducing a new deployment option for the database, Helios BYOC (bring your own cloud), a managed version of the database that runs in a customer's own virtual private cloud.

This offering is now available in private preview on AWS, and enterprise customers can run SingleStore in their own tenants while complying with data residency and governance policies, the company said.


Posted Under: Database
Oracle HeatWave’s in-database LLMs to help reduce infra costs

Posted on 26 June, 2024


Oracle is adding new generative AI-focused features to its HeatWave data analytics cloud service, previously known as MySQL HeatWave.

The new name highlights how HeatWave offers more than just MySQL support, and also includes HeatWave Gen AI, HeatWave Lakehouse, and HeatWave AutoML, said Nipun Agarwal, senior vice president of HeatWave at Oracle.  

At its annual CloudWorld conference in September 2023, Oracle previewed a series of generative AI-focused updates for what was then MySQL HeatWave.

These updates included an interface driven by a large language model (LLM), enabling enterprise users to interact with different aspects of the service in natural language, a new Vector Store, Heatwave Chat, and AutoML support for HeatWave Lakehouse.

Some of these updates, along with additional capabilities, have been combined to form the HeatWave Gen AI offering inside HeatWave, Oracle said, adding that all these capabilities and features are now generally available at no additional cost.

In-database LLM support to reduce cost

In a first among database vendors, Oracle has added support for LLMs inside a database, analysts said.

HeatWave Gen AI’s in-database LLM support, which leverages smaller LLMs with fewer parameters such as Mistral-7B and Meta’s Llama 3-8B running inside the database, is expected to reduce infrastructure cost for enterprises, they added.

“This approach not only reduces memory consumption but also enables the use of CPUs instead of GPUs, making it cost-effective, which given the cost of GPUs will become a trend at least in the short term until AMD and Intel catch up with Nvidia,” said Ron Westfall, research director at The Futurum Group.

Another reason to use smaller LLMs inside the database is the ability to have more influence on the model with fine tuning, said David Menninger, executive director at ISG’s Ventana Research.

“With a smaller model the context provided via retrieval augmented generation (RAG) techniques has a greater influence on the results,” Menninger explained.

Westfall also gave the example of IBM’s Granite models, saying that the approach to using smaller models, especially for enterprise use cases, was becoming a trend.

The in-database LLMs, according to Oracle, will allow enterprises to search data, generate or summarize content, and perform RAG with HeatWave’s Vector Store.

Separately, HeatWave Gen AI also comes integrated with the company’s OCI Generative AI service, providing enterprises with access to pre-trained and other foundation models from LLM providers.

Rebranded Vector Store and scale-out vector processing

A number of database vendors that didn’t already offer specialty vector databases have added vector capabilities to their wares over the last 12 months (MongoDB, DataStax, Pinecone, and CosmosDB for NoSQL among them), enabling customers to build AI and generative AI-based use cases over data stored in these databases without moving data to a separate vector store or database.

Oracle’s Vector Store, already showcased in September, automatically creates embeddings after ingesting data in order to process queries faster.

Another capability added to HeatWave Gen AI is scale-out vector processing that will allow HeatWave to support VECTOR as a data type and in turn help enterprises process queries faster.

“Simply put, this is like adding RAG to a standard relational database,” Menninger said. “You store some text in a table along with an embedding of that text as a VECTOR data type. Then when you query, the text of your query is converted to an embedding. The embedding is compared to those in the table and the ones with the shortest distance are the most similar.”  
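Menninger's description maps onto a few lines of code. Here is a minimal, vendor-neutral sketch of that store-and-compare pattern in Python with toy embeddings; it illustrates the idea rather than HeatWave's actual VECTOR implementation.

# Vendor-neutral sketch of the pattern Menninger describes: keep text
# alongside its embedding, embed the query, return the rows with the
# shortest distance. Embeddings here are tiny toy vectors.
import numpy as np

table = [
    {"text": "HeatWave runs analytics in MySQL", "embedding": np.array([0.9, 0.1, 0.0])},
    {"text": "Quarterly sales report",           "embedding": np.array([0.1, 0.8, 0.2])},
    {"text": "Vector search finds similar text", "embedding": np.array([0.7, 0.2, 0.3])},
]

def nearest(query_embedding, rows, k=2):
    """Rank rows by Euclidean distance to the query embedding."""
    scored = [(float(np.linalg.norm(row["embedding"] - query_embedding)), row["text"])
              for row in rows]
    return sorted(scored)[:k]

# In a real system the query text would pass through the same embedding
# model as the stored rows; here we just supply a toy query vector.
query_vec = np.array([0.8, 0.15, 0.1])
for dist, text in nearest(query_vec, table):
    print(f"{dist:.3f}  {text}")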

A graphical interface via HeatWave Chat

Another new capability added to HeatWave Gen AI is HeatWave Chat, a Visual Studio Code plug-in for MySQL Shell that provides a graphical interface for HeatWave Gen AI and enables developers to ask questions in natural language or SQL.

The retention of chat history makes it easier for developers to refine search results iteratively, Menninger said.

HeatWave Chat comes with another feature dubbed the Lakehouse Navigator, which allows enterprise users to select files from object storage to create a new vector store.

This integration is designed to enhance user experience and efficiency of developers and analysts building out a vector store, Westfall said.


Posted Under: Database
DataStax updates tools for building gen AI applications

Posted on 25 June, 2024


DataStax is updating its tools for building generative AI-based applications in an effort to ease and accelerate application development for enterprises, databases, and service providers.

One of these tools is Langflow, which DataStax acquired in April. It is an open source, web-based no-code graphical user interface (GUI) that allows developers to visually prototype LangChain flows and iterate them to develop applications faster.

LangChain is a modular framework for Python and JavaScript that simplifies the development of applications that are powered by generative AI language models or LLMs.  

According to the company’s Chief Product Officer Ed Anuff, the update to Langflow is a new version dubbed Langflow 1.0, which is the official open source release that comes after months of community feedback on the preview.

“Langflow 1.0 adds more flexible, modular components and features to support complex AI pipelines required for more advanced retrieval augmented generation (RAG) techniques and multi-agent architectures,” Anuff said, adding that Langflow’s execution engine was now Turing complete.

Turing complete, or Turing completeness, is a term used in computer science to describe a programmable system that can perform any computation a Turing machine can carry out.

Langflow 1.0 also comes with LangSmith integration that will allow enterprise developers to monitor LLM-based applications and perform observability on them, the company said.

A managed version of Langflow is also being made available via DataStax in a public preview.

“Astra DB environment details will be available in Langflow and users will be able to access Langflow via the Astra Portal, and usage will be free,” Anuff explained.

RAGStack 1.0 gets new capabilities

DataStax has also released a new version of RAGStack, its curated stack of open-source software for implementing RAG in generative AI-based applications using Astra DB Serverless or Apache Cassandra as a vector store.

The new version, dubbed RAGStack 1.0, comes with new features such as Langflow, Knowledge Graph RAG, and ColBERT among others.

The Knowledge Graph RAG feature, according to the company, provides an alternative way to retrieve information using a graph-based representation. This alternative method can be more accurate than vector-based similarity search alone with Astra DB, it added.

Other features include the introduction of Text2SQL and Text2CQL (Cassandra Query Language) to bring all kinds of data into the generative AI flow for application development.

While DataStax offers a separate non-managed version of RAGStack 1.0 under the name Luna for RAGStack, Anuff said that the managed version offers more value for enterprises.

“RAGStack is based on open source components, and you could take all of those projects and stitch them together yourself. However, we think there is a huge amount of value for companies in getting their stack tested and integrated for them, so they can trust that it will deliver at scale in the way that they want,” the chief product officer explained.

Other updates related to easing RAG

The company has also partnered with several other companies such as Unstructured to help developers extract and transform data to be stored in AstraDB for building generative AI-based applications.

“The partnership with Unstructured provides DataStax customers with the ability to use the latter’s capabilities to extract and transform data in multiple formats – including HTML, PDF, CSV, PNG, PPTX – and convert it into JSON files for use in AI initiatives,” said Matt Aslett, director at ISG’s Ventana Research.
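As a rough illustration of that extract-and-transform step, the open source unstructured Python library exposes a partition API that turns a document into typed elements, which can then be serialized to JSON. The file name below is a placeholder, and this is a generic sketch rather than the DataStax integration itself.

# Hedged sketch: extracting elements from a document with the open source
# `unstructured` library and serializing them to JSON for downstream use.
# The input file name is a placeholder.
import json
from unstructured.partition.auto import partition

# partition() detects the file type (PDF, HTML, PPTX, etc.) and returns
# a list of typed elements (titles, narrative text, tables, ...).
elements = partition(filename="quarterly_report.pdf")

# Convert each element to a plain dict and write the lot out as JSON,
# ready to be embedded and loaded into a vector store such as Astra DB.
records = [el.to_dict() for el in elements]
with open("quarterly_report.json", "w") as f:
    json.dump(records, f, indent=2)

print(f"Extracted {len(records)} elements")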

Other partnerships include collaboration with the top embedding providers, such as OpenAI, Hugging Face, Mistral AI, and Nvidia among others.


Posted Under: Database
Amazon Q Developer review: Code completions, code chat, and AWS skills

Posted on 24 June, 2024


When I reviewed Amazon CodeWhisperer, Google Bard, and GitHub Copilot in June of 2023, CodeWhisperer could generate code in an IDE and did security reviews, but it lacked a chat window and code explanations. The current version of CodeWhisperer is now called Amazon Q Developer, and it does have a chat window that can explain code, and several other features that may be relevant to you, especially if you do a lot of development using AWS.

Amazon Q Developer currently runs in Visual Studio Code, Visual Studio, JetBrains IDEs, the Amazon Console, and the macOS command line. Q Developer also offers asynchronous agents, programming language translations, and Java code transformations/upgrades. In addition to generating, completing, and discussing code, Q Developer can write unit tests, optimize code, scan for vulnerabilities, and suggest remediations. It supports conversations in English, and code in the Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala programming languages.

You can chat with Amazon Q Developer about AWS capabilities, and ask it to review your resources, analyze your bill, or architect solutions. It knows about AWS well-architected patterns, documentation, and solution implementation.

According to Amazon, Amazon Q Developer is “powered by Amazon Bedrock” and trained on “high-quality AWS content.” Since Bedrock supports many foundation models, it’s not clear from the web statement which one was used for Amazon Q Developer. I asked, and got this answer from an AWS spokesperson: “Amazon Q uses multiple models to execute its tasks and uses logic to route tasks to the model that is the best fit for the job.”

Amazon Q Developer has a reference tracker that detects whether a code suggestion might be similar to publicly available code. The reference tracker can label these with a repository URL and project license information, or optionally filter them out.

Amazon Q Developer directly competes with GitHub Copilot, JetBrains AI, and Tabnine, and indirectly competes with a number of large language models (LLMs) and small language models (SLMs) that know about code, such as Code Llama, StarCoder, Bard, OpenAI Codex, and Mistral Codestral. GitHub Copilot can converse in dozens of natural languages, as opposed to Amazon Q Developer’s one, and supports a number of extensions from programming, cloud, and database vendors, as opposed to Amazon Q Developer’s AWS-only ties.

Installing Amazon Q Developer

Given the multiple environments in which Amazon Q Developer can run, it’s not a surprise that there are multiple installers. The only tricky bit is signing and authentication.

Installing Q Developer in Visual Studio Code

You can install Amazon Q Developer from the Visual Studio Code Marketplace, or from the Extensions sidebar in Visual Studio Code. You can get to that sidebar from the Extensions icon at the far left, by pressing Shift-Command-X, or by choosing Extensions: Install Extensions from the command palette. Type “Amazon Q” to find it. Once you’ve installed the extension, you’ll need to authenticate to AWS as discussed below.


Amazon Q Developer in Visual Studio Code includes a chat window (at the left) as well as code generation. The chat window is showing Amazon Q Developer’s capabilities.

Installing Q Developer in JetBrains IDEs

Like Visual Studio Code, JetBrains has a marketplace for IDE plugins, where Amazon Q Developer is available. You’ll need to reboot the IDE after downloading and installing the plugin. Then you’ll need to authenticate to AWS as discussed below. Note that the Amazon Q Developer plugin disables local inline JetBrains full-line code completion.


Amazon Q Developer in IntelliJ IDEA, and other JetBrains IDEs, has a chat window on the right as well as code completion. The chat window is showing Amazon Q Developer’s capabilities.

Installing Q Developer in the AWS Toolkit for Visual Studio

For Visual Studio, Amazon Q Developer is part of the AWS Toolkit, which you can find in the Visual Studio Marketplace. Again, once you’ve installed the toolkit you’ll need to authenticate to AWS as discussed below.

Signing and authenticating Amazon Q Developer

The authentication process is confusing because there are several options and several steps that bounce between your IDE and web browser. You used to have to repeat this process frequently, but the product manager assures me that re-authentication should now only be necessary every three months.

Installing Q Developer for command line

Amazon Q Developer for the command line is currently for macOS only, although a Linux version is on the roadmap and documented as a remote target. The macOS installation is basically a download of a DMG file, followed by running the disk image, dragging the Q file to the applications directory, and running that Q app to install the CLI q program and a menu bar icon that can bring up settings and the web user guide. You’ll also need to authenticate to AWS, which will log you in.


On macOS, the command-line program q supports multiple shell programs and multiple terminal programs. Here I’m using iTerm2 and the z shell. The q translate command constructs shell commands for you, and the q chat command opens an AI assistant.

Amazon Q Developer in the AWS Console

If you are running as an IAM user rather than a root user, you’ll have to add IAM permissions to use Amazon Q Developer. Once you have permission, AWS should display an icon at the right of the screen that brings up the Amazon Q Developer interface.


The Amazon Q Developer window at the right, running in the AWS Console, can chat with you about using AWS and can generate architectures and code for AWS applications.

Evaluating Amazon Q Developer

According to AWS, “Amazon Q Developer Agent achieved the highest scores of 13.4% on the SWE-Bench Leaderboard and 20.5% on the SWE-Bench Leaderboard (Lite), a data set that benchmarks coding capabilities. Amazon Q security scanning capabilities outperform all publicly benchmarkable tools on detection across the most popular programming languages.”

Both of the quoted numbers are reflected on the SWE-Bench site, but there are two issues. Neither number has as yet been verified by SWE-Bench, and the Amazon Q Developer ranking on the Lite Leaderboard has dropped to #3. In addition, if there’s a supporting document on the web for Amazon’s security scanning claim, it has evaded my searches.

SWE-Bench, from researchers at Princeton University, is “an evaluation framework consisting of 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories.” The scores reflect the solution rates. The Lite data set is a subset of 300 GitHub issues.

Let’s explore how Amazon Q Developer behaves on the various tasks it supports in some of the 15 programming languages it supports. This is not a formal benchmark, but rather an attempt to get a feel for how well it works. Bear in mind that Amazon Q Developer is context sensitive and tries to use the persona that it thinks best fits the environment where you ask it for help.

Predictive inline code generation with Amazon Q Developer

I tried a softball question for predictive code generation and used one of Amazon’s inline suggestion examples. The Python prompt supplied was # Function to upload a file to an S3 bucket. Pressing Option-C as instructed got me the code below the prompt in the screenshot below, after an illegal character that I had to delete. I had to type import at the top to prompt Amazon Q to generate the imports for logging, boto3, and ClientError.

I also used Q Chat to tell me how to resolve the imports; it suggested a pip command, but on my system that fixed the wrong Python environment (v 3.11). I had to do a little sleuthing in the Frameworks directory tree to remind myself to use pip3 to target my current Python v 3.12 environment. I felt like singing “Daisy, Daisy” to Dave and complaining that my mind was going.


Inline code generation and chat with Amazon Q Developer. All the code below the # TODO comment was generated by Amazon Q Developer, although it took multiple steps.
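For reference, the generated function follows the standard boto3 upload pattern. The sketch below is a hedged reconstruction rather than a verbatim copy of what Q produced; the bucket and file names are placeholders.

# Hedged reconstruction of the kind of S3 upload helper the prompt produces,
# using the standard boto3 pattern. Bucket and file names are placeholders.
import logging

import boto3
from botocore.exceptions import ClientError

# Function to upload a file to an S3 bucket
def upload_file(file_name, bucket, object_name=None):
    """Upload file_name to bucket; return True on success, False on error."""
    if object_name is None:
        object_name = file_name
    s3_client = boto3.client("s3")
    try:
        s3_client.upload_file(file_name, bucket, object_name)
    except ClientError as e:
        logging.error(e)
        return False
    return True

if __name__ == "__main__":
    print(upload_file("report.csv", "my-example-bucket"))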

I also tried Amazon’s two other built-in inline suggestion examples. The example to complete an array of fake users in Python mostly worked; I had to add the closing ] myself. The example to generate unit tests failed when I pressed Option-C: It generated illegal characters instead of function calls. (I’m starting to suspect an issue with Option-C in VS Code on macOS. It may or may not have anything to do with Amazon Q Developer.)

When I restarted VS Code, tried again, and this time pressed Return on the line below the comment, it worked fine, generating the test_sum function below.

# Write a test case for the above function.
def test_sum():
    """
    Unit test for the sum function.
    """
    assert sum(1, 2) == 3
    assert sum(-1, 2) == 1
    assert sum(0, 0) == 0

AWS shows examples of completion with Amazon Q Developer in up to half a dozen programming languages in its documentation. The examples, like the Python ones we’ve discussed, are either very simple, e.g. add two numbers, or relate to common AWS operations supported by APIs, such as uploading files to an S3 bucket.

Natural language to code generation with Amazon Q Developer

Since I now believed that Amazon Q Developer can generate Python, especially for its own test examples, I tried something a little different. As shown in the screenshot below, I created a file called quicksort.cpp, then typed an initial comment:

//function to sort a vector of generics in memory using the quicksort algorithm

Amazon Q Developer kept trying to autocomplete this comment, and in some cases the implementation as well, for different problems. Nevertheless it was easy to keep typing my specification while Amazon Q Developer erased what it had generated, and Amazon Q Developer eventually generated a nearly correct implementation.

Quicksort is a well-known algorithm. Both the C and C++ libraries have implementations of it, but they don’t use generics. Instead, you need to write type-specific comparison functions to pass to qsort. That’s historic, as the libraries were implemented before generics were added to the languages.


I eventually got Amazon Q Developer to generate the main routine to test the implementation. It initially generated documentation for the function instead, but when I rejected that and tried again it generated the main function with a test case.

Unsurprisingly, the generated code didn’t even compile the first time. I saw that Amazon Q Developer had left out the required #include <iostream>, but I let VS Code correct that error without sending any code to Amazon Q Developer or entering the #include myself.

It still didn’t compile. The errors were in the recursive calls to sortVector(), which were written in a style that tried to be too clever. I highlighted and sent one of the error messages to Amazon Q Developer for a fix, and it solved a different problem. I tried again, giving Amazon Q Developer more context and asking for a fix; this time it recognized the actual problem and generated correct code.

This experience was a lot like pair programming with an intern or a junior developer who hadn’t learned much C++. An experienced C/C++ programmer might have asked to recast the problem to use the qsort library function, on grounds of using the language library. I would have justified my specification to use generics on stylistic grounds as well as possible runtime efficiency grounds.

Another consideration here is that there’s a well-known worst case for qsort, which takes a maximum time to run when the vector to be sorted is already in order. For this implementation, there’s a simple fix to be made by randomizing the partition point (see Knuth, The Art of Computer Programming: Sorting and Searching, Volume 3). If you use the library function you just have to live with the inefficiency.
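To make that worst-case point concrete, here is a small Python sketch of quicksort with a randomized partition point; it illustrates the fix being described and is not the C++ code Q Developer generated.

# Illustrative sketch (not the generated C++): quicksort with a randomized
# pivot, which avoids the quadratic worst case on already-sorted input.
import random

def quicksort(items):
    """Return a sorted copy of items using a randomly chosen pivot."""
    if len(items) <= 1:
        return items
    pivot = random.choice(items)          # randomized partition point
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return quicksort(less) + equal + quicksort(greater)

print(quicksort([5, 3, 8, 1, 9, 2, 7]))   # [1, 2, 3, 5, 7, 8, 9]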


Amazon Q Developer code generation from natural language to C++. I asked for a well-known sorting algorithm, quicksort, and complicated the problem slightly by specifying that the function operate on a vector of generics. It took several fixes, but got there eventually.

Code references from Amazon Q Developer

So far, none of my experiments with Amazon Q Developer have generated code references, which are associated with recommendations that are similar to training data. I do see a code reference log in Visual Studio Code, but it currently just says “Don’t want suggestions that include code with references? Uncheck this option in Amazon Q: Settings.”

Vulnerability detection with Amazon Q Developer 

By default, Q Developer scans your open code files for vulnerabilities in the background, and generates squiggly underlines when it finds them. From there you can bring up explanations of the vulnerabilities and often invoke automatic fixes for them. You can also ask Q to scan your whole project for vulnerabilities and generate a report. Scans look for security issues such as resource leaks, SQL injection, and cross-site scripting; secrets such as hardcoded passwords, database connection strings, and usernames; misconfiguration, compliance, and security issues in infrastructure-as-code files; and deviations from quality and efficiency best practices.

Q Chat in Amazon Q Developer

You’ve already seen how you can use Q Chat in an IDE to explain and fix code. It can also optimize code and write unit tests. You can go back to the first screenshot in this review to see Q Chat’s summary of what it can and can’t do, or use the /help command yourself once you have Q Chat set up in your IDE. On the whole, having Q Chat in Amazon Q Developer improves the product considerably over last year’s CodeWhisperer.

Customization in Amazon Q Developer 

If you set up Amazon Q Developer at the Pro level, you can customize its code generation of Python, Java, JavaScript, and TypeScript by giving it access to your code base. The code base can be in an S3 bucket or in a repository on GitHub, GitLab, or Bitbucket.

Running a customization generates a fine-tuned model that your users can choose to use for their code suggestions. They’ll still be able to use the default base model, but companies have reported that using customized code generation increases developer productivity even more than using the base model.

Developer agents in Amazon Q Developer 

Developer agents are long-running Amazon Q Developer processes. The one agent I’ve seen so far is for code transformation, specifically transforming Java 8 or Java 11 Maven projects to Java 17. There are a bunch of specific requirements your Java project needs to meet for a successful transformation, but the transformation agent worked well in AWS’s internal tests. While I have seen it demonstrated, I haven’t run it myself.

Amazon Q Developer in command line

Amazon Q Developer for the CLI currently (v 1.2.0) works in macOS; supports the bash, zsh, and fish shells; runs in the iTerm2, macOS Terminal, Hyper, Alacritty, Kitty, and wezTerm terminal emulators; runs in the VS Code terminal and JetBrains terminals (except Fleet); and supports some 500 of the most popular CLIs such as git, aws, docker, npm, and yarn. You can extend the CLI to remote macOS systems with q integrations install ssh. You can also extend it to 64-bit versions of recent distributions of Fedora, Ubuntu, and Amazon Linux 2023. (That one’s not simple, but it’s documented.)

Amazon Q Developer CLI performs three major services. It can autocomplete your commands as you type, it can translate natural language specifications to CLI commands (q translate), and it can chat with you about how to perform tasks from the command line (q chat).

For example, I often have trouble remembering all the steps it takes to rebase a Git repository, which is something you might want to do if you and a colleague are working on the same code (careful!) on different branches (whew!). I asked q chat, “How can I rebase a git repo?”

It gave me the response in the first screenshot below. To get brushed up on how the action works, I asked the follow-up question, “What does rebasing really mean?” It gave me the response in the second screenshot below. Finally, to clarify the reasons why I would rebase my feature branch versus merging it with an updated branch, I asked, “Why rebase a repo instead of merging branches?” It gave me the response in the third screenshot below.

The simple answer to the question I meant to ask is item 2, which talks about the common case where the main branch is changing while you work on a feature. The real, overarching answer is at the end: “The decision to rebase or merge often comes down to personal preference and the specific needs of your project and team. It’s a good idea to discuss your team’s Git workflow and agree on when to use each approach.”


In the first screenshot above, I asked q chat, “How can I rebase a git repo?” In the second screenshot, I asked “What does rebasing really mean?” In the third, I asked “Why rebase a repo instead of merging branches?”

Amazon Q Developer in AWS Console

As you saw earlier in this review, a small Q icon at the upper right of the AWS Management Console window brings up a right-hand column where Amazon Q Developer invites you to “Ask me anything about AWS.” Similarly a large Q icon at the bottom right of an AWS documentation page brings up that same AMAaA column as a modeless floating window.

Recommended for experienced programmers

Overall, I like Amazon Q Developer. It seems to be able to handle the use cases for which it was trained, and generate whole functions in common programming languages with only a few fixes. It can be useful for completing lines of code, doc strings, and if/for/while/try code blocks as you type. It’s also nice that it scans for vulnerabilities and can help you fix code problems.

On the other hand, Q Developer can’t generate full functions for some use cases; it then reverts to line-by-line suggestions. Also, there seems to be a bug associated with the use of Option-C to trigger code generation. I hope that will be fixed fairly soon, but the workaround is to press Return a lot.

According to Amazon, a 33% acceptance rate is par for the course for AI code generators. By acceptance rate, they mean the percentage of generated code that is used by the programmer. They claim a higher rate than that, even for their base model without customization. They also claim over 50% boosts in programmer productivity, although how they measure programmer productivity isn’t clear to me.

Their claim is that customizing the Amazon Q Developer model to “the way we do things here” from the company’s code base offers an additional boost in acceptance rate and programmer productivity. Note that code bases need to be cleaned up before using them for training. You don’t want the model learning bad, obsolete, or unsafe coding habits.

I can believe a hefty productivity boost for experienced developers from using Amazon Q Developer. However, I can’t in good conscience recommend that programming novices use any AI code generator until they have developed their own internal sense for how code should be written, validated, and tested. One of the ways that LLMs go off the rails is to start generating BS, also called hallucinating. If you can’t spot that, you shouldn’t rely on their output.

How does Amazon Q Developer compare to GitHub Copilot, JetBrains AI, and Tabnine? Stay tuned. I need to reexamine GitHub Copilot, which seems to get updates on a monthly basis, and take a good look at JetBrains AI and Tabnine before I can do that comparison properly. I’d bet good money, however, that they’ll all have changed in some significant way by the time I get through my full round of reviews.

Cost: Free with limited monthly access to advanced features; Pro tier $19/month.

Platform: Amazon Web Services. Supports Visual Studio Code, Visual Studio, JetBrains IDEs, the Amazon Console, and the macOS command line. Supports recent 64-bit Fedora, Ubuntu, and Amazon Linux 2023 as remote targets from macOS ssh.

Pros

  1. Works fairly well, especially for popular languages and AWS applications
  2. Basic version is free
  3. Supports Python, Java, JavaScript, TypeScript, C#, Go, Rust, PHP, Ruby, Kotlin, C, C++, shell scripting, SQL, and Scala programming languages
  4. Can chat as well as generate code

Cons

  1. Only converses in English
  2. No Windows CLI support


Posted Under: Tech Reviews
Refurbished Computers Making a Difference

Posted on 15 June, 2015

Each day we all have an opportunity to make a difference in the lives of other people, and each of us has unique ways to make that happen, whether it's donating through a charity, offering a small kindness to a stranger, or helping out special people in our own lives. The list can go on and on.

If you take a moment to look at charities, there are a variety of ways in which you can help, as there are so many charities available. Whether your choice is donating money to a charity, donating food to a food pantry, or donating clothes to a shelter, each of us can make a difference. The key to this is finding a charity that means something to you and a charity where you can make a difference.

At Innovative Computer Products we are so pleased to have found a charity where we can make a difference, and that organization is CFY (www.cfy.org). Through CFY and our One for One Program we are able to reach out to the neediest students who have no means of obtaining home technology.

Donating refurbished computers is the key to our One for One Program. For every refurbished desktop computer we sell at Innovative Computer Products, we donate one refurbished desktop computer to CFY. CFY is such a worthy organization, and through their own efforts along with our One for One Program, the technology is truly getting out to the families that need it. Last year we were able to donate over two thousand refurbished computers to CFY through our One for One Program.

CFY has many different ways you can make a difference in their organization, and we urge you to do so. While our method is donating computers, maybe your method will be with your money, time or talent, or possibly computer donation as well. Your contribution will make a difference in the lives of children.

Please consider helping CFY – you can make a difference.

Posted Under: General, Refurbished IT Hardware