Monthly Archives: February 2024

Google delivers Gemini LLM support to BigQuery data warehouse

Posted on 29 February, 2024

Google is integrating its Gemini 1.0 Pro large language model with its AI and machine learning platform, Vertex AI, to help enterprises unlock new capabilities of large language models (LLMs), including analysis of text, images, and video.

The Gemini API, which has been made generally available, can also be used in Google’s data warehouse, BigQuery, to develop generative AI-based analytical applications.

“The Gemini 1.0 Pro model is designed for higher input-output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console,” Gerrit Kazmaier, general manager of data analytics at Google Cloud, said in a statement.
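
The "simple SQL statements" Kazmaier mentions map to BigQuery ML's ML.GENERATE_TEXT table function over a remote model. Here is a minimal, hedged sketch from Python; the dataset, connection, and remote model names are illustrative assumptions, and the remote model is assumed to have been created over the Gemini endpoint beforehand:

# Hedged sketch: calling Gemini from BigQuery via ML.GENERATE_TEXT.
# Assumes a remote model `demo.gemini_model` was already created over a
# Cloud resource connection to the Gemini endpoint; names are illustrative.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `demo.gemini_model`,
  (SELECT 'Summarize: the delivery was late, but support resolved it quickly.' AS prompt),
  STRUCT(0.2 AS temperature, TRUE AS flatten_json_output)
)
"""

for row in client.query(sql).result():
    print(row.summary)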

The company is also expected to integrate the vision version of the Gemini Pro model in the coming months.

In addition, Google is extending Vertex AI’s document processing and speech-to-text APIs to BigQuery to help enterprises analyze unstructured data, such as documents and audio.

Earlier this month, the company announced the preview of BigQuery vector search, which, when integrated with Vertex AI, enables vector similarity search on data inside BigQuery, along with features such as retrieval-augmented generation (RAG), text clustering, and summarization.

Hyoun Park, principal analyst at Amalgam Insights, sees RAG support as table stakes for data warehouse vendors these days.

“Retrieval augmented generation is a capability every data warehouse will need to support, as it refers to accessing data from a third party source when someone asks a question,” Park said. “For instance, if someone asks an HR question, the RAG would also ask the employee’s HR system for relevant and current data to contextualize the question. The relevant capability here is in accessing a real-time update of a specific table or data source when someone asks a question to an LLM.”

Other companies are moving in a similar direction. Steven Dickens, vice president and practice lead at The Futurum Group, said that warehouse stalwarts such as Teradata and Cloudera are also adding vector capabilities alongside players such as Oracle and Elastic.

Posted Under: Database
Google Cloud adds vector support to all its database offerings

Posted on 29 February, 2024

Google Cloud on Thursday said it is adding vector support and integrating LangChain with all of its database offerings in an effort to outdo rival cloud service providers, such as Amazon Web Services (AWS), Microsoft, and Oracle.

Cloud service providers are locked in a race to add generative AI and related capabilities to their database offerings, seeking a first-mover advantage and a bigger slice of the growing market for AI and generative AI.

The new updates to its database offerings include the addition of vector support for relational, key-value, document, and in-memory databases, including Cloud SQL, Spanner, Firestore, Bigtable, and Memorystore for Redis.

Nearest neighbor search is a key differentiator

The vector capabilities added to the databases include two kinds of search: approximate nearest neighbor (ANN) search and exact nearest neighbor (KNN) search.

While ANN is used to optimize search (that is, to reduce latency) on large datasets, KNN is used to return more specific or precise search results on smaller datasets, said David Menninger, executive director at ISG’s Ventana Research.

“Support for ANN and KNN reflects that there isn’t a one-size-fits-all approach to vector search and that different use cases require different indexing algorithms to provide the required level of accuracy and performance,” Menninger explained. This highlights, he added, that it is incumbent on developers to understand the nature of their data and application, and to experiment with various databases to identify the capabilities that best fit the requirements of an individual project.

The other advantage from Google’s standpoint, according to Forrester’s principal analyst Noel Yuhanna, is that most database vendors don’t offer both ANN and KNN.

“Some vendors support KNN, while others support the ANN approach. ANN is more popular since it is scalable and performs well for large datasets and high-dimensional vectors,” Yuhanna said.
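
To make the trade-off concrete, here is a small illustrative sketch (not tied to any Google service) of what exact KNN does under the hood; an ANN index answers the same question without scanning every vector, trading a little recall for much lower latency:

# Illustrative only: exact KNN by cosine similarity, the behavior ANN
# indexes (HNSW, IVF, ScaNN) approximate to cut latency on large datasets,
# at the cost of occasionally missing a true nearest neighbor.
import numpy as np

rng = np.random.default_rng(seed=0)
corpus = rng.normal(size=(1000, 8))   # 1,000 stored embedding vectors
query = rng.normal(size=8)            # one query embedding

def cosine(a, b):
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine(v, query) for v in corpus])
top_5 = np.argsort(scores)[::-1][:5]  # exact KNN: score every vector, keep the best
print(top_5, scores[top_5])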

All the vector capabilities added to the database offerings are currently in preview. In July last year, Google added support for the popular pgvector extension in AlloyDB and Cloud SQL to help developers build generative AI applications.

The steady addition of vector capabilities across multiple database offerings since last July makes Google Cloud look “more aggressive” than rival hyperscalers, according to Menninger.

However, he did point out that almost all database vendors are adding support for vector and vector search capabilities.

Microsoft, AWS, and Oracle, according to Yuhanna, have some level of vector support capabilities in the works in their respective database offerings.

The announcements might give Google Cloud an edge over its rivals, as it appears to be further along than others in making these capabilities generally available to enterprises, Yuhanna said.

Both analysts also pointed out that vector support will soon become table stakes for data platform vendors, as it supports the development of generative AI applications that complement large language models (LLMs) with approved enterprise data to improve accuracy and trust.

ISG, according to Menninger, believes that almost all enterprises developing applications based on generative AI will explore the use of vector search and retrieval-augmented generation to complement foundation models with proprietary data and content by the end of 2026.

Rivalry between vector databases and traditional databases

The addition of vector capabilities by hyperscalers and other database vendors to their offerings has resulted in a growing rivalry between vector databases and traditional databases, according to analysts.

While traditional databases have been adding vector capabilities to make their case to enterprises, vector databases have been adding capabilities to make their products more easily consumable by non-experts, they added.

However, ISG’s Menninger believes that more than 50% of enterprises will use traditional database offerings with vector support by 2026, given their reliance on these traditional databases.

Specialized vector databases will still continue to exist, though only for more complex and sophisticated use cases, Menninger said. Pinecone, Chroma, Weaviate, Milvus, and Qdrant are examples of specialized databases.

Explaining further, Menninger said that whether vector search is best performed using a specialist vector database or a general-purpose database will depend on a variety of factors, including the relative reliance of an enterprise on an existing database, developer skills, the size of the dataset, and specific application requirements.

Integration of LangChain with all Google database offerings

Google Cloud is adding LangChain integrations for all of its databases. “We will support three LangChain Integrations that include vector stores, document loaders, and chat messages memory,” said Andi Gutmans, vice president of engineering for Google Cloud’s databases division.

LangChain is a framework for developing applications powered by LLMs, and the database integrations will give developers built-in retrieval-augmented generation (RAG) workflows across their preferred data sources, Gutmans added.

While the LangChain vector stores integration is available for AlloyDB, Cloud SQL for PostgreSQL, Cloud SQL for MySQL, Memorystore for Redis, and Spanner, the document loaders and chat messages memory integration is available for all databases, including Firestore, Bigtable, and SQL Server.
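
Google didn’t publish the new class names alongside the announcement, but the LangChain pattern these integrations plug into is well established. Below is a minimal, hedged sketch using LangChain’s existing generic PGVector store with placeholder embeddings; the Google-specific store classes would slot in where PGVector appears, and the connection string is hypothetical:

# Hedged sketch of the LangChain vector-store pattern the new integrations
# target. PGVector is LangChain's generic Postgres/pgvector store; the
# connection string and collection name below are hypothetical.
from langchain_community.embeddings import FakeEmbeddings
from langchain_community.vectorstores import PGVector

store = PGVector.from_texts(
    texts=["invoice 1001 was paid", "invoice 1002 is overdue"],
    embedding=FakeEmbeddings(size=16),  # stand-in for a real embedding model
    collection_name="docs",
    connection_string="postgresql+psycopg2://app:secret@db-host:5432/appdb",
)

retriever = store.as_retriever(search_kwargs={"k": 1})
print(retriever.get_relevant_documents("which invoices are overdue?"))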

Analysts see the addition of LangChain integrations as an “assertive” move from Google.

“LangChain is currently the most popular framework for connecting LLMs to private sources of enterprise data, providing vendor-neutral integration with enterprise databases, as well as commercial machine learning development and deployment environments, such as SageMaker Studio and Vertex AI Studio,” Menninger explained. 

AlloyDB AI made generally available

Google has made its AlloyDB AI offering generally available. It can be used via AlloyDB and AlloyDB Omni.

AlloyDB AI, which entered preview in August last year, is a suite of integrated capabilities that allows developers to build generative AI-based applications using real-time data.

It builds on the basic vector support available in standard PostgreSQL and introduces a simple PostgreSQL function to generate embeddings on data.
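
A hedged sketch of how such an in-database embedding function combines with pgvector-style search from Python; the connection details, model ID, table, and columns are illustrative assumptions:

# Hedged sketch: AlloyDB AI's embedding() SQL function generating a query
# embedding in-database, combined with a pgvector distance search.
# Connection details, model ID, table, and columns are illustrative.
import psycopg2

conn = psycopg2.connect("host=my-alloydb dbname=app user=app password=secret")
cur = conn.cursor()

cur.execute(
    """
    SELECT id, description
    FROM products
    ORDER BY embedding_col <-> embedding('textembedding-gecko@003', %s)::vector
    LIMIT 5
    """,
    ("waterproof hiking boots",),
)
print(cur.fetchall())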

AlloyDB AI is an integral part of AlloyDB and AlloyDB Omni, and is available at no additional charge, the company said.

Posted Under: Database
Amazon Bedrock: A solid generative AI foundation

Posted on 27 February, 2024

Amazon Web Services’ fully managed service for building, deploying, and scaling generative AI applications, Amazon Bedrock offers a catalog of foundation models, implements retrieval-augmented generation (RAG) and vector embeddings, hosts knowledge bases, implements fine-tuning of foundation models, and allows continued pre-training of selected foundation models.

Amazon Bedrock complements the almost 30 other Amazon machine learning services available, including Amazon Q, the AWS generative AI assistant.

There are currently six major features in Amazon Bedrock:

  • Experiment with different models: Use the API or GUI in the console to test various prompts and configurations with different foundation models.
  • Integrate external data sources: Improve response generation by incorporating external data sources into knowledge bases, which can be queried to augment the responses from foundation models.
  • Develop customer support applications: Build applications that use foundation models, API calls, and knowledge bases to reason and execute tasks for customers.
  • Customize models: Tailor a foundation model for particular tasks or domains by providing training data for fine-tuning or additional pretraining.
  • Boost application efficiency: Optimize the performance of foundation model-based applications by purchasing provisioned throughput.
  • Choose the most suitable model: Compare the outputs of various models using standard or custom prompt data sets to choose the model that best aligns with the requirements of your application.

One major competitor to Amazon Bedrock is Azure AI Studio, which, while still in preview and somewhat under construction, checks most of the boxes for a generative AI application builder. Azure AI Studio is a nice system for picking generative AI models, grounding them with RAG using vector embeddings, vector search, and data, and fine-tuning them, all to create what Microsoft calls copilots, or AI agents.

Another major competitor is Google Vertex AI’s Generative AI Studio, which allows you to tune foundation models with your own data, using tuning options such as adapter tuning and reinforcement learning from human feedback (RLHF), or style and subject tuning for image generation. Generative AI Studio complements the Vertex AI model garden and foundation models as APIs.

Other possible competitors include LangChain (and LangSmith), Poe, and the ChatGPT GPT Builder. LangChain does require you to do some programming.

Amazon Bedrock model setup

There are two setup tasks for Bedrock: model setup and API setup. You need to request access to models before you can use them. If you want to use the AWS command line interface or any of the AWS SDKs, you also need to install and configure the CLI or SDK.

I didn’t bother with API setup, as I’m concentrating on using the console for the purposes of this review. Completing the model access request form was easier than it looked, and I was granted access to models faster than I expected.

You can’t use a model in Amazon Bedrock until you’ve requested and received permission to use it. Most vendors grant access immediately. Anthropic takes a few minutes, and requires you to fill out a short questionnaire about your planned usage. This screenshot was taken just before my Claude access requests were granted.

Amazon Bedrock model inference parameters

Amazon Bedrock uses slightly different parameters to control the response of models than, say, OpenAI. Bedrock controls randomness and diversity using the temperature of the probability distribution, the top K, and the top P. It controls the length of the output with the response length, penalties, and stop sequences.

Temperature modulates the probability for the next token. A lower temperature leads to more deterministic responses, and a higher temperature leads to more random responses. In other words, choose a lower temperature to increase the likelihood of higher-probability tokens and decrease the likelihood of lower-probability tokens; choose a higher temperature to increase the likelihood of lower-probability tokens and decrease the likelihood of higher-probability tokens. For example, a high temperature would allow the completion of “I hear the hoof beats of” to include unlikely beasts like unicorns, while a low temperature would weight the output to likely ungulates like horses.

Top K is the number of most-likely candidates that the model considers for the next token. Lower values limit the options to more likely outputs, like horses. Higher values allow the model to choose less likely outputs, like unicorns.

Top P is the percentage of most-likely candidates that the model considers for the next token. As with top K, lower values limit the options to more likely outputs, and higher values allow the model to choose less likely outputs.

Response length controls the number of tokens in the generated response. Penalties can apply to length, repeated tokens, frequency of tokens, and type of tokens in a response. Stop sequences are sequences of characters that stop the model from generating further tokens.
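
Programmatically, these parameters travel in the model-specific request body. Below is a hedged sketch using boto3 and the Llama 2 Chat 70B model that appears in the examples later in this review; the body field names follow the Llama 2 convention, and other model families name their parameters differently:

# Hedged sketch: invoking a Bedrock model with inference parameters.
# The body keys (prompt, temperature, top_p, max_gen_len) follow the
# Llama 2 convention; other model families use different field names.
import json
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.invoke_model(
    modelId="meta.llama2-70b-chat-v1",
    body=json.dumps({
        "prompt": "I hear the hoof beats of",
        "temperature": 0.5,   # lower = more deterministic (horses, not unicorns)
        "top_p": 0.9,         # sample from the top 90% of probability mass
        "max_gen_len": 128,   # response length cap, in tokens
    }),
)
print(json.loads(response["body"].read())["generation"])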

Amazon Bedrock prompts, examples, and playgrounds

Amazon Bedrock currently displays 33 examples of generative AI model usage, and offers three playgrounds. Playgrounds provide a console environment to experiment with running inference on different models and with different configurations. You can start with one of the playgrounds (chat, text, or image), select a model, construct a prompt, and set the metaparameters. Or you can start with an example and open it in the appropriate playground with the model and metaparameters pre-selected and the prompt pre-populated. Note that you need to have been granted access to a model before you can use it in a playground.

Amazon Bedrock examples demonstrate prompts and parameters for various supported models and tasks. Tasks include summarization, question answering, problem solving, code generation, text generation, and image generation. Each example shows a model, prompt, parameters, and response, and presents a button you can press to open the example in a playground. The results you get in the playground may or may not match what is shown in the example, especially if the parameters allow for lower-probability tokens.

Our first example shows arithmetic word problem solving using a chain-of-thought prompt and the Llama 2 Chat 70B v1 model. There are several points of interest in this example. First, it works with a relatively small open-source chat model. (As an aside, there’s a related example that uses a 7B (billion) parameter model instead of the 70B parameter model used here; it also works.) Second, the chain-of-thought action is triggered by a simple addition to the prompt, “Let’s think step by step.” Note that if you remove that line, the model often goes off the rails and generates a wrong answer.

The chain-of-thought problem-solving example uses a Llama 2 chat model and presents a typical 2nd or 3rd grade arithmetic word problem. Note the [INST]You are a…[/INST] block at the beginning of the prompt. This seems to be specific to Llama. You’ll see other models respond to different formats for defining instructions or system prompts.

The chain-of-thought problem-solving example running in the Amazon Bedrock Chat playground. This particular set of prompts and hyperparameters usually gives correct answers, although not in the exact same format every time. If you remove the “Let’s think step by step” part of the prompt it usually gives wrong answers. The temperature setting of 0.5 asks for moderate randomness in the probability mass function, and the top P setting of 0.9 allows the model to consider less likely outputs.

Our second example shows contract entity extraction using Cohere’s Command text generation model. Text LLMs (large language models) often allow for many different text processing functions.

Amazon Bedrock contract entity extraction example using Cohere’s Command text generation model. Note that the instruction here is on the first line followed by a colon, and then the contract body follows.

Contract entity extraction example running in the Amazon Bedrock text playground. Note that there was an opportunity for additional interaction in the playground, which didn’t show up in the example. While the temperature of this run was 0.9, Cohere’s Command model takes temperature values up to 5. The top p value is set to 1 (and displayed at 0.99) and the top k parameter is not set. These allow for high randomness in the generated text.

Our final example shows image inpainting, an application of image generation that uses a reference image, a mask, and prompts to produce a new image. Up until now, I’ve only done AI image inpainting in Adobe Photoshop, which has had the capability for a while.

Amazon Bedrock’s image inpainting example uses the Titan Image Generator G1 model. Note the reference image and mask image in the image configuration.

In order to actually select the flowers for inpainting, I had to move the mask from the default selection of the backpack to the area containing the white flowers in the reference image. When I didn’t do that, orange flowers were generated in front of the backpack.

Successful inpainting in Amazon Bedrock. Note that I could have used the mask prompt to refine the mask for complex mask selections in noncontiguous areas, for example selecting the flowers and the books. You can use the Info links to see explanations of individual hyperparameters.

Amazon Bedrock orchestration

Amazon Bedrock orchestration currently includes importing data sources into knowledge bases that you can then use for setting up RAG, and creating agents that can execute actions. These are two of the most important techniques available for building generative AI applications, falling between simple prompt engineering and expensive and time-consuming continued pre-training or fine-tuning.

Using knowledge bases takes multiple steps. Start by importing your data sources into an Amazon S3 bucket. When you do that, specify the chunking you’d like for your data. The default is approximately 300 tokens per chunk, but you can set your own size. Then set up your vector store and embeddings model in the database you prefer, or allow AWS to use its default of Amazon OpenSearch Serverless. Then create your knowledge base from the Bedrock console, ingest your data sources, and test your knowledge base. Finally, you can connect your knowledge base to a model for RAG, or take the next step and connect it to an agent. There’s a good one-hour video about this by Mani Khanuja, recorded at AWS re:Invent 2023.
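
Once a knowledge base is in place, querying it for RAG is a single call. A hedged sketch with boto3; the knowledge base ID and model ARN are placeholders:

# Hedged sketch: RAG against a Bedrock knowledge base with the
# retrieve_and_generate API. knowledgeBaseId and modelArn are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agent_runtime.retrieve_and_generate(
    input={"text": "What does our travel policy say about business class?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID12EXAMPLE",
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-v2",
        },
    },
)
print(response["output"]["text"])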

Agents orchestrate interactions between foundation models, data sources, software applications, and prompts, and call APIs to take actions. In addition to the components of RAG, agents can follow instructions, use an OpenAPI schema to define the APIs that the agent can invoke, and/or invoke a Lambda function.

Amazon Bedrock knowledge base creation and testing starts with this screen. There are several more steps.

Amazon Bedrock model assessment and deployment

The Assessment and Deployment panel in Amazon Bedrock contains functionality for model evaluation and provisioned throughput.

Model evaluation supports automatic evaluation of a single model, manual evaluation of up to two models using your own work team, and manual evaluation of as many models as you wish using an AWS-managed work team. Automatic evaluation uses recommended metrics, which vary depending on the type of task being evaluated, and can either use your own prompt data or built-in curated prompt data sets.

Provisioned throughput allows you to purchase dedicated capacity to deploy your models. Pricing varies depending on the model that you use and the level of commitment you choose.

Automatic model evaluation selection in Amazon Bedrock. Bedrock can also set up human model evaluations. The metrics and data sets used vary with the task type being evaluated.

Amazon Bedrock’s provisioned throughput isn’t cheap, and it isn’t available for every model. Here we see an estimated monthly cost of provisioning five model units of the Llama 2 Chat 13B model for one month. It’s $77.3K. Upping the term to six months drops the monthly cost to $47.7K. You can’t edit the provisioned model units or term once you’ve purchased the throughput.

Model customization methods

It’s worth discussing ways of customizing models in general at this point. Below we’ll talk specifically about the customization methods implemented in Amazon Bedrock.

Prompt engineering, as shown above, is one of the simplest ways to customize a generative AI model. Typically, models accept two prompts, a user prompt and a system or instruction prompt, and generate an output. You normally change the user prompt all the time, and use the system prompt to define the general characteristics you want the model to take on. Prompt engineering is often sufficient to define the way you want a model to respond for a well-defined task, such as generating text in specific styles by presenting sample text or question-and-answer pairs. You can easily imagine creating a prompt for “Talk Like a Pirate Day.” Ahoy, matey.
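
As a trivial sketch of that split, using the Llama-style [INST] instruction block shown in the examples above (the pirate instruction is, of course, illustrative):

# Minimal sketch: prompt engineering is just shaping the strings you send.
# The [INST]...[/INST] system block follows the Llama 2 convention shown in
# the chain-of-thought example earlier; the content is illustrative.
system_prompt = "[INST]You are a pirate. Answer every question in pirate speak.[/INST]"
user_prompt = "What's the weather like today?"

full_prompt = f"{system_prompt}\n{user_prompt}"
# full_prompt goes into the body's "prompt" field of invoke_model,
# as in the inference-parameters sketch earlier.
print(full_prompt)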

Retrieval-augmented generation helps to ground LLMs with specific sources, often sources that weren’t included in the models’ original training. As you might guess, RAG’s three steps are retrieval from a specified source (the knowledge base in Amazon Bedrock parlance), augmentation of the prompt with the context retrieved from the source, and then generation using the model and the augmented prompt.

RAG procedures often use embedding to limit the length and improve the relevance of the retrieved context. Essentially, an embedding function takes a word or phrase and maps it to a vector of floating point numbers; these are typically stored in a database that supports a vector search index. The retrieval step then uses a semantic similarity search, typically using the cosine of the angle between the query’s embedding and the stored vectors, to find “nearby” information to use in the augmented prompt. Search engines usually do the same thing to find their answers.
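
Stripped of the database machinery, the three steps reduce to a short loop. An illustrative skeleton in which keyword lookup stands in for the embedding-based similarity search a real system would use:

# Illustrative RAG skeleton: retrieve, augment, generate. The retrieve()
# step is a toy keyword match standing in for embedding + vector search.
knowledge_base = {
    "leave": "Parental leave is 16 weeks, fully paid.",
    "travel": "Fly economy on flights under six hours.",
}

def retrieve(question: str) -> str:
    q = question.lower()
    return next((text for key, text in knowledge_base.items() if key in q),
                "No relevant context found.")

def augment(question: str) -> str:
    return (f"Answer using only this context:\n{retrieve(question)}\n\n"
            f"Question: {question}")

prompt = augment("How many weeks of parental leave do we offer?")
# Generate: send `prompt` to any model, e.g. via invoke_model shown earlier.
print(prompt)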

Agents, aka conversational retrieval agents, expand on the idea of conversational LLMs with some combination of tools, running code, embeddings, and vector stores. In other words, they are RAG plus additional steps. Agents often help to specialize LLMs to specific domains and to tailor the output of the LLM. Azure Copilots are usually agents; Google and Amazon use the term agents. LangChain and LangSmith simplify building RAG pipelines and agents.

Fine-tuning large language models is a supervised learning process that involves adjusting the model’s parameters to a specific task. It’s done by training the model on a smaller, task-specific data set that’s labeled with examples relevant to the target task. Fine-tuning often takes hours or days using many server-level GPUs and requires hundreds or thousands of tagged exemplars. It’s still much faster than extended pre-training.

Pre-training is the unsupervised learning process on huge text data sets that teaches LLMs the basics of language and creates a generic base model. Extended or continued pre-training adds unlabeled domain-specific or task-specific data sets to the base model to specialize the model, for example to add a language, add terms for a specialty such as medicine, or add the ability to generate code. Continued pre-training (using unsupervised learning) is often followed by fine-tuning (using supervised learning).

Customizing models in Amazon Bedrock with fine-tuning and continued pre-training

Both fine-tuning and continued pre-training tend to be expensive and lengthy processes. Even preparing the data for these can be a challenge. For fine-tuning, the challenge is getting the tagging done within budget. For continued pre-training, the challenge is to find a data set for your domain of interest that doesn’t introduce biases or toxicity of any kind.
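
In API terms, both paths go through the same Bedrock job call, differing mainly in the customization type and the shape of the data. A hedged sketch; the role ARN, S3 URIs, and model name are placeholders, and hyperparameter keys vary by base model:

# Hedged sketch: submitting a Bedrock fine-tuning job. Setting
# customizationType to "CONTINUED_PRE_TRAINING" (with untagged data)
# requests continued pre-training instead. Identifiers are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

bedrock.create_model_customization_job(
    jobName="support-tone-ft-001",
    customModelName="support-tone-model",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "learningRate": "0.00001"},
)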

Amazon Bedrock can create custom models by continued pre-training and/or with fine-tuning. You can manage your models and training jobs from this screen. Note the requirement for purchasing provisioned throughput to deploy your custom model.

Creating a fine-tuning job in Amazon Bedrock. Note that only certain models can currently be fine-tuned: four Amazon models, two Cohere models, and two Meta models.

You can manage your custom model training jobs as well as your custom models in Amazon Bedrock. Note the three status codes for jobs: failed, stopped, and complete. Only completed jobs will get a link from their custom model name. All jobs get links from their job names.

Digging into a training job detail in Amazon Bedrock shows you its source model, when it was started, its status, and various parameters and hyperparameters.

Once you have completed customizing your models in Amazon Bedrock you can manage them on the models tab. You can provision them, open them in the playground, delete them, and open their details.

Model details look similar to training job details in Amazon Bedrock, with a few differences, such as offering purchase and management of provisioned throughput.

While the setup of a continued pre-training job looks similar to the setup of a fine-tuning job, they have some major differences. Continued pre-training is an unsupervised learning job that needs a lot of untagged data and a lot of time. Fine-tuning is a supervised learning job that needs less data (but tagged!) and less time.

Low-code generative AI using PartyRock

To accompany Amazon Bedrock, AWS has released a mostly free low-code platform for learning generative AI and building small AI apps. The introductory PartyRock blog post is by Jeff Barr, and tells you enough that you can dive in yourself; it also supplies links to PartyRock learning resources near the end of the post. If you don’t want to build an app yourself, you can still play with the apps others have built.

Generative AI app building on Bedrock

Amazon Bedrock is a credible competitor to Azure AI Studio. If you’re already committed to AWS rather than Microsoft Azure or Google Cloud, then Bedrock will certainly be a good choice for building and scaling generative AI applications.  Bedrock offers fewer foundation models than Azure AI Studio, and furthermore lacks access to any OpenAI models, but it should do the job for most generative AI apps. Bedrock is currently a little behind Azure AI Studio when it comes to content filters, but that could easily change in the future.

Note that the cost of deploying generative AI apps tends to dwarf the cost of developing them. The cost of using Amazon Bedrock to do prompt engineering and develop RAG apps tends to be low (ignoring the people costs), and the cost of testing these in the Bedrock playgrounds is usually negligible. The cost of fine-tuning tends to be something that might give small companies pause. The cost of continued pre-training may also give medium-size companies pause. But deploy an app with a customized model at scale sufficient to serve a large audience with low lag for a long period of time, and soon you’re talking about real money.

Cost: Pricing is based on the model, the volume of input tokens and output tokens, and on whether you have purchased provisioned throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console.

Platform: Browser-based, hosted on AWS. API access available.

Posted Under: Tech Reviews
Can MariaDB’s commercial business be saved?

Posted on 21 February, 2024

A non-binding proposal to acquire MariaDB, the provider of the relational database management system (RDBMS) of the same name—a fork of the open-source MySQL database—has sparked speculation about the company’s future and what an acquisition would mean for its enterprise customers.

The proposal was for MariaDB PLC, the firm that provides database services and SaaS offerings built on the core open-source database that is managed by the MariaDB Foundation.

Earlier this month, MariaDB PLC received a proposal of acquisition, to the tune of $37 million, from California-headquartered investment firm K1 Investment Management.

The proposal, which is non-binding and may not result in an actual offer, puts a value of $0.55 on each share of MariaDB—a 189% premium over the database firm’s closing share price on February 5.

A rocky ride for the last year

MariaDB PLC has been on a rocky ride for the past year, including laying off staffers, changing leadership, spinning off parts of its business, and filing cautionary statements with the US Securities and Exchange Commission (SEC).

The company’s path to financial difficulty began in December 2022 when it decided to go public via the special purpose acquisition company (SPAC) route with Angel Pond Holdings.

After going public, the company saw its market capitalization plummet from $445 million to just over $10 million by the end of 2023. The sharp drop in value can be attributed to the company’s poor quarterly performance and history of losses since the last quarter of 2022, according to statements filed with the SEC.

While the first quarter of 2023 saw the company lay off staffers, the second quarter saw it file cautionary statements about its financial health via a prospectus. The company said at the time that it was looking at attracting financial investments to keep the lights on.

This was followed by the New York Stock Exchange (NYSE) issuing a warning to the company in September. The exchange warned MariaDB that it was failing to meet listing rules, which require listed firms to keep their market capitalization from falling below $50 million over a 30-day trading period.

October brought in more woes for the database-as-a-service provider with the company being forced to lay off 28% of its workforce and shut down two of its products, MariaDB Xpand and MariaDB SkySQL. Three months later, Microsoft announced that Azure Database for MariaDB was scheduled for retirement by September 19, 2025.

In the meantime, the company received a takeover proposal from existing investor Runa Capital, which didn’t work out. Then an associate of Runa Capital named RP Ventures offered MariaDB a loan of $26.5 million at 10% interest.

Commercial business on the brink

The continued losses that forced layoffs, de-listing, and product shutdowns may have also ensured the ruin of MariaDB’s commercial business, experts said.

More enterprises have been switching from the MariaDB Enterprise Server edition to the MariaDB Community Server edition ever since Microsoft announced that Azure Database for MariaDB will reach end of life in 2025, according to Thomas Spoelstra, a database expert with Dutch database management services firm OptimaData.

OptimaData manages multiple database management systems including Microsoft SQL Server, Oracle Database, Sybase, MySQL, MariaDB, MongoDB, and PostgreSQL.

MariaDB Enterprise always has been less popular than the community edition, Spoelstra said, adding that most of OptimaData’s clients use the latter because technical support for MariaDB Enterprise is “very” expensive.

This results in most enterprises using the community edition along with MariaDB offerings such as the Galera Cluster, the expert explained.

Additionally, Spoelstra said that the financial difficulties have not only forced enterprise customers to look at other options but have also pushed cloud service providers to slowly steer away from MariaDB’s commercial offerings.

Samsung is one example of an enterprise customer that might face challenges due to the shuttering of MariaDB Xpand and eventually look at other options. The switch could prove costly for the Korean giant, which uses 50 MariaDB Xpand nodes to operate a single database hosting data for its smartphone customers.

An email sent to Samsung seeking more details on their MariaDB investment went unanswered.

In addition, the Korean electronics giant’s cloud and technology services division, Samsung SDS, offers MariaDB as a managed database offering. A separate email seeking responses on the availability of the service and its planned continuity also went unanswered.

Customers losing confidence

The discontinuation of offerings by MariaDB has forced enterprise customers to lose confidence in the company, said Tony Baer, principal analyst at dbInsight. These kinds of developments will make any customer enterprise insecure about whether their investments will be next for the chopping block, Baer added.

Although the decisions, such as laying off staffers or discontinuation of offerings, were taken by MariaDB to reduce costs, these decisions have impacted the profile of the company, according to Matt Aslett, director at Ventana Research, an arm of research and advisory firm ISG.

The impact has been profound, especially in relation to the company’s DBaaS and distributed SQL offerings, which were potential areas of long-term growth and innovation, Aslett explained.

Given the current plight of MariaDB’s commercial business, Baer said that he sees any acquisition activity as “nothing more than a Hail Mary pass to protect what are now stranded legacy customer investments.”

In fact, if Spoelstra is correct, an acquisition by K1 or any other company will further accelerate the movement of customers from MariaDB to MySQL, ProxySQL, and other options.

Presently, MariaDB, according to data from 6Sense, has a 2.08% share of the relational database market category, which is 0.07% less than what the company had reported in April of last year.

For the quarter ended December, MariaDB PLC posted a net loss of $8.8 million, driven by interest expense, restructuring costs, and costs associated with the discontinuation of products.

Posted Under: Database
Mainframes are dead! Long live cloud computing!

Posted on 16 February, 2024

One of the things that has often driven me nuts with the rise of cloud computing is the assumption that the demise of the mainframe is a foregone conclusion. I’ve often written about the reality that not all applications and data sets belong on the cloud, especially ones that reside on mainframes.

Although I have often been laughed out of a meeting for this opinion, adoption patterns have proved me correct. It’s often spun as a pushback on cloud computing itself, but it’s just pragmatism.

The truth is that we’re going to find applications and data sets (more than you think) that are not viable to move to the cloud. I call this the point of saturation: the point when we’re done moving most of the applications to the cloud that are practical to relocate. I’ve had mainframe applications in mind when saying that, and for good reason.

It can be done

Those who push back on this assertion quickly point out that some tools and technologies allow older mainframe applications to be ported to a cloud provider. These solid products set up emulators and code converters to run mainframe-based applications on public cloud providers.

Yes, you can go this route. But should you? The answer is often no. Consider the additional cost, risk, and the reality that these applications will have value for much longer than many predicted.

A survey from application modernization firm Advanced found that digital transformation is typically a priority. However, that often does not mean the end for the mainframe. Indeed, only 6% of respondents believed alternative technologies would replace the mainframe shortly.

More than half of the companies (52%) plan to maintain or grow their dependency on mainframes. Moreover, more than half of respondents (56%) said mainframes are their preferred platform for core applications.

Coexistence is key

I’ve always seen an integration and coexistence policy as a better path, depending on how an enterprise uses the mainframe. Even with the explosive interest in generative AI, where the mainframe won’t be a preferred platform (see the same report), mainframes can become a primary source of training data. They sometimes hold historical data that goes back 50 years. That’s invaluable for building large language models, which should learn from old as well as new data.

I don’t mean that we should prefer a mainframe platform over any other platform; it should be fairly considered—as all platform options should be. Sometimes the cloud will be a better host, sometimes edge computing, and in some cases, mainframes will continue to provide value. This “it depends” response drives everyone nuts, but it is usually the correct answer to these problems.

This approach creates a digital ecosystem of many different platforms, each the best platform for its specific use cases. Thus, we also need to get better at managing complexity and heterogeneity, which enterprises are not excelling at today. Many enterprises fail to find value in cloud deployments due to too much operational complexity and no finops oversight. Whether you use mainframes or not, you need to address that problem.

I’m not advocating for mainframe platforms. They come with issues, including the big one that mainframe developers and operators are retiring, and there is a shortage of mainframe talent. Many younger IT pros are not attracted to the mainframe space due to its lack of “coolness.”

However, those who understand mainframes and cloud-based platforms are in high demand, typically commanding salaries 20%-30% above their peers. Even cloud architects who know how to interact with mainframes are often paid a premium. Are you noticing a pattern?

I’m a pragmatist. We will use the platforms that can return the most value to the business. I don’t care what that is if it is the most optimized solution. That should be an architect’s main objective.

Posted Under: Database
Oracle Autonomous Database adds AI conversation support

Posted on 15 February, 2024

Oracle has updated its Autonomous Database offering in an effort to maintain its lead over competing cloud-based database services from rivals such as AWS, Google Cloud, IBM, and Snowflake. Oracle Autonomous Database is an Oracle Cloud Infrastructure (OCI) service.

Oracle Autonomous Database, which is based on Oracle’s proprietary relational database management system (RDBMS), currently Oracle Database 23c, supports both transactional and analytical workloads.

The key differentiator for Autonomous Database is that its underlying management system automates patching, upgrades, and tuning, handling all routine database maintenance tasks without any manual intervention.

Autonomous Database supports four distinct workloads: transaction processing, analytics and data warehousing, transactions and analytics on JSON data, and APEX application development, a managed low-code platform for building and deploying data-driven applications.

The latest updates include support for conversations in Select AI, a new spatial enhancement in Oracle Machine Learning, a no-code model monitoring interface, and a new interface for Autonomous Database Graph Studio.

Select AI gets support for conversations

Oracle has added support for conversations to Select AI. Select AI, which was introduced in September last year, allows enterprise users to analyze their data using natural language, and with the help of large language models (LLMs) accessed via the OCI Generative AI service.

However, prior to this update, Select AI didn’t have the ability to remember previous questions, or allow users to ask follow-up questions.

“Select AI now makes that chat history available to the LLM so that it can interpret the context of follow-up questions. Users can now have a ‘conversation’ with their database to explore and narrow down the answers they need,” George Lumpkin, vice president of product management at Oracle, wrote in a blog post.

Enterprise users also can ask Select AI to produce the generated SQL and a description of the query processing, Lumpkin said.
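
Oracle frames this capability around its DBMS_CLOUD_AI package. A heavily hedged sketch of the flow from Python follows; the profile is assumed to have been created with DBMS_CLOUD_AI.CREATE_PROFILE beforehand, and the procedure names, actions, and identifiers here should be verified against Oracle’s documentation:

# Hedged sketch: querying Autonomous Database in natural language through
# the DBMS_CLOUD_AI package. The profile 'GENAI_PROFILE' is assumed to
# exist; names, actions, and connection details are illustrative.
import oracledb

conn = oracledb.connect(user="analyst", password="secret", dsn="mydb_high")
cur = conn.cursor()

# Pick the AI profile for this session.
cur.execute("BEGIN DBMS_CLOUD_AI.SET_PROFILE('GENAI_PROFILE'); END;")

# 'narrate' returns a natural-language answer; 'showsql' would instead
# return the SQL that Select AI generated for the question.
cur.execute("""
    SELECT DBMS_CLOUD_AI.GENERATE(
        prompt       => 'How many customers placed orders in January?',
        profile_name => 'GENAI_PROFILE',
        action       => 'narrate')
    FROM dual
""")
print(cur.fetchone()[0])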

The new capabilities of Select AI, according to research and advisory firm ISG’s executive director David Menninger, will help to ease the burden on developers and improve productivity.

“Without the context of a previous query, it means an enterprise developer would have to repeat a problem or request, modifying it as appropriate. That would quickly become frustrating and defeat the whole purpose,” Menninger said, adding that previously Select AI could just generate a basic structure of the SQL query from a natural language input.

dbInsight’s principal analyst Tony Baer believes that the new update takes AI copilots, or AI systems for coding, to a new level. “In addition to generating code, the language model must also ‘understand’ the logical structure of the database plus the unstructured text that is descriptive metadata. It must have a thorough understanding of query optimization as well,” Baer said.

Although Oracle’s Select AI competes with a growing number of natural language query services, including Microsoft Copilot, GitHub Copilot, Amazon Q, Snowflake Copilot, and Databricks IQ, Baer believes that Select AI’s differentiator is that it can understand the highly complex schemas typical of Oracle Database deployments.

Select AI is accessible to any SQL application and is available as an integrated feature within Autonomous Database, the company said.

No-code interface for model monitoring

In order to help enterprise staff handling machine learning operations (MLOps), Oracle also has added a new no-code interface for monitoring machine learning models.

The new model monitoring interface, according to Oracle’s Lumpkin, will allow enterprise users not only to monitor models but also to tweak them if required.

“For developers, improvement in model performance has long been sought and is a top priority. Examples include climate and weather modeling and improving public safety responsiveness,” said Ron Westfall, research director at The Futurum Group.

“The new capability enables Oracle’s Autonomous Database to streamline the modeling process in relation to competing offers,” Westfall added.

In addition, the company has introduced a new spatial enhancement in Oracle Machine Learning for Python, included as part of the Autonomous Database offering, that enables enterprises to include location relationships in machine learning models for improved model accuracy.

“Data scientists can detect spatial patterns through a quantitative approach—such as spatial clustering, regression, classification, and anomaly detection—without moving the data outside the database or writing complex algorithms on their own,” Lumpkin wrote.

In order to help enterprises gain more insights from their data, the company has added a new user interface for Autonomous Database’s Graph Studio that allows enterprises to create property graph views on resource description framework (RDF) knowledge graphs using a drag-and-drop method.

“RDF knowledge graphs help apply meaning to data relationships by capturing complex associations across data in institutional silos. Enterprises can get additional insights from the data within the knowledge graphs,” Lumpkin wrote.

Oracle ahead of its database rivals?

The new updates, which are focused on managing data, generating insights from data, and speeding application development with the help of machine learning and AI, might give Oracle a leg up against rivals, according to ISG’s Menninger.

“Oracle made bold claims years ago when it introduced Autonomous Database. At the time it was probably ahead of itself in terms of what was realistic, but the technology is catching up and making more autonomous capabilities realistic. As a result of Oracle’s early investment, I think it’s safe to say they have established a lead in providing autonomous database capabilities,” Menninger said.

ISG’s executive director further said that any database with the potential to offload administration and management drudgery would attract enterprises, as no one wants to spend their time on these activities. “As long as the features work and are priced appropriately, they make sense for all enterprises,” he added.

These features and updates also follow the trend of software providers building AI-based capabilities inside their database offerings, thereby curtailing the need for enterprises to move their data to a separate database or data platform to develop AI or generative AI-based applications.  

“The next wave of disruption in the DBMS market will be the emergence of the data ecosystem as the overall data platform,” market research firm Gartner said in a report.

Westfall from The Futurum Group believes that Select AI has put Oracle’s Autonomous Database at the forefront of data platform innovations.

“With Select AI, I see that Oracle is breaking AI ground with a generally available capability for organizations to have a contextual dialogue with their private, proprietary data—intuitively. From the demos it is simple to use so that enterprises of all sizes can use it immediately,” the research director said.

Posted Under: Database
Steampipe unbundled: From cloud APIs to your database

Posted on 8 February, 2024

We’ve seen how Steampipe can unify access to APIs, drive metasearch, enforce KPIs as code, and detect configuration drift. The enabling plugins were, until recently, tightly bound to the Steampipe binary and to the instance of Postgres that Steampipe launches and controls. That led members of Steampipe’s open-source community to ask, “Can we use the plugins in our own Postgres databases?” Now the answer is yes—and more. But let’s focus on Postgres first.

Using a Steampipe plugin as a Postgres foreign data wrapper

Visit Steampipe downloads to find the installer for your OS, and run it to acquire the Postgres foreign data wrapper (FDW) distribution of a plugin—in this case, the GitHub plugin.

$ sudo /bin/sh -c "$(curl -fsSL https://steampipe.io/install/postgres.sh)"
Enter the plugin name: github
Enter the version (latest): 

Discovered:
- PostgreSQL version:   14
- PostgreSQL location:  /usr/lib/postgresql/14
- Operating system:     Linux
- System architecture:  x86_64

Based on the above, steampipe_postgres_github.pg14.linux_amd64.tar.gz will be downloaded, extracted and installed at: /usr/lib/postgresql/14

Proceed with installing Steampipe PostgreSQL FDW for version 14 at /usr/lib/postgresql/14?
- Press 'y' to continue with the current version.
- Press 'n' to customize your PostgreSQL installation directory and select a different version. (Y/n): 

Downloading steampipe_postgres_github.pg14.linux_amd64.tar.gz...
########################################################################################### 100.0%
steampipe_postgres_github.pg14.linux_amd64/
steampipe_postgres_github.pg14.linux_amd64/steampipe_postgres_github.so
steampipe_postgres_github.pg14.linux_amd64/steampipe_postgres_github.control
steampipe_postgres_github.pg14.linux_amd64/steampipe_postgres_github--1.0.sql
steampipe_postgres_github.pg14.linux_amd64/install.sh
steampipe_postgres_github.pg14.linux_amd64/README.md

Download and extraction completed.

Installing steampipe_postgres_github in /usr/lib/postgresql/14...

Successfully installed steampipe_postgres_github extension!

Files have been copied to:
- Library directory: /usr/lib/postgresql/14/lib
- Extension directory: /usr/share/postgresql/14/extension/  

Now connect to your server as usual, using psql or another client, most typically as the postgres user. Then run these commands, which are typical for any Postgres foreign data wrapper. As with all Postgres extensions, you start like this:

CREATE EXTENSION steampipe_postgres_github;

To use a foreign data wrapper, you first create a server:

CREATE SERVER steampipe_github FOREIGN DATA WRAPPER steampipe_postgres_github OPTIONS (config 'token="ghp_..."');

Use OPTIONS to configure the extension to use your GitHub access token. (Alternatively, the standard environment variables used to configure a Steampipe plugin—it’s just GITHUB_TOKEN in this case—will work if you set them before starting your instance of Postgres.)

The tables provided by the extension will live in a schema, so define one:

CREATE SCHEMA github;

Now import the schema defined by the foreign server into the local schema you just created:

IMPORT FOREIGN SCHEMA github FROM SERVER steampipe_github INTO github;

Now run a query!

The foreign tables provided by the extension live in the github schema, so by default you’ll refer to tables like github.github_my_repository. If you set search_path = 'github', though, the schema becomes optional and you can write queries using unqualified table names.
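For example, to set that up for the current session (keeping public on the path so your ordinary tables stay visible):

set search_path = github, public;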

select
  count(*)
from
  github_my_repository;

 count
-------
   468

If you have a lot of repos, the first run of that query will take a few seconds. The second run will return results instantly, though, because the extension includes a powerful and sophisticated cache.

And that’s all there is to it! Every Steampipe plugin is now also a foreign data wrapper that works exactly like this one. You can load multiple extensions in order to join across APIs. Of course you can join any of these API-sourced foreign tables with your own Postgres tables. And to save the results of any query, you can prepend create table NAME as or create materialized view NAME as to a query to persist results as a table or view.
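Here's a hedged sketch of the materialized-view variant, snapshotting a couple of columns (column names drawn from the plugin's current docs; check the Steampipe hub for your version's schema):

create materialized view repo_stars as
select
  name_with_owner,
  stargazer_count
from
  github.github_my_repository;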

Using a Steampipe plugin as a SQLite extension that provides virtual tables

Visit Steampipe downloads to find the installer for your OS, and run it to acquire the SQLite distribution of the same plugin.

$ sudo /bin/sh -c "$(curl -fsSL https://steampipe.io/install/sqlite.sh)"
Enter the plugin name: github
Enter version (latest): 
Enter location (current directory): 

Downloading steampipe_sqlite_github.linux_amd64.tar.gz...
############################################################################ 100.0%
steampipe_sqlite_github.so

steampipe_sqlite_github.linux_amd64.tar.gz downloaded and extracted successfully at /home/jon/steampipe-sqlite.

Here’s the setup. You can place this code in ~/.sqliterc if you want to run it every time you start sqlite.

.load /home/jon/steampipe-sqlite/steampipe_sqlite_github.so

select steampipe_configure_github('
  token="ghp_..."
');

Now you can run the same query as above.

sqlite> select count(*) from github_my_repository;
count(*)
468  

What about the differences between Postgres-flavored and SQLite-flavored SQL? The Steampipe hub is your friend! For example, here are variants of a query that accesses a field inside a JSON column in order to tabulate the languages associated with your gists.

[Screenshot: Postgres and SQLite variants of the gist summary query. Credit: IDG]
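As a rough sketch of the difference (assuming, per the plugin docs, that the gist table's files column holds a JSON array of objects with a language field), the Postgres flavor might read:

select
  files -> 0 ->> 'language' as language,
  count(*)
from
  github.github_my_gist
group by
  language;

while the SQLite flavor would lean on json_extract:

select
  json_extract(files, '$[0].language') as language,
  count(*)
from
  github_my_gist
group by
  language;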

Here too you can load multiple extensions in order to join across APIs. You can join any of these API-sourced foreign tables with your own SQLite tables. And you can prepend create table NAME as to a query to persist results as a table.

Using a Steampipe plugin as a stand-alone export tool

Visit Steampipe downloads to find the installer for your OS, and run it to acquire the export distribution of a plugin. Again, we’ll illustrate using the GitHub plugin.

$ sudo /bin/sh -c "$(curl -fsSL https://steampipe.io/install/export.sh)"
Enter the plugin name: github
Enter the version (latest): 
Enter location (/usr/local/bin): 
Created temporary directory at /tmp/tmp.48QsUo6CLF.

Downloading steampipe_export_github.linux_amd64.tar.gz...
############################################################################## 100.0%
Deflating downloaded archive
steampipe_export_github
Installing
Applying necessary permissions
Removing downloaded archive
steampipe_export_github was installed successfully to /usr/local/bin
$ steampipe_export_github -h
Export data using the github plugin.

Find detailed usage information including table names, column names, and 
examples at the Steampipe Hub: https://hub.steampipe.io/plugins/turbot/github

Usage:
  steampipe_export_github TABLE_NAME [flags]

Flags:
      --config string       Config file data
  -h, --help                help for steampipe_export_github
      --limit int           Limit data
      --output string       Output format: csv, json or jsonl (default "csv")
      --select strings      Column data to display
      --where stringArray   where clause data

There’s no SQL engine in the picture here; this tool is purely an exporter. To export all your gists to a JSON file:

steampipe_export_github github_my_gist --output json > gists.json

To select only some columns and export to a CSV file:

steampipe_export_github github_my_gist --output csv --select "description,created_at,html_url" > gists.csv

You can use --limit to limit the rows returned, and --where to filter them, but mostly you’ll use this tool to quickly and easily grab data that you’ll massage elsewhere, for example in a spreadsheet.
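For example, here's a hedged sketch that filters and truncates (the public column appears in the gist table's documented schema; verify on the hub):

steampipe_export_github github_my_gist --where "public=true" --limit 10 --output jsonl > public_gists.jsonl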

Tap into the Steampipe plugin ecosystem

Steampipe plugins aren’t just raw interfaces to underlying APIs. They use tables to model those APIs in useful ways. For example, the github_my_repository table exemplifies a design pattern that applies consistently across the suite of plugins. From the GitHub plugin’s documentation:

You can own repositories individually, or you can share ownership of repositories with other people in an organization. The github_my_repository table will list repos that you own, that you collaborate on, or that belong to your organizations. To query ANY repository, including public repos, use the github_repository table.

Other plugins follow the same pattern. For example, the Microsoft 365 plugin provides both microsoft_my_mail_message and microsoft_mail_message, and the Google Workspace plugin provides googleworkspace_my_gmail_message and googleworkspace_gmail. Where possible, plugins consolidate views of resources from the perspective of an authenticated user.
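So, to ask about a repository you don't own, you reach for the unqualified table and supply the qualifier its docs require (a sketch; the github_repository table expects a full_name in the where clause):

select * from github.github_repository where full_name = 'turbot/steampipe';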

While plugins typically provide tables with fixed schemas, that’s not always the case. Dynamic schemas, implemented by the Airtable, CSV, Kubernetes, and Salesforce plugins (among others), are another key pattern. Here’s a CSV example using a stand-alone Postgres FDW.

IMPORT FOREIGN SCHEMA csv FROM SERVER steampipe_csv INTO csv OPTIONS(config 'paths=["/home/jon/csv"]');

Now all the .csv files in /home/jon/csv will automagically be Postgres foreign tables. Suppose you keep track of valid owners of EC2 instances in a file called ec2_owner_tags. Here’s a query against the corresponding table.

select * from csv.ec2_owner_tags;
     owner      |            _ctx
----------------+----------------------------
 Pam Beesly     | {"connection_name": "csv"}
 Dwight Schrute | {"connection_name": "csv"}

You could join that table with the AWS plugin’s aws_ec2_instance table to report owner tags on EC2 instances that are (or are not) listed in the CSV file.

select 
    ec2.owner,
    case 
        when csv.owner is null then 'false'
        else 'true'
    end as is_listed
from 
    (select distinct tags ->> 'owner' as owner from aws.aws_ec2_instance) ec2
left join 
    csv.ec2_owner_tags csv on ec2.owner = csv.owner;
     owner      | is_listed
----------------+-----------
 Dwight Schrute | true
 Michael Scott  | false

Across the suite of plugins there are more than 2,300 pre-defined fixed-schema tables that you can use in these ways, plus an unlimited number of dynamic tables. And new plugins are constantly being added by Turbot and by Steampipe’s open-source community. You can tap into this ecosystem using Steampipe or Turbot Pipes, from your own Postgres or SQLite database, or directly from the command line.

Next read this:

Posted Under: Database
How to choose the right NoSQL database

Posted by on 5 February, 2024

This post was originally published on this site

NoSQL databases arose in response to the limitations of using SQL (Structured Query Language) for database queries. NoSQL databases store and manage data in ways that enable high operational speed and a level of flexibility not found in traditional relational database management systems (RDBMSs).

A recent report by Allied Market Research notes the demand for NoSQL databases is on the rise. In 2022, the worldwide NoSQL market generated $7.3 billion in sales, and is estimated to generate $86.3 billion by 2032—a compound annual growth rate of 28 percent for that period. Key factors driving global NoSQL market growth, according to the report, are the exploding demand for big data analytics, a need for more scalable and flexible enterprise database solutions, and the ubiquity of cloud computing platforms and technology.

If your enterprise is considering migrating to NoSQL, you may wonder how to choose the best NoSQL database for your data storage needs. With more than two dozen open source and commercial NoSQL databases available, you have plenty of options to choose from.

This article presents five questions to help guide your NoSQL database buying decision. See the end of the article for an overview of the leading NoSQL databases on the market today.

5 questions to ask before choosing a NoSQL database

  • Is NoSQL the right choice?
  • Which NoSQL data model do we need?
  • What is the latency requirement?
  • How important are scalability and data consistency?
  • How do we want to deploy it?

Is NoSQL the right choice?

Before choosing a NoSQL database, it’s important to be certain that NoSQL is the best choice for your needs. Carl Olofson, research vice president at International Data Corp. (IDC), says “back office transaction processing, high-touch interactive application data management, and streaming data capture” are all good reasons for choosing NoSQL.

Even with these needs in mind, it is important to rule out the possibility that NoSQL is not the right fit for your enterprise, especially because there are tradeoffs to choosing NoSQL over a traditional RDBMS. “The first decision you need to make is why do you need a NoSQL database system,” says Craig Mullins, president and principal consultant at Mullins Consulting. “You need to first understand why an existing relational DBMS cannot fulfill your use case. Relational/SQL database systems are widely installed and most organizations have existing systems and applications deployed on RDBMS with skilled technicians to manage them.”

An alternative to replacing the RDBMS, says Mullins, is polyglot persistence—employing multiple data storage technologies within a single system so as to meet different data storage needs. Rather than “force-fitting everything into a relational mindset,” polyglot persistence lets developers and administrators “choose the appropriate data technology for each use case,” he says.

NoSQL’s core strength is likely its decentralized, scalable, fault-tolerant design, Mullins says. “Most NoSQL database technology is implemented to scale and survive outages,” he says. “Additionally, most NoSQL options are lightweight and require less overhead than a relational DBMS, in terms of CPU and support.”

Which NoSQL data model do we need?

The four main types of NoSQL data models are key-value, document, column store, and graph. Each one fits a different use case. Mullins summarized the strengths of each type as follows:

  • A key-value database is designed to be good for the high-availability, low-latency requirements of applications such as retail and mobile.
  • A document database is best suited for event logging, online shopping, content management, and in-depth analytical processing.
  • A column store database is good for event logging, content management, and counting and/or categorizing for analytics. Column stores can also be set up to automatically expire data.
  • A graph database is well-suited for applications where data elements are interconnected and the number of relationships between them is undetermined. Examples in this use case include social media networks, recommendation engines, logistics and routing, location-aware systems, public transportation links, and network topologies.

“Choosing the right model is essential,” says Noel Yuhanna, vice president and principal analyst at Forrester Research. “The document model is the most popular, including the ability to store JSON documents optimally. The graph model focuses on interconnected data, while the key-value model focuses on a simple key-value pair retrieval, which is not as widely used.”

What data will be stored and how it will be accessed are essential in deciding which data model to choose, Yuhanna says. “Also, some vendor products support all models, which is the multi-model database, offering the flexibility of having multiple models.”

What is the latency requirement?

Is the latency requirement millisecond, subsecond, seconds, minutes, or more?

“If the latency requirement is extremely small, as for a streaming data capture or real-time data-sharing application, one should look at a key-value store,” Olofson says. “Likewise if the data is a simple list or matrix.”

If the data is highly changeable in form, with loosely defined fields, a JSON document database might be more appropriate, Olofson says. This is also true for a high-touch interactive application, which is typically changed frequently to adjust for shifting requirements of the application and user.

“If the latency requirement is not so great and complex combinations must be supported, including bill-of-materials structures or complex groups of interrelated data, then one might consider a graph DBMS,” Olofson says.

How important are scalability and data consistency?

NoSQL databases can break down data into segments—or shards—which can be useful for large deployments running hundreds of terabytes, Yuhanna says.

“Sharding is an essential capability for NoSQL to scale databases,” Yuhanna says. “Customers often look for NoSQL solutions that can automatically expand and shrink nodes in horizontally scaled clusters, allowing applications to scale dynamically.”

Unlike relational databases, which focus on ensuring data consistency for every transaction using ACID compliance, with NoSQL, “you can choose data consistency to be eventually consistent or even relaxed,” Yuhanna says. “With eventual consistency, you can scale quickly and deliver high performance.”

How do we want to deploy it?

Some NoSQL databases can run on-premises, some run only in the cloud, and others run in a hybrid cloud environment, Yuhanna says.

“Also, some NoSQL has native integration with cloud architectures, such as running on serverless and Kubernetes environments,” Yuhanna says. “We have seen serverless as an essential factor for customers, especially those who want to deliver good performance and scale for their applications, but also want to simplify infrastructure management through automation.”

The leading NoSQL databases

Asking yourself and your organization the five questions introduced here will help you choose the right NoSQL database for your needs. Now, let’s look at some of the leading NoSQL databases on the market today.

Aerospike

Aerospike is an open source distributed, real-time, high-performance NoSQL database designed for applications that cannot tolerate downtime and need high read and write throughput.

Aerospike is a multi-model NoSQL and graph database that supports multiple data models simultaneously and, the company says, scales without practical limits, enabling organizations to act in real time across billions of transactions. According to the product documentation, Aerospike uses massive parallelism and a unified storage model to ensure the smallest possible server footprint.

The platform ingests and acts on streaming data at the edge and can combine edge data with data from systems of record, third-party sources, data warehouses, or data lakes for operational, transactional, or analytical workloads. Aerospike can run on premises or as a cloud-managed service.

Amazon DynamoDB

Amazon DynamoDB is a serverless, NoSQL, fully managed database service that provides single-digit millisecond response times at any scale. A strong selling point of this database is that it enables organizations to develop and run applications while only paying for what they use.

This cloud-based service offers encryption at rest to protect sensitive data. It also enables users to create database tables that can store and retrieve any amount of data and serve any level of request traffic. Users can scale a table’s throughput capacity up or down without downtime or performance degradation, according to AWS. Developers and admins can use the AWS Management Console to monitor resource utilization and performance metrics.

DynamoDB also provides on-demand backup capability, allowing users to create full backups of tables for long-term retention and for regulatory compliance needs.

Couchbase

Couchbase Server, distributed by Couchbase Inc., is an open source, multi-model NoSQL database that combines key-value and JSON document storage with a built-in cache. It suits enterprises that need a database delivering performance, multi-model flexibility, scale, and automation.

Organizations use the platform to support social media and mobile applications, content and metadata stores, e-commerce transactions, and other applications. It provides full support for documents, a flexible data model, indexing, full-text search, and MapReduce for real-time analytics.

DataStax

DataStax Astra DB is a fully managed, cloud-native, database-as-a-service built on Apache Cassandra. It scales dynamically and accelerates application development via a range of APIs and programming language options, so developers can build real-time applications fast and scale them without limits, according to the company.

Developers can readily ensure data security with Astra DB’s built-in security mechanisms such as Private Link, IP access controls, single sign-on, application tokens, and data encryption. Astra DB’s serverless architecture (built on microservices and API-first principles) scales automatically based on demand.

Google Bigtable

Bigtable from Google is an enterprise-grade NoSQL database service with low single-digit millisecond latency, limitless scale, and 99.999% availability, according to the company. It supports multi-tenant, mixed operational, and real-time analytical workloads.

Google says Bigtable is a key-value and wide-column store, ideal for fast access to structured, semi-structured, or unstructured data. Latency-sensitive workloads such as personalization are also a good fit for the platform. Bigtable automatically scales resources to adapt to server traffic, handling the associated sharding, replication, and query processing as needed.

MarkLogic

MarkLogic Server is a multi-model database that combines document, semantic graph, geospatial, and relational models into a single, scalable, operational database, according to MarkLogic. It provides native storage for JSON, XML, text, RDF triples, geospatial, and binaries, with unified search-and-query interface capabilities.

The database has a search engine built into its core, providing a single platform to load data from silos and search across all the data. As such, it does not require a bolt-on search engine for full-text search. MarkLogic Server also offers enterprise data security controls such as data loss prevention.

Microsoft Azure Cosmos DB

Azure Cosmos DB is a Microsoft Azure database service that supports multiple NoSQL models and a variety of data formats including JSON and binary data. Microsoft says the database is also fully managed, with Microsoft Azure handling all the underlying infrastructure so that developers can focus on their applications and data.

Azure Cosmos DB offers security tools such as data encryption and data access controls. It features automatic and instant scalability, and open source APIs for MongoDB, Cassandra, and other NoSQL engines.

MongoDB

MongoDB, maintained by MongoDB Inc., is a cross-platform, document-oriented database. The server is published under the Server Side Public License (it moved from the GNU Affero General Public License in 2018), with the drivers under the Apache License.

It uses JSON-like documents with optional schemas, and incorporates operational best practices learned from optimizing thousands of deployments at organizations of all sizes. The cloud-based offering, MongoDB Atlas, handles database management, setup and configuration, software patching, monitoring, and backups, and operates as a distributed database cluster. Key features and capabilities include fully managed backup, point-in-time recovery, a real-time performance panel, and customizable alerting.

Redis

Redis Enterprise, the commercial offering from Redis (formerly Redis Labs), builds on the open source Redis key-value, in-memory NoSQL database. It supports both relaxed and strong consistency, a flexible schema-less model, high availability, and ease of deployment.

The platform supports key-value; a variety of data structures such as lists, sets, bitmaps, and hashes; and a variety of models through pluggable modules such as search, graph, JSON, and time series. Redis Enterprise includes a real-time indexing, querying, and full-text search engine, available on-premises and as a managed service in the cloud.

Next read this:

Posted Under: Database
Why developers should put the database first

Posted by on 5 February, 2024

This post was originally published on this site

Developers of a certain age are used to beginning their application development journey by choosing an operating system. Younger developers, by contrast, might start by picking a cloud. One of the most respected voices in tech suggests a different starting point, one that focuses the attention on arguably the most important component of the application stack: the database. As luminary Kelsey Hightower writes, “Early on I made the mistake of only focusing on operating systems and ignored what I now consider the most important element of computing: data.”

He’s not alone. Gatsby.js Founder Kyle Mathews has reached similar conclusions: “I’ve shifted 100% to [database]-first when prototyping.” In a world where data is the heart of the user experience, it makes sense to take a data-first approach rather than picking a language framework, for example, and taking whichever databases come with it. This approach may be easier said than done for some, however, which makes cloud API platforms like Neurelo a nice on-ramp for developers who want to put the database first but may not know how.

Data, data, everywhere

The big news from all the recent cloud earnings calls is AI and how it drives consumption of cloud services. (See all the mentions of AI in Microsoft’s latest earnings call.) At the heart of AI, of course, is data. Lots and lots of data. Ever since we started calling it “big data,” data has driven cloud adoption and usage.

This is true no matter what we call it or which technologies we use to store and process it. As recent O’Reilly trends show, even as technologies such as Apache Hadoop, Apache Spark, and data warehouses show declines in interest (being “legacy” technologies), interest in data just keeps booming. Interestingly, O’Reilly sees specialized databases like stand-alone vector databases as remaining relatively niche, even as more general-purpose databases such as MySQL add vector capabilities and continue to grow.

This flux in data technologies makes it even harder for developers to keep pace, constantly having to learn new technologies or new ways to use old technologies. Yet developers like Hightower suggest it’s time to make data the first choice in the technology stack, not an afterthought.

For Hightower, one way to remove the complexity from a database focus is to start simple with SQLite, rather than a more complicated database like MySQL. As he says, he’d rather “learn the fundamentals of data and how to manage it.” In other words, “I’d rather spend time learning SQL, not how to administrate a database server, which is a useful skill, but presents a huge barrier to entry.” The ease of SQLite, concurs developer Simon Willison, is that “you don’t have to run a server and you don’t need to figure out authentication.”
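The whole on-ramp can be a single command. For instance, this sketch creates a database file and runs SQL against it, with no server and no credentials:

$ sqlite3 scratch.db "create table notes(id integer primary key, body text);
insert into notes(body) values ('hello');
select * from notes;"
1|hello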

That’s one approach for developers new to databases, but it’s not the only one. So-called NoSQL databases can be more accessible to developers. One of the reasons developers love MongoDB is that it maps closely to the object-oriented programming that developers already know, rather than making them use ORMs (object-relational mappers) to nudge their data into a relational model. (Disclosure: I work for MongoDB.) Another option is Supabase, which provides managed PostgreSQL to enable developers to spend less time worrying about database operations.

What if you don’t want to think about the database at all, or much? Well, Neurelo might be your answer.

Fighting the ‘necessary evil’

Neurelo offers a database abstraction platform that allows developers to work with a database without having to construct complicated SQL queries to create, retrieve, update, or delete (CRUD) records in relational databases like PostgreSQL or MySQL or to build queries using the MongoDB Query API for MongoDB. Instead, Neurelo auto-generates APIs that create both REST and GraphQL endpoints directly from the developers’ data models and schemas.

This is the heart of how Neurelo fights against the “necessary evil” of ORMs that developers have assumed they had to embrace, as Neurelo cofounder and CEO Chirag Shah says in an interview. “It’s an uphill battle” working with an ORM, he explains. “You have to go through a bunch of things, and you have to create those harnesses, and you have to maintain it, because any time your schema changes or your requirement changes you have to recalibrate the whole thing.”

It’s a pain. But there’s hope.

Traditional ORMs obscure the SQL layer, but Neurelo gives full query visibility, as the company notes on its site. Developers can inspect and improve database interactions as their comfort level with the database grows. Neurelo also helps resolve an ORM’s “N+1” problem, whereby the database makes queries in a loop, unnecessarily multiplying the number of database round trips. Neurelo combats this by using eager loading, which retrieves related data in a single query using joins. This minimizes the number of queries hitting the database and improves performance. Neurelo also goes beyond basic CRUD operations to offer advanced join read/write tasks that go across multiple entities.
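To make the N+1 problem concrete with hypothetical authors and books tables: a naive ORM loop issues one query for the authors and then one more per author, while an eager-loading join does the same work in a single round trip. A sketch:

-- N+1 pattern: one query, then one per author
select id, name from authors;
select title from books where author_id = 1;
select title from books where author_id = 2;
-- ...and N more round trips

-- Eager loading: one query, one round trip
select a.name, b.title
from authors a
left join books b on b.author_id = a.id;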

In these and other ways, Neurelo allows developers—whether new to databases or experienced—to spend less time figuring out how to work with their database and more time writing their application. “You virtually get everything instantly,” Shah argues, “and you don’t have to raise a finger.” Instead of hours or weeks, “you go from zero to writing your code in minutes.”

This brings us back to Hightower’s assessment that data should be a developer’s first concern. If he’s right, tools like Neurelo can make that first concern less…concerning, without all the trade-offs that ORMs impose or the hardwiring of code that using a database driver might create. It’s a way to keep data at the heart of an application and much more approachable for developers.

Next read this:

Posted Under: Database
