Monthly Archives: February 2024

Google delivers Gemini LLM support to BigQuery data warehouse

Posted on 29 February, 2024


Google is integrating its Gemini 1.0 Pro large language model with its AI and machine learning platform, Vertex AI, to help enterprises unlock new capabilities of large language models (LLMs), including analysis of text, images, and video.

The Gemini API, which has been made generally available, can also be used in Google’s data warehouse, BigQuery, to develop generative AI-based analytical applications.

“The Gemini 1.0 Pro model is designed for higher input-output scale and better result quality across a wide range of tasks like text summarization and sentiment analysis. You can now access it using simple SQL statements or BigQuery’s embedded DataFrame API from right inside the BigQuery console,” Gerrit Kazmaier, general manager of data analytics at Google Cloud, said in a statement.
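Kazmaier's description maps onto BigQuery ML's remote-model syntax. Below is a minimal sketch of what that looks like from Python; the project, dataset, connection, and table names are hypothetical, and the ML.GENERATE_TEXT call follows BigQuery ML's documented pattern, so verify options against current documentation.

```python
# Hedged sketch: calling Gemini 1.0 Pro from BigQuery with plain SQL.
# Project, dataset, connection, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# One-time setup: a remote model backed by Gemini via a Vertex AI connection.
client.query("""
CREATE OR REPLACE MODEL `my_dataset.gemini_pro`
  REMOTE WITH CONNECTION `us.my_vertex_connection`
  OPTIONS (ENDPOINT = 'gemini-pro');
""").result()

# Summarize rows of a table without leaving the warehouse.
rows = client.query("""
SELECT ml_generate_text_llm_result AS summary
FROM ML.GENERATE_TEXT(
  MODEL `my_dataset.gemini_pro`,
  (SELECT CONCAT('Summarize in one sentence: ', review_text) AS prompt
   FROM `my_dataset.reviews` LIMIT 5),
  STRUCT(0.2 AS temperature, 256 AS max_output_tokens,
         TRUE AS flatten_json_output));
""").result()
for row in rows:
    print(row.summary)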

The company is also expected to integrate the vision version of the Gemini Pro model in the coming months.

In addition, Google is extending Vertex AI’s document processing and speech-to-text APIs to BigQuery to help enterprises analyze unstructured data, such as documents and audio.

Earlier this month, the company announced the preview of BigQuery vector search, which when integrated with Vertex AI can enable vector similarity search on data inside BigQuery along with other features such as retrieval augmented generation (RAG), text clustering and summarization.
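For illustration, combining the two preview pieces, embedding a query and searching against stored vectors, might look like the sketch below. The dataset, model, and column names are hypothetical; the VECTOR_SEARCH and ML.GENERATE_EMBEDDING calls follow the documented preview syntax.

```python
# Hedged sketch of BigQuery vector search: embed a query string, then find
# similar rows. Dataset, model, and column names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT base.doc_id, base.content, distance
FROM VECTOR_SEARCH(
  TABLE `my_dataset.docs`,   -- table with an ARRAY<FLOAT64> 'embedding' column
  'embedding',
  (SELECT ml_generate_embedding_result AS embedding
   FROM ML.GENERATE_EMBEDDING(
     MODEL `my_dataset.embedding_model`,
     (SELECT 'quarterly revenue guidance' AS content))),
  top_k => 5)
"""
for row in client.query(sql).result():
    print(row.doc_id, round(row.distance, 4))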

Hyoun Park, principal analyst at Amalgam Insights, sees RAG support as table stakes for data warehouse vendors these days.

“Retrieval augmented generation is a capability every data warehouse will need to support, as it refers to accessing data from a third party source when someone asks a question,” Park said. “For instance, if someone asks an HR question, the RAG would also ask the employee’s HR system for relevant and current data to contextualize the question. The relevant capability here is in accessing a real-time update of a specific table or data source when someone asks a question to an LLM.”

Other companies are moving in a similar direction. Steven Dickens, vice president and practice lead at The Futurum Group, said that warehouse stalwarts such as Teradata and Cloudera are also adding vector capabilities alongside players such as Oracle and Elastic.


Posted Under: Database
Google Cloud adds vector support to all its database offerings

Posted on 29 February, 2024


Google Cloud on Thursday said it is adding vector support and integrating LangChain with all of its database offerings in an effort to outdo rival cloud service providers, such as Amazon Web Services (AWS), Microsoft, and Oracle.

Cloud service providers have been locked in a race to add generative AI and AI-related capabilities to their database offerings, each seeking a first-mover advantage and a bigger slice of the growing AI and generative AI market.

The new updates include the addition of vector support to relational, key-value, document, and in-memory databases, including Cloud SQL, Spanner, Firestore, Bigtable, and Memorystore for Redis.

Nearest neighbor search is a key differentiator

The vector capabilities added to the databases include approximate nearest neighbor (ANN) search and exact nearest neighbor (KNN) search.

ANN is used to optimize search, that is, to reduce latency, on large datasets, while KNN is used to return more specific or precise search results on smaller datasets, said David Menninger, executive director at ISG’s Ventana Research.

“Support for ANN and KNN reflects that there isn’t a one-size-fits-all approach to vector search and that different use cases require different indexing algorithms to provide the required level of accuracy and performance,” Menninger explained, adding that it is incumbent on developers to understand the nature of their data and applications, and to experiment with various databases to identify the capabilities that best fit the requirements of an individual project.
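The trade-off Menninger describes is easy to see in miniature. The sketch below contrasts an exact full-scan KNN with a toy "approximate" search that probes only part of the corpus; real ANN indexes (IVF, HNSW, ScaNN) are far smarter, but the accuracy-for-latency exchange is the same.

```python
# Illustrative contrast between exact KNN and approximate search, in NumPy.
import numpy as np

rng = np.random.default_rng(0)
corpus = rng.normal(size=(100_000, 128)).astype(np.float32)
corpus /= np.linalg.norm(corpus, axis=1, keepdims=True)
query = corpus[42] + 0.05 * rng.normal(size=128).astype(np.float32)
query /= np.linalg.norm(query)

# Exact KNN: scans every vector, always correct, O(N * d) per query.
top5_exact = np.argsort(-(corpus @ query))[:5]

# Toy "ANN": probe a random 10% of the corpus -- faster, but it can miss
# true neighbors. That is the accuracy/latency trade-off in miniature.
probe = rng.choice(len(corpus), size=10_000, replace=False)
top5_approx = probe[np.argsort(-(corpus[probe] @ query))[:5]]
print(top5_exact, top5_approx)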

The other advantage from Google’s standpoint, according to Forrester’s principal analyst Noel Yuhanna, is that most database vendors don’t offer both ANN and KNN.

“Some vendors support KNN, while others support the ANN approach. ANN is more popular since it is scalable and performs well for large datasets and high-dimensional vectors,” Yuhanna said.

All the vector capabilities added to the database offerings are currently in preview. In July last year, Google launched support for the popular pgvector extension in AlloyDB and Cloud SQL to support building generative AI applications. 

The steady addition of vector capabilities across multiple database offerings since last July makes Google Cloud “more aggressive” than rival hyperscalers, according to Menninger.

However, he did point out that almost all database vendors are adding support for vector and vector search capabilities.

Microsoft, AWS, and Oracle, according to Yuhanna, have some level of vector support capabilities in the works in their respective database offerings.

The announcements by Google Cloud might just give it an edge over its rivals as it seems to be a bit further ahead in the journey than others in terms of making these capabilities generally available to enterprises, Yuhanna said.

Both analysts also pointed out that adding support for vector capabilities will soon become table stakes for data platform vendors to support the development of generative AI applications by complementing large language models (LLMs) with approved enterprise data to improve accuracy and trust.

ISG, according to Menninger, believes that almost all enterprises developing applications based on generative AI will explore the use of vector search and retrieval-augmented generation to complement foundation models with proprietary data and content by the end of 2026.

Rivalry between vector databases and traditional databases

The addition of vector capabilities by hyperscalers and other database vendors to their offerings has resulted in a growing rivalry between vector databases and traditional databases, according to analysts.

While traditional databases have been adding vector capabilities to make their case to enterprises, vector databases have been adding capabilities to make their products more easily consumable by non-experts, they added.

However, ISG’s Menninger believes that more than 50% of enterprises will use traditional database offerings with vector support by 2026, given their reliance on these traditional databases.

Specialized vector databases will still continue to exist, though only for more complex and sophisticated use cases, Menninger said. Pinecone, Chroma, Weaviate, Milvus, and Qdrant are examples of specialized databases.

Explaining further, Menninger said that whether vector search is best performed using a specialist vector database or a general-purpose database will depend on a variety of factors, including the relative reliance of an enterprise on an existing database, developer skills, the size of the dataset, and specific application requirements.

Integration of LangChain with all Google database offerings

Google Cloud is adding LangChain integrations for all of its databases. “We will support three LangChain Integrations that include vector stores, document loaders, and chat messages memory,” said Andi Gutmans, vice president of engineering for Google Cloud’s databases division.

LangChain is a framework for developing applications powered by LLMs, and the database integrations will give developers built-in retrieval-augmented generation (RAG) workflows across their preferred data sources, Gutmans added.

While the LangChain vector stores integration is available for AlloyDB, Cloud SQL for PostgreSQL, Cloud SQL for MySQL, Memorystore for Redis, and Spanner, the document loaders and chat messages memory integration is available for all databases, including Firestore, Bigtable, and SQL Server.
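Based on those announcements, using the vector store integration might look roughly like the sketch below. The package and class names here (langchain-google-cloud-sql-pg, PostgresEngine, PostgresVectorStore) are assumptions drawn from the announcement, and the exact signatures may differ from what ships.

```python
# Hedged sketch of the announced LangChain vector store integration for
# Cloud SQL for PostgreSQL; package, class names, and signatures are
# assumptions based on Google's announcement, not verified API.
from langchain_google_cloud_sql_pg import PostgresEngine, PostgresVectorStore
from langchain_google_vertexai import VertexAIEmbeddings

engine = PostgresEngine.from_instance(
    project_id="my-project", region="us-central1",
    instance="my-cloudsql-instance", database="docs")  # hypothetical names

# Assumed helper to create the embeddings table before first use.
engine.init_vectorstore_table(table_name="doc_embeddings", vector_size=768)

store = PostgresVectorStore.create_sync(
    engine=engine,
    table_name="doc_embeddings",
    embedding_service=VertexAIEmbeddings(model_name="textembedding-gecko@003"),
)
store.add_texts(["AlloyDB AI is generally available."])
print(store.similarity_search("What did Google announce?", k=1))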

Analysts see the addition of LangChain integrations as an “assertive” move from Google.

“LangChain is currently the most popular framework for connecting LLMs to private sources of enterprise data, providing vendor-neutral integration with enterprise databases, as well as commercial machine learning development and deployment environments, such as SageMaker Studio and Vertex AI Studio,” Menninger explained. 

AlloyDB AI made generally available

Google has made its AlloyDB AI offering generally available. It can be used via AlloyDB and AlloyDB Omni.

AlloyDB AI, which entered preview in August last year, is a suite of integrated capabilities that allow developers to build generative AI-based applications using real-time data.

It builds on the basic vector support available with standard PostgreSQL and introduces a simple PostgreSQL function to generate embeddings on data.
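In practice that means embeddings can be generated with a single SQL call from any Postgres client. A minimal sketch, assuming AlloyDB AI's embedding() function is enabled; the connection details are hypothetical.

```python
# Hedged sketch of AlloyDB AI's in-database embedding function, called from
# Python via psycopg2. Host, database, and credentials are hypothetical.
import os
import psycopg2

conn = psycopg2.connect(host="10.0.0.5", dbname="docs", user="postgres",
                        password=os.environ["PGPASSWORD"])  # hypothetical
with conn, conn.cursor() as cur:
    # embedding() turns text into a vector without leaving the database.
    cur.execute(
        "SELECT embedding('textembedding-gecko@001', %s);",
        ("AlloyDB AI generates embeddings in SQL",))
    vec = cur.fetchone()[0]
    print(len(vec), "dimensions")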

AlloyDB AI is an integral part of AlloyDB and AlloyDB Omni, and is available at no additional charge, the company said.


Posted Under: Database
Amazon Bedrock: A solid generative AI foundation

Posted on 27 February, 2024


Amazon Bedrock, Amazon Web Services’ fully managed service for building, deploying, and scaling generative AI applications, offers a catalog of foundation models, implements retrieval-augmented generation (RAG) and vector embeddings, hosts knowledge bases, supports fine-tuning of foundation models, and allows continued pre-training of selected foundation models.

Amazon Bedrock complements the almost 30 other Amazon machine learning services available, including Amazon Q, the AWS generative AI assistant.

There are currently six major features in Amazon Bedrock:

  • Experiment with different models: Use the API or GUI in the console to test various prompts and configurations with different foundation models.
  • Integrate external data sources: Improve response generation by incorporating external data sources into knowledge bases, which can be queried to augment the responses from foundation models.
  • Develop customer support applications: Build applications that use foundation models, API calls, and knowledge bases to reason and execute tasks for customers.
  • Customize models: Tailor a foundation model for particular tasks or domains by providing training data for fine-tuning or additional pretraining.
  • Boost application efficiency: Optimize the performance of foundation model-based applications by purchasing provisioned throughput.
  • Choose the most suitable model: Compare the outputs of various models using standard or custom prompt data sets to choose the model that best aligns with the requirements of your application.

One major competitor to Amazon Bedrock is Azure AI Studio, which, while still in preview and somewhat under construction, checks most of the boxes for a generative AI application builder. Azure AI Studio is a nice system for picking generative AI models, grounding them with RAG using vector embeddings, vector search, and data, and fine-tuning them, all to create what Microsoft calls copilots, or AI agents.

Another major competitor is Google Vertex AI’s Generative AI Studio, which allows you to tune foundation models with your own data, using tuning options such as adapter tuning and reinforcement learning from human feedback (RLHF), or style and subject tuning for image generation. Generative AI Studio complements the Vertex AI model garden and foundation models as APIs.

Other possible competitors include LangChain (and LangSmith), Poe, and the ChatGPT GPT Builder. LangChain does require you to do some programming.

Amazon Bedrock model setup

There are two setup tasks for Bedrock: model setup and API setup. You need to request access to models before you can use them. If you want to use the AWS command line interface or any of the AWS SDKs, you also need to install and configure the CLI or SDK.

I didn’t bother with API setup, as I’m concentrating on using the console for the purposes of this review. Completing the model access request form was easier than it looked, and I was granted access to models faster than I expected.


You can’t use a model in Amazon Bedrock until you’ve requested and received permission to use it. Most vendors grant access immediately. Anthropic takes a few minutes, and requires you to fill out a short questionnaire about your planned usage. This screenshot was taken just before my Claude access requests were granted.

Amazon Bedrock model inference parameters

Amazon Bedrock uses slightly different parameters to control the response of models than, say, OpenAI. Bedrock controls randomness and diversity using the temperature of the probability distribution, the top K, and the top P. It controls the length of the output with the response length, penalties, and stop sequences.

Temperature modulates the probability for the next token. A lower temperature leads to more deterministic responses, and a higher temperature leads to more random responses. In other words, choose a lower temperature to increase the likelihood of higher-probability tokens and decrease the likelihood of lower-probability tokens; choose a higher temperature to increase the likelihood of lower-probability tokens and decrease the likelihood of higher-probability tokens. For example, a high temperature would allow the completion of “I hear the hoof beats of” to include unlikely beasts like unicorns, while a low temperature would weight the output to likely ungulates like horses.

Top K is the number of most-likely candidates that the model considers for the next token. Lower values limit the options to more likely outputs, like horses. Higher values allow the model to choose less likely outputs, like unicorns.

Top P is the percentage of most-likely candidates that the model considers for the next token. As with top K, lower values limit the options to more likely outputs, and higher values allow the model to choose less likely outputs.

Response length controls the number of tokens in the generated response. Penalties can apply to length, repeated tokens, frequency of tokens, and type of tokens in a response. Stop sequences are sequences of characters that stop the model from generating further tokens.
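To make those knobs concrete, here is an illustrative decoding step in NumPy, not Bedrock's internal code, showing how temperature, top K, and top P interact when sampling the next token.

```python
# Illustrative next-token sampling with temperature, top K, and top P.
import numpy as np

def sample_next_token(logits, temperature=1.0, top_k=50, top_p=0.9, rng=None):
    rng = rng or np.random.default_rng()
    # Temperature rescales logits: <1 sharpens, >1 flattens the distribution.
    probs = np.exp(logits / temperature)
    probs /= probs.sum()
    # Top K: keep only the K most likely candidates.
    order = np.argsort(-probs)[:top_k]
    probs_k = probs[order] / probs[order].sum()
    # Top P: truncate to the smallest set whose cumulative mass reaches p.
    keep = np.searchsorted(np.cumsum(probs_k), top_p) + 1
    final = probs_k[:keep] / probs_k[:keep].sum()
    return order[:keep][rng.choice(keep, p=final)]

logits = np.array([4.0, 3.5, 2.0, 0.5, -1.0])  # "horses" ... "unicorns"
print(sample_next_token(logits, temperature=0.3))  # almost always token 0
print(sample_next_token(logits, temperature=2.0))  # more surprises allowed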

Amazon Bedrock prompts, examples, and playgrounds

Amazon Bedrock currently displays 33 examples of generative AI model usage, and offers three playgrounds. Playgrounds provide a console environment to experiment with running inference on different models and with different configurations. You can start with one of the playgrounds (chat, text, or image), select a model, construct a prompt, and set the metaparameters. Or you can start with an example and open it in the appropriate playground with the model and metaparameters pre-selected and the prompt pre-populated. Note that you need to have been granted access to a model before you can use it in a playground.

Amazon Bedrock examples demonstrate prompts and parameters for various supported models and tasks. Tasks include summarization, question answering, problem solving, code generation, text generation, and image generation. Each example shows a model, prompt, parameters, and response, and presents a button you can press to open the example in a playground. The results you get in the playground may or may not match what is shown in the example, especially if the parameters allow for lower-probability tokens.

Our first example shows arithmetic word problem solving using a chain-of-thought prompt and the Llama 2 Chat 70B v1 model. There are several points of interest in this example. First, it works with a relatively small open-source chat model. (As an aside, there’s a related example that uses a 7B (billion) parameter model instead of the 70B parameter model used here; it also works.) Second, the chain-of-thought action is triggered by a simple addition to the prompt, “Let’s think step by step.” Note that if you remove that line, the model often goes off the rails and generates a wrong answer.


The chain-of-thought problem-solving example uses a Llama 2 chat model and presents a typical 2nd or 3rd grade arithmetic word problem. Note the [INST]You are a…[/INST] block at the beginning of the prompt. This seems to be specific to Llama. You’ll see other models respond to different formats for defining instructions or system prompts.


The chain-of-thought problem-solving example running in the Amazon Bedrock Chat playground. This particular set of prompts and hyperparameters usually gives correct answers, although not in the exact same format every time. If you remove the “Let’s think step by step” part of the prompt it usually gives wrong answers. The temperature setting of 0.5 asks for moderate randomness in the probability mass function, and the top P setting of 0.9 allows the model to consider less likely outputs.
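Reproducing that playground run programmatically would look roughly like the sketch below, which uses boto3's bedrock-runtime client with Bedrock's documented Llama 2 request schema and the same temperature and top P settings. The word problem is illustrative, not the exact prompt from Amazon's example.

```python
# Hedged sketch: the chain-of-thought prompt pattern against Llama 2 Chat 70B
# on Bedrock. The prompt text is illustrative; the body fields follow the
# documented Llama 2 schema (prompt, temperature, top_p, max_gen_len).
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
prompt = ("[INST]You are a careful assistant who solves word problems.[/INST]\n"
          "I bought 10 apples, gave away 4, bought 5 more, and ate 1. "
          "How many apples do I have left?\n"
          "Let's think step by step.")  # removing this line hurts accuracy
resp = runtime.invoke_model(
    modelId="meta.llama2-70b-chat-v1",
    body=json.dumps({"prompt": prompt, "temperature": 0.5,
                     "top_p": 0.9, "max_gen_len": 512}))
print(json.loads(resp["body"].read())["generation"])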

Our second example shows contract entity extraction using Cohere’s Command text generation model. Text LLMs (large language models) often allow for many different text processing functions.


Amazon Bedrock contract entity extraction example using Cohere’s Command text generation model. Note that the instruction here is on the first line followed by a colon, and then the contract body follows.


Contract entity extraction example running in the Amazon Bedrock text playground. Note that there was an opportunity for additional interaction in the playground, which didn’t show up in the example. While the temperature of this run was 0.9, Cohere’s Command model takes temperature values up to 5. The top p value is set to 1 (and displayed at 0.99) and the top k parameter is not set. These allow for high randomness in the generated text.

Our final example shows image inpainting, an application of image generation that uses a reference image, a mask, and prompts to produce a new image. Up until now, I’ve only done AI image inpainting in Adobe Photoshop, which has had the capability for a while.


Amazon Bedrock’s image inpainting example uses the Titan Image Generator G1 model. Note the reference image and mask image in the image configuration.


In order to actually select the flowers for inpainting, I had to move the mask from the default selection of the backpack to the area containing the white flowers in the reference image. When I didn’t do that, orange flowers were generated in front of the backpack.


Successful inpainting in Amazon Bedrock. Note that I could have used the mask prompt to refine the mask for complex mask selections in noncontiguous areas, for example selecting the flowers and the books. You can use the Info links to see explanations of individual hyperparameters.

Amazon Bedrock orchestration

Amazon Bedrock orchestration currently includes importing data sources into knowledge bases that you can then use for setting up RAG, and creating agents that can execute actions. These are two of the most important techniques available for building generative AI applications, falling between simple prompt engineering and expensive and time-consuming continued pre-training or fine-tuning.

Using knowledge bases takes multiple steps. Start by importing your data sources into an Amazon S3 bucket. When you do that, specify the chunking you’d like for your data. The default is approximately 300 tokens per chunk, but you can set your own size. Then set up your vector store and embeddings model in the database you prefer, or allow AWS to use its default of Amazon OpenSearch Serverless. Then create your knowledge base from the Bedrock console, ingest your data sources, and test your knowledge base. Finally, you can connect your knowledge base to a model for RAG, or take the next step and connect it to an agent. There’s a good one-hour video about this by Mani Khanuja, recorded at AWS re:Invent 2023.
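Once a knowledge base exists, querying it for RAG reduces to a single call against the bedrock-agent-runtime client's retrieve_and_generate operation. In this sketch the knowledge base ID and model ARN are placeholders.

```python
# Hedged sketch of querying a Bedrock knowledge base with RetrieveAndGenerate.
# The knowledge base ID and model ARN are placeholders.
import boto3

agent_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")
resp = agent_runtime.retrieve_and_generate(
    input={"text": "What does our travel policy say about lodging?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": "KBID1234",  # placeholder
            "modelArn": "arn:aws:bedrock:us-east-1::foundation-model/"
                        "anthropic.claude-v2",
        },
    },
)
print(resp["output"]["text"])  # grounded answer; citations also returned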

Agents orchestrate interactions between foundation models, data sources, software applications, and prompts, and call APIs to take actions. In addition to the components of RAG, agents can follow instructions, use an OpenAPI schema to define the APIs that the agent can invoke, and/or invoke a Lambda function.


Amazon Bedrock knowledge base creation and testing starts with this screen. There are several more steps.

Amazon Bedrock model assessment and deployment

The Assessment and Deployment panel in Amazon Bedrock contains functionality for model evaluation and provisioned throughput.

Model evaluation supports automatic evaluation of a single model, manual evaluation of up to two models using your own work team, and manual evaluation of as many models as you wish using an AWS-managed work team. Automatic evaluation uses recommended metrics, which vary depending on the type of task being evaluated, and can either use your own prompt data or built-in curated prompt data sets.

Provisioned throughput allows you to purchase dedicated capacity to deploy your models. Pricing varies depending on the model that you use and the level of commitment you choose.


Automatic model evaluation selection in Amazon Bedrock. Bedrock can also set up human model evaluations. The metrics and data sets used vary with the task type being evaluated.


Amazon Bedrock’s provisioned throughput isn’t cheap, and it isn’t available for every model. Here we see an estimated monthly cost of provisioning five model units of the Llama 2 Chat 13B model for one month: $77.3K. Upping the term to six months drops the monthly cost to $47.7K. You can’t edit the provisioned model units or term once you’ve purchased the throughput.
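A quick back-of-the-envelope check of those quoted figures:

```python
# Unpacking the quoted Llama 2 Chat 13B provisioned-throughput estimates.
units = 5
no_commit = 77_300          # $/month, one-month term
six_month = 47_700          # $/month, six-month term
print(no_commit / units)    # ~$15,460 per model unit per month
print(six_month / units)    # ~$9,540 per model unit per month
print(f"{1 - six_month / no_commit:.0%} discount for the longer commitment")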

Model customization methods

It’s worth discussing ways of customizing models in general at this point. Below we’ll talk specifically about the customization methods implemented in Amazon Bedrock.

Prompt engineering, as shown above, is one of the simplest ways to customize a generative AI model. Typically, models accept two prompts, a user prompt and a system or instruction prompt, and generate an output. You normally change the user prompt all the time, and use the system prompt to define the general characteristics you want the model to take on. Prompt engineering is often sufficient to define the way you want a model to respond for a well-defined task, such as generating text in specific styles by presenting sample text or question-and-answer pairs. You can easily imagine creating a prompt for “Talk Like a Pirate Day.” Ahoy, matey.
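As a concrete (and deliberately silly) sketch of that pattern: an instruction plus a user prompt sent through Bedrock's documented Claude text-completions schema, with the pirate instruction folded into the prompt. The model ID and settings are examples, not a recommendation.

```python
# Hedged "Talk Like a Pirate Day" prompt-engineering sketch using Bedrock's
# Claude text-completions request schema. Prompt content is illustrative.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
system = "You are a helpful assistant who talks like a pirate."
user = "Explain what a vector database is, in two sentences."
body = {"prompt": f"\n\nHuman: {system}\n\n{user}\n\nAssistant:",
        "max_tokens_to_sample": 200, "temperature": 0.7}
resp = runtime.invoke_model(modelId="anthropic.claude-v2",
                            body=json.dumps(body))
print(json.loads(resp["body"].read())["completion"])  # Ahoy, matey.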


Retrieval-augmented generation helps to ground LLMs with specific sources, often sources that weren’t included in the models’ original training. As you might guess, RAG’s three steps are retrieval from a specified source (the knowledge base in Amazon Bedrock parlance), augmentation of the prompt with the context retrieved from the source, and then generation using the model and the augmented prompt.

RAG procedures often use embedding to limit the length and improve the relevance of the retrieved context. Essentially, an embedding function takes a word or phrase and maps it to a vector of floating point numbers; these are typically stored in a database that supports a vector search index. The retrieval step then uses a semantic similarity search, typically using the cosine of the angle between the query’s embedding and the stored vectors, to find “nearby” information to use in the augmented prompt. Search engines usually do the same thing to find their answers.
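The retrieval step thus reduces to a nearest-vector lookup. Here is a toy version, with a hashing bag-of-words standing in for a real embedding model; only the mechanics (unit vectors, cosine similarity via dot product, argmax) carry over to production systems.

```python
# Toy RAG retrieval: embed chunks, embed the query, pick the nearest chunk.
import numpy as np

def embed(text, dim=64):
    # Hashing bag-of-words as a stand-in for a real embedding model
    # (e.g. Amazon Titan Embeddings); deterministic within one process.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok.strip(".,?!")) % dim] += 1.0
    return v / (np.linalg.norm(v) or 1.0)

chunks = ["Bedrock hosts knowledge bases.",
          "Embeddings map text to vectors.",
          "Stop sequences end generation."]
index = np.stack([embed(c) for c in chunks])

query = embed("How does text become vectors?")
best = int(np.argmax(index @ query))   # cosine similarity on unit vectors
print(chunks[best])                    # -> the embeddings chunk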

Agents, aka conversational retrieval agents, expand on the idea of conversational LLMs with some combination of tools, running code, embeddings, and vector stores. In other words, they are RAG plus additional steps. Agents often help to specialize LLMs to specific domains and to tailor the output of the LLM. Azure Copilots are usually agents; Google and Amazon use the term agents. LangChain and LangSmith simplify building RAG pipelines and agents.

Fine-tuning large language models is a supervised learning process that involves adjusting the model’s parameters to a specific task. It’s done by training the model on a smaller, task-specific data set that’s labeled with examples relevant to the target task. Fine-tuning often takes hours or days using many server-level GPUs and requires hundreds or thousands of tagged exemplars. It’s still much faster than extended pre-training.

Pre-training is the unsupervised learning process on huge text data sets that teaches LLMs the basics of language and creates a generic base model. Extended or continued pre-training adds unlabeled domain-specific or task-specific data sets to the base model to specialize the model, for example to add a language, add terms for a specialty such as medicine, or add the ability to generate code. Continued pre-training (using unsupervised learning) is often followed by fine-tuning (using supervised learning).

Customizing models in Amazon Bedrock with fine-tuning and continued pre-training

Both fine-tuning and continued pre-training tend to be expensive and lengthy processes. Even preparing the data for these can be a challenge. For fine-tuning, the challenge is getting the tagging done within budget. For continued pre-training, the challenge is to find a data set for your domain of interest that doesn’t introduce biases or toxicity of any kind.
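In Bedrock, both customization methods run through the same job API. Below is a hedged sketch of starting one with boto3's create_model_customization_job; the ARNs, S3 URIs, and hyperparameters are placeholders, and supported values vary by base model.

```python
# Hedged sketch of a Bedrock model customization job. Role ARN, model ARN,
# S3 URIs, and hyperparameter values are placeholders.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")
bedrock.create_model_customization_job(
    jobName="sentiment-ft-001",
    customModelName="sentiment-llama2-13b",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="arn:aws:bedrock:us-east-1::foundation-model/"
                        "meta.llama2-13b-v1",
    customizationType="FINE_TUNING",   # or "CONTINUED_PRE_TRAINING"
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={"epochCount": "2", "batchSize": "1",
                     "learningRate": "0.00001"},
)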


Amazon Bedrock can create custom models by continued pre-training and/or with fine-tuning. You can manage your models and training jobs from this screen. Note the requirement for purchasing provisioned throughput to deploy your custom model.


Creating a fine-tuning job in Amazon Bedrock. Note that only certain models can currently be fine-tuned: four Amazon models, two Cohere models, and two Meta models.


You can manage your custom model training jobs as well as your custom models in Amazon Bedrock. Note the three status codes for jobs: failed, stopped, and complete. Only completed jobs will get a link from their custom model name. All jobs get links from their job names.


Digging into a training job detail in Amazon Bedrock shows you its source model, when it was started, its status, and various parameters and hyperparameters.


Once you have completed customizing your models in Amazon Bedrock you can manage them on the models tab. You can provision them, open them in the playground, delete them, and open their details.


Model details look similar to training job details in Amazon Bedrock, with a few differences, such as offering purchase and management of provisioned throughput.


While the setup of a continued pre-training job looks similar to the setup of a fine-tuning job, they have some major differences. Continued pre-training is an unsupervised learning job that needs a lot of untagged data and a lot of time. Fine-tuning is a supervised learning job that needs less data (but tagged!) and less time.

Low-code generative AI using PartyRock

To accompany Amazon Bedrock, AWS has released a mostly free low-code platform for learning generative AI and building small AI apps. The introductory PartyRock blog post is by Jeff Barr, and tells you enough that you can dive in yourself; it also supplies links to PartyRock learning resources near the end of the post. If you don’t want to build an app yourself, you can still play with the apps others have built.

Generative AI app building on Bedrock

Amazon Bedrock is a credible competitor to Azure AI Studio. If you’re already committed to AWS rather than Microsoft Azure or Google Cloud, then Bedrock will certainly be a good choice for building and scaling generative AI applications.  Bedrock offers fewer foundation models than Azure AI Studio, and furthermore lacks access to any OpenAI models, but it should do the job for most generative AI apps. Bedrock is currently a little behind Azure AI Studio when it comes to content filters, but that could easily change in the future.

Note that the cost of deploying generative AI apps tends to dwarf the cost of developing them. The cost of using Amazon Bedrock to do prompt engineering and develop RAG apps tends to be low (ignoring the people costs), and the cost of testing these in the Bedrock playgrounds is usually negligible. The cost of fine-tuning tends to be something that might give small companies pause. The cost of continued pre-training may also give medium-size companies pause. But deploy an app with a customized model at scale sufficient to serve a large audience with low lag for a long period of time, and soon you’re talking about real money.

Cost: Pricing is based on the model, the volume of input tokens and output tokens, and on whether you have purchased provisioned throughput for the model. For more information, see the Model providers page in the Amazon Bedrock console.

Platform: Browser-based, hosted on AWS. API access available.


Posted Under: Tech Reviews
Can MariaDB’s commercial business be saved?

Posted on 21 February, 2024


A non-binding proposal to acquire MariaDB, the provider of the relational database management system (RDBMS) of the same name, a fork of the open-source MySQL database, has sparked speculation about the company’s future and what an acquisition would mean for its enterprise customers.

The proposal was for MariaDB PLC, the firm that provides database services and SaaS offerings built on the core open-source database that is managed by the MariaDB Foundation.

Earlier this month, MariaDB PLC received an acquisition proposal, to the tune of $37 million, from California-headquartered investment firm K1 Investment Management.

The proposal, which is non-binding and may not result in an actual offer, values each MariaDB share at $0.55, a 189% premium over the database firm’s closing share price on February 5.

A rocky ride for the last year

MariaDB PLC has been on a rocky ride for the past year, including laying off staffers, changing leadership, spinning off parts of its business, and filing cautionary statements with the US Securities and Exchange Commission (SEC).

The company’s path to financial difficulty began in December 2022 when it decided to go public via the special purpose acquisition company (SPAC) route with Angel Pond Holdings.

After going public, the company saw its market capitalization plummet from $445 million to just over $10 million by the end of 2023. The sharp drop in value can be attributed to the company’s poor quarterly performance and history of losses since the last quarter of 2022, according to statements filed with the SEC.

While the first quarter of 2023 saw the company lay off staffers, the second quarter saw it file cautionary statements about its financial health via a prospectus. The company said at the time that it was looking at attracting financial investments to keep the lights on.

This was followed by the New York Stock Exchange (NYSE) issuing a warning to the company in September. The exchange warned MariaDB that it was failing to meet listing norms, which require listed firms to keep their market capitalization from dropping below $50 million over a 30-day trading period.

October brought more woes for the database-as-a-service provider, with the company forced to lay off 28% of its workforce and shut down two of its products, MariaDB Xpand and MariaDB SkySQL. Three months later, Microsoft announced that Azure Database for MariaDB was scheduled for retirement by September 19, 2025.

In the meantime, the company received a takeover proposal from existing investor Runa Capital, which didn’t work out. Then RP Ventures, an associate company of Runa Capital, offered MariaDB a loan of $26.5 million at 10% interest.

Commercial business on the brink

The continued losses that forced layoffs, de-listing, and product shutdowns may have also ensured the ruin of MariaDB’s commercial business, experts said.

More enterprises have been switching from the MariaDB Enterprise Server edition to the MariaDB Community Server edition ever since Microsoft announced the service’s 2025 end of life, according to Thomas Spoelstra, a database expert with Dutch database management services firm OptimaData.

OptimaData manages multiple database management systems including Microsoft SQL Server, Oracle Database, Sybase, MySQL, MariaDB, MongoDB, and PostgreSQL.

MariaDB Enterprise has always been less popular than the community edition, Spoelstra said, adding that most of OptimaData’s clients use the latter because technical support for MariaDB Enterprise is “very” expensive.

This results in most enterprises using the community edition along with MariaDB offerings such as the Galera Cluster, the expert explained.

Additionally, Spoelstra said that the financial difficulties have not only forced enterprise customers to look at other options but have also pushed cloud service providers to slowly steer away from MariaDB’s commercial offerings.

Samsung is one example of an enterprise customer who might face challenges due to the shuttering of MariaDB Xpand and eventually look at other options. This could pose a significant cost for the Korean giant, as the company uses 50 MariaDB Xpand nodes to operate a single database where it hosts data of its smartphone customers.

An email sent to Samsung seeking more details on their MariaDB investment went unanswered.

In addition, the Korean electronics giant’s cloud and technology services division, Samsung SDS, offers MariaDB as a managed database offering. A separate email seeking responses on the availability of the service and its planned continuity also went unanswered.

Customers losing confidence

The discontinuation of offerings by MariaDB has forced enterprise customers to lose confidence in the company, said Tony Baer, principal analyst at dbInsight. These kinds of developments will make any customer enterprise insecure about whether their investments will be next for the chopping block, Baer added.

Although decisions such as laying off staffers and discontinuing offerings were taken by MariaDB to reduce costs, they have damaged the company’s profile, according to Matt Aslett, director at Ventana Research, an arm of research and advisory firm ISG.

The impact has been profound, especially in relation to the company’s DBaaS and distributed SQL offerings, which were potential areas of long-term growth and innovation, Aslett explained.

Given the current plight of MariaDB’s commercial business, Baer said that he sees any acquisition activity as “nothing more than a Hail Mary pass to protect what are now stranded legacy customer investments.”

In fact, if Spoelstra is correct, an acquisition by K1 or any other company will further accelerate the movement of customers from MariaDB to MySQL, ProxySQL, and other options.

Presently, MariaDB, according to data from 6Sense, has a 2.08% share of the relational database market category, 0.07 percentage points less than the company reported in April of last year.

For the quarter ended December, MariaDB PLC posted a net loss of $8.8 million, driven by interest expense, restructuring costs, and costs associated with the discontinuation of products.


Posted Under: Database
