Monthly Archives: March 2023

MariaDB SkySQL adds serverless analytics, cost management features

Posted on 30 March, 2023

MariaDB is adding features such as serverless analytics and cost management to the new release of its managed database-as-a-service (DBaaS) SkySQL, it said Thursday.

SkySQL, which is a managed instance of the MariaDB platform, offers OLAP (online analytical processing) and OLTP (online transaction processing) along with enterprise features like sharding, load balancing, and auto-failover via a combination of MariaDB Xpand, MariaDB Enterprise Server, and MariaDB ColumnStore.

To help enterprises reduce database costs and better manage expenditure, MariaDB has introduced an autoscaling feature for both compute and storage.

“Rules specify when autoscaling is triggered, for example, when CPU utilization is above 75% over all replicas sustained for 30 minutes, then a new replica or node will be added to handle the increase,” the company said in a statement.

“Similarly, when CPU utilization is less than 50% over all replicas for an hour, nodes or a replica is removed. Users always specify the top and bottom threshold so there are never any cost surprises,” it explained, adding that enterprises only pay for the resources used.
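
Read as a control loop, the stated rules reduce to a simple threshold check. The Python sketch below is illustrative only: SkySQL's actual policy engine isn't public, so the function, its inputs, and the floor/ceiling parameters (standing in for the user-set top and bottom thresholds) are assumptions.

```python
# Illustrative sketch of the autoscaling rule described above; not SkySQL's API.
def desired_replicas(cpu_per_minute, replicas, floor, ceiling):
    """cpu_per_minute: utilization (0.0-1.0) averaged over all replicas, one sample per minute."""
    last_30m, last_60m = cpu_per_minute[-30:], cpu_per_minute[-60:]
    # Scale out: above 75% over all replicas, sustained for 30 minutes
    if len(last_30m) == 30 and min(last_30m) > 0.75 and replicas < ceiling:
        return replicas + 1
    # Scale in: below 50% over all replicas for an hour
    if len(last_60m) == 60 and max(last_60m) < 0.50 and replicas > floor:
        return replicas - 1
    return replicas

# Example: a sustained 30-minute spike triggers one additional replica
history = [0.60] * 30 + [0.80] * 30
print(desired_replicas(history, replicas=2, floor=1, ceiling=5))  # -> 3
```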

In addition to autoscaling, the company has added serverless analytics capabilities, eliminating the need to run extract, transform, load (ETL) tasks.

“SkySQL enables operational analytics on active transactional data as well as external data sources using a serverless analytics layer powered by Apache Spark SQL,” the company said, adding that this approach removes inconsistencies between an analytical view and a transactional view.

Further, it said that enterprises will pay for the compute used for analytics without the need to provision for processing power.

Additional features in the new release include access for data scientists to a specific version of Apache Zeppelin notebooks.

“The notebook is pre-loaded with examples that demonstrate ways to run analytics on data stored in SkySQL. It can also be used to discover database schemas, running queries on data stored in Amazon S3 and federating queries to join data across SkySQL databases and S3 object storage,” the company said.
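
Because the serverless layer is powered by Apache Spark SQL, the kind of federated query such a notebook demonstrates can be pictured roughly as follows. This is a hedged PySpark sketch, not SkySQL's actual notebook code: the hostname, table names, credentials, and S3 path are placeholders, and SkySQL may wire these sources up differently.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("skysql-federation-sketch").getOrCreate()

# Operational data, read over the standard MariaDB JDBC connector (URL is a placeholder;
# the MariaDB JDBC driver jar must be on the Spark classpath)
orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mariadb://skysql.example.com:3306/sales")
          .option("dbtable", "orders")
          .option("user", "analyst")
          .option("password", "***")
          .load())

# External data sitting in S3 object storage (bucket and path are placeholders)
clicks = spark.read.parquet("s3a://example-bucket/clickstream/")

orders.createOrReplaceTempView("orders")
clicks.createOrReplaceTempView("clicks")

# One Spark SQL query joins both sources, with no ETL step in between
spark.sql("""
    SELECT o.customer_id,
           SUM(o.total)      AS revenue,
           COUNT(c.event_id) AS click_count
    FROM orders o
    JOIN clicks c ON o.customer_id = c.customer_id
    GROUP BY o.customer_id
""").show()
```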

The new release of SkySQL has been made generally available on AWS and Google Cloud. It includes updated MariaDB Xpand 6.1.1, Enterprise Server 10.6.12, and ColumnStore 6.3.1.

New customers signing up for the DBaaS can claim $500 in credits, MariaDB said.

Posted Under: Database
Google ambushes on-prem PostgreSQL with AlloyDB Omni

Posted on 29 March, 2023

Google is developing a self-managed, downloadable version of AlloyDB, its PostgreSQL-compatible fully managed database-as-a-service (DBaaS), to further help enterprises modernize their legacy databases. It is now inviting applications for the private preview, it said Wednesday.

Dubbed AlloyDB Omni, the new offering uses the same underlying engine as AlloyDB and can be downloaded and run on premises, at the edge, across clouds, or even on developer laptops, Andi Gutmans, general manager of databases at Google Cloud, wrote in a blog post.

This means that enterprises using AlloyDB Omni will get AlloyDB’s improved transactional processing performance and memory management compared with standard PostgreSQL, and an index advisor to optimize frequently run queries.

“The AlloyDB Omni index advisor helps alleviate the guesswork of tuning query performance by conducting a deep analysis of the different parts of a query including subqueries, joins, and filters,” Gutmans said, adding that it periodically analyzes the database workload to identify queries that can benefit from indexes, and recommends new indexes that can increase query performance.

To reduce latency for query results, Omni uses AlloyDB's columnar engine, which keeps frequently queried data in an in-memory columnar format for faster scans, joins, and aggregations, the company said. AlloyDB Omni also uses machine learning to automatically organize data between row-based and columnar formats, convert the data when needed, and choose between columnar and row-based execution plans.

“This delivers excellent performance for a wide range of queries, with minimal management overhead,” Gutmans said.

How does AlloyDB Omni help enterprises?

Self-managed AlloyDB Omni provides a pathway to modernize legacy databases on-premises before moving to the cloud, analysts said.

“Database migrations can be complex and costly, especially when combined with migration from on-premises infrastructure to cloud. AlloyDB Omni provides a pathway for organizations to modernize those workloads in-place by migrating to AlloyDB Omni on-premises,” said Matt Aslett, research director at Ventana Research.

“This move can be seen as one step prior to a potential move to the AlloyDB managed service, or with a view to retaining the workloads in on-premises data centers or on edge infrastructure due to sovereignty or performance requirements,” he added.

According to Omdia chief analyst Bradley Shimmin and dbInsight principal analyst Tony Baer, AlloyDB Omni combines the best of open-source PostgreSQL and Google Cloud’s architecture, making it more appealing than rival services such as AWS Aurora for PostgreSQL and Microsoft’s Citus, among others.

Shimmin said that for larger customers or those looking to modernize and transform sizable, mission-critical databases, “Sticking with an open-source solution like PostgreSQL can be limiting in terms of providing modern data architectures or features, especially in supporting multi or hybrid-deployment requirements.” AlloyDB Omni could overcome those limitations, he said.

For Baer, “The appeal of AlloyDB Omni is that it is one of the few PostgreSQL implementations optimized for both scale and mixed transaction or analytic workloads that is not solely tethered to a specific hyperscaler.”

What is Google’s strategy with AlloyDB Omni?

Google plans to position AlloyDB Omni as another offering in its bid to gain more share in the PostgreSQL-led legacy database migration market, at a time when PostgreSQL has seen a rise in popularity, the analysts said.

Shimmin noted that, “For many customers, PostgreSQL is a relational lingua-franca and therefore a means of modernizing legacy databases by porting them to a cloud-native rendition on AWS, GCP or any other hyperscaler.”

According to data from database knowledge platform db-engines.com, PostgreSQL has been steadily rising in popularity and is currently the fourth-most-popular RDBMS (relational database management system), and the fourth-most-popular product among all databases, in its rankings.

Another reason for PostgreSQL’s rise in popularity is that the database management system offers better transactional and analytical capabilities than MySQL along with other features such as extended support for spatial data, broader SQL support, enhanced security and governance, and expanded support for programming languages.

Google’s Gutmans said the company has received “huge” interest from customers for database modernization since the launch of AlloyDB.

And according to Aslett, AlloyDB Omni builds on AlloyDB’s momentum for Google to gain share in the PostgreSQL market.

“AlloyDB was launched to enable organizations to modernize applications with high-end performance and reliability requirements that have previously been deployed on-premises on enterprise operational databases including Oracle, IBM and Microsoft, as well as PostgreSQL,” he said.

“By 2025, two-thirds of organizations will re-examine their current operational database suppliers with a view to improving fault tolerance and supporting the development of new intelligent operational applications,” he added.

According to a report from market research firm Gartner, the race to modernize databases is accelerating due to enterprises’ need to run analytics for business strategy and growth.

How to access AlloyDB Omni?

Google is currently offering the free developer version of AlloyDB Omni for non-production use, which can be downloaded to developers’ laptops.

“When it’s time to move an application to a production-ready environment, it will run unchanged on AlloyDB Omni in any environment, or on the AlloyDB for PostgreSQL service in Google Cloud,” Gutmans said.

“If needed, you can use standard open-source PostgreSQL tools to migrate or replicate their data. You can also use standard open-source PostgreSQL tools for database operations like backup and replication,” he added.

Google said AlloyDB Omni supports existing PostgreSQL applications as it uses standard PostgreSQL drivers. In addition, the software provides compatibility with PostgreSQL extensions and configuration flags.
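
In practice, that standard-driver compatibility means an ordinary PostgreSQL client should connect to Omni unchanged. A minimal sketch with psycopg2, where the host, port, database, and credentials are all placeholders:

```python
import psycopg2

# Placeholders: point these at wherever your AlloyDB Omni instance runs
conn = psycopg2.connect(
    host="alloydb-omni.internal",
    port=5432,
    dbname="appdb",
    user="app",
    password="***",
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone()[0])  # reports a PostgreSQL-compatible version string
conn.close()
```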

Further, Google said that it will provide full enterprise support, including 24/7 technical support and software updates for security patches and features, when AlloyDB Omni is made generally available.

Although Google hasn’t yet set a date for that, enterprises can already get access to the technical preview of the offering by submitting a request to the search giant.

Posted Under: Database
Working with Azure’s Data API builder

Posted on 29 March, 2023

Microsoft’s platform-based approach to cloud development has allowed it to offer managed versions of many familiar elements of the tech stack, especially within its data platform. As well as its own SQL Server (as Azure SQL) and the NoSQL Cosmos DB, it has managed versions of familiar open source databases, including PostgreSQL and MySQL.

Using these familiar databases and APIs makes it easy to migrate data from on premises to Azure, or to build new cloud-native applications without a steep learning curve. Once your data is stored on Azure, you can use familiar tools and techniques to use it from your code, especially if you’re working with .NET and Java, which have plenty of official and unofficial data SDKs. But what if you’re taking advantage of newer development models like Jamstack and using tools like Azure Static Web Apps to add API-driven web front ends to your applications?

Although you could use tools such as Azure Functions or App Service to build your own data API layer, doing so adds inefficiencies and increases your maintenance and testing requirements. Instead, you can now use Microsoft’s own Data API builder tool. It’s simple to configure and gives a database either REST or GraphQL endpoints that can quickly be consumed by JavaScript or any other REST-aware language. It’s also possibly the fastest way to start turning Azure-hosted databases into applications.

Introducing Data API builder

Designed to run on premises, at the edge, and in the cloud, Data API builder is an open source tool targeting five different databases: Azure SQL, SQL Server, PostgreSQL, MySQL, and Cosmos DB. You can work with your own installations as well as with Microsoft’s own managed services, so you can develop and run in your own data center and migrate code to the cloud as needed.

If you’re using Data API builder as part of your own code, it’s a .NET tool that’s available as a NuGet package. You need .NET 6 or 7 to run it, and it runs on any .NET-compatible system, including Linux. Once it’s installed, you can use its CLI to build the appropriate endpoints for your databases, ready for use in your applications. Alternatively, you can use a ready-to-run container image from Microsoft’s container registry. This approach works well if you’re targeting edge container environments, such as the new Azure Kubernetes Service (AKS) Edge Essentials, which gives you a limited managed Kubernetes platform.

Installation is quick and you can use the tool with the dab command from any command line. Help is relatively basic, but as this is a very focused tool, you shouldn’t find it hard to use. Single-purpose command-line tools like this are an increasingly important part of the .NET ecosystem, and it’s worth being familiar with them as they can save a lot of work and time.

Building APIs at the command line

It’s a good idea to be familiar with ADO.NET to use Data API builder. That’s not surprising; it’s the standard way of accessing data services in .NET and, at heart, this is a .NET tool, even if you’re using it to build web applications.

To make a connection, you’ll need to know the structure of your database and which elements you want to expose. At the same time, you also need any ADO connection strings so you can make the initial connection to your database. For Azure resources, these can be found in the Azure Portal as part of your resource properties. You don’t need to store the connection data in the clear; you have the option of using environment variables to hold data outside your code at runtime, so you can use tools like Azure Key Vault to keep any secrets safe.

Data API builder uses a JSON configuration file to store details of any APIs you build. Create this by defining the database type, along with a connection string. Be sure to use an account with appropriate permissions for your application. The configuration file details the supported API types, so you can enable either REST, GraphQL, or both. Other parts of the configuration file specify the mode, whether cross-origin scripting is allowed, and the authentication type used for the connection. While the CLI tool creates and updates configuration data, you can edit it yourself using the GitHub-hosted documentation.

Once you have defined a connection, you can set up the APIs for your data. Using familiar database entities, give the API a name and tie it to a source, like a table or a query, and give it permissions associated with users and database operations. The name is used to build the API path for both REST and GraphQL.

With a connection defined and entities added to the configuration file, you’re now ready to build and serve the API. The Data API builder is perhaps best thought of as a simple broker that takes REST and GraphQL connections, maps them to prebuilt ADO statements, and runs them on the source before returning results and remapping them into the appropriate format. The REST API supports common verbs that map to standard CRUD (create, read, update, delete) operations; for example, GET will retrieve data and POST will write it.

Each REST verb has additional query parameters to help manage your data. You can filter data, order it, and apply select statements. Unfortunately, even though you can pick the first however many items to display, there doesn’t seem to be a way to paginate data at present. Hopefully, this will be added in a future release as it would simplify building web content from the query data.
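
Calling the generated REST endpoint is then plain HTTP. Here's a hedged Python sketch using the requests library: the book entity, the default localhost:5000 address, and the sample records are assumptions, while the $select, $filter, and $orderby parameters and the value envelope follow the tool's documented OData-style conventions.

```python
import requests

BASE = "http://localhost:5000/api/book"  # default local dab endpoint; entity name assumed

# GET maps to read: project, filter, and order the results
resp = requests.get(BASE, params={
    "$select": "id,title",
    "$filter": "year ge 2020",
    "$orderby": "title",
})
resp.raise_for_status()
for book in resp.json()["value"]:
    print(book["id"], book["title"])

# POST maps to create
resp = requests.post(BASE, json={"title": "New Book", "year": 2023})
resp.raise_for_status()
```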

Using GraphQL with Data API builder

If you’re planning to use GraphQL, it’s worth using a tool such as Postman to help build and test requests. GraphQL can do a lot more than a basic REST query, but it can be hard to build queries by hand. Having a tool to explore the API and test queries can save a lot of time. For more complex GraphQL queries, you will need to build relationships into your configuration. Here it helps to have an entity diagram of your data source with defined relationships that you can describe by the type of relationship, the target entity for the query, and how the relationship is stored in your database.
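
For comparison, here is a sketch of the same sort of read against the generated GraphQL endpoint. Again hedged: /graphql is the builder's default path, while the books entity, its fields, and the exact filter shape are illustrative assumptions.

```python
import requests

query = """
query {
  books(filter: { year: { gte: 2020 } }) {
    items {
      id
      title
    }
  }
}
"""
resp = requests.post("http://localhost:5000/graphql", json={"query": query})
resp.raise_for_status()
for book in resp.json()["data"]["books"]["items"]:
    print(book["id"], book["title"])
```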

The process of making an API is the same for all the supported databases, with one difference for Cosmos DB. As it already has a REST API, there’s no need to generate another. However, you can still use it to create a GraphQL API.

If you’re using this approach with Azure Static Web Apps, first use the Azure Portal to add your source database to your site configuration. You then need to import an existing Data API builder configuration file. You can use both the Data API builder and the Azure Static Web Apps CLI to create the files needed. The Static Web Apps CLI creates a stub file for the configuration, which you can either edit by hand or paste in the contents of a Data API builder file.

Being able to add GraphQL support to any database is important; it’s a much more efficient way to query data than traditional APIs, simplifying complex queries. By supporting both REST and GraphQL APIs, Data API builder can help migrate between API types, allowing you to continue using familiar queries at the same time as you learn how to structure GraphQL. As an added bonus, while this is a tool that works for any application framework, it’s well worth using with Azure Static Web Apps to build data-connected Jamstack apps.

Posted Under: Database
YugabyteDB Managed adds managed command line interface

Posted on 27 March, 2023

Yugabyte on Monday said that it was adding a new managed command line interface along with other features to the managed version of its open source distributed SQL database, dubbed YugabyteDB Managed.

The new Managed Command Line Interface (CLI), according to the company, allows developers to benefit from the advantages of automation while writing code without needing to learn new skills.

“Developers of all levels can easily create and manage clusters from their terminal or Integrated Development Environment (IDE) and make use of the most advanced set of tools available for optimizing database performance and driving the business forward,” Karthik Ranganathan, CTO and co-founder, Yugabyte, said in a statement.

This means that developers can create and manage clusters hosted in YugabyteDB Managed from their IDE or terminal without requiring REST API or Terraform skills, added Ranganathan.

In addition, the new Managed CLI can automate repetitive tasks and has an auto-completion feature that makes it easy for developers, database administrators, and DevOps engineers to discover new features, the company said.

The new CLI also comes with support for multiple platforms, including macOS, Windows, and Linux, the company added. The Windows version can be downloaded from GitHub.

The latest update to YugabyteDB Managed also brings enhanced observability, with the company adding over 125 new SQL and storage layer metrics.

“With these new metrics, enterprises will gain even deeper insights into their database’s performance, making it easier to identify and resolve performance issues quickly,” the company wrote in a blog post.

The cloud-based user interface for observability inside YugabyteDB Managed includes new visualization options to reorder metrics for a custom dashboard and new synchronized tooltips in charts for easier troubleshooting, the company added.

Further, the company has added support for AWS PrivateLink, a service that provides private connectivity between virtual private clouds, supported AWS services, and on-premises networks.

“This feature, now in private preview, is available for dedicated clusters created in YugabyteDB Managed on AWS, as an alternative to VPC peering, for secure access to your databases over a private network,” the company said.

The support for AWS PrivateLink also provides more secure access to an enterprise’s databases, it added.

Enterprises that already use YugabyteDB can get access to a free trial of YugabyteDB Managed, with all features, on request.

Posted Under: Database
Oracle adds machine learning features to MySQL HeatWave

Posted on 23 March, 2023

Oracle is adding new machine learning features to its data analytics cloud service MySQL HeatWave.

MySQL HeatWave combines OLAP (online analytical processing), OLTP (online transaction processing), machine learning, and AI-driven automation in a single MySQL database.

The new machine learning capabilities will be added to the service’s AutoML and MySQL Autopilot components, the company said when it announced the update on Thursday.

While AutoML allows developers and data analysts to build, train, and deploy machine learning models within MySQL HeatWave without moving to a separate machine learning service, MySQL Autopilot provides machine learning-based automation for HeatWave and OLTP, including auto provisioning, auto encoding, auto query plan, auto shape prediction, and auto data placement, among other features.

AutoML augments time series forecasting via machine learning

The new machine learning-based capabilities added to AutoML include multivariate time series forecasting, unsupervised anomaly detection, and recommender systems, Oracle said, adding that all the new features were generally available.

“Multivariate time series forecasting can predict multiple time-ordered variables, where each variable depends both on its past value and the past values of other dependent variables. For example, it is used to build forecasting models to predict electricity demand in the winter considering the various sources of energy used to generate electricity,” said Nipun Agarwal, senior vice president of research at Oracle.

In contrast to the regular practice of having a statistician trained in time-series analysis or forecasting to select the right algorithm for the desired output, AutoML’s multivariate time series forecasting automatically preprocesses the data to select the best algorithm for the ML model and automatically tunes the model, the company said.

“The HeatWave AutoML automated forecasting pipeline uses a patented technique that consists of stages including advanced time-series preprocessing, algorithm selection and hyperparameter tuning,” said Agarwal, adding that this automation can help enterprises save time and effort as they don’t need to have trained statisticians on staff.

The multivariate time series forecasting feature, according to Constellation Research principal analyst Holger Mueller, is unique to Oracle’s MySQL HeatWave.

“Time series forecasting, multivariate or otherwise, is not currently offered as part of a single database that offers machine learning-augmented analytics. AWS, for example, offers a separate database for time series,” Mueller said.

HeatWave enhances anomaly detection

Along with multivariate time series forecasting, Oracle is adding machine learning-based “unsupervised” anomaly detection to MySQL HeatWave.

In contrast to the practice of using specific algorithms to detect specific anomalies in data, AutoML can detect different types of anomalies from unlabeled data sets, the company said, adding that this feature helps enterprise users when they don’t know what anomaly types are in the dataset.

“The model generated by HeatWave AutoML provides high accuracy for all types of anomalies — local, cluster, and global. The process is completely automated, eliminating the need for data analysts to manually determine which algorithm to use, which features to select, and the optimal values of the hyperparameters,” said Agarwal.

In addition, Oracle has added a recommendation engine to AutoML, which it calls recommender systems, backed by the same automated algorithm selection, feature selection, and hyperparameter optimization inside MySQL HeatWave.

“With MySQL HeatWave, users can invoke the ML_TRAIN procedure, which automatically trains the model that is then stored in the MODEL_CATALOG. To predict a recommendation, users can invoke ML_PREDICT_ROW or ML_PREDICT_TABLE,” said Agarwal.
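
Those procedures can be scripted from any MySQL client. Below is a hedged sketch using Python's mysql-connector: the host, schema, table, and column names are placeholders, and the option JSON for a recommendation task (including the users/items columns) is an assumption built around the procedure names quoted above, so the exact arguments may differ in practice.

```python
import mysql.connector

conn = mysql.connector.connect(host="heatwave.example.com", user="app",
                               password="***", database="ml_demo")
cur = conn.cursor()

# Train: AutoML handles algorithm selection, feature selection, and tuning;
# the trained model handle lands in @model and is stored in the MODEL_CATALOG
cur.execute("""
    CALL sys.ML_TRAIN('ml_demo.ratings', 'rating',
                      JSON_OBJECT('task', 'recommendation',
                                  'users', 'user_id',
                                  'items', 'item_id'), @model)
""")

# Load the model, then predict recommendations for a whole table of rows
cur.execute("CALL sys.ML_MODEL_LOAD(@model, NULL)")
cur.execute("""
    CALL sys.ML_PREDICT_TABLE('ml_demo.to_score', @model, 'ml_demo.predictions')
""")

cur.close()
conn.close()
```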

Business users get MySQL HeatWave AutoML console

In addition, Oracle is adding an interactive console for business users inside HeatWave.

“The new interactive console lets business analysts build, train, run, and explain ML models using the visual interface — without using SQL commands or any coding,” Agarwal said, adding that the console makes it easier for business users to explore conditional scenarios for their enterprise.

“The addition of the interactive console is in line with enterprises trying to make machine learning accountable. The console will help business users dive into the deeper end of the pool as they want to evolve into ‘citizen data scientists’ to avoid getting into too much hot water,” said Tony Baer, principal analyst at dbInsight.

The console has been made initially available for MySQL HeatWave on AWS.

Oracle also said that it would be adding support for storage on Amazon S3 for HeatWave on AWS to reduce cost as well as improve the availability of the service.

“When data is loaded from MySQL (InnoDB storage engine) into HeatWave, a copy is made to the scale-out data management layer built on S3. When an operation requires reloading of data to HeatWave, such as during error recovery, data can be accessed in parallel by multiple HeatWave nodes and the data can be directly loaded into HeatWave without the need for any transformation,” said Agarwal.

MySQL Autopilot updates

The new features added to MySQL HeatWave include two new additions to MySQL Autopilot: auto shape prediction advisor integration with the interactive console, and auto unload.

“Within the interactive console, database users can now access the MySQL Autopilot Auto shape prediction advisor that continuously monitors the OLTP workload to recommend with an explanation the right compute shape at any given time — allowing customers to always get the best price-performance,” Agarwal said.

The auto unload feature, according to the company, can recommend which tables to be unloaded based on workload history.

“Freeing up memory reduces the size of the cluster required to run a workload and saves cost,” Agarwal said, adding that both features are generally available.

HeatWave targets smaller data volumes

Oracle is offering a smaller HeatWave shape to attract customers with smaller volumes of data.

While the standard HeatWave node size is 512GB, the smaller shape is 32GB, with the ability to process up to 50GB of data, for a price of $16 per month, the company said.

In addition, the company said that the data processing capability of its standard 512GB HeatWave node has been increased from 800GB to 1TB.

“With this increase and other query performance improvements, the price performance benefit of HeatWave has further increased by 15%,” said Agarwal.

Posted Under: Database
Tailscale: Fast and easy VPNs for developers

Posted on 15 March, 2023

Networking can be an annoying problem for software developers. I’m not talking about local area networking or browsing the web, but the much harder problem of ad hoc, inbound, wide area networking.

Suppose you create a dazzling website on your laptop and you want to share it with your friends or customers. You could modify the firewall on your router to permit incoming web access on the port your website uses and let your users know the current IP address and port, but that could create a potential security vulnerability. Plus, it would only work if you have control over the router and you know how to configure firewalls for port redirection.

Alternatively, you could upload your website to a server, but that’s an extra step that can often become time-consuming, and maintaining dedicated servers can be a burden, both in time and money. You could spin up a small cloud instance and upload your site there, but that is also an extra step that can often become time-consuming, even though it’s often fairly cheap.

Another potential solution is Universal Plug and Play (UPnP), which enables devices to set port forwarding rules by themselves. UPnP needs to be enabled on your router, but it’s only safe if the modem and router are updated and secure. If not, it creates serious security risks on your whole network. The usual advice from security vendors is not to enable it, since the UPnP implementations on many routers are still dangerous, even in 2023. On the other hand, if you have an Xbox in the house, UPnP is what it uses to set up your router for multiplayer gaming and chat.

A simpler and safer way is Tailscale, which allows you to create an encrypted, peer-to-peer virtual network using the secure WireGuard protocol without generating public keys or constantly typing passwords. It can traverse NAT and firewalls, span subnets, use UPnP to create direct connections if it’s available, and connect via its own network of encrypted TCP relay servers if UPnP is not available.

In some sense, all VPNs (virtual private networks) compete with Tailscale. Most other VPNs, however, route traffic through their own servers, which tends to increase the network latency. One major use case for server-based VPNs is to make your traffic look like it’s coming from the country where the server is located; Tailscale doesn’t help much with this. Another use case is to penetrate corporate firewalls by using a VPN server inside the firewall. Tailscale competes for this use case, and usually has a simpler setup.

Besides Tailscale, the only other peer-to-peer VPN is the free open source WireGuard, on which Tailscale builds. WireGuard doesn’t handle key distribution or pushed configurations; Tailscale takes care of all of that.

What is Tailscale?

Tailscale is an encrypted point-to-point VPN service based on the open source WireGuard protocol. Compared to traditional VPNs based on central servers, Tailscale often offers higher speeds and lower latency, and it is usually easier and cheaper to set up and use.

Tailscale is useful for software developers who need to set up ad hoc networking and don’t want to fuss with firewalls or subnets. It’s also useful for businesses that need to set up VPN access to their internal networks without installing a VPN server, which can often be a significant expense.

Installing and using Tailscale

Signing up for a Tailscale Personal plan was free and quick; I chose to use my GitHub ID for authentication. Installing Tailscale took a few minutes on each machine I tried: an M1 MacBook Pro, where I installed it from the macOS App Store; an iPad Pro, installed from the iOS App Store; and a Pixel 6 Pro, installed from the Google Play Store. Installing on Windows starts with a download from the Tailscale website, and installing on Linux can be done using a curl command and shell script, or a distribution-specific series of commands.

You can install Tailscale on macOS, iOS, Windows, Linux, and Android. This tab shows the instructions for macOS.

Tailscale uses IP addresses in the 100.x.x.x range and automatically assigns DNS names, which you can customize if you wish. You can see your whole “tailnet” from the Tailscale site and from each machine that is active on the tailnet.

In addition to viewing your machines, you can view and edit the services available, the users of your tailnet, your access controls (ACL), your logs, your tailnet DNS, and your tailnet settings.

Once the three devices were running Tailscale, I could see them all on my Tailscale login page. I chose to use my GitHub ID for authentication, as I was testing just for myself. If I were setting up Tailscale for a team I would use my team email address.

Tailscale pricing.

Tailscale installs a CLI on desktop and laptop computers. It’s not absolutely necessary to use this command line, but many software developers will find it convenient.

How Tailscale works

Tailscale, unlike most VPNs, sets up peer-to-peer connections, aka a mesh network, rather than a hub-and-spoke network. It uses the open source WireGuard package (specifically the userspace Go variant, wireguard-go) as its base layer.

For public key distribution, Tailscale does use a hub-and-spoke configuration. The coordination server is at login.tailscale.com. Fortunately, public key distribution takes very little bandwidth. Private keys, of course, are never distributed.

You may be familiar with generating public-private key pairs manually to use with ssh, and pointing to the private key file as part of your ssh command line. Tailscale does all of that transparently for its network, and ties the keys to whatever login or 2FA credentials you choose.

The key pair steps, sketched in code after this list, are:

  1. Each node generates a random public/private key pair for itself, and associates the public key with its identity.
  2. The node contacts the coordination server and leaves its public key and a note about where that node can currently be found, and what domain it’s in.
  3. The node downloads a list of public keys and addresses in its domain, which have been left on the coordination server by other nodes.
  4. The node configures its WireGuard instance with the appropriate set of public keys.
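
A toy model of those four steps follows. Nothing here is Tailscale's real API: the coordination server is reduced to a dict, and random bytes stand in for WireGuard's Curve25519 keys.

```python
import secrets

coordination_server = {}  # node name -> {"pubkey": ..., "endpoint": ...}

def register(node, endpoint):
    # Steps 1 and 2: generate a keypair, upload only the public half plus
    # a note about where the node can currently be found
    private_key = secrets.token_bytes(32)   # never leaves this node
    public_key = secrets.token_bytes(32)    # placeholder for the derived public key
    coordination_server[node] = {"pubkey": public_key, "endpoint": endpoint}
    return private_key

def peers_of(node):
    # Step 3: download the public keys and addresses of every other node
    return {name: info for name, info in coordination_server.items() if name != node}

laptop_private = register("laptop", "192.168.1.10:41641")
phone_private = register("phone", "10.0.0.7:41641")

# Step 4: each node would hand peers_of(...) to its local WireGuard instance
print(peers_of("laptop"))
```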

Tailscale doesn’t handle user authentication itself. Instead, it always outsources authentication to an OAuth2, OIDC (OpenID Connect), or SAML provider, including Gmail, G Suite, and Office 365. This avoids the need to maintain a separate set of user accounts or certificates for your VPN.

Tailscale CLI help. On macOS, the CLI executable lives inside the app package. A soft link to this executable doesn’t seem to work on my M1 MacBook Pro, possibly because Tailscale runs in a sandbox.

NAT traversal is a complicated process, one that I personally tried unsuccessfully to overcome a decade ago. NAT (network address translation) is one of the ways firewalls work: As a packet goes from your computer to the internet, your computer’s local address of, say, 192.168.1.191 gets translated in the firewall to your current public IP address and a random port number, say 173.76.179.155:9876, and the firewall remembers that port number as yours. When a site returns a response to your request, your firewall recognizes the port and translates it back to your local address before passing you the response.
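
To make that concrete, here is a toy model of the translation table. Real NATs track full connection tuples; this sketch keeps only the port mapping described above.

```python
import random

PUBLIC_IP = "173.76.179.155"
nat_table = {}  # public port -> (local ip, local port)

def outbound(local_ip, local_port):
    # Rewrite the source to the public IP and a random port, and remember the pairing
    public_port = random.randint(1024, 65535)
    nat_table[public_port] = (local_ip, local_port)
    return PUBLIC_IP, public_port

def inbound(public_port):
    # A response to a remembered port is translated back; anything else is dropped
    return nat_table.get(public_port)

ip, port = outbound("192.168.1.191", 51515)
print(f"{ip}:{port} maps back to", inbound(port))
```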

Tailscale status, Tailscale pings to two devices, and plain pings to the same devices using the native network. Notice that the Tailscale ping to the Pixel device first routes via a DERP server (see below) in NYC, and then manages to find the LAN connection.

Where’s the problem? Suppose you have two firewall clients trying to communicate peer-to-peer. Neither can succeed until someone or something tells both ends what port to use.

This arbitrator will be a server when you use the STUN (Session Traversal Utilities for NAT) protocol; while STUN works on most home routers, it unfortunately doesn’t work on most corporate routers. One alternative is the TURN (Traversal Using Relays around NAT) protocol, which uses relays to get around the NAT deadlock issue; the trouble with that is that TURN is a pain in the neck to implement, and there aren’t many existing TURN relay servers.

Tailscale implements a protocol of its own for this, called DERP (Designated Encrypted Relay for Packets). This use of the term DERP has nothing to do with being goofy, but it does suggest that someone at Tailscale has a sense of humor.

Tailscale has DERP servers around the world to keep latency low; these include nine servers in the US. If, for example, you are trying to use Tailscale to connect your smartphone from a park to your desktop at your office, the chances are good that the connection will route via the nearest DERP server. If you’re lucky, the DERP server will only be used as a side channel to establish the connection. If you’re not, the DERP server will carry the encrypted WireGuard traffic between your nodes.

Tailscale vs. other VPNs

Tailscale offers a reviewer’s guide. I often look at such documents and then do my own thing because I’ve been around the block a couple of times and recognize when a company is putting up straw men and knocking them down, but this one is somewhat helpful. Here are some key differentiators to consider.

With most VPNs, when you are disconnected you have to log in again. It can be even worse when your company has two internet providers and has two VPN servers to handle them, because you usually have to figure out what’s going on by trial and error or by attempting to call the network administrator, who is probably up to his or her elbows in crises. With Tailscale (and WireGuard), the connection just resumes. Similarly, many VPN servers have trouble with flaky connections such as LTE. Tailscale and WireGuard take the flakiness in stride.

With most VPNs, getting a naive user connected for the first time is an exercise in patience for the network administrator and possibly scary for the user who has to “punch a hole” in her home firewall to enable the connection. With Tailscale it’s a five-minute process that isn’t scary at all.

Most VPNs want to be exclusive. Connecting to two VPN concentrators at once is considered a cardinal sin and a potential security vulnerability, especially if they are at different companies. Tailscale doesn’t care. WireGuard can handle this situation just fine even with hub-and-spoke topologies, and with Tailscale point-to-point connections there is a Zero Trust configuration that exposes no vulnerability.

Tailscale solutions

Tailscale has documented about a dozen solutions to common use cases that can be addressed with its ad hoc networking. These range from wanting to code from your iPad to running a private Minecraft server without paying for hosting or opening up your firewall.

As we’ve seen, Tailscale is simple to use, but also sophisticated under the hood. It’s an easy choice for ad hoc networking, and a reasonable alternative to traditional hub-and-spoke VPNs for companies. The only common VPN function that I can think of that it won’t do is spoof your location so that you can watch geographically restricted video content—but there are free VPNs that handle that.

Cost: Personal, open source, and “friends and family” plans, free. Personal Pro, $48 per year. Team, $5 per user per month (free trial available). Business, $15 per user per month (free trial available). Custom plans, contact sales.

Platform: macOS 10.13 or later, Windows 7 SP1 or later, Linux (most major distros), iOS 15 or later, Android 6 or later, Raspberry Pi, Synology.

Posted Under: Tech Reviews
Tibco’s Spotfire 12.2 release adds streaming and data science tools

Posted on 14 March, 2023

Enterprise software provider Tibco is releasing a new version of its data visualization and analytics platform, Spotfire 12.2, with new features that focus on aiding developers and bolstering the software’s ability to act as an end-to-end system combining data science, streaming, and data management tools.  

“With the new release of Spotfire, we are able to combine databases, data science, streaming, real-time analytics and data management, giving Tibco Cloud the capability of an end-to-end platform,” said Michael O’Connell, chief analytics officer at Tibco.

The update, released Tuesday, comes with Tibco’s Cloud Actions, which enables business users to take actions directly from within the business insights window, according to O’Connell.

“New Tibco Spotfire Cloud Actions bridge the gap between insight and decision. This no-code capability for writing transactions to any cloud or on-premise application allows you to take action across countless operational systems, spanning all of today’s top enterprise business applications and databases,” the company said in a blog post.

This is made possible by Tibco Cloud Integration, the company’s cloud-based integration platform-as-a-service. Cloud Integration supports all traditional iPaaS use cases and is optimized for REST-based and API use cases, Tibco said, adding that it offers over 800 connectors and works with applications and databases such as Dynamics 365, Amazon Redshift, Google Analytics, Magento, and MySQL.

Business users can also use Tibco Cloud Live Apps, a no-code interface for creating and automating manual workflows, to enable Cloud Actions, the company said.

Spotfire triggers actions, automation in other apps

Spotfire’s Cloud Actions, according to Constellation Research principal analyst Doug Henschen, enables Spotfire users to harness insights and set criteria that, when met, trigger actions and automation within other apps.

“Customers don’t want insights to end with reports and dashboards that are disconnected from the apps and platforms where people take action and get work done,” Henschen said, adding that leading vendors have been pushing to drive insights into action with workflow and automation options, whereby alerts as well as human and automated triggers can be used to kick off actions and business processes within external systems.

In addition to Cloud Actions, the company is offering data visualization modifications, dubbed mods, which are developed by Tibco and its community of users.

These modifications offer nuanced and different views to generate more insights, the company said, with O’Connell adding that they can be downloaded from the company website and other community sites.

In addition, Tibco Community, according to the company, provides hands-on enablement, along with galleries of prebuilt mods visualizations, data functions, and grab-and-go Cloud Actions templates that offer point-and-click deployment.

Tibco Streaming, Data Science offer growth opportunities

As part of the 12.2 update, Tibco is adding new features to Tibco Streaming and to Tibco Data Science’s Team Studio.

Tibco Streaming now comes with dynamic learning, which analyzes streaming data to automate data management and analytics calculations for real-time events, merging historical and streaming data as part of the same analysis, the company said.

This, according to Tibco, enables business intelligence to expand into low-latency operational use cases, such as IoT and edge sensors, with Spotfire serving as the control and decision hub.

On the data science side, Tibco has updated its Team Studio to include a new Apache Spark 3 workflow engine to improve performance.

The performance improvement is made possible by a new operator framework that merges core and custom operators, enabling workflows to execute as a single Spark application, the company said.

Data Virtualization enables AI model training

In addition, the company has updated its Tibco Data Virtualization offering, allowing users to control Team Studio data preparation and do AI and analytics model training and inferencing at scale from within the Spotfire interface.

“End user applications can train models, make predictions, summarize data, and apply data science techniques, in context of the business problem at hand,” the company said.

Tibco Data Science’s Team Studio and Tibco Streaming will not only allow the company to offer end-to-end services with Tibco Cloud but also open up growth opportunities for the company, analysts said.

Tibco Data Science is about developing and deploying predictive models and managing their complete life cycle, according to Henschen.

“The Team Studio component of Data Science and the integrations with Spotfire and other tools are about making those predictive capabilities accessible to non-data-scientists so they can take proactive action,” Henschen said.

The demand for Tibco’s data science and streaming tools, according to Ventana Research’s David Menninger, will increase as more and more business processes involve real-time analysis.

“The only way to keep up with real-time processes is with AI and machine learning. You can’t expect someone to be monitoring a dashboard in real time to determine what the best action is for the current situation. These decisions need to be made algorithmically, and that’s where data science comes in,” Menninger said.

Tibco, according to market research firm IDC, competes with companies including Microsoft, Tableau, Qlik, IBM, and Oracle in the business intelligence market.

Tibco has captured just 1.22% of the market, with installations in 8,160 companies, according to market research firm Enlyft.

The research firm lists Tableau and Microsoft Power BI as the market leaders, with 17% and 14% market share, respectively.

Posted Under: Database