All posts by Richy George

AWS re:Invent 2023: 7 Takeaways from the big annual event

Posted on 4 December, 2023


At the AWS re:Invent conference last week, the spotlight was focused on artificial intelligence, with the new generative AI assistant, Amazon Q, debuting as the star of the show. But there was plenty of other news to spark the interest of database managers, data scientists, data engineers, and developers, including new extract, transform, load (ETL) services, a new Cost Optimization Hub, and a revamped enterprise pricing tier for AWS’ cloud-based development tool, dubbed Amazon CodeCatalyst.

Here are seven key takeaways from the conference:

The cloud services provider, which has been adding infrastructure capabilities and chips over the past year to support high-performance computing with enhanced energy efficiency, announced the latest iterations of its Graviton and Trainium chips.

The Graviton4 processor, according to AWS, provides up to 30% better compute performance, 50% more cores, and 75% more memory bandwidth than the current generation Graviton3 processors.

Trainium2, on the other hand, is designed to deliver up to four times faster training than first-generation Trainium chips.

At re:Invent, AWS also extended its partnership with Nvidia, including support for the DGX Cloud, a new GPU project named Ceiba, and new instances for supporting generative AI workloads.

Nvidia also shared plans to integrate its NeMo Retriever microservice into AWS to help users develop generative AI tools such as chatbots. NeMo Retriever is a generative AI microservice that enables enterprises to connect custom large language models (LLMs) to enterprise data, so that they can generate accurate AI responses grounded in their own data.

Further, AWS said that it will be the first cloud provider to bring Nvidia’s GH200 Grace Hopper Superchips to the cloud.

Updated models added to Bedrock include Anthropic’s Claude 2.1 and Meta’s Llama 2 70B, both of which have been made generally available. Amazon also has added its proprietary Titan Text Lite and Titan Text Express foundation models to Bedrock.

In addition, the cloud services provider has added a model in preview, Amazon Titan Image Generator, to the AI app-building service.

AWS also has released a new feature within Bedrock that allows enterprises to evaluate, compare, and select the best foundation model for their use case and business needs.

Dubbed Model Evaluation on Amazon Bedrock and currently in preview, the feature is aimed at simplifying several tasks such as identifying benchmarks, setting up evaluation tools, and running assessments, the company said, adding that this saves time and cost.

In order to help enterprises train and deploy large language models efficiently, AWS introduced two new offerings — SageMaker HyperPod and SageMaker Inference — within its Amazon SageMaker AI and machine learning service.

In contrast to the manual model training process — which is prone to delays, unnecessary expenditure and other complications — HyperPod removes the heavy lifting involved in building and optimizing machine learning infrastructure for training models, reducing training time by up to 40%, the company said.

SageMaker Inference, on the other hand, is targeted at helping enterprises reduce model deployment costs and decrease latency in model responses. To do so, Inference allows enterprises to deploy multiple models to the same cloud instance to better utilize the underlying accelerators.

AWS has also updated SageMaker Canvas, its low-code machine learning platform targeted at business analysts.

Analysts can use natural language to prepare data inside Canvas in order to generate machine learning models, said Swami Sivasubramanian, head of database, analytics, and machine learning services for AWS. The no-code platform supports LLMs from Anthropic, Cohere, and AI21 Labs.

SageMaker also now features the Model Evaluation capability, renamed SageMaker Clarify, which can be accessed from within SageMaker Studio.

Last Tuesday, AWS CEO Adam Selipsky premiered the star of the cloud giant’s re:Invent 2023 conference: Amazon Q, the company’s answer to Microsoft’s GPT-driven Copilot generative AI assistant.    

Amazon Q can be used by enterprises across a variety of functions, including developing applications, transforming code, generating business intelligence, acting as a generative AI assistant for business applications, and helping customer service agents via the Amazon Connect offering.

The cloud services provider has announced a new program, dubbed Amazon Braket Direct, to offer researchers direct, private access to quantum computers.

The program is part of AWS’ managed quantum computing service, named Amazon Braket, which was introduced in 2020. 

Amazon Braket Direct allows researchers across enterprises to get private access to the full capacity of various quantum processing units (QPUs) without any wait time, and also provides the option to receive expert guidance for their workloads from AWS’ team of quantum computing specialists, AWS said.

Currently, the Direct program supports the reservation of IonQ Aria, QuEra Aquila, and Rigetti Aspen-M-3 quantum computers.

IonQ Aria reservations are priced at $7,000 per hour and QuEra Aquila at $2,500 per hour, while Rigetti Aspen-M-3 is priced at $3,000 per hour.
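Reservations themselves are arranged through Braket Direct, but once capacity is booked, workloads are submitted the same way as on-demand quantum tasks. Below is a minimal sketch using the Amazon Braket Python SDK; the IonQ Aria device ARN is an assumption based on AWS’ published identifiers and should be verified in your own account.

```python
# A minimal sketch of submitting a circuit to a reservable QPU with the
# Amazon Braket SDK (pip install amazon-braket-sdk). The device ARN is an
# assumption; confirm it in the Braket console for your region.
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Build a two-qubit Bell-state circuit.
bell = Circuit().h(0).cnot(0, 1)

# Target the IonQ Aria QPU (ARN assumed).
device = AwsDevice("arn:aws:braket:us-east-1::device/qpu/ionq/Aria-1")

# Submit the task. With a Braket Direct reservation in place, tasks run
# against the reserved capacity rather than waiting in the shared queue.
task = device.run(bell, shots=100)
print(task.result().measurement_counts)
```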

The updates announced at re:Invent include a new AWS Billing and Cost Management feature, dubbed AWS Cost Optimization Hub, which makes it easy for enterprises to identify, filter, aggregate, and quantify savings for AWS cost optimization recommendations.

The new Hub, according to the cloud services provider, gathers all cost-optimizing recommended actions across AWS Cloud Financial Management (CFM) services, including AWS Cost Explorer and AWS Compute Optimizer, in one place.

It incorporates customer-specific pricing and discounts into these recommendations, and it deduplicates findings and savings to give a consolidated view of an enterprise’s cost optimization opportunities, AWS added.

The feature is likely to help FinOps or infrastructure management teams understand cost optimization opportunities.
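For teams that prefer to pull these recommendations programmatically, the hub is also exposed through the AWS SDKs. The sketch below assumes the boto3 client name and response fields documented at launch; treat both as assumptions to verify, and note that the hub must first be enabled in the Billing console.

```python
# A hedged sketch of listing Cost Optimization Hub recommendations with boto3.
# Client name and response fields follow the launch documentation and should
# be verified against the current boto3 reference.
import boto3

client = boto3.client("cost-optimization-hub", region_name="us-east-1")

resp = client.list_recommendations(maxResults=20)
for item in resp.get("items", []):
    # Each item describes one recommended action and its estimated savings
    # (field names assumed from launch docs).
    print(item.get("actionType"), item.get("estimatedMonthlySavings"))
```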

Continuing to build on its efforts toward zero-ETL for data warehousing services, AWS announced new Amazon Redshift integrations with Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon RDS for MySQL.

Enterprises typically use extract, transform, load (ETL) processes to integrate data from multiple sources into a single consistent data store to be loaded into a data warehouse for analysis.

However, most data engineers say that transforming data from disparate sources can be a difficult and time-consuming task, as the process involves steps such as cleaning, filtering, reshaping, and summarizing the raw data. Another issue is the added cost of maintaining teams that prepare data pipelines for running analytics, AWS said.

In contrast, the new zero-ETL integrations, according to the company, eliminate the need to perform ETL between Aurora PostgreSQL, DynamoDB, RDS for MySQL, and Redshift, as transactional data in these databases can be replicated into Redshift almost immediately and is ready for analysis.

Other generative AI-related updates at re:Invent include updated support for vector databases for Amazon Bedrock. These databases include Amazon Aurora and MongoDB. Other supported databases include Pinecone, Redis Enterprise Cloud, and Vector Engine for Amazon OpenSearch Serverless.  

The company also added a new enterprise pricing tier to its cloud-based development tool, dubbed Amazon CodeCatalyst.


Posted Under: Database
Key new features and innovations in EDB Postgres 16

Posted on 1 December, 2023


PostgreSQL 16, the latest major release of your favorite open source RDBMS, sets new standards for database management, data replication, system monitoring, and performance optimization. Like clockwork, EnterpriseDB (EDB), a leading contributor to PostgreSQL code and a leading provider of the Postgres database to enterprises, has unveiled its latest portfolio release for Postgres 16.1.

The milestone EDB Postgres 16 portfolio release integrates the core advancements of PostgreSQL 16, reaffirming EDB’s dedication to the Postgres community and driving innovation in this technology. Let’s take a look at the key features added to the EDB Postgres 16 portfolio release.

Performance and scalability enhancements

The new release boasts significant improvements in parallel processing and faster query execution, elevating Postgres’s status as a sophisticated open-source database. These enhancements are poised to benefit enterprises by facilitating more efficient data processing and quicker response times, crucial in today’s fast-paced business environments.

Advanced security features

Security takes a front seat in EDB Postgres 16, with the introduction of flexible cryptographic key support and enhancements to Transparent Data Encryption (TDE), which has been updated to offer options for both AES-128 and AES-256 encryption. This allows customers to select AES-128 for scenarios where performance and energy efficiency are priorities, and AES-256 for instances where compliance with regulatory standards or achieving the highest level of security is essential.

The addition of privilege analysis further strengthens the database by adhering to the principle of least privilege, which involves tracing and documenting all active and inactive privileges assigned to a role. This approach allows customers to tighten their database security by methodically revoking unnecessary privileges, thereby preventing both deliberate and accidental data access or alterations. Additionally, this system facilitates the provision of comprehensive reports on database privileges for each role to auditors.

Oracle compatibility and easier migration

Acknowledging the challenges of migrating from Oracle databases, EDB has enhanced its Oracle compatibility features, prioritizing the most common incompatibilities found in the EDB Migration Portal. The results led EDB to expand coverage of Oracle packages such as DBMS_SESSION, DBMS_SQL, and UTL_FILE. This additional coverage is a significant boon for organizations migrating from legacy systems while maintaining familiar workflows and minimizing disruption.

EDB has also introduced SPL Check, which aims to transform the experience of developers working with stored procedures. Instead of having to write stored procedures and then run a complete application test suite to uncover problems, developers can use SPL Check to catch errors that a successful CREATE PROCEDURE/FUNCTION command would otherwise leave undetected until runtime.

Additional Oracle-compatible features have been incorporated into the SQL MERGE command, aiming to minimize the runtime discrepancies between Oracle’s MERGE and PostgreSQL’s MERGE.
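For readers unfamiliar with the command, MERGE performs an upsert-style operation in a single statement. A minimal sketch follows, run from Python via psycopg2 against hypothetical inventory tables; the table and column names are illustrative only.

```python
# A minimal MERGE (upsert) example against EDB Postgres 16 / PostgreSQL 15+,
# using psycopg2. Table names, columns, and the connection string are
# hypothetical placeholders.
import psycopg2

conn = psycopg2.connect("dbname=appdb user=app")  # assumed connection string
with conn, conn.cursor() as cur:
    cur.execute("""
        MERGE INTO inventory AS t
        USING stock_updates AS s
        ON t.sku = s.sku
        WHEN MATCHED THEN
            UPDATE SET qty = t.qty + s.qty
        WHEN NOT MATCHED THEN
            INSERT (sku, qty) VALUES (s.sku, s.qty)
    """)  # the with-block commits on success
conn.close()
```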

Lastly, the update introduces new NLS Charset functions, namely NLS_CHARSET_ID, NLS_CHARSET_NAME, and NLS_CHARSET_DECL_LEN.

Enhanced management and administrative control

EDB Postgres 16 introduces sophisticated role membership controls, providing administrators with greater oversight of user activities. This update is crucial for managing complex enterprise databases, ensuring optimal performance even under high-intensity workloads. Additionally, enhanced visibility into table and index usage paves the way for more informed decision-making and efficient database management.

EDB’s latest offering is a testament to its enduring commitment to advancing Postgres. Improved scalability, enhanced security features, and better management tools make EDB Postgres 16 a premier choice for enterprises worldwide. This release not only underscores EDB’s innovation but also solidifies its role in addressing the dynamic needs of modern businesses.

Adam Wright is the senior product manager of core database, extensions, and backup/restore at EDB.

New Tech Forum provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to doug_dineley@foundryco.com.


Posted Under: Database
AWS updates Bedrock, SageMaker to boost generative AI offerings

Posted on 29 November, 2023


At its ongoing re:Invent 2023 conference, AWS unveiled several updates to its SageMaker, Bedrock and database services in order to boost its generative AI offerings.

Taking to the stage on Wednesday, AWS vice president of data and AI, Swami Sivasubramanian, unveiled updates to existing foundation models inside its generative AI application-building service, Amazon Bedrock.

The updated models added to Bedrock include Anthropic’s Claude 2.1 and Meta’s Llama 2 70B, both of which have been made generally available. Amazon also has added its proprietary Titan Text Lite and Titan Text Express foundation models to Bedrock.

In addition, the cloud services provider has added a model in preview, Amazon Titan Image Generator, to the AI app-building service.

The model, which can be used to rapidly generate and iterate images at low cost, can understand complex prompts and generate relevant images with accurate object composition and limited distortions, AWS said.

Enterprises can use the model in the Amazon Bedrock console either by submitting a natural language prompt to generate an image or by uploading an image for automatic editing, before configuring the dimensions and specifying the number of variations the model should generate.
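The model is also callable through the Bedrock runtime API. The following sketch assumes the model ID and request schema that AWS published at launch; since the model is in preview, both should be verified against current documentation.

```python
# A hedged sketch of calling Titan Image Generator via the Bedrock runtime
# API with boto3. Model ID and request schema are assumed from launch docs.
import base64
import json

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

body = {
    "taskType": "TEXT_IMAGE",
    "textToImageParams": {"text": "a lighthouse on a rocky coast at dusk"},
    "imageGenerationConfig": {
        "numberOfImages": 1,  # number of variations to generate
        "height": 1024,
        "width": 1024,
        "cfgScale": 8.0,      # how strictly the image follows the prompt
    },
}

resp = runtime.invoke_model(
    modelId="amazon.titan-image-generator-v1",
    body=json.dumps(body),
)
payload = json.loads(resp["body"].read())

# Images come back base64-encoded (field name assumed from launch docs).
with open("out.png", "wb") as f:
    f.write(base64.b64decode(payload["images"][0]))
```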

Invisible watermark identifies AI images

The images generated by Titan have an invisible watermark to help reduce the spread of disinformation by providing a discreet mechanism to identify AI-generated images.

Foundation models currently available in Bedrock include large language models (LLMs) from AI21 Labs, Cohere, Meta, Anthropic, and Stability AI.

These models, with the exception of Anthropic’s Claude 2, can be fine-tuned inside Bedrock, the company said, adding that support for fine-tuning Claude 2 was expected to be released soon.

In order to help enterprises generate embeddings for training or prompting foundation models, AWS is also making its Amazon Titan Multimodal Embeddings generally available.

“The model converts images and short text into embeddings — numerical representations that allow the model to easily understand semantic meanings and relationships among data — which are stored in a customer’s vector database,” the company said in a statement.
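In practice, that looks like a single runtime call per item being embedded. A hedged sketch follows, assuming the launch-time model ID and field names.

```python
# A hedged sketch of generating a Titan Multimodal Embeddings vector with
# boto3; model ID and response fields are assumed from AWS launch materials.
import json

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

resp = runtime.invoke_model(
    modelId="amazon.titan-embed-image-v1",
    body=json.dumps({"inputText": "red leather hiking boots"}),
)
payload = json.loads(resp["body"].read())

# The resulting vector would then be written to your vector database.
embedding = payload["embedding"]
print(len(embedding))
```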

Evaluating the best foundation model for generative AI apps

Further, AWS has released a new feature within Bedrock that allows enterprises to evaluate, compare, and select the best foundation model for their use case and business needs.

Dubbed Model Evaluation on Amazon Bedrock and currently in preview, the feature is aimed at simplifying several tasks such as identifying benchmarks, setting up evaluation tools, and running assessments, the company said, adding that this saves time and cost.

“In the Amazon Bedrock console, enterprises choose the models they want to compare for a given task, such as question-answering or content summarization,” Sivasubramanian said, explaining that for automatic evaluations, enterprises select predefined evaluation criteria (e.g., accuracy, robustness, and toxicity) and upload their own testing data set or select from built-in, publicly available data sets.

For subjective criteria or nuanced content requiring sophisticated judgment, enterprises can set up human-based evaluation workflows — which leverage an enterprise’s in-house workforce — or use a managed workforce provided by AWS to evaluate model responses, Sivasubramanian said.
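The managed evaluation workflow is driven from the Bedrock console, but the core idea, running the same task against candidate models and comparing outputs, is easy to illustrate with the runtime API. The sketch below is not the Model Evaluation feature itself; it simply sends one prompt to two Bedrock models using their launch-time request schemas, which should be verified against current docs.

```python
# An illustration (not the managed Model Evaluation feature) of comparing two
# Bedrock models on the same prompt. Request/response schemas are the
# launch-time formats for each model family.
import json

import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
question = "Summarize the trade-offs of zero-ETL data integration."

# Anthropic Claude 2.1 uses a Human/Assistant prompt-completion schema.
claude = runtime.invoke_model(
    modelId="anthropic.claude-v2:1",
    body=json.dumps({
        "prompt": f"\n\nHuman: {question}\n\nAssistant:",
        "max_tokens_to_sample": 300,
    }),
)
print("Claude:", json.loads(claude["body"].read())["completion"])

# Amazon Titan Text uses an inputText/textGenerationConfig schema.
titan = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=json.dumps({
        "inputText": question,
        "textGenerationConfig": {"maxTokenCount": 300},
    }),
)
print("Titan:", json.loads(titan["body"].read())["results"][0]["outputText"])
```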

Other updates to Bedrock include Guardrails, currently in preview, targeted at helping enterprises adhere to responsible AI principles. AWS has also made Knowledge Bases for Amazon Bedrock and Agents for Amazon Bedrock generally available.

SageMaker capabilities to scale large language models

In order to help enterprises train and deploy large language models efficiently, AWS introduced two new offerings — SageMaker HyperPod and SageMaker Inference — within its Amazon SageMaker AI and machine learning service.

In contrast to the manual model training process — which is prone to delays, unnecessary expenditure and other complications — HyperPod removes the heavy lifting involved in building and optimizing machine learning infrastructure for training models, reducing training time by up to 40%, the company said.

The new offering is preconfigured with SageMaker’s distributed training libraries, designed to let users automatically split training workloads across thousands of accelerators, so workloads can be processed in parallel for improved model performance.

HyperPod, according to Sivasubramanian, also ensures customers can continue model training uninterrupted by periodically saving checkpoints.
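Clusters are provisioned through a new CreateCluster API in SageMaker. The sketch below follows the parameter shapes documented at launch; the names, instance type, IAM role, and lifecycle-script S3 path are all placeholders.

```python
# A hedged sketch of provisioning a SageMaker HyperPod cluster with boto3.
# All names, ARNs, and S3 paths are placeholders; parameter shapes follow
# the CreateCluster API as documented at launch.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

resp = sm.create_cluster(
    ClusterName="llm-training-cluster",  # hypothetical
    InstanceGroups=[
        {
            "InstanceGroupName": "worker-group",
            "InstanceType": "ml.trn1.32xlarge",  # e.g., Trainium instances
            "InstanceCount": 4,
            # Lifecycle scripts bootstrap each node as it joins the cluster.
            "LifeCycleConfig": {
                "SourceS3Uri": "s3://my-bucket/hyperpod-lifecycle/",
                "OnCreate": "on_create.sh",
            },
            "ExecutionRole": "arn:aws:iam::123456789012:role/HyperPodRole",
        }
    ],
)
print(resp["ClusterArn"])
```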

Helping enterprises reduce AI model deployment cost

SageMaker Inference, on the other hand, is targeted at helping enterprises reduce model deployment costs and decrease latency in model responses. To do so, Inference allows enterprises to deploy multiple models to the same cloud instance to better utilize the underlying accelerators.

“Enterprises can also control scaling policies for each model separately, making it easier to adapt to model usage patterns while optimizing infrastructure costs,” the company said, adding that SageMaker actively monitors instances that are processing inference requests and intelligently routes requests based on which instances are available.
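Under the hood, this packing is expressed through "inference components," each of which reserves a slice of an endpoint's compute for one model and scales independently. A hedged sketch follows, with names and parameter shapes assumed from the launch-time API.

```python
# A hedged sketch of deploying a model as a SageMaker inference component so
# that several models share one endpoint. Names are placeholders; parameter
# shapes follow the CreateInferenceComponent API as documented at launch.
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

sm.create_inference_component(
    InferenceComponentName="summarizer-component",  # hypothetical
    EndpointName="shared-llm-endpoint",             # existing endpoint, assumed
    VariantName="AllTraffic",
    Specification={
        "ModelName": "summarizer-v1",  # a previously created SageMaker model
        # Reserve a slice of the endpoint's accelerators and memory.
        "ComputeResourceRequirements": {
            "NumberOfAcceleratorDevicesRequired": 1,
            "MinMemoryRequiredInMb": 8192,
        },
    },
    RuntimeConfig={"CopyCount": 1},  # each copy scales independently
)
```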

AWS has also updated SageMaker Canvas, its low-code machine learning platform targeted at business analysts.

Analysts can use natural language to prepare data inside Canvas in order to generate machine learning models, Sivasubramanian said. The no-code platform supports LLMs from Anthropic, Cohere, and AI21 Labs.

SageMaker also now features the Model Evaluation capability, renamed SageMaker Clarify, which can be accessed from within SageMaker Studio.

Other generative AI-related updates include updated support for vector databases for Amazon Bedrock. These databases include Amazon Aurora and MongoDB. Other supported databases include Pinecone, Redis Enterprise Cloud, and Vector Engine for Amazon OpenSearch Serverless.


Posted Under: Database
AWS adds more zero-ETL integrations to Amazon Redshift

Posted on 29 November, 2023


Continuing to build on its efforts toward zero-ETL for data warehousing services, AWS at its ongoing re:Invent 2023 conference announced new Amazon Redshift integrations with Amazon Aurora PostgreSQL, Amazon DynamoDB, and Amazon RDS for MySQL.

Enterprises typically use extract, transform, load (ETL) processes to integrate data from multiple sources into a single consistent data store to be loaded into a data warehouse for analysis.

However, most data engineers say that transforming data from disparate sources can be a difficult and time-consuming task, as the process involves steps such as cleaning, filtering, reshaping, and summarizing the raw data.

Another issue is the added cost of maintaining teams that prepare data pipelines for running analytics, AWS said.

In contrast, the new zero-ETL integrations, according to the company, eliminate the need to perform ETL between Aurora PostgreSQL, DynamoDB, RDS for MySQL, and Redshift, as transactional data in these databases can be replicated into Redshift almost immediately and is ready for analysis.

Currently, all three integrations are in preview.
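Once an integration is active, the replicated tables can be queried in Redshift like any local data. The sketch below uses the Redshift Data API via boto3; the workgroup, database, and table names are placeholders.

```python
# A hedged sketch of querying zero-ETL-replicated data with the Redshift Data
# API. Workgroup, database, and table names are placeholders.
import time

import boto3

rsd = boto3.client("redshift-data", region_name="us-east-1")

stmt = rsd.execute_statement(
    WorkgroupName="analytics-wg",  # Redshift Serverless workgroup (assumed)
    Database="dev",
    Sql="SELECT order_status, COUNT(*) FROM orders GROUP BY order_status;",
)

# The Data API is asynchronous: poll for completion, then fetch the rows.
while True:
    desc = rsd.describe_statement(Id=stmt["Id"])
    if desc["Status"] in ("FINISHED", "FAILED", "ABORTED"):
        break
    time.sleep(1)

if desc["Status"] == "FINISHED":
    for record in rsd.get_statement_result(Id=stmt["Id"])["Records"]:
        print(record)
```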

Last year, AWS announced two new capabilities—Amazon Aurora zero-ETL integration with Amazon Redshift and Amazon Redshift integration for Apache Spark.

In addition, the cloud services provider made the Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service generally available.

This integration will allow data professionals across enterprises to perform a search on their DynamoDB data by automatically replicating and transforming it without custom code or infrastructure, AWS said.

Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service is available in all AWS Regions where OpenSearch Ingestion is currently offered, AWS added.


Posted Under: Database
7 Reasons to Consider Refurbished IT Hardware

Posted on 26 January, 2015

The IT hardware procurement process can be a challenging one for any organization. If you are an IT professional or a business owner, there are various options to sort through to meet key priorities and requirements. When it comes to buying IT hardware, refurbished equipment is a viable option to consider seriously: it provides an array of undeniable benefits, including performance, quality, and flexibility at great price points. Following are seven notable benefits your organization can rely on when opting for refurbished IT equipment.

Cost

Companies can procure refurbished IT equipment at a mere fraction of OEM pricing. Opting for refurbished IT hardware can help stretch budgets, fund larger projects, and even keep extra hardware on hand for disaster recovery or backup needs.

The latest and highest end technology is not always an affordable option for small businesses, schools, and nonprofits. However, by choosing refurbished IT hardware, one can gain access to the latest technology regardless of their budget.

Refurbished hardware is an excellent way for organizations to increase buying power while realizing substantial cost savings.

Quality

IT refurbishers go above and beyond when it comes to quality control. Experienced, trained and certified technicians rigorously test, diagnose and refurbish all IT hardware to ensure that its performance – both functionally and cosmetically – rivals that of any brand-new computer.

Microsoft Registered Refurbishers (MRR) are an elite group of refurbishers who take quality to a whole new level by following Microsoft’s certified refurbishing processes. The MRR certification enables refurbishers to legally load and authenticate the Windows OS on any Windows-based machine.

Sustainability

Refurbished IT hardware is very eco-friendly. If “going green” is a priority in your technology choices, buying refurbished IT hardware is an ideal decision. Refurbishing and reusing not only prevents electronics from ending up in landfills, but also eliminates the need to manufacture new electronics.

Buying and using refurbished equipment is a form of electronic recycling that offers numerous benefits to both the organization using it and the environment.

Flexibility

IT hardware refurbishers will work within a customer’s needs, requirements, and limitations. Typically, this much flexibility is not available when buying directly from traditional retailers.

Refurbishers can customize specs to meet exact technology hardware requirements and offer a variety of prices to meet virtually any budget. They also offer flexible warranties, extended coverage options, and a range of payment options and terms, such as PayPal, net terms, and more.

Warranty

IT refurbishers can offer some of the best warranties available today. In many cases, they provide hassle-free advance replacements, which means a replacement product is shipped out before the returned product is received. This system offers a level of convenience and customer service that simply cannot be found when buying directly from OEMs. IT refurbishers offer flexible warranty options and extended warranty coverage as well.

Obtain Hard to Find or Obsolete Equipment

Sometimes, finding legacy equipment can be very challenging. Refurbishers are well-equipped sources of OEM-discontinued hardware, which is helpful for companies running proprietary software that requires older hardware.

Selection

When compared to OEMs, you’ll find many IT hardware refurbishers offer a much larger inventory pool, including brands such as Apple, Dell, HP, Lenovo and more.

Clearly, these advantages point to one undeniable conclusion: refurbished IT hardware can provide customers with substantial flexibility, service and savings. Whether you are a small business, educational institution, nonprofit or part of any organization that requires IT equipment to function, an IT refurbisher can provide one-stop-shopping for all of your IT needs.

Posted Under: Refurbished IT Hardware