All posts by Richy George

OrbitsEdge partners with HPE on orbital datacenter computing and analytics

Posted by on 3 December, 2019

What kinds of businesses might be able to operate in space? Well, datacenters are one potential target you might not have thought of. Space provides an interesting environment for datacenter operations, including advanced analytics and even artificial intelligence, due in part to the excellent cooling conditions and reasonable access to renewable power (solar). But there are challenges, which is why a new partnership between Florida-based space startup OrbitsEdge and Hewlett Packard Enterprise (HPE) makes a lot of sense.

The partnership makes OrbitsEdge a hardware supplier for HPE’s Edgeline Converged Edge Systems; in essence, the space startup will handle everything required to “harden” the standard HPE micro-datacenter equipment for use in outer space. Hardening is a standard step in preparing equipment for spaceflight, readying it to withstand the increased radiation, extreme temperatures and other stressors that space adds to the mix.

OrbitsEdge, founded earlier this year, has developed a proprietary piece of hardware called the “SatFrame” which is designed to counter the stress of a space-based operating environment, making it relatively easy to take off-the-shelf Earth equipment like the HPE Edgeline system and get it working in space without requiring a huge amount of additional, custom work.

In practical terms, the partnership means it’s more feasible than ever to set up a small-scale datacenter in orbit to handle at least some of the processing of space-based data near where it’s collected, rather than shuttling it all back down to Earth. That downlink process can be expensive, and it can be difficult even to find the companies and infrastructure to handle it. As with in-space manufacturing, doing things locally could save a lot of overhead and unlock tons of potential down the line.

Posted Under: Tech News
Verizon and AWS announce 5G Edge computing partnership

Posted by on 3 December, 2019

Just as Qualcomm was starting to highlight its 5G plans for the coming years, Verizon CEO Hans Vestberg hit the stage at AWS re:Invent to discuss the carrier’s team-up with the cloud computing giant.

As part of Verizon’s (TechCrunch’s parent company, disclosure, disclosure, disclosure) upcoming focus on 5G edge computing, the carrier will be the first to use the newly announced AWS Wavelength. The platform is designed to let developers build super low latency apps for 5G devices.

Currently, it’s being piloted in Chicago with a handful of high profile partners, including the NFL and Bethesda, the game developer behind Fallout and Elder Scrolls. No details yet on those specific applications (though remote gaming and live streaming seem like the obvious ones), but potential future uses include things like smart cars, IoT devices, AR/VR — you know, the sorts of things people cite when discussing 5G’s life beyond the smartphone.

“AWS Wavelength provides the same AWS environment — APIs, management console, and tools — that they’re using today at the edge of the 5G network,” AWS CEO Andy Jassy said on stage. “Starting with Verizon’s 5G network locations in the US, customers will be able to deploy the latency-sensitive portions of an application at the edge to provide single-digit millisecond latency to mobile and connected devices.”

While Vestberg took the stage at re:Invent, Verizon CNO Nicki Palmer joined Qualcomm in Hawaii to discuss the carrier’s mmWave approach to next-gen wireless. The technology has raised some questions around its coverage area, which Verizon has addressed to some degree through partnerships with third parties like Boingo.

The company plans to have coverage in 30 U.S. cities by end of year. That number is currently at 18.

Posted Under: Tech News
AWS announces new enterprise search tool powered by machine learning

Posted by on 3 December, 2019

Today at AWS re:Invent in Las Vegas, the company announced a new search tool called Kendra, which provides natural language search across a variety of content repositories using machine learning.

Matt Wood, AWS VP of artificial intelligence, said the new search tool uses machine learning, but doesn’t actually require machine learning expertise of any kind. Amazon is taking care of that for customers under the hood.

You start by identifying your content repositories. This could be anything from an S3 storage repository to OneDrive to Salesforce — anywhere you store content. You can use pre-built connectors from AWS, provide your credentials and connect to all of these different tools.

Kendra then builds an index based on the content it finds in the connected repositories, and users can begin to interact with the search tool using natural language queries. The tool understands concepts like time, so if the question is something like “When is the IT Help Desk open?”, the search engine recognizes that this is about time, checks the index and delivers the right information to the user.
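
For developers, this is exposed through the standard AWS SDKs. As a rough sketch (not official sample code), here is what a natural language query might look like with boto3, assuming an index has already been created and populated through the pre-built connectors; the region, index ID and question below are placeholders.

```python
import boto3

# Kendra is exposed through the standard AWS SDK; the index is assumed to
# already exist and to have finished ingesting the connected repositories.
kendra = boto3.client("kendra", region_name="us-east-1")

# Placeholder index ID -- substitute the ID returned when the index was created.
INDEX_ID = "00000000-0000-0000-0000-000000000000"

response = kendra.query(
    IndexId=INDEX_ID,
    QueryText="When is the IT Help Desk open?",
)

# Print the top results; ANSWER items carry extracted answer text, while
# DOCUMENT items point back to the source repository.
for item in response.get("ResultItems", [])[:3]:
    print(item["Type"], "-", item.get("DocumentTitle", {}).get("Text", ""))
    print(item.get("DocumentExcerpt", {}).get("Text", ""))
```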

The beauty of this search tool is not just that it uses machine learning; based on simple feedback from a user, such as a smiley face or sad face emoji, it can learn which answers are good and which need improvement, and it does this automatically for the search team.
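
That feedback loop is also reachable programmatically. Below is a minimal, hedged sketch of relaying a user’s positive reaction via boto3; the index, query and result IDs are placeholders that would come from a real query response.

```python
import boto3

kendra = boto3.client("kendra", region_name="us-east-1")

# Placeholder IDs -- in practice these come from the index and from the
# QueryId / result Id fields of a previous query response.
INDEX_ID = "00000000-0000-0000-0000-000000000000"
QUERY_ID = "query-id-from-previous-response"
RESULT_ID = "result-id-the-user-reacted-to"

# Relay the user's emoji-style reaction back to Kendra so it can learn
# which answers were helpful and which were not.
kendra.submit_feedback(
    IndexId=INDEX_ID,
    QueryId=QUERY_ID,
    RelevanceFeedbackItems=[
        {"ResultId": RESULT_ID, "RelevanceValue": "RELEVANT"},
    ],
)
```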

Once you have it set up, you can drop the search box into your company intranet or embed it inside an internal application, and it behaves as you would expect a search tool to, with features like type-ahead.

Posted Under: Tech News
AWS’ CodeGuru uses machine learning to automate code reviews

Posted by on 3 December, 2019

AWS today announced CodeGuru, a new machine learning-based service that automates code reviews based on the data the company has gathered from doing code reviews internally.

Developers write their code and simply add CodeGuru to their pull requests. For the time being, it supports GitHub and CodeCommit. CodeGuru uses its knowledge of reviews from Amazon and roughly 10,000 open-source projects to find issues and then comments on the pull request as needed. It identifies the issues, of course, but it also suggests remediations and offers links to the relevant documentation.
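
For teams on CodeCommit, hooking a repository up to the reviewer side of the service can be done through the AWS SDK. The snippet below is a rough boto3 sketch, with my-service as a placeholder repository name.

```python
import boto3

# CodeGuru Reviewer watches a repository and comments on pull requests.
# This sketch associates an existing CodeCommit repository with the service;
# the repository name is a placeholder.
client = boto3.client("codeguru-reviewer", region_name="us-east-1")

association = client.associate_repository(
    Repository={"CodeCommit": {"Name": "my-service"}}
)

# Once the association completes, new pull requests in the repository
# trigger automated reviews; the association ARN identifies the link.
print(association["RepositoryAssociation"]["AssociationArn"])
print(association["RepositoryAssociation"]["State"])
```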

Encoded in CodeGuru are AWS’s own best practices. Among other things, it also finds concurrency issues, incorrect handling of resources, and issues with input validation.

AWS and Amazon’s consumer business have used the profiler part of CodeGuru internally for the last few years to find the ‘most expensive line of code.’ Even as some of the company’s applications grew over that period, some teams were able to increase their CPU utilization by over 325 percent while cutting costs by 36 percent.

Posted Under: Tech News
AWS AutoPilot gives you more visible AutoML in SageMaker Studio

Posted by on 3 December, 2019

Today at AWS re:Invent in Las Vegas, the company announced AutoPilot, a tool that gives you greater visibility into automated machine learning model creation. It is available as part of the new SageMaker Studio, which was also announced today.

As AWS CEO Andy Jassy pointed out on stage today, one of the problems with AutoML is that it’s basically a black box. “First they build this OK, simple model initially, but that is a total black box. If you want to improve a mediocre model or just evolve it for your business, you have no idea how it was built,” he explained.

The idea behind AutoPilot is to give you the ease of model creation you get from an AutoML-generated model, while also giving you much deeper insight into how the system built it. “AutoML in a way to create a model automatically, but give full visibility and control,” Jassy said.

You can look at the models’ parameters, see the 50 automatically generated models, and get a leaderboard of which ones performed best. You can also look at the notebook and see what trade-offs were made to generate the best model. It may be the most accurate, for example, but sacrifice speed to get there.

Your company may have its own unique requirements, and you can choose the best model based on whatever parameters you consider most important, even though the candidates were generated in an automated fashion.

With this insight and visibility into your automatically generated models, you can pick the best one based on your requirements. Once you have the model you like best, you can go into SageMaker Studio, select it and launch it with a single click.
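
The same workflow can also be driven outside Studio through the SageMaker API. Here is a rough boto3 sketch that launches an Autopilot job against tabular data in S3 and then pulls back the candidate leaderboard; the bucket paths, role ARN, job name and target column are all placeholders.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# Placeholder names -- substitute your own S3 paths, role and job name.
JOB_NAME = "autopilot-churn-demo"
ROLE_ARN = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"

# Launch an Autopilot job: point it at tabular training data in S3 and
# name the column it should learn to predict.
sm.create_auto_ml_job(
    AutoMLJobName=JOB_NAME,
    InputDataConfig=[{
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://my-bucket/churn/train/",
        }},
        "TargetAttributeName": "churned",
    }],
    OutputDataConfig={"S3OutputPath": "s3://my-bucket/churn/output/"},
    RoleArn=ROLE_ARN,
)

# Once the job has finished, list the candidate models it generated --
# this is the leaderboard surfaced in SageMaker Studio.
candidates = sm.list_candidates_for_auto_ml_job(
    AutoMLJobName=JOB_NAME,
    SortBy="FinalObjectiveMetricValue",
)
for c in candidates["Candidates"]:
    metric = c.get("FinalAutoMLJobObjectiveMetric", {})
    print(c["CandidateName"], metric.get("MetricName"), metric.get("Value"))
```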

Posted Under: Tech News
AWS speeds up Redshift queries 10x with AQUA

Posted by on 3 December, 2019

At its re:Invent conference, AWS CEO Andy Jassy today announced the launch of AQUA (the Advanced Query Accelerator) for Amazon Redshift, the company’s data warehousing service. As Jassy noted in his keynote, it’s hard to scale data warehouses when you want to do analytics over that data. At some point, as your data warehouse or lake grows, the data starts overwhelming your network or available compute, even with today’s high-speed networks and chips. To handle this, AQUA is essentially a hardware-accelerated cache that promises up to 10x better query performance than competing cloud-based data warehouses.

“Think about how much data you have to move over the network to get to your compute,” Jassy said. And if that’s not a problem for a company today, he added, it will likely become one soon, given how much data most enterprises now generate.

With this, Jassy explained, you’re bringing the compute power you need directly to the storage layer. The cache sits on top of Amazon’s standard S3 service and can hence scale out across as many nodes as needed.

AWS designed its own analytics processors to power this service and accelerate the data compression and encryption on the fly.

Unsurprisingly, the service is also 100% compatible with the current version of Redshift.

In addition, AWS today announced next-generation compute instances for Redshift, the RA3 instances, with 48 vCPUs, 384 GiB of memory and up to 64 TB of storage. You can build clusters of up to 128 of these instances.
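
For context, provisioning one of the new RA3 clusters looks like any other Redshift cluster creation call. A minimal boto3 sketch, with placeholder identifiers and credentials:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Placeholder identifiers and credentials -- substitute your own.
redshift.create_cluster(
    ClusterIdentifier="analytics-ra3",
    ClusterType="multi-node",
    NodeType="ra3.16xlarge",      # the 48 vCPU / 384 GiB configuration described above
    NumberOfNodes=4,              # clusters can scale up to 128 RA3 instances
    MasterUsername="admin",
    MasterUserPassword="REPLACE_WITH_A_SECRET",
    DBName="warehouse",
)
```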

Posted Under: Tech News
CircleCI launches improved AWS support

Posted by on 2 December, 2019

For about a year now, continuous integration and delivery service CircleCI has offered Orbs, a way to easily reuse commands and integrations with third-party services. Unsurprisingly, some of the most popular Orbs focus on AWS, as that’s where most of the company’s developers are either testing their code or deploying it. Today, right in time for AWS’s annual re:Invent developer conference in Las Vegas, the company announced that it has now added Orb support for the AWS Serverless Application Model (SAM), which makes setting up automated CI/CD platforms for testing and deploying to AWS Lambda significantly easier.

In total, the company says, more than 11,000 organizations have started using Orbs since the feature launched a year ago. Among the AWS-centric Orbs are those for building and updating images for Amazon Elastic Container Service (ECS) and the Elastic Container Service for Kubernetes (EKS), for example, as well as AWS CodeDeploy support, an Orb for installing and configuring the AWS command-line interface, an Orb for working with the S3 storage service and more.

“We’re just seeing a momentum of more and more companies being ready to adopt [managed services like Lambda, ECS and EKS], so this became really the ideal time to do most of the work with the product team at AWS that manages their serverless ecosystem and to add in this capability to leverage that serverless application model and really have this out-of-the-box CI/CD flow ready for users who wanted to start adding these into Lambda,” CircleCI VP of business development Tom Trahan told me. “I think when Lambda was in its earlier days, a lot of people would use it and not necessarily follow the same software patterns and delivery flow that they might have with their traditional software. As they put more and more into Lambda and are really putting a lot more of what I would call ‘production quality code’ out there, they realize they do want to have that same software delivery capability and discipline for Lambda as well.”

Trahan stressed that he’s still talking about early adopters and companies that started out as cloud-native companies, but these days, this group includes a lot of traditional companies, as well, that are now rapidly going through their own digital transformations.

Posted Under: Tech News
New Amazon tool simplifies delivery of containerized machine learning models

Posted by on 2 December, 2019

As part of the flurry of announcements coming this week out of AWS re:Invent, Amazon announced the release of Amazon SageMaker Operators for Kubernetes, a way for data scientists and developers to simplify training, tuning and deploying containerized machine learning models.

Packaging machine learning models in containers can help put them to work inside organizations faster, but getting there often requires a lot of extra management to make it all work. Amazon SageMaker Operators for Kubernetes is supposed to make it easier to run and manage those containers, the underlying infrastructure needed to run the models, and the workflows associated with all of it.

“While Kubernetes gives customers control and portability, running ML workloads on a Kubernetes cluster brings unique challenges. For example, the underlying infrastructure requires additional management such as optimizing for utilization, cost and performance; complying with appropriate security and regulatory requirements; and ensuring high availability and reliability,” AWS’ Aditya Bindal wrote in a blog post introducing the new feature.

When you combine that with the workflows associated with delivering a machine learning model inside an organization at scale, it becomes part of a much bigger delivery pipeline, one that is challenging to manage across departments and a variety of resource requirements.

This is precisely what Amazon SageMaker Operators for Kubernetes has been designed to help DevOps teams do. “Amazon SageMaker Operators for Kubernetes bridges this gap, and customers are now spared all the heavy lifting of integrating their Amazon SageMaker and Kubernetes workflows. Starting today, customers using Kubernetes can make a simple call to Amazon SageMaker, a modular and fully-managed service that makes it easier to build, train, and deploy machine learning (ML) models at scale,” Bindal wrote.

The promise of Kubernetes is that it can orchestrate the delivery of containers at the right moment, but if you haven’t automated delivery of the underlying infrastructure, you can over (or under) provision and not provide the correct amount of resources required to run the job. That’s where this new tool combined with SageMaker can help.
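
Concretely, the operators expose SageMaker jobs as Kubernetes custom resources, so a training run becomes just another object you submit to the cluster. The sketch below uses the official Kubernetes Python client to create such a resource; the API group, field names and values shown are illustrative assumptions rather than the operator’s exact schema, so check the installed CRDs before relying on them.

```python
from kubernetes import client, config

# Load kubeconfig for the cluster where the SageMaker operators are installed.
config.load_kube_config()

# A TrainingJob custom resource; the group/version and field names below
# mirror the operator's CRD as an illustrative assumption -- consult the
# installed CRDs for the exact schema. All ARNs, images and paths are placeholders.
training_job = {
    "apiVersion": "sagemaker.aws.amazon.com/v1",
    "kind": "TrainingJob",
    "metadata": {"name": "xgboost-demo"},
    "spec": {
        "roleArn": "arn:aws:iam::123456789012:role/SageMakerExecutionRole",
        "region": "us-east-1",
        "algorithmSpecification": {
            "trainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
            "trainingInputMode": "File",
        },
        "resourceConfig": {
            "instanceType": "ml.m5.xlarge",
            "instanceCount": 1,
            "volumeSizeInGB": 30,
        },
        "outputDataConfig": {"s3OutputPath": "s3://my-bucket/output/"},
        "stoppingCondition": {"maxRuntimeInSeconds": 3600},
    },
}

# Submit the resource; the operator then calls SageMaker on your behalf,
# so the job shows up alongside other Kubernetes objects via kubectl while
# SageMaker provisions and runs the underlying training infrastructure.
api = client.CustomObjectsApi()
api.create_namespaced_custom_object(
    group="sagemaker.aws.amazon.com",
    version="v1",
    namespace="default",
    plural="trainingjobs",
    body=training_job,
)
```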

“With workflows in Amazon SageMaker, compute resources are pre-configured and optimized, only provisioned when requested, scaled as needed, and shut down automatically when jobs complete, offering near 100% utilization,” Bindal wrote.

Amazon SageMaker Operators for Kubernetes are available today in select AWS regions.

Posted Under: Tech News
Box looks to balance growth and profitability as it matures

Posted by on 27 November, 2019

Prevailing wisdom states that as an enterprise SaaS company evolves, there’s a tendency to sacrifice profitability for growth — understandably so, especially in the early days of the company. At some point, however, a company needs to become profitable.

Box has struggled to reach that goal since going public in 2015, but yesterday, it delivered a mostly positive earnings report. Wall Street seemed to approve, with the stock up 6.75% as we published this article.

Box CEO Aaron Levie says the goal moving forward is to find a better balance between growth and profitability. In his post-report call with analysts, Levie pointed to some positive numbers.

“As we shared in October [at BoxWorks], we are focused on driving a balance of long-term growth and improved profitability as measured by the combination of revenue growth plus free cash flow margin. On this combined metric, we expect to deliver a significant increase in FY ’21 to at least 25% and eventually reaching at least 35% in FY ’23,” Levie said.

Growing the platform

Part of the maturation and drive to profitability is spurred by the fact that Box now has a more complete product platform. While many struggle to understand the company’s business model, it provides content management in the cloud, modernizing that aspect of enterprise software. As a result, there are few pure-play content management vendors that can do what Box does in a cloud context.

Posted Under: Tech News
Xerox tells HP it will bring takeover bid directly to shareholders

Posted by on 26 November, 2019

Xerox fired the latest volley in the Xerox-HP merger letter wars today. Xerox CEO John Visentin wrote to the HP Board that his company planned to take its $33.5 billion offer directly to HP shareholders.

He began his letter with a hostile tone befitting a hostile takeover attempt, stating that HP’s refusal to negotiate defied logic. “We have put forth a compelling proposal – one that would allow HP shareholders to both realize immediate cash value and enjoy equal participation in the substantial upside expected to result from a combination. Our offer is neither ‘highly conditional’ nor ‘uncertain’ as you claim,” Visentin wrote in his letter.

He added, “We plan to engage directly with HP shareholders to solicit their support in urging the HP Board to do the right thing and pursue this compelling opportunity.”

The letter was in response to one yesterday from HP in which it turned down Xerox’s latest overture, stating that the deal seemed beyond Xerox’s ability to afford it. It called into question Xerox’s current financial situation, citing Xerox’s own financial reports, and took exception to the way in which Xerox was courting the company.

“It is clear in your aggressive words and actions that Xerox is intent on forcing a potential combination on opportunistic terms and without providing adequate information,” the company wrote.

Visentin fired back in his letter, “While you may not appreciate our ‘aggressive’ tactics, we will not apologize for them. The most efficient way to prove out the scope of this opportunity with certainty is through mutual due diligence, which you continue to refuse, and we are obligated to require.”

He further pulled no punches, writing that he believes the deal is good for both companies and good for their shareholders. “The potential benefits of a combination between HP and Xerox are self-evident. Together, we could create an industry leader – with enhanced scale and best-in-class offerings across a complete product portfolio — that will be positioned to invest more in innovation and generate greater returns for shareholders.”

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy, thinks HP ultimately has the upper hand in this situation. “I feel like we have seen this movie before when Carl Icahn meddled with Dell in a similar way. Xerox is a third of the size of HP Inc., has been steadily declining in revenue, is running out of options, and needs HP more than HP needs it.”

It would seem Xerox has chosen a no-holds-barred approach to the situation. The pen is now in HP’s hands as we await the next letter and see how the printing giant intends to respond to the latest missive from Xerox.

Posted Under: Tech News