All posts by Richy George

Cove.Tool wants to solve climate change one efficient building at a time

Posted on 4 December, 2018


As the fight against climate change heats up, Cove.Tool is looking to help tackle carbon emissions one building at a time.

The Atlanta-based startup provides an automated big-data platform that helps architects, engineers and contractors identify the most cost-effective ways to make buildings compliant with energy efficiency requirements. After raising an initial round earlier this year, the company completed the final close of a $750,000 seed round. Since the initial announcement of the round earlier this month, Urban Us, the early-stage fund focused on companies transforming city life, has joined the syndicate comprising Tech Square Labs and Knoll Ventures.

Helping firms navigate a growing suite of energy standards and options

Cove.Tool software allows building designers and managers to plug in a variety of building conditions, energy options, and zoning specifications to get to the most cost-effective method of hitting building energy efficiency requirements (Cove.Tool Press Image / Cove.Tool / https://covetool.com).

In the US, the buildings we live and work in contribute more carbon emissions than any other sector. Governments across the country are now looking to improve energy consumption habits by implementing new building codes that set higher energy efficiency requirements for buildings. 

However, figuring out the best ways to meet changing energy standards has become an increasingly difficult task for designers. For one, buildings are subject to differing federal, state and city codes that are all frequently updated and overlaid on one another. Therefore, the specific efficiency requirements for a building can be hard to understand, geographically unique and immensely variable from project to project.

Architects, engineers and contractors also have more options for managing energy consumption than ever before – equipped with tools like connected devices, real-time energy-management software and more-affordable renewable energy resources. And the effectiveness and cost of each resource are also impacted by variables distinct to each project and each location, such as local conditions, resource placement, and factors as specific as the amount of shade a building sees.

With designers and contractors facing countless resource combinations and weightings, Cove.Tool looks to make it easier to identify and implement the most cost-effective and efficient resource bundles that can be used to hit a building’s energy efficiency requirements.

Cove.Tool users begin by specifying a variety of project-specific inputs, which can include a vast amount of extremely granular detail around a building’s use, location, dimensions or otherwise. The software runs the inputs through a set of parametric energy models before spitting out the optimal resource combination under the set parameters.
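
To make that concrete, here is a rough sketch of the kind of brute-force parametric sweep this approach implies: enumerate the option combinations, keep the ones that hit the energy target, and pick the cheapest. Every option name and number below is invented for illustration and is not Cove.Tool's actual model.

```python
from itertools import product

# Hypothetical sketch of a parametric sweep over design options: each option is
# (name, added cost in dollars, fractional energy savings). Numbers are made up.
OPTIONS = {
    "windows":    [("standard", 0, 0.00), ("triple_glazed", 12_000, 0.05)],
    "insulation": [("code_minimum", 0, 0.00), ("high_r_value", 30_000, 0.12)],
    "hvac":       [("premium", 80_000, 0.10), ("right_sized", 55_000, 0.06)],
}

BASELINE_EUI = 100.0  # baseline energy use intensity of the design (made up)
TARGET_EUI = 85.0     # the efficiency requirement the combination must meet

def best_combination():
    best = None
    for combo in product(*OPTIONS.values()):
        cost = sum(option_cost for _, option_cost, _ in combo)
        eui = BASELINE_EUI * (1 - sum(savings for _, _, savings in combo))
        if eui <= TARGET_EUI and (best is None or cost < best[0]):
            best = (cost, eui, [name for name, _, _ in combo])
    return best

print(best_combination())  # cheapest mix of options that still hits the target
```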

For example, if a project is located on a site with heavy wind flow in a cold city, the platform might tell you to increase window size and spend on energy-efficient wall insulation, while reducing spending on HVAC systems. Along with its recommendations, Cove.Tool provides in-depth but fairly easy-to-understand graphical analyses that illustrate various aspects of a building’s energy performance under different scenarios and sensitivities.

Cove.Tool users can input granular project-specifics, such as shading from particular beams and facades, to get precise analyses around a building’s energy performance under different scenarios and sensitivities.

Democratizing building energy modeling

Traditionally, the design process for a building’s energy system can be quite painful for architecture and engineering firms.

An architect would send initial building designs to engineers, who then test out a variety of energy system scenarios over the course of a few weeks. By the time the engineers are able to come back with an analysis, the architects have often made significant design changes, which then get sent back to the engineers, forcing the energy plan to be constantly one to three months behind the rest of the design. This process can not only lead to less-efficient and more-expensive energy infrastructure, but the hectic back-and-forth can also lead to longer project timelines, unexpected construction issues, delays and budget overruns.

Cove.Tool effectively looks to automate the process of “energy modeling.” Energy modeling aims to ease the pains of energy design in the same way Building Information Modeling (BIM) has transformed architectural design and construction. Just as BIM creates predictive digital simulations that test all the design attributes of a project, energy modeling uses building specs, environmental conditions, and various other parameters to simulate a building’s energy efficiency, costs and footprint.

By using energy modeling, developers can optimize the design of the building’s energy system, adjust plans in real-time, and more effectively manage the construction of a building’s energy infrastructure. However, the expertise needed for energy modeling falls outside the comfort zones of many firms, who often have to outsource the task to expensive consultants.

The frustrations of energy system design and the complexities of energy modeling are ones the Cove.Tool team knows well. Patrick Chopson and Sandeep Ahuja, two of the company’s three co-founders, are former architects who worked as energy modeling consultants when they first began building out the Cove.Tool software.

After seeing their clients’ initial excitement over the ability to quickly analyze millions of combinations and instantly identify the ones that produce cost and energy savings, Patrick and Sandeep teamed up with CTO Daniel Chopson and focused full-time on building out a comprehensive automated solution that would allow firms to run energy-modeling analyses more quickly, without costly consultants, and through an interface easy enough for an architectural intern to use.

So far there seems to be serious demand for the product, with the company already boasting an impressive roster of customers that includes several of the country’s largest architecture firms, such as HGA, HKS and Cooper Carry. And the platform has delivered compelling results – for example, one residential developer was able to identify energy solutions that cost $2 million less than the building’s original model. With the funds from its seed round, Cove.Tool plans to further enhance its sales efforts while continuing to develop additional features for the platform.

Changing decision-making and fighting climate change

The value proposition Cove.Tool hopes to offer is clear – the company wants to make it easier, faster and cheaper for firms to use innovative design processes that help identify the most cost-effective and energy-efficient solutions for their buildings, all while reducing the risks of redesign, delay and budget overruns.

Longer-term, the company hopes that it can help the building industry move towards more innovative project processes and more informed decision-making while making a serious dent in the fight against emissions.

“We want to change the way decisions are made. We want decisions to move away from being just intuition to become more data-driven,” the co-founders told TechCrunch.

“Ultimately we want to help stop climate change one building at a time. Stopping climate change is such a huge undertaking but if we can change the behavior of buildings it can be a bit easier. Architects and engineers are working hard but they need help and we need to change.”

Posted Under: Tech News
Microsoft and Docker team up to make packaging and running cloud-native applications easier

Posted on 4 December, 2018


Microsoft and Docker today announced a new joint open-source project, the Cloud Native Application Bundle (CNAB), that aims to make the lifecycle management of cloud-native applications easier. At its core, the CNAB is nothing but a specification that allows developers to declare how an application should be packaged and run. With this, developers can define their resources and then deploy the application to anything from their local workstation to public clouds.
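
To give a sense of what “declaring how an application should be packaged and run” might look like, here is a rough sketch of a bundle declaration written as a Python dict. The field names loosely follow the early draft of the spec and should be read as illustrative rather than authoritative; the image names are made up.

```python
# Rough sketch of the metadata a CNAB bundle declares, written as a Python dict
# for readability. Field names are illustrative, not the final specification.
bundle = {
    "name": "helloworld",
    "version": "0.1.0",
    "description": "A sample cloud-native application bundle",
    # The invocation image packages the installer logic that the runtime calls
    # for the lifecycle actions (install, upgrade, uninstall).
    "invocationImages": [
        {"imageType": "docker", "image": "example/helloworld-installer:0.1.0"},
    ],
    # Parameters let one bundle target anything from a laptop to a public cloud.
    "parameters": {
        "region": {"type": "string", "defaultValue": "us-west-2"},
    },
}

print(f"{bundle['name']} {bundle['version']}: "
      f"{len(bundle['invocationImages'])} invocation image(s)")
```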

The specification was born inside Microsoft, but as the team talked to Docker, it turns out that the engineers there were working on a similar project. The two decided to combine forces and launch the result as a single open-source project. “About a year ago, we realized we’re both working on the same thing,” Microsoft’s Gabe Monroy told me. “We decided to combine forces and bring it together as an industry standard.”

As part of this launch, Microsoft is launching its own reference implementation of a CNAB client today. Duffle, as it’s called, allows users to perform all the usual lifecycle steps (install, upgrade, uninstall), create new CNAB bundles and sign them cryptographically. Docker is working on integrating CNAB into its own tools, too.

Microsoft also today launched a Visual Studio extension for building and hosting these bundles, as well as an example implementation of a bundle repository server and an Electron installer that lets you install a bundle with the help of a GUI.

Now it’s worth noting that we’re talking about a specification and reference implementations here. There is obviously a huge ecosystem of lifecycle management tools on the market today that all have their own strengths and weaknesses. “We’re not going to be able to unify that tooling,” said Monroy. “I don’t think that’s a feasible goal. But what we can do is we can unify the model around it, specifically the lifecycle management experience as well as the packaging and distribution experience. That’s effectively what Docker has been able to do with the single-workload case.”

Over time, Microsoft and Docker would like the specification to end up in a vendor-neutral foundation. Which one remains to be seen, though the Open Container Initiative seems like the natural home for a project like this.

Posted Under: Tech News
FortressIQ raises $12M to bring new AI twist to process automation

Posted on 4 December, 2018


FortressIQ, a startup that wants to bring a new kind of artificial intelligence to process automation called imitation learning, emerged from stealth this morning and announced it has raised $12 million.

The Series A investment came entirely from a single venture capital firm, Lightspeed Venture Partners. Today’s funding comes on top of $4 million in seed capital the company raised previously from Boldstart Ventures, Comcast Ventures and Eniac Ventures.

Pankaj Chowdhry, founder and CEO of FortressIQ, says that his company essentially replaces high-cost consultants who are paid to do time-and-motion studies, automating that process in a fairly creative way. It’s a bit like Robotic Process Automation (RPA), a space that is attracting a lot of investment right now, but instead of simply recording what’s happening on the desktop and reproducing that digitally, it takes things a step further with a process called “imitation learning.”

“We want to be able to replicate human behavior through observation. We’re targeting this idea of how can we help people understand their processes. But imitation learning is, I think, the most interesting area of artificial intelligence because it focuses not on what AI can do, but how can AI learn and adapt,” he explained.

They start by capturing a low-bandwidth movie of the process. “So we build virtual processors. And basically the idea is we have an agent that gets deployed by your enterprise IT group, and it integrates into the video card,” Chowdhry explained.

He points out that it’s not actually using a camera, but it captures everything going on, as a person interacts with a Windows desktop. In that regard it’s similar to RPA. “The next component is our AI models and computer vision. And we build these models that can literally watch the movie and transcribe the movie into what we call a series of software interactions,” he said.

Another key differentiator here is that they have built a data mining component on top of this, so if the person in the movie is doing something like booking an invoice, and stops to check email or Slack, FortressIQ can understand when an activity isn’t part of the process and filters that out automatically.
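
Conceptually, that filtering step can be sketched in a few lines: given a transcribed stream of desktop interactions, drop the events that do not belong to the process being mined. The app names and the simple allow-list heuristic below are invented for illustration and are far cruder than FortressIQ's actual models.

```python
# Purely conceptual sketch of the filtering step described above: given a
# transcribed stream of desktop interactions, drop events from applications
# that are not part of the business process being mined.
PROCESS_APPS = {"sap_invoicing", "excel"}

events = [
    {"app": "sap_invoicing", "action": "open_invoice", "t": 0},
    {"app": "outlook", "action": "read_email", "t": 1},    # off-process detour
    {"app": "slack", "action": "send_message", "t": 2},    # off-process detour
    {"app": "sap_invoicing", "action": "post_invoice", "t": 3},
]

process_trace = [e for e in events if e["app"] in PROCESS_APPS]
print(process_trace)  # the detours to email and Slack are filtered out
```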

The product will be offered as a cloud service. Chowdhry’s previous company Third Pillar Systems was acquired by Genpact in 2013.

Posted Under: Tech News
Fivetran announces $15M Series A to build automated data pipelines

Posted on 4 December, 2018


Fivetran, a startup that builds automated data pipelines between data repositories and cloud data warehouses and analytics tools, announced a $15 million Series A investment led by Matrix Partners.

Fivetran helps move data from source repositories like Salesforce and NetSuite to data warehouses like Snowflake or analytics tools like Looker. Company CEO and co-founder George Fraser says that the automation is the key differentiator here between his company and competitors like Informatica and SnapLogic.

“What makes Fivetran different is that it’s an automated data pipeline to basically connect all your sources. You can access your data warehouse, and all of the data just appears and gets kept updated automatically,” Fraser explained. While he acknowledges that there is a great deal of complexity behind the scenes to drive that automation, he stresses that his company is hiding that complexity from the customer.
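
Conceptually, that “data just appears” experience boils down to an automated extract-and-load loop: pull whatever changed at the source since the last sync cursor and upsert it into the warehouse. The sketch below uses SQLite as a stand-in warehouse and a stubbed source API; none of it is Fivetran's actual code or API.

```python
import sqlite3

# Conceptual sketch of an automated extract-and-load sync: pull rows changed at
# the source since the last cursor and upsert them into a warehouse table.
warehouse = sqlite3.connect(":memory:")
warehouse.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)"
)

def fetch_changed_rows(since):
    # Placeholder for a call to a source system such as Salesforce or NetSuite.
    return [
        {"id": 1, "name": "Acme", "updated_at": "2018-12-04T10:00:00Z"},
        {"id": 2, "name": "Globex", "updated_at": "2018-12-04T11:30:00Z"},
    ]

def sync(cursor):
    rows = fetch_changed_rows(cursor)
    for row in rows:
        warehouse.execute(
            "INSERT OR REPLACE INTO accounts (id, name, updated_at) "
            "VALUES (:id, :name, :updated_at)",
            row,
        )
    warehouse.commit()
    # Advance the cursor so the next run only asks for newer changes.
    return max(r["updated_at"] for r in rows) if rows else cursor

cursor = sync("2018-12-01T00:00:00Z")
print(cursor, warehouse.execute("SELECT COUNT(*) FROM accounts").fetchone())
```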

The company launched out of Y Combinator in 2012, and other than $4 million in seed funding along the way, it has relied solely on revenue up until now. That’s a rather refreshing approach to running an enterprise startup, which typically requires piles of cash to build out sales and marketing organizations to compete with the big guys they are trying to unseat.

One of the key reasons that they’ve been able to take this approach has been the company’s partner strategy. Having the ability to get data into another company’s solution with a minimum of fuss and expense has attracted data-hungry applications. In addition to the previously mentioned Snowflake and Looker, the company counts Google BigQuery, Microsoft Azure, Amazon Redshift, Tableau, Periscope Data, Salesforce, NetSuite and PostgreSQL as partners.

Ilya Sukhar, general partner at Matrix Partners, who will be joining the Fivetran board under the terms of the deal, sees a lot of potential here. “We’ve gone from companies talking about the move to the cloud to preparing to execute their plans, and the most sophisticated are making Fivetran, along with cloud data warehouses and modern analysis tools, the backbone of their analytical infrastructure,” Sukhar said in a statement.

They currently have 100 employees spread out across four offices in Oakland, Denver, Bangalore and Dublin. They boast 500 customers using their product, including Square, WeWork, Vice Media and Lime Scooters, among others.

Posted Under: Tech News
Forethought scores $9M Series A in wake of Battlefield win

Posted on 4 December, 2018


It’s been a whirlwind few months for Forethought, a startup with a new way of looking at enterprise search that relies on artificial intelligence. In September, the company took home the TechCrunch Disrupt Battlefield trophy in San Francisco, and today it announced a $9 million Series A investment.

It’s pretty easy to connect the dots between the two events. CEO and co-founder Deon Nicholas said they’ve seen a strong uptick in interest since the win. “Thanks to TechCrunch Disrupt, we have had a lot of things going on including a bunch of new customer interest, but the biggest news is that we’ve raised our $9 million Series A round,” he told TechCrunch.

The investment was led by NEA, with K9 Ventures, Village Global and several angel investors also participating. The angel crew includes Front CEO Mathilde Collin, Robinhood CEO Vlad Tenev and LearnVest CEO Alexa von Tobel.

Forethought aims to change conventional enterprise search by shifting from the old keyword kind of approach to using artificial intelligence underpinnings to retrieve the correct information from a corpus of documents.

“We don’t work on keywords. You can ask questions without keywords and using synonyms to help understand what you actually mean, we can actually pull out the correct answer [from the content] and deliver it to you,” Nicholas told TechCrunch in September.
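
The difference from keyword search is easiest to see with a toy example: if words are mapped into a vector space where synonyms land close together, a question that shares no keywords with a document can still retrieve it. Everything below, including the hand-made word vectors, is invented for illustration and is not Forethought's technology.

```python
import math

# Toy illustration of meaning-based retrieval versus keyword matching: synonyms
# ("reset"/"change", "password"/"credentials") are given nearby vectors, and
# documents are ranked by cosine similarity to the question.
VECTORS = {
    "reset": [1.0, 0.0], "change": [0.9, 0.1],
    "password": [0.0, 1.0], "credentials": [0.1, 0.9],
    "refund": [-1.0, 0.0], "policy": [0.0, -1.0],
}

def embed(text):
    vecs = [VECTORS[w] for w in text.lower().split() if w in VECTORS]
    return [sum(c) / len(vecs) for c in zip(*vecs)] if vecs else [0.0, 0.0]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

docs = ["How to reset your password", "Our refund policy"]
question = "change my credentials"   # shares no keywords with the right doc
best = max(docs, key=lambda d: cosine(embed(question), embed(d)))
print(best)  # -> "How to reset your password"
```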

He points out that it’s still early days for the company. It had been in stealth for a year before launching at TechCrunch Disrupt in September. Since the event, the three co-founders have brought on six additional employees and they will be looking to hire more in the next year, especially around machine learning and product and UX design.

At launch, the product could be embedded in Salesforce and Zendesk, but the company is looking to expand beyond that.

The company is concentrating on customer service for starters, but with the new money in hand, it intends to begin looking at other areas in the enterprise that could benefit from a smart information retrieval system. “We believe that this can expand beyond customer support to general information retrieval in the enterprise,” Nicholas said.

Posted Under: Tech News
AWS wants to rule the world

Posted on 2 December, 2018


AWS, once a nice little side hustle for Amazon’s eCommerce business, has grown over the years into a behemoth that’s on a $27 billion run rate, one that’s still growing at around 45 percent a year. That’s a highly successful business by any measure, but as I listened to AWS executives last week at their AWS re:Invent conference in Las Vegas, I didn’t hear a group that was content to sit still and let the growth speak for itself. Instead, I heard one that wants to dominate every area of enterprise computing.

Whether it was hardware like the new Inferentia chip, Outposts (the new on-prem servers), blockchain or a base station service for satellites, if AWS saw an opportunity, they were not ceding an inch to anyone.

Last year, AWS announced an astonishing 1,400 new features, and word was that they are on pace to exceed that this year. They get a lot of credit for not resting on their laurels and continuing to innovate like a much smaller company, even as they own gobs of market share.

The feature inflation probably can’t go on forever, but for now at least they show no signs of slowing down, as the announcements came at a furious pace once again. While they will tell you that every decision they make is about meeting customer needs, it’s clear that some of these announcements were also about answering competitive pressure.

Going after competitors harder

In the past, AWS kept criticism of competitors to a minimum, maybe giving a little jab to Oracle, but this year they seemed to ratchet it up. In their keynotes, AWS CEO Andy Jassy and Amazon CTO Werner Vogels continually flogged Oracle, a competitor in the database market but hardly a major threat as a cloud company right now.

They went right for Oracle’s market, though, with a new on-prem system called Outposts, which allows AWS customers to operate on premises and in the cloud using a single AWS control panel, or one from VMware if customers prefer. That is the kind of cloud vision that Larry Ellison might have put forth, but Jassy didn’t necessarily see it as going after Oracle or anyone else. “I don’t see Outposts as a shot across the bow of anyone. If you look at what we are doing, it’s very much informed by customers,” he told reporters at a press conference last week.

AWS CEO Andy Jassy at a press conference at AWS re:Invent last week.

Yet AWS didn’t reserve its criticism just for Oracle. It also took aim at Microsoft, taking jabs at Microsoft SQL Server, and also announcing Amazon FSx for Windows File Server, a tool specifically designed to move Microsoft files to the AWS cloud.

Google wasn’t spared either: the launch of Inferentia and Elastic Inference put Google on notice that AWS wasn’t going to yield the AI market to Google’s TPU infrastructure. All of these tools, and many more, were about more than answering customer demand; they were about putting the competition on notice in every aspect of enterprise computing.

Upward growth trajectory

The cloud market is continuing to grow at a dramatic pace, and as market leader, AWS has been able to take advantage of its market dominance to this point. Jassy, echoing Google’s Diane Greene and Oracle’s Larry Ellison, says the industry as a whole is still really early in terms of cloud adoption, which means there is still plenty of marketshare left to capture.

“I think we’re just in the early stages of enterprise and public sector adoption in the US. Outside the US I would say we are 12-36 months behind. So there are a lot of mainstream enterprises that are just now starting to plan their approach to the cloud,” Jassy said.

Patrick Moorhead, founder and principal analyst at Moor Insights & Strategy says that AWS has been using its market position to keep expanding into different areas. “AWS has the scale right now to do many things others cannot, particularly lesser players like Google Cloud Platform and Oracle Cloud. They are trying to make a point with the thousands of new products and features they bring out. This serves as a disincentive longer-term for other players, and I believe will result in a shakeout,” he told TechCrunch.

As for the frenetic pace of innovation, Moorhead believes it can’t go on forever. “To me, the question is, when do we reach a point where 95% of the needs are met, and the innovation rate isn’t required. Every market, literally every market, reaches a point where this happens, so it’s not a matter of if but when,” he said.

Certainly, areas like the AWS Ground Station announcement showed that AWS was willing to expand beyond the conventional confines of enterprise computing and into outer space to help companies process satellite data. This ability to think beyond traditional uses of cloud computing resources shows a level of creativity that suggests there could be other untapped markets for AWS that we haven’t yet imagined.

As AWS moves into more areas of the enterprise computing stack, whether on premises or in the cloud, they are showing their desire to dominate every aspect of the enterprise computing world. Last week they demonstrated that there is no area that they are willing to surrender to anyone.

more AWS re:Invent 2018 coverage

Posted Under: Tech News
DoJ charges Autonomy founder with fraud over $11BN sale to HP

Posted on 30 November, 2018


UK entrepreneur turned billionaire investor Mike Lynch has been charged with fraud in the US over the 2011 sale of his enterprise software company.

Lynch sold Autonomy, the big data company he founded back in 1996, to computer giant HP for around $11BN some seven years ago.

But within a year around three-quarters of the value of the business had been written off, with HP accusing Autonomy’s management of accounting misrepresentations and disclosure failures.

Lynch has always rejected the allegations, and after HP sought to sue him in UK courts he countersued in 2015.

Meanwhile the UK’s own Serious Fraud Office dropped an investigation into the Autonomy sale in 2015 — finding “insufficient evidence for a realistic prospect of conviction”.

But now the DoJ has filed charges in a San Francisco court, accusing Lynch and other senior Autonomy executives of making false statements that inflated the value of the company.

They face 14 counts of conspiracy and fraud, according to Reuters — charges that carry a maximum penalty of 20 years in prison.

We’ve reached out to Lynch’s fund, Invoke Capital, for comment on the latest development.

The BBC has obtained a statement from his lawyers, Chris Morvillo of Clifford Chance and Reid Weingarten of Steptoe & Johnson, which describes the indictment as “a travesty of justice”.

The statement also claims Lynch is being made a scapegoat for HP’s failures, framing the allegations as a business dispute over the application of UK accounting standards. 

Two years ago we interviewed Lynch on stage at TechCrunch Disrupt London and he mocked the morass of allegations still swirling around the acquisition as “spin and bullshit”.

Following the latest developments, the BBC reports that Lynch has stepped down as a scientific adviser to the UK government.

“Dr. Lynch has decided to resign his membership of the CST [Council for Science and Technology] with immediate effect. We appreciate the valuable contribution he has made to the CST in recent years,” a government spokesperson told it.

Posted Under: Tech News
Enterprise AR is an opportunity to “do well by doing good”, says General Catalyst

Posted on 30 November, 2018


A founder-investor panel on augmented reality (AR) technology here at TechCrunch Disrupt Berlin suggests growth hopes for the space have regrouped around enterprise use-cases, after the VR consumer hype cycle landed with yet another flop in the proverbial ‘trough of disillusionment’.

Matt Miesnieks, CEO of mobile AR startup 6d.ai, conceded the space has generally been on another downer but argued it’s coming out of its third hype cycle now with fresh b2b opportunities on the horizon.

6d.ai investor General Catalyst‘s Niko Bonatsos was also on stage, and both suggested the challenge for AR startups is figuring out how to build for enterprises so the b2b market can carry the mixed reality torch forward.

“From my point of view the fact that Apple, Google, Microsoft, have made such big commitments to the space is very reassuring over the long term,” said Miesnieks. “Similar to the smartphone industry ten years ago we’re just gradually seeing all the different pieces come together. And as those pieces mature we’ll eventually, over the next few years, see it sort of coalesce into an iPhone moment.”

“I’m still really positive,” he continued. “I don’t think anyone should be looking for some sort of big consumer hit product yet but in verticals in enterprise, and in some of the core tech enablers, some of the tool spaces, there’s really big opportunities there.”

Investors overshot the target where consumer VR/AR is concerned because they’d underestimated how challenging the content piece is, Bonatsos suggested.

“I think what we got wrong is probably the belief that we thought more indie developers would have come into the space and that by now we would probably have, I don’t know, another ten Pokémon-type consumer massive hit applications. This is not happening yet,” he said.

“I thought we’d have a few more games because games always lead the adoption to new technology platforms. But in the enterprise this is very, very exciting.”

“For sure also it’s clear that in order to have the iPhone moment we probably need to have much better hardware capabilities,” he added, suggesting everyone is looking to the likes of Apple to drive that forward in the future. On the plus side he said current sentiment is “much, much much better than what it was a year ago”.

Discussing potential b2b applications for AR tech one idea Miesnieks suggested is for transportation platforms that want to link a rider to the location of an on-demand and/or autonomous vehicle.

Another area of opportunity he sees is working with hardware companies — adding spatial awareness to devices such as smartphones and drones to expand their capabilities.

More generally they mentioned training for technical teams, field sales and collaborative use-cases as areas with strong potential.

“There are interesting applications in pharma, oil & gas where, with the aid of the technology, you can do very detailed stuff that you couldn’t do before because… you can follow everything on your screen and you can use your hands to do whatever it is you need to be doing,” said Bonatsos. “So that’s really, really exciting.

“These are some of the applications that I’ve seen. But it’s early days. I haven’t seen a lot of products in the space. It’s more like there’s one dev shop working with the chief innovation officer of one specific company that is much more forward thinking and they want to come up with a really early demo.

“Now we’re seeing some early stage tech startups that are trying to attack these problems. The good news is that good dollars is being invested in trying to solve some of these problems — and whoever figures out how to get dollars from the… bigger companies, these are real enterprise businesses to be built. So I’m very excited about that.”

At the same time, the panel delved into some of the complexities and social challenges facing technologists as they try to integrate blended reality into, well, the real deal.

Including raising the spectre of Black Mirror-style dystopia once smartphones can recognize and track moving objects in a scene — and 6d.ai’s tech shows that’s coming.

Miesnieks showed a brief video demo of 3D technology running live on a smartphone that’s able to identify cars and people moving through the scene in real time.

“Our team were able to solve this problem probably a year ahead of where the rest of the world is at. And it’s exciting. If we showed this to anyone who really knows 3D they’d literally jump out of the chair. But… it opens up all of these potentially unintended consequences,” he said.

“We’re wrestling with what might this be used for. Sure it’s going to make Pokémon game more fun. It could also let a blind person walk down the street and have awareness of cars and people and they may not need a cane or something.

“But it could let you like tap and literally have people be removed from your field of view and so you only see the type of people that you want to look at. Which can be dystopian.”

He pointed to issues being faced by the broader technology industry now, around social impacts and areas like privacy, adding: “We’re seeing some of the social impacts of how this stuff can go wrong, even if you assume good intentions.

“These sort of breakthroughs that we’re having are definitely causing us to be aware of the responsibility we have to think a bit more deeply about how this might be used for the things we didn’t expect.”

From the investor point of view Bonatsos said his thesis for enterprise AR has to be similarly sensitive to the world around the tech.

“It’s more about can we find the domain experts, people like Matt, that are going to do well by doing good. Because there are a tonne of different parameters to think about here and have the credibility in the market to make it happen,” he suggested, noting: “It‘s much more like traditional enterprise investing.”

“This is a great opportunity to use this new technology to do well by doing good,” Bonatsos continued. “So the responsibility is here from day one to think about privacy, to think about all the fake stuff that we could empower, what do we want to do, what do we want to limit? As well as, as we’re creating this massive, augmented reality, 3D version of the world — like who is going to own it, and share all this wealth? How do we make sure that there’s going to be a whole new ecosystem that everybody can take part of it. It’s very interesting stuff to think about.”

“Even if we do exactly what we think is right, and we assume that we have good intentions, it’s a big grey area in lots of ways and we’re going to make lots of mistakes,” conceded Miesnieks, after discussing some of the steps 6d.ai has taken to try to reduce privacy risks around its technology — such as local processing coupled with anonymizing/obfuscating any data that is taken off the phone.

“When [mistakes] happen — not if, when — all that we’re going to be able to rely on is our values as a company and the trust that we’ve built with the community by saying these are our values and then actually living up to them. So people can trust us to live up to those values. And that whole domain of startups figuring out values, communicating values and looking at this sort of abstract ‘soft’ layer — I think startups as an industry have done a really bad job of that.

“Even big companies. There’d be only a handful that you could say… are pretty clear on their values. But for AR and this emerging tech domain it’s going to be, ultimately, the core that people trust us.”

Bonatsos also pointed to rising political risk as a major headwind for startups in this space — noting how China’s government has decided to regulate the gaming market because of social impacts.

“That’s unbelievable. This is where we’re heading with the technology world right now. Because we’ve truly made it. We’ve become mainstream. We’re the incumbents. Anything we build has huge, huge intended and unintended consequences,” he said.

“Having a government that regulates how many games that can be built or how many games can be released — like that’s incredible. No company had to think of that before as a risk. But when people are spending so many hours and so much money on the tech products they are using every day. This is the [inevitable] next step.”

Posted Under: Tech News
New AWS tool helps customers understand best cloud practices

Posted on 29 November, 2018


Since 2015, AWS has had a team of solution architects working with customers to make sure they are using AWS services in a way that meets best practices around a set of defined criteria. Today, the company announced a new Well-Architected Tool that helps customers do this themselves in an automated way without the help of a human consultant.

As Amazon CTO Werner Vogels said in his keynote address at AWS re:Invent in Las Vegas, it’s hard to scale a human team inside the company to meet the needs of thousands of customers, especially when so many want to be sure they are complying with these best practices. He indicated that they even brought on a network of certified partners to help, but it still has not been enough to meet demand.

In typical AWS fashion, they decided to create a service to help customers measure how well they are doing in terms of operations, security, reliability, cost optimization and performance efficiency. Customers can run this tool against the AWS services they are using and get a full report of how they measure up against these five factors.

“I think of it as a way to make sure that you are using the cloud right, and that you are using it well,” Jeff Barr wrote in a blog post introducing the new service.

Instead of working with a human to analyze your systems, you answer a series of questions; when the process is complete, the tool generates a PDF report with all of the recommendations for your particular situation.
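
In spirit, the tool turns questionnaire answers into a per-pillar summary across the five areas listed above. The toy sketch below shows that shape only; the questions, answers and scoring are made up and do not reflect the real tool's schema or report format.

```python
# Toy sketch of turning questionnaire answers into a per-pillar summary across
# the five areas the article lists. Everything here is invented for illustration.
PILLARS = ["operational excellence", "security", "reliability",
           "cost optimization", "performance efficiency"]

answers = [
    {"pillar": "security", "question": "Do you rotate credentials?", "ok": True},
    {"pillar": "security", "question": "Is data encrypted at rest?", "ok": False},
    {"pillar": "reliability", "question": "Do you test recovery procedures?", "ok": True},
    {"pillar": "cost optimization", "question": "Do you right-size instances?", "ok": False},
]

report = {p: {"answered": 0, "needs_attention": []} for p in PILLARS}
for a in answers:
    entry = report[a["pillar"]]
    entry["answered"] += 1
    if not a["ok"]:
        entry["needs_attention"].append(a["question"])

for pillar, summary in report.items():
    print(f"{pillar}: {summary['answered']} answered, "
          f"{len(summary['needs_attention'])} recommendation(s)")
```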


While it’s doubtful that such an approach can be as comprehensive as a conversation between client and consultant, it is a starting point to at least get you on the road to thinking about such things, and as a free service, you have little to lose by at least trying the tool and seeing what it tells you.

more AWS re:Invent 2018 coverage

Posted Under: Tech News
AWS announces a slew of new Lambda features

Posted on 29 November, 2018


AWS launched Lambda in 2015 and with it helped popularize serverless computing. You simply write code (event triggers) and AWS deals with whatever compute, memory and storage you need to make that work. Today at AWS re:Invent in Las Vegas, the company announced several new features to make it more developer friendly, while acknowledging that even though serverless reduces complexity, it still requires more sophisticated tools as it matures.

It’s called serverless because you don’t have to worry about the underlying servers. The cloud vendors take care of all that for you, serving whatever resources you need to run your event and no more. It means you no longer have to worry about coding for all your infrastructure and you only pay for the computing you need at any given moment to make the application work.
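
For readers who have not written one, the event-trigger model looks like this in Python, one of the languages Lambda already supported: you supply a handler, and the platform provisions the compute, invokes the handler with the triggering event and tears everything down afterwards. The S3-shaped event below is just an illustrative trigger.

```python
import json

# Minimal event-triggered function in the shape AWS Lambda expects for Python:
# the handler receives the triggering event plus a context object, and returns
# a response. The event fields assume an S3 "object created" trigger purely
# for illustration.
def lambda_handler(event, context):
    records = event.get("Records", [])
    keys = [r["s3"]["object"]["key"] for r in records if "s3" in r]
    print(f"Processing {len(keys)} uploaded object(s): {keys}")
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": keys}),
    }
```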

The way AWS works is that it tends to release something, then builds more functionality on top of a base service as it sees increasing requirements as customers use it. As Werner Vogels pointed out in his keynote on Thursday, developers debate about tools and everyone has their own idea of what tools they bring to the task every day.

For starters, they decided to please the language folks by introducing support for new languages. Those developers who use Ruby can now use Ruby Support for AWS Lambda. “Now it’s possible to write Lambda functions as idiomatic Ruby code, and run them on AWS. The AWS SDK for Ruby is included in the Lambda execution environment by default,” Chris Munns from AWS wrote in a blog post introducing the new language support.

If C++ is your thing, AWS announced C++ Lambda Runtime. If neither of those match your programming language tastes, AWS opened it up for just about any language with the new Lambda Runtime API, which Danilo Poccia from AWS described in a blog post as “a simple interface to use any programming language, or a specific language version, for developing your functions.”

AWS didn’t want to stop with languages though. They also recognize that even though Lambda (and serverless in general) is designed to remove a level of complexity for developers, that doesn’t mean that all serverless applications consist of simple event triggers. As developers build more sophisticated serverless apps, they have to bring in system components and compose multiple pieces together, as Amazon CTO Werner Vogels explained in his keynote today.

To address this requirement, the company introduced Lambda Layers, which they describe as “a way to centrally manage code and data that is shared across multiple functions.” This could be custom code used by multiple functions or a way to share code used to simplify business logic.
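
A minimal sketch of that idea in Python: the shared helper lives once in a layer (for Python functions, code placed under the layer's python/ directory lands on the import path at runtime), and any function that attaches the layer imports it instead of bundling its own copy. The module and function names here are illustrative.

```python
# Sketch of the shared-code idea behind Layers. In practice the helper below
# would live in its own module inside the layer and each function would simply
# import it; it is inlined here so the example is self-contained.

def apply_discount(amount_cents, customer_tier):
    """One copy of the discount rule, maintained centrally in the layer."""
    rates = {"gold": 0.20, "silver": 0.10}
    return int(amount_cents * (1 - rates.get(customer_tier, 0.0)))

# Two otherwise independent Lambda functions reuse the same rule rather than
# each bundling its own copy of the business logic.
def create_invoice_handler(event, context):
    return {"invoice_total_cents": apply_discount(event["amount_cents"], event["tier"])}

def quote_handler(event, context):
    return {"quoted_cents": apply_discount(event["amount_cents"], event["tier"])}
```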

As Lambda matures, developer requirements grow and these announcements and others are part of trying to meet those needs.

more AWS re:Invent 2018 coverage

Posted Under: Tech News