Category Archives: Tech News

Diving into Google Cloud Next and the future of the cloud ecosystem

Posted on 14 April, 2019

This post was originally published on this site

Extra Crunch offers members the opportunity to tune into conference calls led and moderated by the TechCrunch writers you read every day. This week, TechCrunch’s Frederic Lardinois and Ron Miller offered up their analysis on the major announcements that came out of Google’s Cloud Next conference this past week, as well as their opinions on the outlook for the company going forward.

Google Cloud announced a series of products, packages and services that it believes will improve the company’s competitive position and differentiate itself from AWS and other peers. Frederic and Ron discuss all of Google’s most promising announcements, including its product for managing hybrid clouds, its new end-to-end AI platform, as well as the company’s heightened effort to improve customer service, communication, and ease-of-use.

“They have all of these AI and machine learning technologies, they have serverless technologies, they have containerization technologies — they have this whole range of technologies.

But it’s very difficult for the average company to take these technologies and know what to do with them, or to have the staff and the expertise to be able to make good use of them. So, the more they do things like this where they package them into products and make them much more accessible to the enterprise at large, the more successful that’s likely going to be because people can see how they can use these.

…Google does have thousands of engineers, and they have very smart people, but not every company does, and that’s the whole idea of the cloud. The cloud is supposed to take this stuff, put it together in such a way that you don’t have to be Google, or you don’t have to be Facebook, you don’t have to be Amazon, and you can take the same technology and put it to use in your company.”

Image via Bryce Durbin / TechCrunch

Frederic and Ron dive deeper into how the new offerings may impact Google’s market share in the cloud ecosystem and which verticals represent the best opportunity for Google to win. The two also dig into the future of open source in cloud and how they see customer use cases for cloud infrastructure evolving.

For access to the full transcription and the call audio, and for the opportunity to participate in future conference calls, become a member of Extra Crunch. Learn more and try it for free. 

OpenStack Stein launches with improved Kubernetes support

Posted on 12 April, 2019

The OpenStack project, which powers more than 75 public and thousands of private clouds, launched the 19th version of its software this week. You’d think that after 19 updates to the open-source infrastructure platform, there really isn’t all that much new the various project teams could add, given that we’re talking about a rather stable code base here. There are actually a few new features in this release, though, as well as all the usual tweaks and feature improvements you’d expect.

While the hype around OpenStack has died down, we’re still talking about a very active open-source project. On average, there were 155 commits per day during the Stein development cycle. As far as development activity goes, that keeps OpenStack on the same level as the Linux kernel and Chromium.

Unsurprisingly, a lot of that development activity focused on Kubernetes and the tools to manage these container clusters. With this release, the team behind the OpenStack Kubernetes installer brought the launch time for a cluster down from about 10 minutes to five, regardless of the number of nodes. To further enhance Kubernetes support, OpenStack Stein also includes updates to Neutron, the project’s networking service, which now makes it easier to create virtual networking ports in bulk as containers are spun up, and Ironic, the bare-metal provisioning service.

All of that is no surprise, given that, according to the project’s latest survey, 61 percent of OpenStack deployments now run Kubernetes and OpenStack in tandem.

The update also includes a number of new networking features that are mostly targeted at the many telecom users. Indeed, over the course of the last few years, telcos have emerged as some of the most active OpenStack users as these companies are looking to modernize their infrastructure as part of their 5G rollouts.

Besides the expected updates, though, there are also a few new and improved projects here that are worth noting.

“The trend from the last couple of releases has been on scale and stability, which is really focused on operations,” OpenStack Foundation executive director Jonathan Bryce told me. “The new projects — and really most of the new projects from the last year — have all been pretty oriented around real-world use cases.”

The first of these is Placement. “As people build a cloud and start to grow it and it becomes more broadly adopted within the organization, a lot of times, there are other requirements that come into play,” Bryce explained. “One of these things that was pretty simplistic at the beginning was how a request for a resource was actually placed on the underlying infrastructure in the data center.” But as users get more sophisticated, they often want to run specific workloads on machines with certain hardware requirements. These days, that’s often a specific GPU for a machine learning workload, for example. With Placement, that’s a bit easier now.
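The trait-aware matching Bryce describes can be sketched in a few lines. This is a conceptual illustration only, not the actual Placement API; the trait name "HAS_GPU" and the host records are made up for the example:

```python
# Toy sketch of the idea behind Placement: match a resource request,
# including required hardware traits, against a tracked inventory of
# hosts. Conceptual only; not the real Placement service or its API.

inventory = [
    {"host": "node1", "vcpus": 16, "traits": set()},
    {"host": "node2", "vcpus": 16, "traits": {"HAS_GPU"}},  # hypothetical trait name
]

def candidates(vcpus: int, required_traits: set) -> list:
    """Return hosts with enough capacity and all required traits."""
    return [
        h["host"]
        for h in inventory
        if h["vcpus"] >= vcpus and required_traits <= h["traits"]
    ]

# A generic request can land anywhere; a GPU workload only fits node2.
print(candidates(8, set()))        # ['node1', 'node2']
print(candidates(8, {"HAS_GPU"}))  # ['node2']
```

The point is that the request carries qualitative requirements (traits), not just quantities, so a machine learning workload can insist on GPU-equipped hardware.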

It’s worth noting that OpenStack had some of this functionality before. The team, however, decided to uncouple it from the existing compute service and turn it into a more generic service that could then also be used more easily beyond the compute stack, turning it more into a kind of resource inventory and tracking tool.

Then, there is also Blazar, a reservation service that offers OpenStack users something akin to AWS Reserved Instances. In a private cloud, the use case for a feature like this is a bit different, though. As some private clouds got bigger, some users found that they needed to be able to guarantee resources to run regular overnight batch jobs or data analytics workloads, for example.

As far as resource management goes, it’s also worth highlighting Sahara, which now makes it easier to provision Hadoop clusters on OpenStack.

In previous releases, one of the focus areas for the project was to improve the update experience. OpenStack is obviously a very complex system, so bringing it up to the latest version is also a bit of a complex undertaking. These improvements are now paying off. “Nobody even knows we are running Stein right now,” Vexxhost CEO Mohammed Nasar, who made an early bet on OpenStack for his service, told me. “And I think that’s a good thing. You want to be least impactful, especially when you’re in such a core infrastructure level. […] That’s something the projects are starting to become more and more aware of but it’s also part of the OpenStack software in general becoming much more stable.”

As usual, this release launched only a few weeks before the OpenStack Foundation hosts its twice-yearly Summit, this time in Denver. Since the OpenStack Foundation has expanded its scope beyond the OpenStack project, though, the event now also covers a broader range of topics around open-source infrastructure. It’ll be interesting to see how this changes the dynamics at the event.

Homeland Security warns of security flaws in enterprise VPN apps

Posted on 12 April, 2019

Several enterprise virtual private networking apps are vulnerable to a security bug that can allow an attacker to remotely break into a company’s internal network, according to a warning issued by Homeland Security’s cybersecurity division.

An alert was published Friday by the government’s Cybersecurity and Infrastructure Security Agency following a public disclosure by CERT/CC, the vulnerability disclosure center at Carnegie Mellon University.

The VPN apps built by four vendors — Cisco, Palo Alto Networks, Pulse Secure, and F5 Networks — improperly store authentication tokens and session cookies on a user’s computer. These aren’t your traditional consumer VPN apps used to protect your privacy, but enterprise VPN apps that are typically rolled out by a company’s IT staff to allow remote workers to access resources on a company’s network.

The apps generate tokens from a user’s password and store them on the computer to keep the user logged in without having to reenter the password every time. If stolen, these tokens can allow access to that user’s account without needing the password.

But with access to a user’s computer — such as through malware — an attacker could steal those tokens and use them to gain access to a company’s network with the same level of access as the user. That includes company apps, systems and data.
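The class of flaw here comes down to how session material lands on disk. A minimal, hypothetical sketch (not any vendor's actual code) of the difference restrictive file permissions make when persisting a token:

```python
import os
import stat
import tempfile

# Hypothetical sketch of the flaw class CERT/CC described: a VPN client
# persists a session token so the user stays logged in. If the file is
# readable by other local users, the token alone grants the victim's
# level of network access.

def save_token(path: str, token: str, secure: bool = True) -> None:
    """Write a session token to disk, optionally restricting permissions."""
    with open(path, "w") as f:
        f.write(token)
    if secure:
        # Owner read/write only (0600). A mitigation, not a fix: malware
        # running as the same user can still read the file.
        os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "session.token")
    save_token(path, "example-session-token")
    mode = stat.S_IMODE(os.stat(path).st_mode)
    print(oct(mode))  # 0o600 on POSIX: only the owner can read the token
```

Even with tight permissions, a plaintext token on disk remains a target, which is why patching the storage behavior (as Palo Alto Networks did) matters more than permissions alone.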

So far, only Palo Alto Networks has confirmed its GlobalProtect app was vulnerable. The company issued a patch for both its Windows and Mac clients.

Neither Cisco nor Pulse Secure has patched its apps. F5 Networks is said to have known about the insecure storage since at least 2013, but advised users to roll out two-factor authentication instead of releasing a patch.

CERT warned that hundreds of other apps could be affected — but more testing was required.

Google Cloud makes some strong moves to differentiate itself from AWS and Microsoft

Posted on 11 April, 2019

Google Cloud held its annual customer conference, Google Cloud Next, this week in San Francisco. It had a couple of purposes. For starters it could introduce customers to new CEO Thomas Kurian for the first time since his hiring at the end of last year. And secondly, and perhaps more importantly, it could demonstrate that it could offer a value proposition that is distinct from AWS and Microsoft.

Kurian’s predecessor, Diane Greene, was fond of saying that it was still early days for the cloud market, and she’s still right, but while the pie has continued to grow substantially, Google’s share of the market has stayed stubbornly in single digits. It needed to use this week’s conference as at least a springboard to showcase its strengths.

Its lack of commercial cloud market clout has always been a bit of a puzzler. This is Google after all. It runs Google Search and YouTube and Google Maps and Google Docs. These are massive services that rarely go down. You would think being able to run these massive services would translate into massive commercial success, but so far it hasn’t.

Missing ingredients

Even though Greene brought her own considerable enterprise cred to GCP, having been a co-founder at VMware, the company that really made the cloud possible by popularizing the virtual machine, she wasn’t able to significantly change the company’s commercial cloud fortunes.

In a conversation with TechCrunch’s Frederic Lardinois, Kurian talked about missing ingredients like having people to talk to (or maybe a throat to choke). “A number of customers told us ‘we just need more people from you to help us.’ So that’s what we’ll do,” Kurian told Lardinois.

But of course, it’s never one thing when it comes to a market as complex as cloud infrastructure. Sure, you can add more bodies in customer support or sales, or more aggressively pursue high-value enterprise customers, or whatever Kurian has identified as holes in GCP’s approach up until now, but it still requires a compelling story, and Google took a big step toward having the ingredients for a new story this week.

Changing position

Google is trying to position itself the way any cloud vendor going after AWS does: as the hybrid cloud company that can help with your digital transformation. It’s a common strategy, but Google did more than throw out the usual talking points this week. It walked the walk too.

For starters, it introduced Anthos, a single tool to manage your workloads wherever they live, even in a rival cloud. This is a big deal, and if it works as described, it gives that new beefed-up sales team at Google Cloud a stronger story to tell around integration. As my colleague Frederic Lardinois described it:

“So with Anthos, Google will offer a single managed service that will let you manage and deploy workloads across clouds, all without having to worry about the different environments and APIs. That’s a big deal and one that clearly delineates Google’s approach from its competitors’. This is Google, after all, managing your applications for you on AWS and Azure,” he wrote.

AWS hasn’t made many friends in the open-source community of late, and Google reiterated that it was going to be the platform that is friendly to open-source projects. To that end, it announced a number of major partnerships.

Finally, the company took a serious look at verticals, putting together packages of Google Cloud services designed for specific industries. For retailers, for example, it assembled a package that includes services to keep sites up and running during peak demand, recommendation tools (“if you like this, you might be interested in these items”), contact center AI and other tools geared specifically toward the retail market. Expect the company to do more of this to make the platform more attractive to a given market space.

Photo: Michael Short/Bloomberg via Getty Images

All of this, and more than can fit in one article, was exactly what Google Cloud needed to do this week. Now comes the hard part. The company has come up with some good ideas, and it has to go out and sell them.

Nobody has ever denied that Google has good technology. That has always been an obvious strength, but the company has struggled to translate it into substantial market share. That is Kurian’s challenge. As Greene used to say, in baseball terms, it’s still early innings. And it really is, but the game is starting to move along, and Kurian needs to get the team moving in the right direction if it expects to be competitive.

Much to Oracle’s chagrin, Pentagon names Microsoft and Amazon as $10B JEDI cloud contract finalists

Posted on 11 April, 2019

Yesterday, the Pentagon announced two finalists in the $10 billion, decade-long JEDI cloud contract process — and Oracle was not one of them. In spite of lawsuits, official protests and even back-channel complaining to the president, the two finalists are Microsoft and Amazon.

“After evaluating all of the proposals received, the Department of Defense has made a competitive range determination for the Joint Enterprise Defense Infrastructure Cloud request for proposals, in accordance with all applicable laws and regulations. The two companies within the competitive range will participate further in the procurement process,” Elissa Smith, DoD spokesperson for Public Affairs Operations told TechCrunch. She added that those two finalists were in fact Microsoft and Amazon Web Services (AWS, the cloud computing arm of Amazon).

This contract procurement process has caught the attention of the cloud computing market for a number of reasons. For starters, it’s a large amount of money, but perhaps the biggest reason it had cloud companies going nuts was that it is a winner-take-all proposition.

It is important to keep in mind that whether it’s Microsoft or Amazon who is ultimately chosen for this contract, the winner may never see $10 billion, and it may not last 10 years because there are a number of points where the DoD could back out, but the idea of a single winner has been irksome for participants in the process from the start.

Over the course of the last year, Google dropped out of the running, while IBM and Oracle have been complaining to anyone who will listen that the contract unfairly favors Amazon. Others have questioned the wisdom of going with a single-vendor approach at all. And $10 billion, an astronomical sum to be sure, is not actually all that much money in the scheme of the cloud business, as we have pointed out. But there is more at stake here than money.

There is a belief here that the winner could have an upper hand in other government contracts, that this is an entree into a much bigger pot of money. After all, if you are building the cloud for the Department of Defense and preparing it for a modern approach to computing in a highly secure way, you would be in a pretty good position to argue for other contracts with similar requirements.

In the end, in spite of the protests of the other companies involved, the Pentagon probably got this right. The two finalists are the most qualified to carry out the contract’s requirements. They are the top two cloud infrastructure vendors on the market, although Microsoft is far behind with around 13 or 14 percent market share. Amazon is far ahead with around 33 percent, according to several companies that track such things.

Microsoft in particular has tools and resources that would be very appealing, especially Azure Stack, a mini private version of Azure that can be stood up anywhere, an approach that would hold great appeal for the military. But both companies have experience with government contracts, and both bring strengths and weaknesses to the table. It will undoubtedly be a tough decision.

In February, the contract drama took yet another turn when the department reported it was investigating new evidence of conflict of interest by a former Amazon employee, who was involved in the RFP process for a time before returning to the company. Smith reports that the department found no such conflict, but there could be some ethical violations they are looking into.

“The department’s investigation has determined that there is no adverse impact on the integrity of the acquisition process. However, the investigation also uncovered potential ethical violations, which have been further referred to DOD IG,” Smith explained.

The DoD is supposed to announce the winner this month, but the drama has continued non-stop.

Rasa raises $13M led by Accel for its developer-friendly open source approach to chatbots

Posted on 11 April, 2019

Conversational AI and the use of chatbots have been through multiple cycles of hype and disillusionment in the tech world. You know the story: first you get a launch from the likes of Apple, Facebook, Microsoft, Amazon, Google or any number of other companies, and then you get the many examples of how their services don’t work as intended at the slightest challenge. But time brings improvements and more focused expectations, and today a startup that has been harnessing all those learnings is announcing funding to take its own approach to conversational AI to the next level.

Rasa, which has built an open source platform for third parties to design and manage their own conversational (text or voice) AI chatbots, is today announcing that it has raised $13 million in a Series A round of funding led by Accel, with participation from Basis Set Ventures, Greg Brockman (co-founder and CTO of OpenAI), Daniel Dines (founder and CEO of UiPath) and Mitchell Hashimoto (co-founder and CTO of HashiCorp). Rasa was founded in Berlin, but with this round it will be moving its headquarters to San Francisco, with plans to hire more people there in sales, marketing and business development, and to continue its tech development, with a roadmap that includes expanding the platform to cover images, too.

The company was founded 2.5 years ago, by co-founder and CEO Alex Weidauer’s own admission “when chatbot hype was at its peak.” Rasa itself was not immune to it, either: “Everyone wanted to automate conversations, and so we set out to build something, too,” he said. “But we quickly realized it was extremely hard to do and that the developer tools were just not there yet.”

Rather than posing an insurmountable roadblock, the shortcomings of chatbots became the problem that Rasa set out to fix.

Alan Nichol, the co-founder who is now CTO, holds a PhD in AI, though not in natural language processing as you might expect, but in machine learning. “What we do is address this as a mathematical, machine learning problem rather than one of language,” Weidauer said. Specifically, that means building a model that any company can use to tap its own resources to train its bots, in particular with unstructured information, which has been one of the trickier problems to solve in conversational AI.

At a time when many have raised concerns about who might “own” the progress of artificial intelligence, and specifically the data that goes into building these systems, Rasa’s approach is a refreshing one.

Typically, when an organization wants to build an AI chatbot, either to interact with customers or to run something in the back end of its business, its developers opt for third-party cloud APIs that restrict how they can be customized, or they build their own from scratch. But unless the organization is already a large tech company, it is unlikely to have the human or other resources to pull that off.

Rasa underscores an emerging trend: a strong third option. The company has built a stack of tools that it has open sourced, meaning anyone can use them for free (and thousands of developers do), with a paid enterprise version that adds customer support, testing and training tools, and production container deployment. (It’s priced depending on the size of the organization and its usage.)

Importantly, whichever package is used, the tools run on a company’s own training data, and the company can host its bots wherever it chooses. Those have been key selling points for customers who would rather not hand data to organizations that might also be competitors.

Adobe’s new AI assistant for searching on Adobe Stock, which has some 100 million images, was built on Rasa.

“We wanted to give our users an AI assistant that lets them search with natural language commands,” said Brett Butterfield, director of software development at Adobe, in a statement. “We looked at several online services, and, in the end, Rasa was the clear choice because we were able to host our own servers and protect our user’s data privacy. Being able to automate full conversations and the fact it is open source were key elements for us.” Other customers include Parallon and TalkSpace, Zurich and Allianz, Telekom, and UBS.

Open source has become big business in the last several years, so a startup that has built an open-source AI platform with a very direct enterprise application is an obvious attraction for VCs.

“Automation is the next battleground for the enterprise, and while this is a very difficult space to win, especially for unstructured information like text and voice, we are confident Rasa has what it takes given their impressive adoption by developers,” said Andrei Brasoveanu, partner at Accel, in a statement. “Existing solutions don’t let in-house developer teams control their own automation destiny. Rasa is applying commercial open source software solutions for AI environments similarly to what open source leaders such as Cloudera, Mulesoft, and Hashicorp have done for others.”

Armis nabs $65M Series C as IoT security biz grows in leaps and bounds

Posted on 11 April, 2019

Armis helps companies protect IoT devices on the network without using an agent, and it’s apparently a problem that is resonating with the market: the startup reports 700 percent growth in the last year. That caught the attention of investors, who awarded it a $65 million Series C investment to help keep that growth accelerating.

Sequoia Capital led the round with help from new investors Insight Venture Partners and Intermountain Ventures. Returning investors Bain Capital Ventures, Red Dot Capital Partners and Tenaya Capital also participated. Today’s investment brings the total raised to $112 million, according to the company.

The company is solving a hard problem around device management on a network. If you have devices on which you cannot install an agent, how do you manage them? Nadir Izrael, company co-founder and CTO, says you have to do it very carefully, because even scanning for ports could overwhelm older devices and cause them to shut down. Instead, he says, Armis takes a passive approach to security, watching, learning and building an understanding of what normal device behavior looks like — a kind of behavioral fingerprinting.

“We observe what devices do on the network. We look at their behavior, and we figure out from that everything we need to know,” Izrael told TechCrunch. He adds, “Armis in a nutshell is a giant device behavior crowdsourcing engine. Basically, every client of Armis is constantly learning how devices behave. And those statistical models, those machine learning models, they get merged into master models.”
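The passive baselining Izrael describes can be illustrated with a toy model: record what a device normally does, then score new activity against that history. This is purely conceptual and bears no relation to Armis's actual machine learning models:

```python
from collections import Counter

# Toy sketch of passive behavioral fingerprinting: build a baseline of
# what a device normally does on the network (no active scanning), then
# score new activity against it. Conceptual only; not Armis's models.

class DeviceBaseline:
    def __init__(self) -> None:
        self.port_counts = Counter()
        self.total = 0

    def observe(self, dst_port: int) -> None:
        """Record one passively observed outbound connection."""
        self.port_counts[dst_port] += 1
        self.total += 1

    def anomaly_score(self, dst_port: int) -> float:
        """1.0 = never seen before; near 0.0 = routine behavior."""
        if self.total == 0:
            return 1.0
        return 1.0 - self.port_counts[dst_port] / self.total

# An IP camera that normally only talks HTTPS to its cloud service:
camera = DeviceBaseline()
for _ in range(100):
    camera.observe(443)

print(camera.anomaly_score(443))  # routine behavior scores low
print(camera.anomaly_score(22))   # SSH from a camera is a red flag
```

The crowdsourcing Izrael mentions would amount to merging many such per-device baselines into shared models per device type, so a brand-new deployment already knows what a given camera or printer should look like.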

Whatever they are doing, they seem to have hit upon a security pain point. They announced a $30 million Series B almost exactly a year ago, and they went back for more because they were growing quickly and needed the capital to hire people to keep up.

That kind of growth is a challenge for any startup. The company expects to double its 125-person workforce before the end of the year, and it is putting systems in place to incorporate those new people and service all of those new customers.

The company plans to hire more people in sales and marketing, of course, but it will also concentrate on customer support and on building out partnership programs to get help from systems integrators, ISVs and MSPs, which can do some of the customer hand-holding for it.

With consumer G+ dead, Currents hopes to make waves in the enterprise

Posted on 10 April, 2019

Google today announced that Google+ in G Suite, the last remaining remnants of what was once Google’s attempt to rival Facebook and Twitter, will now be called Currents. We don’t need to belabor the fact that Google+ was a flop and that its death was probably long overdue. We’ve done that. Now it’s time to look ahead and talk about what’s next for Currents. To do that, I sat down with David Thacker, the VP of Product Management for G Suite, at Google’s Cloud Next conference.

As Thacker told me, Google has shifted its resources to have the former Google+ team focus on Currents instead. But before we get to what that team plans to do, let’s talk about the name first. Currents, after all, was also the name of the predecessor of Google Play Newsstand, the app that in turn preceded the Google News app.

The official line is that “Currents” is meant to evoke the flow of information. Thacker also noted that the team did a lot of research around the name and that it had “very low recognition.” I guess that’s fair. It also allows Google to reuse an old trademark without having to jump through too many hoops. Since the Google+ name obviously now carries some baggage, changing the name makes sense anyway. “The enterprise version is distinct and separate now and it was causing confusion among our customers,” said Thacker.

“This allows us to do new things and move much faster in the enterprise,” Thacker explained. “To run a consumer social network at the scale of consumer G+ requires a lot of resources and efforts, as you can imagine. And that’s partially the reason we decided to sunset that product, as we just didn’t feel it was worth that investment given the user base on that. But it basically frees up that team to focus on the enterprise vision.”

Now, however, with consumer G+ gone, the company is going to invest in Currents. “We’re moving consumer resources into the enterprise,” he said.

The plan here clearly isn’t to just let Currents linger but to improve it for business users. And while Google has never publicly shared user numbers, Thacker argues that those businesses that do use it tend to use it extensively. The hope, surely, is to increase that number — whatever it may be — significantly over time. “If you look at our top G Suite customers, most of them use the product actively as a way to connect really broad organizations,” Thacker said.

Thacker also noted that this move now removes a lot of constraints since the team doesn’t have to think about consumer features anymore. “When Google+ was first designed, it was never designed for that [enterprise] use case, but organizations had the same need to break down silos and help spread ideas and knowledge in their company,” Thacker explained. “So while Google+ didn’t succeed as a consumer product, it will certainly live on in the enterprise.”

What will that future look like? As Thacker told me, the team started by revamping the posting workflow, which had been heavily focused on image sharing, something that isn’t exactly all that important in a business context.

But there are other features the team is planning to launch, too, including better analytics. “Analytics is a really important part of it,” said Thacker. “When people are posting on Currents, whether it’s executives trying to engage their employee base, they want to see how that’s resonating. And so we built in some pretty rich analytics.”

The team also built a new set of administrative controls that let organizations manage how Currents is used.

Going forward then, we may actually see a bit of innovation in Currents — something that was sorely lacking from Google+ while it was lingering in limbo. Google Cloud’s CEO Thomas Kurian told me that he wants to make collaboration one of his focus areas. Currents is an obvious fit there, and there are plenty of ways to integrate it with the rest of G Suite still.

InVision announces new integrations with Jira

Posted on 10 April, 2019

Today InVision announced even deeper integrations with Jira, letting users embed actual InVision prototypes right within a Jira ticket. The company also announced the Jira app for InVision Studio, letting designers in Studio see interactive Jira tickets in real time.

InVision has already had lighter integrations with Atlassian products, including Jira, Confluence and Trello. It’s also worth noting that Atlassian participated in InVision’s $115 million Series F funding round.

The partnership makes sense. Atlassian provides a parallel product to InVision, except instead of serving designers, Atlassian serves engineers.

But it brings up an interesting challenge for InVision, last valued at $1.9 billion. The company went from creating its own market with a paid prototyping and collaboration tool to competing with giants and startups alike as it introduced new products.

InVision Studio, for instance, is meant to compete with the likes of Adobe XD, Sketch, and Figma, among others.

At the same time, InVision’s strategy has always been to become connective tissue for the broader design landscape. CEO Clark Valberg has said in the past that he sees InVision becoming the Salesforce of the design world, with a broad array of partnerships and integrations across the industry to handle each nuanced fraction of the process in a single, fluid place.

“Up until now we’ve been a fairly horizontal player,” said VP of Product Mike Davidson. “We created the market for prototyping. There was no paid market for a prototyping tool until InVision came along. Now that you see us provide a more vertical stack of tools, we don’t want to lose the great thing we’ve built with the InVision Prototyping tool. It’s been more popular than we could have ever imagined.”

Davidson added that InVision now serves 100 of the Fortune 100 companies.

And since its launch in 2011, InVision has maintained that original strategic course of staying open, particularly with Atlassian. But InVision isn’t just friendly with Atlassian. The company also introduced an App Store and Asset Store in InVision Studio (partnerships include Slack, Dribbble, and Getty), with plans to launch a developer API so anyone can build apps for InVision Studio. Plus, InVision has made a handful of acquisitions, and launched the Design Forward Fund, which allocates $5 million toward investing in design startups.

Davidson believes that balancing this open-garden philosophy with the desire to provide the very best products across the entire process (which automatically puts InVision in competition with other design startups) is one of the company’s greatest challenges.

“We want to provide a first-class experience from beginning to end, but we also want to provide a system that’s open enough where you can use your tool of choice for any one of the particular functions,” said Davidson. “It’s a difficult balance. We want to allow for designers and developers to choose which tools they use for whatever job they’re trying to do, but we also want to be the best choice for each one of those functions.”

The right way to do AI in security

Posted by on 10 April, 2019

This post was originally published on this site

Artificial intelligence applied to information security can engender images of a benevolent Skynet, sagely analyzing more data than imaginable and making decisions at lightspeed, saving organizations from devastating attacks. In such a world, humans are barely needed to run security programs, their jobs largely automated out of existence, relegating them to a role as the button-pusher on particularly critical changes proposed by the otherwise omnipotent AI.

Such a vision is still in the realm of science fiction. AI in information security is more like an eager, callow puppy attempting to learn new tricks — minus the disappointment written on its face when it consistently fails. No one’s job is in danger of being replaced by security AI; if anything, a larger staff is required to ensure security AI stays firmly leashed.

Arguably, AI’s highest use case currently is to add a futuristic sheen to traditional security tools, rebranding timeworn approaches as trailblazing sorcery that will revolutionize enterprise cybersecurity as we know it. The current hype cycle for AI appears to be the roaring, ferocious crest at the end of a decade that began with bubbly excitement around the promise of “big data” in information security.

But what lies beneath the marketing gloss and quixotic lust for an AI revolution in security? How did AI ascend to supplant the lustrous zest around machine learning (“ML”) that dominated headlines in recent years? Where is there true potential to enrich information security strategy for the better — and where is it simply an entrancing distraction from more useful goals? And, naturally, how will attackers plot to circumvent security AI to continue their nefarious schemes?

How did AI grow out of this stony rubbish?

The year AI debuted as the “It Girl” in information security was 2017. The year prior, MIT completed a study showing that “human-in-the-loop” AI outperformed both AI and humans individually in attack detection. Likewise, DARPA conducted the Cyber Grand Challenge, a battle testing AI systems’ offensive and defensive capabilities. Until that point, security AI was imprisoned in the contrived halls of academia and government. Yet the history of two vendors exhibits how enthusiasm surrounding security AI was driven more by growth marketing than by user needs.
