Category Archives: Tech News

Container security startup Aqua lands $62M Series C

Posted on 3 April, 2019

This post was originally published on this site

Aqua Security, a startup that helps customers launch containers securely, announced a $62 million Series C investment today led by Insight Partners.

Existing investors Lightspeed Venture Partners, M12 (Microsoft’s venture fund), TLV Partners and Shlomo Kramer also participated. With today’s round, the startup has raised over $100 million since inception, according to the company.

Early investors took a chance on the company when it was founded in 2015. Containers were barely a thing back then, but the founders had a vision of what was coming down the pike and their bet has paid off in a big way as the company now has first-mover advantage. As more companies turn to Kubernetes and containers, the need for a security product built from the ground up to secure this kind of environment is essential.

Co-founder and CEO Dror Davidoff says the company has 60 Fortune 500 customers. He’s unable to share names, but he can offer some clues, such as five of the world’s top banks being on the list. As companies like that turn to new technology like containers, they aren’t going to go whole hog without a solid security option. Aqua gives them that.

“Our customers are all taking very dramatic steps towards adoption of those new technologies, and they know that existing security tools that they have in place will not solve the problems,” Davidoff told TechCrunch. He said that most customers have started small, but then have expanded as container adoption increases.

You may think that something as ephemeral as a container would be less of a security threat, but Davidoff says that the open nature of containerization actually leaves them vulnerable to tampering. “Container lives long enough to be dangerous,” he said. He added, “They are structured in an open way, making it simple to hack, and once in, to do lateral movement. If the container holds sensitive info, it’s easy to have access to that information.”

Aqua scans container images for malware and makes sure only certified images can run, making it difficult for a bad actor to insert an insecure image, but the ephemeral nature of containers also helps if something slips through. DevOps teams can simply take down the faulty container and quickly put a newly certified clean one in its place.
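
Conceptually, that certification check behaves like an allowlist keyed by image digest. Here is a minimal sketch of the general pattern; the image names and digests are hypothetical and this is not Aqua's actual API:

```python
# Certified images: image name -> pinned content digest (hypothetical values).
APPROVED_IMAGES = {
    "registry.example.com/payments-api": "sha256:aa11bb22",
    "registry.example.com/frontend": "sha256:cc33dd44",
}

def admit(image_ref: str) -> bool:
    """Admit a container only if its name@digest matches the certified list.

    Tag-only references (no digest) are rejected outright, since a tag can
    be silently re-pointed at a tampered image.
    """
    name, sep, digest = image_ref.partition("@")
    if not sep:
        return False
    return APPROVED_IMAGES.get(name) == digest
```

In a real deployment this kind of check would typically run as an admission hook in the orchestrator, so uncertified images never start at all.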

The company has 150 employees, with offices in the Boston area and R&D in Tel Aviv, Israel. With the new influx of cash, the company plans to expand quickly, growing sales and marketing and customer support, and expanding the platform to cover emerging areas like serverless computing. Davidoff says the company could double in size in the next 12-18 months, and he’s expecting 3x to 4x customer growth.

All of that money should provide fuel to grow the company as containerization spreads and companies look for a security solution to keep containers in production safe.

How to handle dark data compliance risk at your company

Posted on 2 April, 2019

Slack and other consumer-grade productivity tools have been taking off in workplaces large and small — and data governance hasn’t caught up.

Whether it’s litigation, compliance with regulations like GDPR, or concerns about data breaches, legal teams need to account for new types of employee communication. And that’s hard when work is happening across the latest messaging apps and SaaS products, which make data searchability and accessibility more complex.

Here’s a quick look at the problem, followed by our suggestions for best practices at your company.

Problems

The increasing frequency of reported data breaches and the expanding jurisdiction of new privacy laws are prompting conversations about dark data and risks at companies of all sizes, even small startups. Data risk discussions necessarily include the risk of a data breach, as well as preservation of data. Just two weeks ago it was reported that Jared Kushner used WhatsApp for official communications and preserved those messages via screenshots, which commentators say complies with recordkeeping laws but raises questions about potential admissibility as evidence.

Densify announces new tool to optimize container management in the cloud

Posted on 2 April, 2019

Densify, a Toronto company that helps customers optimize their cloud resources to control usage and spending, announced a new tool today specifically designed to optimize container usage in the cloud.

Company CEO Gerry Smith says that as containerization proliferates, it’s getting more difficult to track and control cloud infrastructure resource usage as software development and deployment happens with increasing speed.

“The whole basis upon which people buy and use cloud and container resources has become wildly expensive because of the lack of a resource management system,” Smith said.

The Densify solution looks at consumption and finds ways to cut costs and usage. “We have analytics in the cloud, any of various common cloud services that you can connect to, and then we use machine learning to analyze the resources and your cloud and container consumption,” he said.

Densify continuously makes recommendations on how to make better use of resources and find the cheapest computing, whether that’s reserved instances, spot instances or other discounted cloud resources.

What’s more, it can help you identify whether you are providing too few resources to accommodate the number of containers you are deploying, as well as too many.
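
A toy version of that right-sizing logic, assuming simple threshold rules as a stand-in for Densify's actual machine-learning models (the 20% headroom and 50% downsize cutoff are illustrative assumptions):

```python
def recommend(cpu_samples, allocated_cores):
    """Compare observed peak CPU demand with what's allocated and suggest a change.

    cpu_samples: historical CPU usage measurements, in cores.
    allocated_cores: cores currently provisioned for the workload.
    """
    peak = max(cpu_samples)        # worst observed demand
    needed = peak * 1.2            # keep 20% spare capacity (assumed policy)
    if needed < allocated_cores * 0.5:
        return ("downsize", needed)    # paying for far more than is used
    if needed > allocated_cores:
        return ("upsize", needed)      # too few resources for the workload
    return ("keep", allocated_cores)
```

The real system would weigh this against instance pricing (reserved, spot and so on) and forecast demand over time rather than using a single peak.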

This may sound a bit like what Spotinst and Cloudyn, the company Microsoft bought a couple of years ago, do in terms of helping control costs in the cloud, but Smith says, for his company it’s more about understanding the resources than pure cost.

“We look at ourselves as a resource management platform. So what we do is characterize the applications, demands of CPU and all the other resources, and use machine learning to predict what it’s going to need at any given minute, at any given day of a week of the year, so that we can then better predictively match the right supply,” Smith explained.

It’s providing information about each container at a highly detailed level including “what’s running, what resources are being allocated, and the true utilization of an organization’s Kubernetes environment at a cluster, namespace and container level,” according to the company. All of this information should help DevOps teams better understand the resources required by their container deployments.

The company has actually been around since 2006 under the name Cirba. In its early guise it helped companies manage VMware installations. In 2016, it pivoted to cloud resource management and changed the company name to Densify. It has raised around $60 million since inception, with about half of that coming after the company changed to Densify in 2016.

The company is based in Toronto, but has offices in London and Melbourne as well.

FireHydrant lands $1.5M seed investment to bring order to IT disaster recovery

Posted on 2 April, 2019

FireHydrant, a NYC startup, wants to help companies recover from IT disasters more quickly and understand why they happened, with the goal of preventing similar scenarios from happening again. Today, the fledgling startup announced a $1.5 million seed investment from Work-Bench, a New York City venture capital firm that invests in early stage enterprise startups.

In addition to the funding, the company announced it was opening registration for its FireHydrant incident management platform. The product has been designed with Google’s Site Reliability Engineering (SRE) methodology in mind, but company co-founder and CEO Bobby Ross says the tool is designed to help anyone understand the cause of a disaster, regardless of what happened, and whether they practice SRE or not.

“I had been involved in several fire fighting scenarios — from production databases being dropped to Kubernetes upgrades gone wrong — and every incident had a common theme: ​absolute chaos​,” Ross wrote in a blog post announcing the new product.

The product has two main purposes, according to Ross. It helps you figure out what’s happening as you attempt to recover from an ongoing disaster scenario, and once you’ve put out the fire, it lets you do a post-mortem to figure out exactly what happened, with the hope of making sure that particular disaster doesn’t happen again.

As Ross describes it, a tool like PagerDuty can alert you that there’s a problem, but FireHydrant lets you figure out what specifically is going wrong and how to solve it. He says that the tool works by analyzing change logs, as a change is often the primary culprit of IT incidents. When you have an incident, FireHydrant will surface that suspected change, so you can check it first.

“We’ll say, hey, you had something change recently in this vicinity where you have an alert going off. There is a high likelihood that this change was actually causing your incident. And we actually bubble that up and mark it as a suspect,” Ross explained.
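
That change-correlation step can be sketched as a time-window filter over a change log. The field names and window size here are illustrative assumptions, not FireHydrant's actual schema:

```python
from datetime import datetime, timedelta

def suspect_changes(changes, alert_time, window_minutes=30):
    """Return changes deployed shortly before the alert fired, newest first,
    on the theory that the most recent change is the likeliest culprit."""
    window = timedelta(minutes=window_minutes)
    hits = [c for c in changes
            if timedelta(0) <= alert_time - c["deployed_at"] <= window]
    return sorted(hits, key=lambda c: c["deployed_at"], reverse=True)
```

For example, a config rollout ten minutes before an alert would be surfaced as a suspect, while a deploy from hours earlier would not.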

Screenshot: FireHydrant

Like so many startups, the company developed from a pain point the founders were feeling. The three founders were responsible for solving major outages at companies like Namely, DigitalOcean, CoreOS, and Paperless Post.

But the actual idea for the company came about almost accidentally. In 2017, Ross was working on a series of videos and needed a way to explain what he was teaching. “I began writing every line of code with live commentary, and soon FireHydrant started to take the shape of what I envisioned as an SRE while at Namely, and I started to want it more than the video series. 40 hours of screencasts recorded later, I decided to stop recording and focus on the product…,” Ross wrote in the blog post.

Today it integrates with PagerDuty, GitHub and Slack, but the company is just getting started, with the three founders, all engineers, working on the product and a handful of beta customers. It is planning to hire more engineers to keep building out the product. It’s early days, but if this tool works as described, it could go a long way toward solving the firefighting issues that every company faces at some point.

Pixeom raises $15M for its software-defined edge computing platform

Posted on 2 April, 2019

Pixeom, a startup that offers a software-defined edge computing platform to enterprises, today announced that it has raised a $15M funding round from Intel Capital, National Grid Partners and previous investor Samsung Catalyst Fund. The company plans to use the new funding to expand its go-to-market capacity and invest in product development.

If the Pixeom name sounds familiar, that may be because you remember it as a Raspberry Pi-based personal cloud platform. Indeed, that’s the service the company first launched back in 2014. It quickly pivoted to an enterprise model, though. As Pixeom CEO Sam Nagar told me, that pivot came about after a conversation the company had with Samsung about adopting its product for that company’s needs. In addition, it was also hard to find venture funding. The original Pixeom device allowed users to set up their own personal cloud storage and other applications at home. While there is surely a market for these devices, especially among privacy-conscious tech enthusiasts, it’s not massive, particularly as users became more comfortable with storing their data in the cloud. “One of the major drivers [for the pivot] was that it was actually very difficult to get VC funding in an industry where the market trends were all skewing towards the cloud,” Nagar told me.

At the time of its launch, Pixeom also based its technology on OpenStack, the massive open source project that helps enterprises manage their own data centers, which isn’t exactly known as a service that can easily be run on a single machine, let alone a low-powered one. Today, Pixeom uses containers to ship and manage its software on the edge.

What sets Pixeom apart from other edge computing platforms is that it can run on commodity hardware. There’s no need to buy a specific hardware configuration to run the software, unlike Microsoft’s Azure Stack or similar services. That makes it significantly more affordable to get started and allows potential customers to reuse some of their existing hardware investments.

Pixeom brands this capability as ‘software-defined edge computing’ and there is clearly a market for this kind of service. While the company hasn’t made a lot of waves in the press, more than a dozen Fortune 500 companies now use its services. With that, the company now has revenues in the double-digit millions and its software manages more than a million devices worldwide.

As is so often the case in the enterprise software world, these clients don’t want to be named, but Nagar tells me that they include one of the world’s largest fast food chains, for example, which uses the Pixeom platform in its stores.

On the software side, Pixeom is relatively cloud agnostic. One nifty feature of the platform is that it is API-compatible with Google Cloud Platform, AWS and Azure and offers an extensive subset of those platforms’ core storage and compute services, including a set of machine learning tools. Pixeom’s implementation may be different, but for an app, the edge endpoint on a Pixeom machine reacts the same way as its equivalent endpoint on AWS, for example.
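
API compatibility of this sort means client code stays identical and only the endpoint changes. A toy illustration of the idea (the class and URLs are hypothetical; this is not Pixeom's SDK):

```python
class ObjectStoreClient:
    """Minimal S3-style client sketch: the request shape is the same whether
    it targets a public cloud endpoint or an edge box on the local network."""

    def __init__(self, endpoint_url):
        self.endpoint_url = endpoint_url.rstrip("/")

    def object_url(self, bucket, key):
        # Identical path structure regardless of where the endpoint lives.
        return f"{self.endpoint_url}/{bucket}/{key}"

cloud = ObjectStoreClient("https://storage.cloud.example.com")
edge = ObjectStoreClient("https://10.0.0.5:9000")  # same code, edge endpoint
```

The point is that an application written against the cloud API needs only a different endpoint URL to run against the edge platform, which is what makes this kind of compatibility attractive to developers.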

Until now, Pixeom mostly financed its expansion — and the salary of its over 90 employees — from its revenue. It only took a small funding round when it first launched the original device (together with a Kickstarter campaign). Technically, this new funding round is part of that early financing, so depending on how you want to look at it, we’re either talking about a very large seed round or a Series A round.

Cloud Foundry ❤ Kubernetes

Posted on 2 April, 2019

Cloud Foundry, the open-source platform-as-a-service project that more than half of the Fortune 500 companies use to help them build, test and deploy their applications, launched well before Kubernetes existed. Because of this, the team ended up building Diego, its own container management service. Unsurprisingly, given the popularity of Kubernetes, which has become somewhat of the de facto standard for container orchestration, a number of companies in the Cloud Foundry ecosystem started looking into how they could use Kubernetes to replace Diego.

The result of this is Project Eirini, which was first proposed by IBM. As the Cloud Foundry Foundation announced today, Project Eirini now passes the core functional tests the team runs to validate the software releases of its application runtime, the core Cloud Foundry service that deploys and manages applications (if that’s a bit confusing, don’t even think about the fact that there’s also a Cloud Foundry Container Runtime, which already uses Kubernetes, but which is mostly meant to give enterprises a single platform for running their own applications and pre-built containers from third-party vendors).

“That’s a pretty big milestone,” Cloud Foundry Foundation CTO Chip Childers told me. “The project team now gets to shift to a mode where they’re focused on hardening the solution and making it a bit more production-ready. But at this point, early adopters are also starting to deploy that [new] architecture.”

Childers stressed that while the project was incubated by IBM, which has been a long-time backer of the overall Cloud Foundry project, Google, Pivotal and others are now also contributing and have dedicated full-time engineers working on the project. In addition, SUSE, SAP and IBM are also active in developing Eirini.

Eirini started as an incubation project, and while few doubted that this would be a successful project, there was a bit of confusion around how Cloud Foundry would move forward now that it essentially had two container engines for running its core service. At the time, there was even some concern that the project could fork. “I pushed back at the time and said: no, this is the natural exploration process that open-source communities need to go through,” Childers said. “What we’re seeing now is that with Pivotal and Google stepping in, that’s a very clear sign that this is going to be the go-forward architecture for the future of the Cloud Foundry Application Runtime.”

A few months ago, by the way, Kubernetes was still missing a few crucial pieces the Cloud Foundry ecosystem needed to make this move. Childers specifically noted that Windows support — something the project’s enterprise users really need — was still problematic and lacked some important features. In recent releases, though, the Kubernetes team fixed most of these issues and improved its Windows support, rendering those issues moot.

What does all of this mean for Diego? Childers noted that the community isn’t yet at a point where it will halt development of that tool. At some point, though, it seems likely that the community will decide that it’s time to start the transition period and make the move to Kubernetes official.

It’s worth noting that IBM today announced its own preview of Eirini in its Cloud Foundry Enterprise Environment and that the latest version of SUSE’s Cloud Foundry-based Application Platform includes a similar preview as well.

In addition, the Cloud Foundry Foundation, which is hosting its semi-annual developer conference in Philadelphia this week, also announced that it has certified its first two systems integrators, Accenture and HCL, as part of its recently launched certification program for companies that work in the Cloud Foundry ecosystem and have at least 10 certified developers on their teams.

Edgybees’s new developer platform brings situational awareness to live video feeds

Posted on 2 April, 2019

San Diego-based Edgybees today announced the launch of Argus, its API-based developer platform that makes it easy to add augmented reality features to live video feeds.

Edgybees has long used this capability to run its own drone platform for first responders and enterprise customers, which allows users to tag and track objects and people in emergency situations, for example, to create better situational awareness for first responders.

I first saw a demo of the service a year ago, when the team walked a group of journalists through a simulated emergency, with live drone footage and an overlay of a street map and the location of ambulances and other emergency personnel. It’s clear how these features could be used in other situations as well, given that few companies have the expertise to combine the video footage, GPS data and other information, including geographic information systems, for their own custom projects.

Indeed, that’s what inspired the team to open up its platform. As the Edgybees team told me during an interview at the Ourcrowd Summit last month, it’s impossible for the company to build a new solution for every vertical that could make use of it. So instead of even trying (though it’ll keep refining its existing products), it’s now opening up its platform.

“The potential for augmented reality beyond the entertainment sector is endless, especially as video becomes an essential medium for organizations relying on drone footage or CCTV,” said Adam Kaplan, CEO and co-founder of Edgybees. “As forward-thinking industries look to make sense of all the data at their fingertips, we’re giving developers a way to tailor our offering and set them up for success.”

In the run-up to today’s launch, the company has already worked with organizations like the PGA to use its software to enhance the live coverage of its golf tournaments.

Okta brings identity management to server level

Posted on 2 April, 2019

Since it was founded in 2009, Okta has been focused on protecting identity — first for individuals in the cloud, and later at the device level. Today at its Oktane customer conference, the company announced a new level of identity protection at the server level.

The new tool, called Advanced Server Access, provides identity management for Windows and Linux servers, whether they are in a data center or the cloud. The product supports major cloud infrastructure vendors like Amazon Web Services, Microsoft Azure and Google Cloud Platform, and gives IT the ability to protect access to servers, reduce the likelihood of identity theft and bring a level of automation to the server credential process.

As company founder and CEO Todd McKinnon points out, as every organization becomes a technology company building out their own applications, protecting servers becomes increasingly critical. “Identity is getting more and more important because there is more technology and zero trust in the network. You need to manage identity not just for users or devices. We are now applying our identity [experience] to the most critical resources for these emerging tech companies, their servers,” he said.

McKinnon explained that developers typically communicate with Linux servers via the SSH protocol. It required logging in of course, even before today’s announcement, but what Okta is doing is simplifying that in the same way it simplified logging into cloud applications for individuals.

People’s roles change over time, but instead of changing those roles at the identity layer to allow access to the server, in a typical shop the development or operations team creates an admin account with a superset of permissions and simply shares that. “That means the admin account has all the permissions, and also means they are sharing these credentials,” he said. If those credentials get stolen, the thief potentially has access to the entire universe of servers inside a company.

Okta’s idea is to bring a level of automation to the server identity management process, so that users maintain their own individual credentials and permissions in a more automated fashion, even as roles change across the entire server infrastructure a company manages. “It’s continuous, automatic, real-time checking of the state of the machine, and the state of the user and the permissions that makes it far more secure,” he said.
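
The pattern described here, minting short-lived, per-user credentials instead of sharing a long-lived admin account, can be sketched roughly as follows. This is a generic illustration of the technique, not Okta's implementation, and the field names are assumptions:

```python
import time

def issue_credential(user, role, ttl_seconds=300):
    """Mint a short-lived credential tied to one user and one role, so there
    is no long-lived shared secret for a thief to steal and reuse."""
    return {
        "user": user,
        "role": role,
        "expires_at": time.time() + ttl_seconds,
    }

def is_valid(cred, now=None):
    """Check the credential on every use; an expired one is simply useless."""
    now = time.time() if now is None else now
    return now < cred["expires_at"]
```

Because each credential names its user and expires within minutes, a stolen credential has a small blast radius, and role changes take effect the next time a credential is issued rather than requiring someone to rotate a shared password.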

The tool continuously monitors this information to make sure nothing has changed, such as another machine taking over, which helps avoid man-in-the-middle attacks. It also makes sure that there is no virus or malware, and that the person using the machine is who they say they are and has access at the level they are using it.

Okta went public almost exactly two years ago, and it needs to keep finding ways to expand its core identity services. Bringing it to the server level with this new product moves the idea of identity management deeper into the technology stack, and McKinnon hinted that the company isn’t done yet.

“You might not think of server access as an identity opportunity, but the way we do it will make it clear that it really is an opportunity, and the same can be said for the next several innovations we will have after this,” he said.

Chef goes 100% open source

Posted on 2 April, 2019

Chef, the popular automation service, today announced that it is open sourcing all of its software under the Apache 2 license. Until now, Chef used an open core model with a number of proprietary products that complemented its open-source tools. Most of these proprietary tools focused on enterprise users and their security and deployment needs. Now, all of these tools, which represent somewhere between a third and half of Chef’s total code base, are open source, too.

“We’re moving away from our open core model,” Chef SVP of products and engineering Corey Scobie told me. “We’re now moving to exclusively open source software development.”

He added that this also includes open product development. Going forward, the company plans to share far more details about its roadmap, feature backlogs and other product development details. All of Chef’s commercial offerings will also be built from the same open source code that everybody now has access to.

Scobie noted that there are a number of reasons why the company is doing this. He believes, for example, that the best way to build software is to collaborate in public with those who are actually using it.

“With that philosophy in mind, it was really easy to justify how we’d take the remainder of the software that we produce and make it open source,” Scobie said. “We believe that that’s the best way to build software that works for people — real people in the real world.”

Another reason, Scobie said, is that it was becoming increasingly difficult for Chef to explain which parts of the software were open source and which were not. “We wanted to make that conversation easier, to be perfectly honest.”

Chef’s decision comes during a bit of a tumultuous time in the open source world. A number of companies like Redis, MongoDB and Elastic have recently moved to licenses that explicitly disallow the commercial use of their open source products by large cloud vendors like AWS unless they also buy a commercial license.

But here is Chef, open sourcing everything. Chef co-founder and board member Adam Jacob doesn’t think that’s a problem. “In the open core model, you’re saying that the value is in this proprietary sliver. The part you pay me for is this sliver of its value. And I think that’s incorrect,” he said. “I think, in fact, the value was always in the totality of the product.”

Jacob also argues that those companies that are moving to these new, more restrictive licenses are only hurting themselves. “It turns out that the product was what mattered in the first place,” he said. “They continue to produce great enterprise software for their customers and their customers continue to be happy and continue to buy it, which is what they always would’ve done.” He also noted that he doesn’t think AWS will ever be better at running Elasticsearch than Elastic or, for that matter, better at running Chef than Chef itself.

It’s worth noting that Chef also today announced the launch of its Enterprise Automation Stack, which brings together all of Chef’s tools (Chef Automate, Infra, InSpec, Habitat and Workstation) under a unified umbrella.

“Chef is fully committed to enabling organizations to eliminate friction across the lifecycle of all of their applications, ensuring that, whether they build their solutions from our open source code or license our commercial distribution, they can benefit from collaboration as code,” said Chef CEO Barry Crist. “Chef Enterprise Automation Stack lets teams establish and maintain a consistent path to production for any application, in order to increase velocity and improve efficiency, so deployment and updates of mission-critical software become easier, move faster and work flawlessly.”

TradingView acquires TradeIt to add instant trading APIs to its investor toolkit

Posted on 2 April, 2019

After raising $37 million to bring its on-the-spot stock market analytics tools to a wider range of publishers and other internet partners, TradingView today has announced its first acquisition to supercharge the services that it offers to investors, wherever they happen to be online. The startup has acquired TradeIt, which has built an API for on-the-spot trading on any site that uses it.

The terms of the deal were not disclosed, but we understand from sources close to the deal that it was under $20 million, more specifically in the “high teens.” TradeIt, which used to be called Trading Ticket, had raised about $12 million from investors that included Peter Thiel’s mostly-fintech fund Valar Ventures, Citi Ventures and others. TradingView had raised just over $40 million with investors including Insight Partners, TechStars and others.

The deal is a big move toward consolidation: together, the two say they will serve more than 10 million monthly active users in 150 countries, covering some $70 billion in linked assets. It also promises better economies of scale and better margins for companies whose services reach consumers outside a “home” of their own.

The latter is a growing trend that has mirrored the rise of social media and other services that aggregate content from multiple sources; and also the bigger trend of instant, on-demand everything, where consumers are happier with the convenience of buying or engaging with something right when they want to, rather than shopping around, delaying or navigating to another place to do it.

That has also seen the rise of commerce APIs to buy things instantly, not to mention the emergence of a wide range of commerce applications that let people easily buy goods and services on the spot. (And in line with that, TradingView says that nearly half of its user base today is millennials, with an additional 13 percent even younger, Gen Z. “These groups are particularly drawn to [our] extensive charting expertise,” the company says.)

In fintech, and in the world of investing specifically, that’s a trend that has also helped the growth of cryptocurrency, which has opened up the world of investing and thinking about investing to a whole new class of consumers who — for better or worse — are hearing about investing opportunities via viral social media campaigns and other new kinds of channels. Whether cryptocurrency speculation bears out longer term, it is depositing a new class of people into the world of thinking about companies and investing in them.

That taps into the sweet spot where TradeIt and TradingView are building their business.

“TradeIt’s secure and compliant relationships with established U.S. retail brokerages, coupled with their robust integrations with top investing apps, allows TradingView to be part of the backbone of the investing ecosystem,” said Denis Globa, TradingView founder and CEO, in a statement.

TradingView’s partners today include Crunchbase, Investopedia, SeekingAlpha, Zacks, Binance, CME Group and Entrepreneur, where users are able to access a premium tier of TradingView tools by way of a subscription in order to do some instant data and price modelling of a company that they might be reading about. The thinking is that now they will also be able to go one step further by trading stocks related to that information. TradingView, meanwhile, can use that extra feature to make a little more money and pitch its service to partners as stickier, claiming 80 percent more time spent with publishers that integrate TradingView’s tools.

That’s something that the two companies can already attest to doing well in partnership.

“TradingView’s vision aligns strongly with our view of the distributed financial networks of the future,” said Nathan Richardson, TradeIt CEO, in a statement. “We’ve worked with TradingView for several years now, and always felt our complementary products and shared retail investing users makes us stronger together.”

Richardson and his cofounder Betsy Eisenberg, who are both joining TradingView, together built Yahoo Finance, so they are already well experienced in how to leverage the potential of bringing together content with utility.

“Nathan Richardson and Betsy Eisenberg are fintech pioneers who led the development of Yahoo! Finance from scratch. With them on board, we’re extremely excited about the growth potential,” Globa said.





Copyright 2015 - InnovatePC - All Rights Reserved
