Posted by Richy George on 8 May, 2021. This post was originally published on this site.
The internet is now our nervous system. We are constantly streaming and buying and watching and liking, our brains locked into the global information matrix as one universal and coruscating emanation of thought and emotion.
What happens when the machine stops though?
It’s a question that E.M. Forster was intensely focused on more than a century ago in a short story called, rightly enough, “The Machine Stops,” about a human civilization connected entirely through machines that one day just turn off.
Those fears of downtime are not just science fiction anymore. Outages aren’t just missing a must-watch TikTok clip. Hospitals, law enforcement, the government, every corporation — the entire spectrum of human institutions that constitute civilization now deeply rely on connectivity to function.
So when it comes to disaster response, the world has dramatically changed. In decades past, the singular focus could be roughly summarized as rescue and mitigation — save who you can while trying to limit the scale of destruction. Today though, the highest priority is by necessity internet access, not just for citizens, but increasingly for the on-the-ground first responders who need bandwidth to protect themselves, keep abreast of their mission objectives, and have real-time ground truth on where dangers lurk and where help is needed.
While the sales cycles might be arduous as we learned in part one and the data trickles have finally turned to streams in part two, the reality is that none of that matters if there isn’t connectivity to begin with. So in part three of this series on the future of technology and disaster response, we’re going to analyze the changing nature of bandwidth and connectivity and how they intersect with emergencies, taking a look at how telcos are creating resilience in their networks while defending against climate change, how first responders are integrating connectivity into their operations, and finally, exploring how new technologies like 5G and satellite internet will affect these critical activities.
Climate change is inducing more intense weather patterns all around the world, creating second- and third-order effects for industries that rely on environmental stability for operations. Few industries have to be as dynamic to the changing context as telecom companies, whose wired and wireless infrastructure is regularly buffeted by severe storms. Resiliency of these networks isn’t just needed for consumers — it’s absolutely necessary for the very responders trying to mitigate disasters and get the network back up in the first place.
Unsurprisingly, no issue looms larger for telcos than access to power — no juice, no bars. So all three of America’s major telcos — Verizon (which owns TechCrunch’s parent company Verizon Media, although not for much longer), AT&T and T-Mobile — have had to dramatically scale up their resiliency efforts in recent years to compensate both for the demand for wireless and the growing damage wrought by weather.
Jay Naillon, senior director of national technology service operations strategy at T-Mobile, said that the company has made resilience a key part of its network buildout in recent years, with investments in generators at cell towers that can be relied upon when the grid cannot. In “areas that have been hit by hurricanes or places that have fragile grids … that is where we have invested most of our fixed assets,” he said.
Like all three telcos, T-Mobile pre-deploys equipment in anticipation of disruptions. So when a hurricane begins to swirl in the Atlantic Ocean, the company will strategically fly in portable generators and mobile cell towers ahead of potential outages. “We look at storm forecasts for the year,” Naillon explained, and do “lots of preventative planning.” They also work with emergency managers and “run through various drills with them and respond and collaborate effectively with them” to determine which parts of the network are most at risk of damage in an emergency. Last year, the company partnered with StormGeo to accurately predict weather events.
Predictive AI for disasters is also a critical need for AT&T. Jason Porter, who leads public sector and the company’s FirstNet first-responder network, said that AT&T teamed up with Argonne National Laboratory to create a climate-change analysis tool to evaluate the siting of its cell towers and how they will weather the next 30 years of “floods, hurricanes, droughts and wildfires.” “We redesigned our buildout … based on what our algorithms told us would come,” he said, and the company has been elevating vulnerable cell towers four to eight feet on “stilts” to improve their resilience to at least some weather events, which “gave ourselves some additional buffer.”
AT&T has also had to manage the growing complexity of creating reliability amid the chaos of a climate-change-induced world. In recent years, “we quickly realized that many of our deployments were due to weather-related events,” and the company has been “very focused on expanding our generator coverage over the past few years,” Porter said. It has also been building out its portable infrastructure. “We essentially deploy entire data centers on trucks so that we can stand up essentially a central office,” he said, emphasizing that the company’s national disaster recovery team responded to thousands of events last year.
Particularly on its FirstNet service, AT&T has pioneered two new technologies to try to get bandwidth to disaster-hit regions faster. First, it has invested in drones to offer wireless services from the sky. After Hurricane Laura hit Louisiana last year with record-setting winds, our “cell towers were twisted up like recycled aluminum cans … so we needed to deploy a sustainable solution,” Porter described. So the company deployed what it dubs the FirstNet One — a “dirigible” that “can cover twice the cell coverage range of a cell tower on a truck, and it can stay up for literally weeks, refuel in less than an hour and go back up — so long-term, sustainable coverage,” he said.
Second, the company has been building out what it calls FirstNet MegaRange, a set of high-powered wireless equipment announced earlier this year that can deploy signals from miles away, say from a ship moored off a coast, to deliver reliable connectivity to first responders in the hardest-hit disaster zones.
As the internet has absorbed more of daily life, the norms for network resilience have become ever more exacting. Small outages can disrupt not just a first responder, but a child taking virtual classes and a doctor conducting remote surgery. From fixed and portable generators to rapid-deployment mobile cell towers and dirigibles, telcos are investing major resources to keep their networks running continuously.
Yet, these initiatives are ultimately costs borne by telcos increasingly confronting a world burning up. Across conversations with all three telcos and others in the disaster response space, there was a general sense that utilities increasingly have to insulate themselves in a climate-changed world. For instance, cell towers need their own generators because — as we saw with Texas earlier this year — even the power grid itself can’t be guaranteed to be there. Critical applications need to have offline capabilities, since internet outages can’t always be prevented. The machine runs, but the machine stops, too.
While we may rely on connectivity in our daily lives as consumers, disaster responders have been much more hesitant to fully transition to connected services. It is precisely when you are in the middle of a tornado and the cell tower is down that you realize a printed map might have been nice to have. Paper, pens, compasses — the old staples of survival flicks remain just as important in the field today as they were decades ago.
Yet, the power of software and connectivity to improve emergency response has forced a rethinking of field communications and how deeply technology is integrated on the ground. Data from the frontlines is extremely useful, and if it can be transmitted, dramatically improves the ability of operations planners to respond safely and efficiently.
Both AT&T and Verizon have made large investments in directly servicing the unique needs of the first responder community, with AT&T in particular gaining prominence with its FirstNet network, which it exclusively operates through a public-private partnership with the Department of Commerce’s First Responder Network Authority. The government offered a special spectrum license to the FirstNet authority in Band 14 in exchange for the buildout of a responder-exclusive network, a key recommendation of the 9/11 Commission, which found that first responders couldn’t communicate with each other on the day of those deadly terrorist attacks. Now, Porter of AT&T says that the company’s buildout is “90% complete” and is approaching 3 million square miles of coverage.
Why so much attention on first responders? The telcos are investing here because in many ways, first responders are on the frontiers of technology. They need edge computing, rapid AI/ML decision-making, the bandwidth and low latency of 5G (which we will get to in a bit) and high reliability, and in general they are fairly profitable customers to boot. In other words, what first responders need today is what consumers in general are going to want tomorrow.
Cory Davis, director of public safety strategy and crisis response at Verizon, explained that “more than ever, first responders are relying on technology to go out there and save lives.” His counterpart, Nick Nilan, who leads product management for the public sector, said that “when we became Verizon, it was really about voice [and] what’s changed over the last five [years] is the importance of data.” He brings attention to tools for situational awareness, mapping and more that are becoming standard in the field. Everything first responders do “comes back to the network — do you have the coverage where you need it, do you have the network access when something happens?”
The challenge for the telcos is that we all want access to that network when catastrophe strikes, which is precisely when network resources are most scarce. The first responder trying to communicate with their team on the ground or their operations center is inevitably competing with a citizen letting friends know they are safe — or perhaps just watching the latest episode of a TV show in their vehicle as they are fleeing the evacuation zone.
That competition is the argument for a completely segmented network like FirstNet, which has its own dedicated spectrum with devices that can only be used by first responders. “With remote learning, remote work and general congestion,” Porter said, telcos and other bandwidth providers were overwhelmed with consumer demand. “Thankfully we saw through FirstNet … clearing that 20 MHz of spectrum for first responders” helped keep the lines clear for high-priority communications.
FirstNet’s big emphasis is on its dedicated spectrum, but that’s just one component of a larger strategy to give first responders always-on and ready access to wireless services. AT&T and Verizon have made prioritization and preemption key operational components of their networks in recent years. Prioritization gives public safety users better access to the network, while preemption can include actively kicking off lower-priority consumers from the network to ensure first responders have immediate access.
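To make the distinction concrete, the difference between prioritization and preemption can be sketched as a toy admission-control loop. Everything here — the capacity figure, the priority values, the function names — is an invented illustration, not how any carrier's network actually works:

```python
# Toy sketch of cell-site admission control: prioritization vs. preemption.
# All names and numbers are hypothetical illustrations, not carrier internals.

CAPACITY = 3  # hypothetical number of concurrent sessions a cell can carry

def admit(active, request):
    """Try to admit `request` (a dict with 'id' and 'priority'); first
    responders have priority 0, consumers priority 1. Returns the new
    active-session list and a list of sessions dropped by preemption."""
    dropped = []
    if len(active) < CAPACITY:
        active = active + [request]  # free capacity: admit anyone
    elif request["priority"] == 0:
        # Preemption: evict a lower-priority consumer session, if any exists.
        victims = [s for s in active if s["priority"] > 0]
        if victims:
            victim = victims[0]
            dropped.append(victim)
            active = [s for s in active if s is not victim] + [request]
    # Otherwise the cell is full and a consumer request is simply refused.
    return active, dropped
```

In this sketch, a consumer request against a full cell is refused (prioritization by refusal), while a first-responder request against a full cell actively evicts a consumer session (preemption).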
Nilan of Verizon said, “The network is built for everybody … but once we start thinking about who absolutely needs access to the network at a period of time, we prioritize our first responders.” Verizon has prioritization, preemption, and now virtual segmentation — “we separate their traffic from consumer traffic” so that first responders don’t have to compete if bandwidth is limited in the middle of a disaster. He noted that all three approaches have been enabled since 2018, and Verizon’s suite of bandwidth and software for first responders comes under the newly christened Verizon Frontline brand that launched in March.
With increased bandwidth reliability, first responders are connected in ways that even a decade ago would have been unfathomable. Tablets, sensors, connected devices and tools — equipment that would once have been manual is now increasingly digital.
That opens up a wealth of possibilities now that the infrastructure is established. My interview subjects suggested applications as diverse as the decentralized coordination of response team movements through GPS and 5G; real-time updated maps that offer up-to-date risk analysis of how a disaster might progress; pathfinding for evacuees that’s updated as routes fluctuate; AI damage assessments even before the recovery process begins; and much, much more. Many of those possibilities, which have only ever been marketing-speak and technical promises in the past, may finally be realized in the coming years.
We’ve been hearing about 5G for years now, and even 6G every once in a while just to cause reporters heart attacks, but what does 5G even mean in the context of disaster response? After years of speculation, we are finally starting to get answers.
Naillon of T-Mobile noted that the biggest benefit of 5G is that it “allows us to have greater coverage” particularly given the low-band spectrum that the standard partially uses. That said, “As far as applications — we are not really there at that point from an emergency response perspective,” he said.
Meanwhile, Porter of AT&T said that “the beauty of 5G that we have seen there is less about the speed and more about the latency.” Consumers have mostly seen marketing around voluminous bandwidth, but in the first-responder world, latency and edge computing tend to be the most desirable features. For instance, devices can relay video to each other on the frontlines, without necessarily needing a backhaul to the main wireless network. On-board processing of image data could allow for rapid decision-making in environments where seconds can be vital to the success of a mission.
That flexibility is allowing for many new applications in disaster response, and “we are seeing some amazing use cases coming out of our 5G deployments [and] we have launched some of our pilots with the [Department of Defense],” Porter said. He offered an example of “robotic dogs to go and do bomb dismantling or inspecting and recovery.”
Verizon has made innovating on new applications a strategic goal, launching a 5G First Responders Lab dedicated to guiding a new generation of startups to build at this crossroads. Nilan of Verizon said that the incubator has had more than 20 companies across four different cohorts, working on everything from virtual reality training environments to AR applications that allow firefighters to “see through walls.” His colleague Davis said that “artificial intelligence is going to continue to get better and better and better.”
Blueforce is a company that went through the first cohort of the Lab. The company uses 5G to connect sensors and devices together to allow first responders to make the best decisions they can with the most up-to-date data. Michael Helfrich, founder and CEO, said that “because of these new networks … commanders are able to leave the vehicle and go into the field and get the same fidelity” of information that they normally would have to be in a command center to receive. He noted that in addition to classic user interfaces, the company is exploring other ways of presenting information to responders. “They don’t have to look at a screen anymore, and [we’re] exploring different cognitive models like audio, vibration and heads-up displays.”
5G will offer many new ways to improve emergency responses, but that doesn’t mean that our current 4G networks will just disappear. Davis said that many sensors in the field don’t need the kind of latency or bandwidth that 5G offers. “LTE is going to be around for many, many more years,” he said, pointing to the hardware and applications taking advantage of LTE-M standards for Internet of Things (IoT) devices as a key development for the future here.
Michael Martin of emergency response data platform RapidSOS said that “it does feel like there is renewed energy to solve real problems,” in the disaster response market, which he dubbed the “Elon Musk effect.” And that effect definitely does exist when it comes to connectivity, where SpaceX’s satellite bandwidth project Starlink comes into play.
Satellite uplinks have historically had horrific latency and bandwidth constraints, making them difficult to use in disaster contexts. Furthermore, depending on the particular type of disaster, satellite uplinks can be astonishingly challenging to set up given the ground environment. Starlink promises to shatter all of those barriers: easier connections, fat pipes, low latencies and a global footprint that would be the envy of any first responder. Its network is still under active development, so it is difficult to foresee precisely what its impact will be on the disaster response market. But it is an offering to watch closely in the years ahead, because if its promises pan out, it has the potential to completely upend the way we respond to disasters this century.
Yet, even if we discount Starlink, the change coming this decade in emergency response represents a complete revolution. The depth and resilience of connectivity is changing the equation for first responders from complete reliance on antiquated tools to an embrace of the future of digital computing. The machine is no longer stoppable.
Posted by Richy George on 7 May, 2021. This post was originally published on this site.
Robotic process automation (RPA) has certainly been getting a lot of attention in the last year, with startups, acquisitions and IPOs all coming together in a flurry of market activity. It all seemed to culminate with UiPath’s IPO last month. The company that appeared to come out of nowhere in 2017 eventually had a final private valuation of $35 billion. It then had the audacity to match that at its IPO. A few weeks later, it still has a market cap of over $38 billion in spite of the stock price fluctuating at points.
Was this some kind of peak for the technology or a flash in the pan? Probably not. While it all seemed to come together in the last year with a big increase in attention to automation in general during the pandemic, it’s a market category that has been around for some time.
RPA allows companies to automate a group of highly mundane tasks and have a machine do the work instead of a human. Think of finding an invoice amount in an email, placing the figure in a spreadsheet and sending a Slack message to Accounts Payable. You could have humans do that, or you could do it more quickly and efficiently with a machine. We’re talking mind-numbing work that is well suited to automation.
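The invoice task described above is mechanical enough that a few lines of code can stand in for it. This sketch is purely illustrative — the regex, the CSV layout and the stand-in for the Slack notification are all assumptions, not any vendor's actual bot:

```python
import csv
import io
import re

def extract_invoice_amount(email_body):
    """Pull the first dollar amount that follows the word 'invoice'
    (a deliberately naive pattern, for illustration only)."""
    match = re.search(r"invoice[^$]*\$([\d,]+\.\d{2})", email_body, re.IGNORECASE)
    return match.group(1).replace(",", "") if match else None

def record_amount(amount, sheet):
    """Append the amount to a CSV 'spreadsheet' (here an in-memory file)."""
    csv.writer(sheet).writerow(["invoice", amount])

email = "Hi team, invoice #4821 for $1,249.50 is attached. Please process."
amount = extract_invoice_amount(email)
sheet = io.StringIO()
record_amount(amount, sheet)
# In a real bot, a Slack webhook call to Accounts Payable would go here;
# a print stands in for it in this sketch.
print(f"Posting to #accounts-payable: invoice for ${amount} recorded")
```

The point is less the code than the shape of the work: read a value from one system, write it to a second, notify a third — exactly the kind of repetitive glue RPA products automate at scale.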
In 2019, Gartner found RPA was the fastest-growing category in enterprise software. In spite of that, the market is still surprisingly small, with IDC estimating it will reach just $2 billion in 2021. That’s pretty tiny for the enterprise, but it shows that there’s plenty of room for this space to grow.
We spoke to five investors to find out more about RPA, and the general consensus was that we are just getting started. While we will continue to see the players at the top of the market — like UiPath, Automation Anywhere and Blue Prism — jockeying for position with the big enterprise vendors and startups, the size and scope of the market has a lot of potential and is likely to keep growing for some time to come.
To learn about all of this, we queried the following investors:
We have seen a range of RPA startups emerge in recent years, with companies like UiPath, Blue Prism and Automation Anywhere leading the way. As the space matures, where do the biggest opportunities remain?
Mallun Yen: One of the fastest-growing categories of software, RPA has been growing at over 60% in recent years, versus 13% for enterprise software generally. But we’ve barely scratched the surface. The COVID-19 pandemic forced companies to shift how they run their business, how they hire and allocate staff.
Given that the workforce will remain at least partially permanently remote, companies recognize that this shift is also permanent, and so they need to make fundamental changes to how they run their businesses. It’s simply suboptimal to hire, train and deploy remote employees to run routine processes, which are prone to, among other things, human error and boredom.
Jai Das: All the companies that you have listed are focused on automating simple repetitive tasks that are performed by humans. These are mostly data entry and data validation jobs. Most of these tasks will be automated in the next couple of years. The new opportunity lies in automating business processes that involve multiple humans and machines within complicated workflows using AI/ML.
Sometimes this is also called process mining. There have been BPM companies in the past that have tried to automate these business processes, but they required a lot of services to implement and maintain these automated processes. AI/ML is providing a way for software to replace all these services.
Soma Somasegar: For all the progress that we have seen in RPA, I think it is still early days. The global RPA market generated more than $2 billion in revenue this past year and is expected to cross $20 billion in the coming decade, growing at a CAGR of more than 30% over the next seven to eight years, according to analysts such as Gartner.
That’s an astounding growth rate in the coming years and is a reflection of how early we are in the RPA journey and how much more is ahead of us. A recent study by Deloitte indicates that up to 50% of the tasks in businesses performed by employees are considered mundane, administrative and labor-intensive. That is just a recipe for a ton of process automation.
There are a lot of opportunities that I see here, including process discovery and mining; process analytics; application of AI to drive effective, more complex workflow automation; and using low code/no code as a way to enable a broader set of people to be able to automate tasks, processes and workflows, to name a few.
Laela Sturdy: We’re a long way from needing to think about the space maturing. In fact, RPA adoption is still in its early infancy when you consider its immense potential. Most companies are only now just beginning to explore the numerous use cases that exist across industries. The more enterprises dip their toes into RPA, the more use cases they envision.
I expect to see market leaders like UiPath continue to innovate rapidly while expanding the breadth and depth of their end-to-end automation platforms. As the technology continues to evolve, we should expect RPA to penetrate even more deeply into the enterprise and to automate increasingly more — and more critical — business processes.
Ed Sim: Most large-scale automation projects require a significant amount of professional services to deliver on the promises, and two areas where I still see opportunity include startups that can bring more intelligence and faster time to value. Examples include process discovery, which can help companies quickly and accurately understand how their business processes work and prioritize what to automate versus just rearchitecting an existing workflow.
Posted by Richy George on 6 May, 2021. This post was originally published on this site.
While insurance providers continue to be disrupted by startups like Lemonade, Alan, Clearcover, Pie and many others applying tech to rethink how to build a business around helping people and companies mitigate risk with some financial security, one issue that has not disappeared is fraud. Today, a startup out of France is announcing funding for AI technology that it has built to help insurance providers, old and new, detect and prevent it.
Shift Technology provides a set of AI-based SaaS tools that insurance companies use to scan for and automatically flag fraud scenarios across a range of use cases, including claims fraud, claims automation, underwriting, subrogation detection and financial crime detection; it also provides fraud detection for the travel insurance sector. The company has raised $220 million, money that it will use both to expand in the property and casualty insurance market, where it is already strong, and to move into health, while doubling down on growing its business in the U.S.
This Series D is being led by Advent International, via Advent Tech, with participation from Avenir and others. Accel, Bessemer Venture Partners, General Catalyst and Iris Capital — who were all part of Shift’s Series C led by Bessemer in 2019 — also participated. With this round, Paris- and Boston-based Shift Technology has now raised some $320 million and confirmed that it is now valued at over $1 billion.
The company currently has around 100 customers across 25 countries, including Generali France and Mitsui Sumitomo, and says that it has already analyzed nearly two billion claims, data that’s feeding its machine learning algorithms to improve how they work.
The challenge (or I suppose, opportunity) that Shift is tackling, however, is much bigger. The Coalition Against Insurance Fraud, a non-profit in the U.S., estimates that at least $80 billion of fraudulent claims are made annually in the U.S. alone, but the figure is likely significantly higher. One problem has, ironically, been the move to more virtualized processes, which open the door to malicious actors exploiting loopholes in claims filing and fudging information.
Shift is also not alone in tackling this issue: the market for insurance fraud detection globally was estimated to be worth $2.5 billion in 2019 and projected to be worth as much as $8 billion by 2024.
In addition to others in claims management tech such as Brightcore and Guidewire, many of the wave of insurtech startups are building their own in-house AI-based fraud protection. It is also very likely that we will see fraud protection services built in fintech to guard against financial crime make their way to insurance, as the mechanics of the two and the compliance issues both face are very closely aligned.
“The entire Shift team has worked tirelessly to build this company and provide insurers with the technology solutions they need to empower employees to best be there for their policyholders. We are thrilled to partner with Advent International, given their considerable sector expertise and global reach and are taking another giant step forward with this latest investment,” stated Jeremy Jawish, CEO and co-founder, Shift Technology, in a statement. “We have only just scratched the surface of what is possible when AI-based decision automation and optimization is applied to the critical processes that drive the insurance policy lifecycle.”
For its backers, one key point with Shift is that it’s helping older providers bring on more tools and services that can help them improve their margins as well as better compete against the technology built by newer players.
“Since its founding in 2014, Shift has made a name for itself in the complex world of insurance,” said Thomas Weisman, an Advent director, in a statement. “Shift’s advanced suite of SaaS products is helping insurers to reshape manual and often time-consuming claims processes in a safer and more automated way. We are proud to be part of this exciting company’s next wave of growth.”
Posted by Richy George on 5 May, 2021. This post was originally published on this site.
Each of the big cloud platforms has its own methodology for passing on security information to logging and security platforms, leaving it to the vendors to find proprietary ways to translate that into a format that works for their tool. The Cloud Security Notification Framework (CSNF), a new working group that includes Microsoft, Google and IBM, is trying to create an open, standard way of delivering this information.
Nick Lippis, co-founder and co-chairman of ONUG, an open enterprise cloud community that is the primary driver of CSNF, says that what they’ve created is part standard and part open source. “What we’ve been really focusing on is how do we automate governance on the cloud. And so security was the place that was ripe for that where we can actually provide some value right away for the community,” he said.
While they’ve pulled in some of the big cloud vendors, they’ve also got large companies who consume cloud services like FedEx, Pfizer and Goldman Sachs. Conspicuously missing from the group is AWS, the biggest player in the cloud infrastructure market by far. But Lippis says that he hopes as the project matures, other companies including AWS will join.
“There’s lots of security programs and industry programs that get out there and that people are asking them to join, and so some companies want to wait to see how well this pans out [before making a commitment to it],” Lippis said. His hope is that over time Amazon will come around and join the group, but in the meantime they are working to get to the point where everyone in the community will feel good about what they’re doing.
The idea is to start with security alerts and find a way to build a common format to give companies the same kind of system they have in the data center to track security alerts in the cloud. The way they hope to do that is with this open dialogue between the cloud vendors and the companies involved with the group.
“So the structure of that is that there’s a steering committee that is chaired by CISOs from these large cloud consumer brands, and also the cloud providers, and they provide voting and direction. And then there’s the working group where all the work is done. The beauty of what we do is that we have now consumers and also providers working together and collaborating,” he said.
Don Duet, a member of ONUG and CEO and co-founder of Concourse Labs, has been involved in the formation of the CSNF. He says that to keep the project focused, they are looking at this as a data management problem and establishing a common vocabulary for everyone in the group to work with.
“How do you build a consensus on what are the types of terms that everybody can agree on and then you build the underlying basis so that the experts in your resource providers in this case, Cloud Service Providers, can bless how their data [connects] to those common standards,” Duet explained.
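The data-management problem Duet describes — each provider blessing a mapping from its own alert fields onto agreed common terms — can be sketched roughly like this. The field names and the common schema below are invented for illustration; they are not the actual CSNF vocabulary:

```python
# Hypothetical per-provider field mappings onto an agreed common vocabulary.
# These names are illustrative only, not the real CSNF schema.
FIELD_MAPS = {
    "cloud_a": {"eventSeverity": "severity", "occurredAt": "timestamp",
                "resourceId": "resource"},
    "cloud_b": {"level": "severity", "time": "timestamp", "arn": "resource"},
}

def normalize_alert(provider, raw_alert):
    """Translate a provider-specific alert dict into the common format."""
    mapping = FIELD_MAPS[provider]
    common = {canonical: raw_alert[field]
              for field, canonical in mapping.items() if field in raw_alert}
    common["provider"] = provider
    return common

alert = normalize_alert("cloud_b",
                        {"level": "HIGH", "time": "2021-05-05T12:00:00Z",
                         "arn": "resource-123"})
```

Once alerts from every provider arrive in one shape, a security or logging tool can consume them without writing a proprietary translator per cloud — which is the value the working group is after.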
He says that particular problem is more of an organizational problem than a technical one, getting the various stakeholders together and just building consensus around this. At this point, they have that process in place and the next step is proving it by having the various companies involved in this test it out in the coming months.
After they get past the testing phase, in October they plan to demonstrate what this looks like in a before-and-after scenario, with and without the new framework. As the group works toward these goals, the hope is that the framework will become more established and other companies and vendors will come on board, making this a more standard way of sharing security alerts. If all goes well, they hope to build other security information into the framework over time.
Posted by Richy George on 5 May, 2021. This post was originally published on this site.
Timescale, makers of the open source TimescaleDB time series database, announced a $40 million Series B financing round today. The investment comes just over two years after it got a $15 million Series A.
Redpoint Ventures led today’s round with help from existing investors Benchmark, New Enterprise Associates, Icon Ventures and Two Sigma Ventures. The company reports it has now raised approximately $70 million.
TimescaleDB lets users measure data across a time dimension, covering anything that changes over time. “What we found is we need a purpose-built database for it to handle scalability, reliability and performance, and we like to think of ourselves as the category-defining relational database for time series,” CEO and co-founder Ajay Kulkarni explained.
He says that the choice to build their database on top of Postgres when it launched four years ago was a key decision. “There are a few different databases that are designed for time series, but we’re the only one where developers get the purpose-built time series database plus a complete Postgres database all in one…,” he said.
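The workload Kulkarni describes centers on queries like "average this metric per device per time window." TimescaleDB exposes this in SQL over Postgres (its time_bucket() function over a hypertable); the sketch below emulates a 60-second bucket with Python's stdlib sqlite3 purely so the example is self-contained and runnable. The table and data are invented for illustration.

```python
import sqlite3

# In-memory table of time-series readings: (timestamp seconds, device, temperature)
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE metrics (ts INTEGER, device TEXT, temp REAL)")
rows = [(0, "d1", 20.0), (30, "d1", 22.0), (65, "d1", 24.0), (90, "d1", 26.0)]
conn.executemany("INSERT INTO metrics VALUES (?, ?, ?)", rows)

# Group readings into 60-second buckets and average per device,
# the shape of aggregation a time-series database is built to make fast
buckets = conn.execute(
    "SELECT (ts / 60) * 60 AS bucket, device, AVG(temp) "
    "FROM metrics GROUP BY bucket, device ORDER BY bucket"
).fetchall()
print(buckets)  # → [(0, 'd1', 21.0), (60, 'd1', 25.0)]
```

Timescale's pitch is that developers write this kind of query in plain SQL and get time-series-specific storage and performance underneath, without leaving Postgres.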
While the company has an open source version, last year it decided that, rather than selling a separate enterprise version as it had been, it would include all of that functionality in the free version of the product and place its bet entirely on the cloud for revenue.
“We decided that we’re going to make a bold bet on the cloud. We think cloud is where the future of database adoption is, and so in the last year, […] we made all of our enterprise features free. If you want to test it yourself, you get the whole thing, but if you want a managed service, then we’re available to run it for you,” he said.
The community approach is working to attract users, with over 2 million monthly active databases, some of which the company is betting will convert to the cloud service over time. Timescale is based in New York City, but it’s a truly remote organization with 60 employees spread across 20 countries and every continent except Antarctica.
He says that being a global company creates new dimensions of diversity and new ways of thinking about it. “I think one thing that is actually kind of an interesting challenge for us is what does D&I mean in a totally global org. A lot of people focus on diversity and inclusion within the U.S., but we think we’re doing better than most tech companies in terms of racial diversity, gender diversity,” he said.
And being remote first isn’t going to change even when we get past the pandemic. “I think it may not work for every business, but I think like being remote first has been a real good thing for us,” he said.
Posted by Richy George on 5 May, 2021
With cybercrime on course to be a $6 trillion problem this year, organizations are throwing ever more resources at the issue to avoid being a target. Now, a startup that’s built a platform to help them stress-test the investments that they have made into their security IT is announcing some funding on the back of strong demand from the market for its tools.
Cymulate lets organizations and their partners run machine-based attack simulations on their networks to find vulnerabilities, then automatically receive guidance on how to fix what is not working well enough. The startup, co-headquartered in Israel and New York, has picked up $45 million, which it will use to continue investing in its platform and to ramp up operations after doubling revenue last year. Its customer list now numbers 300 large enterprises and mid-market companies, including the Euronext stock exchange network as well as service providers such as NTT and Telit.
London-based One Peak Partners is leading this Series C, with previous investors Susquehanna Growth Equity (SGE), Vertex Ventures Israel, Vertex Growth and Dell Technologies Capital also participating.
According to Eyal Wachsman, the CEO and co-founder, Cymulate’s technology has been built not just to improve an organization’s security, but as an automated, machine-learning-based system to help organizations get the most out of the security investments they have already made.
“Our vision is to be the largest cybersecurity ‘consulting firm’ without consultants,” he joked.
The valuation is not being disclosed, but as some measure of the company’s momentum, David Klein, managing partner at One Peak, said in an interview that he expects Cymulate to hit a $1 billion valuation within two years at its current rate of growth and revenue. The startup has now raised $71 million, so the valuation is likely in the mid-hundreds of millions. (We’ll continue trying to get a better number to have a more specific data point here.)
Cymulate, pronounced “sigh-mulate” (the “cy” as in “cyber,” a pun on “simulate”), is cloud-based but works across both cloud and on-premises environments. The idea is that it complements work done by (human) security teams both inside and outside an organization, as well as the security IT investments, in terms of software or hardware, that they have already made.
“We do not replace — we bring back the power of the expert by validating security controls and checking whether everything is working correctly to optimize a company’s security posture,” Wachsman said. “Most of the time, we find our customers are using only 20% of the capabilities that they have. The main idea is that we have become a standard.”
The company’s tools are based in part on the MITRE ATT&CK framework, a knowledge base of threats, tactics and techniques used by a number of other cybersecurity services, including several building continuous validation services that compete with Cymulate. These include the likes of FireEye, Palo Alto Networks, Randori, Khosla-backed AttackIQ and many more.
Although Cymulate is optimized to help customers better use the security tools they already have, it is not meant to replace other security apps, Wachsman noted, even if a by-product might be that customers buy fewer of those apps in the future.
“I believe my message every day when talking with security experts is to stop buying more security products,” he said in an interview. “They won’t help defend you from the next attack. You can use what you’ve already purchased as long as you configure it well.”
In his words, Cymulate acts as a “black box” on the network, where it integrates with security and other software (it can also work without integrating but integrations allow for a deeper analysis). After running its simulations, it produces a map of the network and its threat profile, an executive summary of the situation that can be presented to management and a more technical rundown, which includes recommendations for mitigations and remediations.
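The reporting step described above can be pictured as a roll-up over tagged findings. Below is a hypothetical sketch in Python: simulation results tagged with MITRE ATT&CK technique IDs are condensed into an executive summary plus per-gap remediation notes. The data model and remediation text are illustrative assumptions, not Cymulate's actual output format (though T1566, T1110 and T1041 are real ATT&CK technique IDs).

```python
# Each finding: which ATT&CK technique was simulated, whether existing
# controls blocked it, and a suggested fix when they did not.
findings = [
    {"technique": "T1566", "name": "Phishing", "blocked": False,
     "fix": "Tighten mail-gateway attachment filtering"},
    {"technique": "T1110", "name": "Brute Force", "blocked": True, "fix": None},
    {"technique": "T1041", "name": "Exfiltration Over C2 Channel", "blocked": False,
     "fix": "Enable DLP rules on the egress proxy"},
]

def summarize(findings):
    """Produce a one-line executive summary and a remediation list for gaps."""
    gaps = [f for f in findings if not f["blocked"]]
    summary = f"{len(gaps)}/{len(findings)} simulated techniques got through"
    remediations = [(f["technique"], f["fix"]) for f in gaps]
    return summary, remediations

summary, remediations = summarize(findings)
print(summary)  # → 2/3 simulated techniques got through
for tid, fix in remediations:
    print(f"{tid}: {fix}")
```

The real product's two audiences map onto the two outputs: the summary line for management, the technique-by-technique remediation list for the technical team.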
Alongside validating and optimizing existing security apps and identifying vulnerabilities in the network, Cymulate has also built special tools for use cases particularly relevant to how businesses operate today. These include evaluating remote-working deployments, the state of a network following an M&A process, the security posture of an organization that links up with third parties in supply chain arrangements, and how well an organization’s security architecture meets (or potentially conflicts with) privacy and other regulatory compliance requirements. It has also built a “purple team” deployment: where security teams lack the resources to run a separate “red team” for stress testing, the organization’s blue team can use Cymulate to build a machine-learning-based “team” to do it.
The fact that Cymulate has built the infrastructure to run all of these processes speaks to how much more it could build, especially as the threat landscape, and how we do business, both continue to evolve. Even as it is, though, the opportunity today is a massive one, with Gartner estimating that some $170 billion will be spent on information security by enterprises in 2022. That’s one reason why investors are here, too.
“The increasing pace of global cyber security attacks has resulted in a crisis of trust in the security posture of enterprises and a realization that security testing needs to be continuous as opposed to periodic, particularly in the context of an ever-changing IT infrastructure and rapidly evolving threats. Companies understand that implementing security solutions is not enough to guarantee protection against cyber threats and need to regain control,” said Klein, in a statement. “We expect Cymulate to grow very fast,” he told me more directly.
Posted by Richy George on 4 May, 2021
We tend to think of venture capital in tens or hundreds of millions, even billions of dollars, so it’s refreshing to find Evening Fund, a new $2 million micro fund that focuses on small investments between $50,000 and $100,000 as it seeks to help young startups with early funding.
The new fund was launched by Kat Orekhova and Rapha Danilo. Orekhova, who started her career as a math professor, is a former Facebook data scientist who has been dabbling in angel investing and working with young startups for a while now. They call it Evening Fund because they work as founders by day and investors by night.
She says that she wanted to create something more formal to help early-stage startups get off the ground and has help from limited partners that include Sarah Smith at Bain Capital, Lee Linden, general partner at Quiet Capital and a long list of tech industry luminaries.
Orekhova says she and her partner invest small sums of money in B2B SaaS companies at the pre-seed, seed and occasionally Series A stages. They will invest in consumer companies here and there as well. She says one of their key value propositions is that they can help with more than just the money. “One way in which I think Rapha and I can really help our founders is that we give very specific, practical advice, not just kind of super high level,” she told me.
That could be something like how to hire your first designer where the founders may not even know what a designer does. “You’re figuring out ‘how do I hire my first designer?’ and ‘what does the designer even do?’ because most founders have never hired a designer before. So we give them extremely practical hands-on stuff like ‘here are the competencies’ or ‘what’s the difference between a graphic designer, a visual designer, a UX designer and a researcher,’ ” she said. They go so far as to give them a list of candidates to help them get going.
She says that she realized while she was at Facebook that she wanted to eventually start a company, so she began volunteering her time to work with companies going through Y Combinator. “I think a lot of people don’t know where to start, but in my case I looked at the YC list, found a company that I thought I could be helpful to. I reached out cold and said ‘Hey, I don’t want money. I don’t want equity. I just want to try to be helpful to you and see where that goes,’ ” she said.
That led to scouting startups for some larger venture capital firms and eventually dabbling in financing some of the startups she was helping. Today’s announcement is the culmination of those years of work and the groundwork she laid to familiarize herself with how the startup ecosystem works.
The new firm already has its first investment under its belt, Dala, an AI-powered internal search tool that helps connect users to workplace knowledge that’s often locked in applications like Google Suite, Slack and Notion.
As though Evening isn’t enough to keep her and Danilo busy, they are also each working on their own startups. Orekhova wasn’t ready to share much on that just yet as her company remains in stealth.
Posted by Richy George on 4 May, 2021
SAP CEO Christian Klein was appointed co-CEO with Jennifer Morgan last April just as the pandemic was hitting full force across the world. Within six months, Morgan was gone and he was sole CEO, put in charge of a storied company at 38 years old. By October, its stock price was down and revenue projections for the coming years were flat.
That is definitely not the way any CEO wants to start their tenure, but the pandemic forced Klein to make some decisions to move his customers to the cloud faster. That, in turn, has had an impact on revenue while the transition is underway. While it makes sense to make this move now, investors weren’t happy with the news.
There was also the decision to spin out Qualtrics, the company his predecessor acquired for $8 billion in 2018. As he looked back on the one-year mark, Klein sat down with me to discuss all that has happened and the unique set of challenges he faced.
Starting in the same month that a worldwide pandemic blew up presented unique challenges for a new leader. For starters, Klein couldn’t visit anyone in person to get to know the team. Instead, he went straight to Zoom and needed to make sure everything kept running.
The CEO says that the company kept chugging along in spite of the disruption. “When I took over this new role, I of course had some concerns about how to support 400,000 customers. After one year, I’ve been astonished. Our support centers are running without disruption and we are proud of that and continue to deliver value,” he said.
Taking over when he couldn’t meet in person with employees or customers has worked out better than he thought. “It was much better than I expected, and of course personally for me, it’s different. I’m the CEO, but I wasn’t able to travel and so I didn’t have the opportunity to go to the U.S., and this is something that I’m looking forward to now, meeting people and talking to them live,” he said.
That’s something he simply wasn’t able to do for his first year because of travel restrictions, so he says communication has been key, something a lot of executives have discussed during COVID. “I’m in regular contact with the employees, and we do it virtually. Still, it’s not the same as when you do it live, but it helps a lot these days. I would say you cannot over-communicate in such times,” he said.
Posted by Richy George on 4 May, 2021
Activist investor Starboard Value is clearly fed up with Box, and it let the cloud content management company know it in no uncertain terms in a letter published yesterday. The firm, which bought a 7.7% stake in Box two years ago, claims the company is underperforming, executing poorly and making bad business decisions, and it wants to inject the board of directors with new blood.
While they couched the letter in mostly polite language, it’s quite clear Starboard is exasperated with Box. “While we appreciate the dialogue we have had with Box’s management team and Board of Directors (the “Board”) over the past two years, we have grown increasingly frustrated with continued poor results, questionable capital allocation decisions, and subpar shareholder returns,” Starboard wrote in its letter.
Box, as you can imagine, did not take kindly to the shot across its bow, responding in a press release that it has bent over backwards to accommodate Starboard, including refreshing the board last year with several new members, whom it points out were approved by Starboard.
“Box has a diverse and independent Board with directors who bring extensive technology experience across enterprise and consumer markets, enterprise IT, and global go-to-market strategy, as well as deep financial acumen and proven track records of helping public companies drive disciplined growth, profitability, and stockholder value. Furthermore, seven of the ten directors on the Box Board will have joined the Board within the last three years,” the company wrote in a statement. In other words, Box is saying it already has injected the new blood that Starboard claims it wants.
Box recently got a $500 million cash injection from KKR, widely believed to be an attempt to bulk up cash reserves with the goal of generating growth via acquisition. Starboard was particularly taken aback by this move, however. “The only viable explanation for this financing is a shameless and utterly transparent attempt to “buy the vote” and shows complete disregard for proper corporate governance and fiscal discipline,” Starboard wrote.
Alan Pelz-Sharpe, founder and principal analyst at Deep Analysis, a firm that closely tracks the content management market, says the two sides clearly aren’t aligned and that’s not likely to change. “Starboard targeted and gained a seat on the board at Box at a difficult time for the firm; that’s the modus operandi for activist investors. Since that time there has clearly been a lot of improvement in terms of Box’s financial goals. However, there is and will remain a misalignment between Starboard’s goals and Box, led by Levie, as a whole. Though both would like to see the share price rise, Starboard’s end goal is most likely to see Box acquired, sooner rather than later, and that is not Box’s goal,” he said.
Starboard believes the only way to resolve this situation is to inject the board with still more new blood, taking a swipe at the Box leadership team while it was at it. “There is no good reason that Box should be unable to deliver improved growth and profitability, at least in-line with better performing software companies, which, in turn, would create significant shareholder value,” Starboard wrote.
As such, the firm indicated it would be putting up its own slate of board candidates at the company’s next annual shareholder meeting. In the tit for tat this exchange has become, Box indicated it would be doing the same.
Meanwhile, Box vigorously defended its results. “In the past year, under the oversight of the Operating Committee, the company has made substantial progress across all facets of the business — strategic, operational and financial — as demonstrated by the strong results reported for the full year of fiscal 2021,” the company wrote, pointing to revenue of $771 million in its last fiscal year, up 11% year over year, as proof of that progress.
It’s unclear how this standoff will play out, but clearly Starboard wants to take over the Board and have its way with Box, believing that it can perform better if it were in charge. That could result ultimately, as Pelz-Sharpe suggested, in Box being acquired.
We would appear to be heading for a showdown, and when it’s over, Box could be a very different company, or the current leadership could assert control once and for all and proceed with Box’s current growth strategy still in place. Time will tell which is the case.
Posted by Richy George on 3 May, 2021
It’s widely known that Dell has a debt problem left over from its massive acquisition of EMC in 2016, and it seems to be moving this year to eliminate part of it in multi-billion chunks. The first step was spinning out VMware as a separate company last month, a move expected to net close to $10 billion.
The second, long expected, finally dropped last night when the company announced it was selling Boomi to a couple of private equity firms for $4 billion. Francisco Partners is joining forces with TPG to make the deal to buy the integration platform.
Boomi is not unlike Mulesoft, a company that Salesforce purchased in 2018 for $6.5 billion, although a bit longer in the tooth. They both help companies with integration problems by creating connections between disparate systems. With so many pieces in place from various acquisitions over the years, Boomi would seem a highly useful asset for Dell in pulling those pieces together and making them work, but the need for cash is trumping that need.
Providing integration services is a growing requirement as companies look for ways to make better use of data locked in siloed systems. Boomi could help and that’s one of the primary reasons for the acquisition, according to Francisco executives.
“The ability to integrate and connect data and workflows across any combination of applications or domains is a critical business capability, and we strongly believe that Boomi is well positioned to help companies of all sizes turn data into their most valuable asset,” Francisco CEO Dipanjan Deb and partner Brian Decker said in a statement.
As you would expect, Boomi’s CEO Chris McNabb put a positive spin on the deal about how his new bosses were going to fuel growth for his company. “By partnering with two tier-one investment firms like Francisco Partners and TPG, we can accelerate our ability for our customers to use data to drive competitive advantage. In this next phase of growth, Boomi will be in a position of strength to further advance our innovation and market trajectory while delivering even more value to our customers,” McNabb said in a statement.
All of this may have some truth to it, but the company goes from being part of a large amorphous corporation to getting absorbed in the machinery of two private equity firms. What happens next is hard to say.
The company was founded in 2000 and sold to Dell in 2010. Today, it has 15,000 customers, but Dell’s debt has been well documented, and when you string together a couple of multibillion-dollar deals as Dell has recently, pretty soon you’re talking real money. While the company has not explicitly stated it will use the proceeds of this deal to pay off debt, as it did with the VMware announcement, it stands to reason that this will be the case.
The deal is expected to close later this year, although it will have to pass the typical regulatory scrutiny prior to that.