SambaNova raises $676M at a $5.1B valuation to double down on cloud-based AI software for enterprises

Posted on 13 April, 2021

Artificial intelligence technology holds a huge amount of promise for enterprises — as a tool to process and understand their data more efficiently; as a way to leapfrog into new kinds of services and products; and as a critical stepping stone into whatever the future might hold for their businesses. But the problem for many enterprises is that they are not tech businesses at their core, so bringing on and using AI will typically involve a lot of heavy lifting. Today, one of the startups building AI services is announcing a big round of funding to help bridge that gap.

SambaNova — a startup building AI hardware and the integrated systems that run on it, and which only officially came out of three years in stealth last December — is announcing a huge round of funding today to take its business out into the world. The company has closed on $676 million in financing, a Series D that co-founder and CEO Rodrigo Liang has confirmed values the company at $5.1 billion.

The round is being led by SoftBank, which is making the investment via Vision Fund 2. Temasek and the government of Singapore Investment Corp. (GIC), both new investors, are also participating, along with previous backers BlackRock, Intel Capital, GV (formerly Google Ventures), Walden International and WRVI, among other unnamed investors. (Sidenote: BlackRock and Temasek separately kicked off an investment partnership yesterday, although it’s not clear if this falls into that remit.)

Co-founded by two Stanford professors, Kunle Olukotun and Chris Ré, and Liang, who had been an engineering executive at Oracle, SambaNova has been around since 2017 and has raised more than $1 billion to date — both to build out its AI-focused hardware, which it calls DataScale, and to build out the system that runs on it. (The “Samba” in the name is a reference to Liang’s Brazilian heritage, he said, but also the Latino music and dance that speaks of constant movement and shifting, not unlike the journey AI data regularly needs to take that makes it too complicated and too intensive to run on more traditional systems.)

SambaNova on one level competes for enterprise business against companies like Nvidia, Cerebras Systems and Graphcore — another startup in the space which earlier this year also raised a significant round. However, SambaNova has also taken a slightly different approach to the AI challenge.

In December, the startup launched Dataflow-as-a-Service, an on-demand, subscription-based way for enterprises to tap into SambaNova’s AI system, letting them concentrate on the applications that run on it rather than on maintaining those systems themselves. It’s this service that SambaNova will focus on selling and delivering with this latest tranche of funding, Liang said.

SambaNova’s opportunity, Liang believes, lies in selling software-based AI systems to enterprises that are keen to adopt more AI into their business, but might lack the talent and other resources to do so if it requires running and maintaining large systems.

“The market right now has a lot of interest in AI. They are finding they have to transition to this way of competing, and it’s no longer acceptable not to be considering it,” said Liang in an interview.

The problem, he said, is that most AI companies “want to talk chips,” yet many would-be customers will lack the teams and appetite to essentially become technology companies to run those services. “Rather than you coming in and thinking about how to hire scientists and hire and then deploy an AI service, you can now subscribe, and bring in that technology overnight. We’re very proud that our technology is pushing the envelope on cases in the industry.”

To be clear, a company will still need data scientists, just not the same number, and specifically not the same number dedicating their time to maintaining systems, updating code and other more incremental work that comes with managing an end-to-end process.

SambaNova has not disclosed many customers so far in the work that it has done — the two reference names it provided to me are both research labs, the Argonne National Laboratory and the Lawrence Livermore National Laboratory — but Liang noted some typical use cases.

One was in imaging, such as in the healthcare industry, where the company’s technology is being used to help train systems based on high-resolution imagery, along with other healthcare-related work. The coincidentally-named Corona supercomputer at the Livermore Lab (it was named after the 2014 lunar eclipse, not the dark cloud of a pandemic that we’re currently living through) is using SambaNova’s technology to help run calculations related to some COVID-19 therapeutic and antiviral compound research, Marshall Choy, the company’s VP of product, told me.

Another set of applications involves building systems around custom language models, for example in specific industries like finance, to process data quicker. And a third is in recommendation algorithms, something that appears in most digital services and frankly could always stand to work a little better than it does today. I’m guessing that in the coming months it will release more information about where its technology is being used and by whom.

Liang also would not comment on whether Google and Intel were specifically tapping SambaNova as a partner in their own AI services, but he didn’t rule out the prospect of partnering to go to market. Indeed, both have strong enterprise businesses that span well beyond technology companies, so working with a third party that helps make even their own AI cores more accessible could be an interesting prospect. SambaNova’s DataScale (and the Dataflow-as-a-Service system) already works with input from frameworks like PyTorch and TensorFlow, so there is a level of integration in place already.

“We’re quite comfortable in collaborating with others in this space,” Liang said. “We think the market will be large and will start segmenting. The opportunity for us is in being able to take hold of some of the hardest problems in a much simpler way on their behalf. That is a very valuable proposition.”

Creating more accessible AI for businesses is a promise that has eluded quite a few companies to date, so the prospect of finally cracking that nut is one that appeals to investors.

“SambaNova has created a leading systems architecture that is flexible, efficient and scalable. This provides a holistic software and hardware solution for customers and alleviates the additional complexity driven by single technology component solutions,” said Deep Nishar, senior managing partner at SoftBank Investment Advisers, in a statement. “We are excited to partner with Rodrigo and the SambaNova team to support their mission of bringing advanced AI solutions to organizations globally.”

Posted Under: Tech News
Meroxa raises $15M Series A for its real-time data platform

Posted on 13 April, 2021

Meroxa, a startup that makes it easier for businesses to build the data pipelines to power both their analytics and operational workflows, today announced that it has raised a $15 million Series A funding round led by Drive Capital. Existing investors Root, Amplify and Hustle Fund also participated in this round, which together with the company’s previously undisclosed $4.2 million seed round now brings total funding in the company to $19.2 million.

The promise of Meroxa is that businesses can use a single platform for their various data needs and won’t need a team of experts to build their infrastructure and then manage it. At its core, Meroxa provides a single software-as-a-service solution that connects relational databases to data warehouses and then helps businesses operationalize that data.

“The interesting thing is that we are focusing squarely on relational and NoSQL databases into data warehouse,” Meroxa co-founder and CEO DeVaris Brown told me. “Honestly, people come to us as a real-time FiveTran or real-time data warehouse sink. Because, you know, the industry has moved to this [extract, load, transform] format. But the beautiful part about us is, because we do change data capture, we get that granular data as it happens.” And businesses want this very granular data to be reflected inside of their data warehouses, Brown noted, but he also stressed that Meroxa can expose this stream of data as an API endpoint or point it to a Webhook.

The company is able to do this because its core architecture is somewhat different from other data pipeline and integration services that, at first glance, seem to offer a similar solution. Because of this, users can use the service to connect different tools to their data warehouse but also build real-time tools on top of these data streams.

“We aren’t a point-to-point solution,” Meroxa co-founder and CTO Ali Hamidi explained. “When you set up the connection, you aren’t taking data from Postgres and only putting it into Snowflake. What’s really happening is that it’s going into our intermediate stream. Once it’s in that stream, you can then start hanging off connectors and say, ‘Okay, well, I also want to peek into the stream, I want to transfer my data, I want to filter out some things, I want to put it into S3.’ ”

With this flexibility, Hamidi noted, a lot of the company’s customers start with a pretty standard use case and then quickly expand into other areas as well.
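To make that fan-out pattern concrete, here is a minimal, purely illustrative Python sketch of the idea Hamidi describes: change events flow into one intermediate stream, and any number of connectors consume that same stream independently. The class, function and sink names are hypothetical and are not Meroxa’s actual API.

```python
# Illustrative sketch only: change events from a source database flow into one
# intermediate stream, and each attached connector consumes every event
# independently. All names here are hypothetical, not Meroxa's actual API.

from typing import Callable, Dict, List

ChangeEvent = Dict[str, object]            # e.g. {"table": "orders", "op": "insert", "row": {...}}
Connector = Callable[[ChangeEvent], None]


class IntermediateStream:
    """A tiny in-memory stand-in for the intermediate stream."""

    def __init__(self) -> None:
        self._connectors: List[Connector] = []

    def attach(self, connector: Connector) -> None:
        # "Hang off" another connector without touching the source pipeline.
        self._connectors.append(connector)

    def publish(self, event: ChangeEvent) -> None:
        # Every attached connector sees every change event.
        for connector in self._connectors:
            connector(event)


def warehouse_sink(event: ChangeEvent) -> None:
    print(f"load into data warehouse: {event}")


def s3_sink(event: ChangeEvent) -> None:
    print(f"archive to object storage: {event}")


def webhook_sink(event: ChangeEvent) -> None:
    print(f"POST to webhook: {event}")


if __name__ == "__main__":
    stream = IntermediateStream()
    for sink in (warehouse_sink, s3_sink, webhook_sink):
        stream.attach(sink)

    # A change-data-capture source would emit one event per row-level change.
    stream.publish({"table": "orders", "op": "insert", "row": {"id": 1, "total": 42}})
```

The point of the design, as described in the quotes above, is that adding a new destination (object storage, a webhook, a second warehouse) means attaching another consumer to the stream rather than building another point-to-point pipeline.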

Brown and Hamidi met during their time at Heroku, where Brown was a director of product management and Hamidi a lead software engineer. But while Heroku made it very easy for developers to publish their web apps, there wasn’t anything comparable in the highly fragmented database space. The team acknowledges that there are a lot of tools that aim to solve these data problems, but few of them focus on the user experience.

“When we talk to customers now, it’s still very much an unsolved problem,” Hamidi said. “It seems kind of insane to me that this is such a common thing and there is no ‘oh, of course you use this tool because it addresses all my problems.’ And so the angle that we’re taking is that we see user experience not as a nice-to-have, it’s really an enabler, it is something that enables a software engineer or someone who isn’t a data engineer with 10 years of experience in wrangling Kafka and Postgres and all these things. […] That’s a transformative kind of change.”

It’s worth noting that Meroxa uses a lot of open-source tools, but the company has also committed to open-sourcing everything in its data plane. “This has multiple wins for us, but one of the biggest incentives is in terms of the customer, we’re really committed to having our agenda aligned. Because if we don’t do well, we don’t serve the customer. If we do a crappy job, they can just keep all of those components and run it themselves,” Hamidi explained.

Today, Meroxa, which the team founded in early 2020, has more than 24 employees (and is 100% remote). “I really think we’re building one of the most talented and most inclusive teams possible,” Brown told me. “Inclusion and diversity are very, very high on our radar. Our team is 50% black and brown. Over 40% are women. Our management team is 90% underrepresented. So not only are we building a great product, we’re building a great company, we’re building a great business.”  

Posted Under: Tech News
Zoho launches new low code workflow automation product

Posted on 13 April, 2021

Workflow automation has been one of the key trends this year so far, and Zoho, a company known for its suite of affordable business tools, has joined the parade with a new low code workflow product called Qntrl (pronounced control).

Zoho’s Rodrigo Vaca, who is in charge of Qntrl’s marketing, says that most of the solutions we’ve been seeing are built for larger enterprise customers. Zoho is aiming for the mid-market with a product that requires less technical expertise than traditional business process management tools.

“We enable customers to design their workflows visually without the need for any particular kind of prior knowledge of business process management notation or any kind of that esoteric modeling or discipline,” Vaca told me.

While Vaca says Qntrl could require some technical help to connect a workflow to more complex backend systems like CRM or ERP, it allows a less technical end user to drag and drop the components and then get help to finish the rest.

“We certainly expect that when you need to connect to NetSuite or SAP you’re going to need a developer. If nothing else, the IT guys are going to ask questions, and they will need to provide access,” Vaca said.

He believes this product is putting this kind of tooling in reach of companies that may have been left out of workflow automation for the most part, or which have been using spreadsheets or other tools to create crude workflows. With Qntrl, you drag and drop components, and then select each component and configure what happens before, during and after each step.

What’s more, Qntrl provides a central place for processing and understanding what’s happening within each workflow at any given time, and who is responsible for completing it.

We’ve seen bigger companies like Microsoft, SAP, ServiceNow and others offering this type of functionality over the last year as low code workflow automation has taken center stage in business.

This has become a more pronounced need during the pandemic when so many workers could not be in the office. It made moving work in a more automated workflow more imperative, and we have seen companies moving to add more of this kind of functionality as a result.

Brent Leary, principal analyst at CRM Essentials, says that Zoho is attempting to remove some of the complexity from this kind of tool.

“It handles the security pieces to make sure the right people have access to the data and processes used in the workflows in the background, so regular users can drag and drop to build their flows and processes without having to worry about that stuff,” Leary told me.

Qntrl is available starting today, priced at $7 per user per month.

Posted Under: Tech News
Docugami’s new model for understanding documents cuts its teeth on NASA archives

Posted on 12 April, 2021

You hear so much about data these days that you might forget that a huge amount of the world runs on documents: a veritable menagerie of heterogeneous files and formats holding enormous value yet incompatible with the new era of clean, structured databases. Docugami plans to change that with a system that intuitively understands any set of documents and intelligently indexes their contents — and NASA is already on board.

If Docugami’s product works as planned, anyone will be able to take piles of documents accumulated over the years and near-instantly convert them to the kind of data that’s actually useful to people.

Because it turns out that running just about any business ends up producing a ton of documents. Contracts and briefs in legal work, leases and agreements in real estate, proposals and releases in marketing, medical charts, etc, etc. Not to mention the various formats: Word docs, PDFs, scans of paper printouts of PDFs exported from Word docs, and so on.

Over the last decade there’s been an effort to corral this problem, but movement has largely been on the organizational side: put all your documents in one place, share and edit them collaboratively. Understanding the document itself has pretty much been left to the people who handle them, and for good reason — understanding documents is hard!

Think of a rental contract. We humans understand that when the renter is named as Jill Jackson, any later mention of “the renter” also refers to that person. Furthermore, in any of a hundred other contracts, we understand that the renters in those documents are the same type of person or concept in the context of the document, but not the same actual person. These are surprisingly difficult concepts for machine learning and natural language understanding systems to grasp and apply. Yet if they could be mastered, an enormous amount of useful information could be extracted from the millions of documents squirreled away around the world.

What’s up, .docx?

Docugami founder Jean Paoli says they’ve cracked the problem wide open, and while it’s a major claim, he’s one of the few people who could credibly make it. Paoli was a major figure at Microsoft for decades, and among other things helped create the XML format — you know all those files that end in x, like .docx and .xlsx? Paoli is at least partly to thank for them.

“Data and documents aren’t the same thing,” he told me. “There’s a thing you understand, called documents, and there’s something that computers understand, called data. Why are they not the same thing? So my first job [at Microsoft] was to create a format that can represent documents as data. I created XML with friends in the industry, and Bill accepted it.” (Yes, that Bill.)

The formats became ubiquitous, yet 20 years later the same problem persists, having grown in scale with the digitization of industry after industry. But for Paoli the solution is the same. At the core of XML was the idea that a document should be structured almost like a webpage: boxes within boxes, each clearly defined by metadata — a hierarchical model more easily understood by computers.
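As a rough illustration of that “boxes within boxes” idea (sketched here with Python’s standard XML library, and emphatically not Docugami’s actual schema), a lease represented as data rather than prose might look like a tree of named elements:

```python
# A rough sketch of a document represented as nested, named "boxes" of data,
# using Python's standard library. Illustrative only; this is not Docugami's
# actual format or schema.
import xml.etree.ElementTree as ET

lease = ET.Element("lease")

parties = ET.SubElement(lease, "parties")
ET.SubElement(parties, "renter").text = "Jill Jackson"            # name from the example above
ET.SubElement(parties, "landlord").text = "Example Property LLC"  # hypothetical

terms = ET.SubElement(lease, "terms")
ET.SubElement(terms, "monthlyRent", currency="USD").text = "2000"
ET.SubElement(terms, "startDate").text = "2021-05-01"

# Because every piece sits inside another and carries a name, software can
# answer "who is the renter?" without re-reading the prose.
print(ET.tostring(lease, encoding="unicode"))
print(lease.findtext("parties/renter"))  # -> Jill Jackson
```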

“A few years ago I drank the AI kool-aid, got the idea to transform documents into data. I needed an algorithm that navigates the hierarchical model, and they told me that the algorithm you want does not exist,” he explained. “The XML model, where every piece is inside another, and each has a different name to represent the data it contains — that has not been married to the AI model we have today. That’s just a fact. I hoped the AI people would go and jump on it, but it didn’t happen.” (“I was busy doing something else,” he added, to excuse himself.)

The lack of compatibility with this new model of computing shouldn’t come as a surprise — every emerging technology carries with it certain assumptions and limitations, and AI has focused on a few other, equally crucial areas like speech understanding and computer vision. The approach taken there doesn’t match the needs of systematically understanding a document.

“Many people think that documents are like cats. You train the AI to look for their eyes, for their tails … documents are not like cats,” he said.

It sounds obvious, but it’s a real limitation. Advanced AI methods like segmentation, scene understanding, multimodal context, and such are all a sort of hyperadvanced cat detection that has moved beyond cats to detect dogs, car types, facial expressions, locations, etc. Documents are too different from one another, or in other ways too similar, for these approaches to do much more than roughly categorize them.

As for language understanding, it’s good in some ways but not in the ways Paoli needed. “They’re working sort of at the English language level,” he said. “They look at the text but they disconnect it from the document where they found it. I love NLP people, half my team is NLP people — but NLP people don’t think about business processes. You need to mix them with XML people, people who understand computer vision, then you start looking at the document at a different level.”

Docugami in action

Paoli’s goal couldn’t be reached by adapting existing tools (beyond mature primitives like optical character recognition), so he assembled his own private AI lab, where a multidisciplinary team has been tinkering away for about two years.

“We did core science, self-funded, in stealth mode, and we sent a bunch of patents to the patent office,” he said. “Then we went to see the VCs, and SignalFire basically volunteered to lead the seed round at $10 million.”

Coverage of the round didn’t really get into the actual experience of using Docugami, but Paoli walked me through the platform with some live documents. I wasn’t given access myself and the company wouldn’t provide screenshots or video, saying it is still working on the integrations and UI, so you’ll have to use your imagination … but if you picture pretty much any enterprise SaaS service, you’re 90% of the way there.

As the user, you upload any number of documents to Docugami, from a couple dozen to hundreds or thousands. These enter a machine understanding workflow that parses the documents, whether they’re scanned PDFs, Word files, or something else, into an XML-esque hierarchical organization unique to the contents.

“Say you’ve got 500 documents, we try to categorize it in document sets, these 30 look the same, those 20 look the same, those five together. We group them with a mix of hints coming from how the document looked, what it’s talking about, what we think people are using it for, etc.,” said Paoli. Other services might be able to tell the difference between a lease and an NDA, but documents are too diverse to slot into pre-trained ideas of categories and expect it to work out. Every set of documents is potentially unique, and so Docugami trains itself anew every time, even for a set of one. “Once we group them, we understand the overall structure and hierarchy of that particular set of documents, because that’s how documents become useful: together.”

That doesn’t just mean it picks up on header text and creates an index, or lets you search for words. The data that is in the document, for example who is paying whom, how much and when, and under what conditions, all that becomes structured and editable within the context of similar documents. (It asks for a little input to double check what it has deduced.)

It can be a little hard to picture, but now just imagine that you want to put together a report on your company’s active loans. All you need to do is highlight the information that’s important to you in an example document — literally, you just click “Jane Roe” and “$20,000” and “five years” anywhere they occur — and then select the other documents you want to pull corresponding information from. A few seconds later you have an ordered spreadsheet with names, amounts, dates, anything you wanted out of that set of documents.
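For a sense of what that highlight-and-extract step produces, here is a trivial, hypothetical sketch: once a set of similar documents has been parsed into the same structure, building the report is just pulling the same named fields out of each one. The field names and sample values below are invented for illustration; this is not Docugami’s API.

```python
# Hypothetical sketch: once similar documents share one structure, a report is
# just the same named fields pulled from every document in the set.
# Field names and values are invented for illustration; not Docugami's API.
import csv
import io

parsed_loans = [
    {"borrower": "Jane Roe", "amount": "$20,000", "term": "five years"},
    {"borrower": "John Doe", "amount": "$35,000", "term": "three years"},
]

fields = ["borrower", "amount", "term"]  # the values the user highlighted once

buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fields)
writer.writeheader()
writer.writerows(parsed_loans)

print(buffer.getvalue())
```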

All this data is meant to be portable too, of course — there are integrations planned with various other common pipes and services in business, allowing for automatic reports, alerts if certain conditions are reached, automated creation of templates and standard documents (no more keeping an old one around with underscores where the principals go).

Remember, this is all half an hour after you uploaded them in the first place, no labeling or pre-processing or cleaning required. And the AI isn’t working from some preconceived notion or format of what a lease document looks like. It’s learned all it needs to know from the actual docs you uploaded — how they’re structured, where things like names and dates figure relative to one another, and so on. And it works across verticals and uses an interface anyone can figure out in a few minutes. Whether you’re in healthcare data entry or construction contract management, the tool should make sense.

The web interface where you ingest and create new documents is one of the main tools, while the other lives inside Word. There Docugami acts as a sort of assistant that’s fully aware of every other document of whatever type you’re in, so you can create new ones, fill in standard information, comply with regulations and so on.

Okay, so processing legal documents isn’t exactly the most exciting application of machine learning in the world. But I wouldn’t be writing this (at all, let alone at this length) if I didn’t think this was a big deal. This sort of deep understanding of document types can be found here and there among established industries with standard document types (such as police or medical reports), but have fun waiting until someone trains a bespoke model for your kayak rental service. But small businesses have just as much value locked up in documents as large enterprises — and they can’t afford to hire a team of data scientists. And even the big organizations can’t do it all manually.

NASA’s treasure trove

The problem is extremely difficult, yet to humans seems almost trivial. You or I could glance through 20 similar documents and pull together a list of names and amounts easily, perhaps even in less time than it takes for Docugami to crawl them and train itself.

But AI, after all, is meant to imitate and transcend human capacity, and it’s one thing for an account manager to do monthly reports on 20 contracts — quite another to do a daily report on a thousand. Yet Docugami accomplishes both equally easily — which is why it fits both the enterprise world, where scaling this kind of operation is crucial, and NASA, which is buried under a backlog of documentation from which it hopes to glean clean data and insights.

If there’s one thing NASA’s got a lot of, it’s documents. Its reasonably well-maintained archives go back to its founding, and many important ones are available by various means — I’ve spent many a pleasant hour perusing its cache of historical documents.

But NASA isn’t looking for new insights into Apollo 11. Through its many past and present programs, solicitations, grant programs, budgets, and of course engineering projects, it generates a huge amount of documents — being, after all, very much a part of the federal bureaucracy. And as with any large organization with its paperwork spread over decades, NASA’s document stash represents untapped potential.

Expert opinions, research precursors, engineering solutions, and a dozen more categories of important information are sitting in files searchable perhaps by basic word matching but otherwise unstructured. Wouldn’t it be nice for someone at JPL to get it in their head to look at the evolution of nozzle design, and within a few minutes have a complete and current list of documents on that topic, organized by type, date, author and status? What about the patent advisor who needs to provide a NIAC grant recipient information on prior art — shouldn’t they be able to pull those old patents and applications up with more specificity than a simple keyword search?

The NASA SBIR grant, awarded last summer, isn’t for any specific work, like collecting all the documents of such and such a type from Johnson Space Center or something. It’s an exploratory or investigative agreement, as many of these grants are, and Docugami is working with NASA scientists on the best ways to apply the technology to their archives. (One of the best applications may be to the SBIR and other small business funding programs themselves.)

Another SBIR grant with the NSF differs in that, while at NASA the team is looking into better organizing tons of disparate types of documents with some overlapping information, at NSF they’re aiming to better identify “small data.” “We are looking at the tiny things, the tiny details,” said Paoli. “For instance, if you have a name, is it the lender or the borrower? The doctor or the patient name? When you read a patient record, penicillin is mentioned, is it prescribed or prohibited? If there’s a section called allergies and another called prescriptions, we can make that connection.”

“Maybe it’s because I’m French”

When I pointed out the rather small budgets involved with SBIR grants and how his company couldn’t possibly survive on these, he laughed.

“Oh, we’re not running on grants! This isn’t our business. For me, this is a way to work with scientists, with the best labs in the world,” he said, while noting many more grant projects were in the offing. “Science for me is a fuel. The business model is very simple — a service that you subscribe to, like Docusign or Dropbox.”

The company is only just now beginning its real business operations, having made a few connections with integration partners and testers. But over the next year it will expand its private beta and eventually open it up — though there’s no timeline on that just yet.

“We’re very young. A year ago we were like five, six people, now we went and got this $10 million seed round and boom,” said Paoli. But he’s certain that this is a business that will be not just lucrative but will represent an important change in how companies work.

“People love documents. Maybe it’s because I’m French,” he said, “but I think text and books and writing are critical — that’s just how humans work. We really think people can help machines think better, and machines can help people think better.”

Posted Under: Tech News
Microsoft goes all in on healthcare with $19.7B Nuance acquisition

Posted on 12 April, 2021

When Microsoft announced it was acquiring Nuance Communications this morning for $19.7 billion, you could be excused for doing a Monday morning double take at the hefty price tag.

That’s surely a lot of money for a company on a $1.4 billion run rate, but Microsoft, which has already partnered with the speech-to-text market leader on several products over the last couple of years, saw a company firmly embedded in healthcare and decided to go all in.

And $20 billion is certainly all in, even for a company the size of Microsoft. But 2020 forced us to change the way we do business, from restaurants to retailers to doctors. In fact, the pandemic in particular changed the way we interact with our medical providers. We learned very quickly that you don’t have to drive to an office, wait in a waiting room, then in an exam room, all to see the doctor for a few minutes.

Instead, we can get on the line, have a quick chat and be on our way. It won’t work for every condition, of course — there will always be times the physician needs to see you — but for many meetings such as reviewing test results or for talk therapy, telehealth could suffice.

Microsoft CEO Satya Nadella says that Nuance is at the center of this shift, especially with its use of cloud and artificial intelligence, and that’s why the company was willing to pay the amount it did to get it.

“AI is technology’s most important priority, and healthcare is its most urgent application. Together, with our partner ecosystem, we will put advanced AI solutions into the hands of professionals everywhere to drive better decision-making and create more meaningful connections, as we accelerate growth of Microsoft Cloud in Healthcare and Nuance,” Nadella said in a post announcing the deal.

Holger Mueller, an analyst at Constellation Research, says that may be so, but he believes that Microsoft missed the boat with Cortana and this is about helping the company catch up on a crucial technology. “Nuance will not only give Microsoft technology help in regards to neural network-based speech recognition, but also a massive improvement from vertical capabilities, call center functionality and the MSFT IP position in speech,” he said.

Microsoft sees this deal doubling what was already a considerable total addressable market to nearly $500 billion. While TAMs always tend to run high, that is still a substantial number.

It also fits with Gartner data, which found that by 2022, 75% of healthcare organizations will have a formal cloud strategy in place. The AI component only adds to that number and Nuance brings 10,000 existing customers to Microsoft, including some of the biggest healthcare organizations in the world.

Brent Leary, founder and principal analyst at CRM Essentials, says the deal could provide Microsoft with a ton of health data to help feed the underlying machine learning models and make them more accurate over time.

“There is going to be a ton of health data being captured by the interactions coming through telemedicine interactions, and this could create a whole new level of health intelligence,” Leary told me.

That of course could drive a lot of privacy concerns where health data is involved, and it will be up to Microsoft, which just experienced a major breach on its Exchange email server products last month, to assure the public that their sensitive health data is being protected.

Leary says that ensuring data privacy is going to be absolutely key to the success of the deal. “The potential this move has is pretty powerful, but it will only be realized if the data and insights that could come from it are protected and secure — not only protected from hackers but also from unethical use. Either could derail what could be a game-changing move,” he said.

Microsoft also seemed to recognize that when it wrote, “Nuance and Microsoft will deepen their existing commitments to the extended partner ecosystem, as well as the highest standards of data privacy, security and compliance.”

Kate Leggett, an analyst at Forrester Research, thinks healthcare could be just the first step and once Nuance is in the fold, it could go much deeper than that.

“However, the benefit of this acquisition does not stop [with healthcare]. Nuance also offers market-leading customer engagement technologies, with deep expertise and focus in verticals such as financial services. As MSFT evolves their industry editions into other verticals, this acquisition will pay off for other industries. MSFT may also choose to fill in the gaps within their Dynamics solution with Nuance’s customer engagement technologies,” Leggett said.

We are clearly on the edge of a sea change when it comes to how we interact with our medical providers in the future. COVID pushed medicine deeper into the digital realm in 2020 out of simple necessity. It wasn’t safe to go into the office unless absolutely necessary.

The Nuance acquisition, which is expected to close some time later this year, could help Microsoft shift deeper into the market. It could even bring Teams into it as a meeting tool, but it’s all going to depend on the trust level people have with this approach, and it will be up to the company to make sure that both healthcare providers and the people they serve have that.

Posted Under: Tech News
Microsoft is acquiring Nuance Communications for $19.7B

Posted on 12 April, 2021

Microsoft agreed today to acquire Nuance Communications, a leader in speech-to-text software, for $19.7 billion. Bloomberg broke the story over the weekend that the two companies were in talks.

In a post announcing the deal, the company said this was about increasing its presence in the healthcare vertical, a place where Nuance has done well in recent years. In fact, the company announced the Microsoft Cloud for Healthcare last year, and this deal is about accelerating its presence there. Nuance’s products in this area include Dragon Ambient eXperience, Dragon Medical One and PowerScribe One for radiology reporting.

“Today’s acquisition announcement represents the latest step in Microsoft’s industry-specific cloud strategy,” the company wrote. The acquisition also builds on several integrations and partnerships the two companies have made in the last couple of years.

The company boasts 10,000 healthcare customers, according to information on the website. Those include AthenaHealth, Johns Hopkins, Mass General Brigham and Cleveland Clinic to name but a few, and it was that customer base that attracted Microsoft to pay the price it did to bring Nuance into the fold.

Nuance CEO Mark Benjamin will remain with the company and report to Scott Guthrie, Microsoft’s EVP in charge of the cloud and AI group.

Nuance has a complex history. It went public in 2000 and began buying speech recognition products, including Dragon Dictate from Lernout & Hauspie, in 2001. It merged with a company called ScanSoft in 2005. That company began life as Visioneer, a scanning company, in 1992.

Today, the company has a number of products including Dragon Dictate, a consumer and business speech-to-text product that dates back to the early 1990s. It’s also involved in speech recognition, chat bots and natural language processing, particularly in healthcare and other verticals.

The company has 6,000 employees spread across 27 countries. In its most recent earnings report from November 2020, which was for Q4 2020, the company reported $352.9 million in revenue compared to $387.6 million in the same period a year prior. That’s not the direction a company wants to go, but it is still a run rate of over $1.4 billion.

At the time of that earnings call, the company also announced it was selling its medical transcription and electronic health record (EHR) Go-Live services to Assured Healthcare Partners and Aeries Technology Group. Company CEO Benjamin said this was about helping the company concentrate on its core speech services.

“With this sale, we will reach an important milestone in our journey towards a more focused strategy of advancing our Conversational AI, natural language understanding and ambient clinical intelligence solutions,” Benjamin said in a statement at the time.

It’s worth noting that Microsoft already has a number of speech recognition and chat bot products of its own, including desktop speech-to-text services in Windows and on Azure, but it took a chance to buy a market leader and go deeper into the healthcare vertical.

The transaction has already been approved by both company boards and Microsoft reports it expects the deal to close by the end of this year, subject to standard regulatory oversight and approval by Nuance shareholders.

This would mark the second largest purchase by Microsoft ever, only surpassed by the $26.2 billion the company paid for LinkedIn in 2016.

Posted Under: Tech News
SnackMagic picks up $15M to expand from build-your-own snack boxes into a wider gifting marketplace

Posted on 9 April, 2021

The office shut-down at the start of the COVID-19 pandemic last year spurred huge investment in digital transformation and a wave of tech companies helping with that, but there were some distinct losers in the shift, too — specifically those whose business models were predicated on serving the very offices that disappeared overnight. Today, one of the companies that had to make an immediate pivot to keep itself afloat is announcing a round of funding, after finding itself not just growing at a clip, but making a profit, as well.

SnackMagic, a build-your-own snack box service, has raised $15 million in a Series A round of funding led by Craft Ventures, with Luxor Capital also participating.

(Both investors have an interesting track record in the food-on-demand space: Most recently, Luxor co-led a $528 million round in Glovo in Spain, while Craft backs/has backed the likes of Cloud Kitchens, Postmates and many more.)

The funding comes on the back of a strong year for the company, which hit a $20 million revenue run rate in eight months and turned profitable in December 2020.

Founder and CEO Shaunak Amin said in an interview that the plan will be to use the funding both to continue growing SnackMagic’s existing business, as well as extend into other kinds of gifting categories. Currently, you can ship snacks anywhere in the world, but the customizable boxes — recipients are gifted an amount that they can spend, and they choose what they want in the box themselves from SnackMagic’s menu, or one that a business has created and branded as a subset of that — are only available in locations in North America, serviced by SnackMagic’s primary warehouse. Other locations are given options of pre-packed boxes of snacks right now, but the plan is to slowly extend its pick-and-mix model to more geographies, starting with the U.K.

Alongside this, the company plans to continue widening the categories of items that people can gift each other beyond chocolates, chips, hot sauces and other fun food items, into areas like alcohol, meal kits and nonfood items. There’s also scope for expanding to more use cases into areas like corporate gifting, marketing and consumer services, as well as analytics coming out of its sales.

Amin calls the data that SnackMagic is amassing about customer interest in different brands and products “the hidden gem” of the platform.

“It’s one of the most interesting things,” he said. Brands that want to add their items to the wider pool of products — which today numbers between 700 and 800 items — also get access to a dashboard where they monitor what’s selling, how much stock is left of their own items, and so on. “One thing that is very opaque [in the CPG world] is good data.”

For many of the bigger companies that lack their own direct sales channels, it’s a significantly richer data set than what they typically get from selling items in the average brick and mortar store, or from a bigger online retailer like Amazon. “All these bigger brands like Pepsi and Kellogg not only want to know this about their own products more but also about the brands they are trying to buy,” Amin said. Several of them, he added, have approached his company to partner and invest, so I guess we should watch this space.

SnackMagic’s success comes from a somewhat unintended, unlikely beginning, and it’s a testament to the power of compelling, yet extensible technology that can be scaled and repurposed if necessary. In its case, there is personalization technology, logistics management, product inventory and accounting, and lots of data analytics involved.

The company started out as Stadium, a lunch delivery service in New York City that was leveraging the fact that when co-workers ordered lunch or dinner together for the office — say around a team-building event or a late-night working session, or just for a regular work day — oftentimes they found that people all hankered for different things to eat.

In many cases, people make separate orders for the different items, but that also means if you are ordering to all eat together, things would not arrive at the same time; if it’s being expensed, it’s more complicated on that front too; and if you’re thinking about carbon footprints, it also means a lot less efficiency.

Stadium’s solution was a platform that provided access to multiple restaurants’ menus, and people could pick from all of them for a single order. The business had been operating for six years and was really starting to take off.

“We were quite well known in the city, and we had plans to expand, and we were on track for March 2020 being our best month ever,” Amin said. Then, COVID-19 hit. “There was no one left in the office,” he said. Revenue disappeared overnight, since the idea of delivering many items to one place instantly stopped being a need.

Amin said that they took a look at the platform they had built to pick many options (and many different costs, and the accounting that came with that) and thought about how to use that for a different end. It turned out that even with people working remotely, companies wanted to give props to their workers, either just to say hello and thanks, or around a specific team event, in the form of food and treats — all the more so since the supply of snacks you typically come across in so many office canteens and kitchens was no longer there for workers to tap.

It’s interesting, but perhaps also unsurprising, that one of the by-products of our new way of working has been the rise of more services that cater (no pun intended) to people working in more decentralised ways, and that companies exploring how to improve rewarding people in those environments are also seeing a bump.

Just yesterday, we wrote about a company called Alyce raising $30 million for its corporate gifting platform that is also based on personalization — using AI to help understand the interests of the recipient to make better choices of items that a person might want to receive.

Alyce is taking a somewhat different approach than SnackMagic: it’s not holding any products itself, and there is no warehouse but rather a platform that links buyers with those providing products. And Alyce’s initial audience is different, too: instead of internal employees (the first, but not final, focus for SnackMagic) it is targeting corporate gifting, or presents that sales and marketing people might send to prospects or current clients as a please and thank you gesture.

But you can also see how and where the two might meet in the middle — and compete not just with each other, but the many other online retailers, Amazon and otherwise, plus the consumer goods companies themselves looking for ways of diversifying business by extending beyond the B2C channel.

“We don’t worry about Amazon. We just get better,” Amin said when I asked him about whether he worried that SnackMagic was too easy to replicate. “It might be tough anyway,” he added, since “others might have the snacks but picking and packing and doing individual customization is very different from regular e-commerce. It’s really more like scalable gifting.”

Investors are impressed with the quick turnaround and identification of a market opportunity, and with how quickly the company retooled its tech to make it fit for purpose.

“SnackMagic’s immediate success was due to an excellent combination of timing, innovative thinking and world-class execution,” said Bryan Rosenblatt, principal investor at Craft Ventures, in a statement. “As companies embrace the future of a flexible workplace, SnackMagic is not just a snack box delivery platform but a company culture builder.”

Posted Under: Tech News
Daily Crunch: KKR invests $500M into Box

Posted on 8 April, 2021

Box gets some financial ammunition against an activist investor, Samsung launches the Galaxy SmartTag+ and we look at the history of CryptoPunks. This is your Daily Crunch for April 8, 2021.

The big story: KKR invests $500M into Box

Private equity firm KKR is making an investment into Box that should help the cloud content management company buy back shares from activist investor Starboard Value, which might otherwise have claimed a majority of board seats and forced a sale.

After the investment, Aaron Levie will remain with Box as its CEO, but independent board member Bethany Mayer will become the chair, while KKR’s John Park is joining the board as well.

“The KKR move is probably the most important strategic move Box has made since it IPO’d,” said Alan Pelz-Sharpe of Deep Analysis. “KKR doesn’t just bring a lot of money to the deal, it gives Box the ability to shake off some naysayers and invest in further acquisitions.”

The tech giants

Samsung’s AirTags rival, the Galaxy SmartTag+, arrives to help you find lost items via AR — This is a version of Samsung’s lost-item finder that supports Bluetooth Low Energy and ultra-wideband technology.

Spotify stays quiet about launch of its voice command ‘Hey Spotify’ on mobile — Access to the “Hey Spotify” voice feature is rolling out more broadly, but Spotify isn’t saying anything officially.

Verizon and Honda want to use 5G and edge computing to make driving safer — The two companies are piloting different safety scenarios at the University of Michigan’s Mcity, a test bed for connected and autonomous vehicles.

Startups, funding and venture capital

Norway’s Kolonial rebrands as Oda, bags $265M on a $900M valuation to grow its online grocery delivery business in Europe — Oda’s aim is to provide “a weekly shop” for prices that compete against those of traditional supermarkets.

Tines raises $26M Series B for its no-code security automation platform — Tines co-founders Eoin Hinchy and Thomas Kinsella were both in senior security roles at DocuSign before they left to start their own company in 2018.

Yext co-founder unveils Dynascore, which dynamically synchronizes music and video — This is the first product from Howard Lerman’s new startup Wonder Inventions.

Advice and analysis from Extra Crunch

Four strategies for getting attention from investors — MaC Venture Capital founder Marlon Nichols joined us at TechCrunch Early Stage to discuss his strategies for early-stage investing, and how those lessons can translate into a successful launch for budding entrepreneurs.

How to get into a startup accelerator —  Neal Sáles-Griffin, managing director of Techstars Chicago, explains when and how to apply to a startup accelerator.

Understanding how fundraising terms can affect early-stage startups — Fenwick & West partner Dawn Belt breaks down some of the terms that trip up first-time entrepreneurs.

(Extra Crunch is our membership program, which helps founders and startup teams get ahead. You can sign up here.)

Everything else

The Cult of CryptoPunks — Ethereum’s “oldest NFT project” may not actually be the first, but it’s the wildest.

Biden proposes gun control reforms to go after ‘ghost guns’ and close loopholes — President Joe Biden has announced a new set of initiatives by which he hopes to curb the gun violence he described as “an epidemic” and “an international embarrassment.”

Apply to Startup Battlefield at TechCrunch Disrupt 2021 — All you need is a killer pitch, an MVP, nerves of steel and the drive and determination to take on all comers to claim the coveted Disrupt Cup.

The Daily Crunch is TechCrunch’s roundup of our biggest and most important stories. If you’d like to get this delivered to your inbox every day at around 3pm Pacific, you can subscribe here.

Posted Under: Tech News
Immersion cooling to offset data centers’ massive power demands gains a big booster in Microsoft

Posted on 8 April, 2021

LiquidStack does it. So does Submer. They’re both dropping servers carrying sensitive data into goop in an effort to save the planet. Now they’re joined by one of the biggest tech companies in the world in their efforts to improve the energy efficiency of data centers, because Microsoft is getting into the liquid-immersion cooling market.

Microsoft is using a liquid it developed in-house that’s engineered to boil at 122 degrees Fahrenheit (lower than the boiling point of water) to act as a heat sink, reducing the temperature inside the servers so they can operate at full power without any risks from overheating.

The vapor from the boiling fluid is converted back into a liquid through contact with a cooled condenser in the lid of the tank that stores the servers.

“We are the first cloud provider that is running two-phase immersion cooling in a production environment,” said Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development in Redmond, Washington, in a statement on the company’s internal blog. 

While that claim may be true, liquid cooling is a well-known approach to dealing with moving heat around to keep systems working. Cars use liquid cooling to keep their motors humming as they head out on the highway.

As technology companies confront the physical limits of Moore’s Law, the demand for faster, higher-performance processors means designing new architectures that can handle more power, the company wrote in a blog post. Power flowing through central processing units has increased from 150 watts to more than 300 watts per chip, and the GPUs responsible for much of Bitcoin mining, artificial intelligence applications and high-end graphics each consume more than 700 watts per chip.

It’s worth noting that Microsoft isn’t the first tech company to apply liquid cooling to data centers and the distinction that the company uses of being the first “cloud provider” is doing a lot of work. That’s because bitcoin mining operations have been using the tech for years. Indeed, LiquidStack was spun out from a bitcoin miner to commercialize its liquid immersion cooling tech and bring it to the masses.

“Air cooling is not enough”

More power flowing through the processors means hotter chips, which means the need for better cooling or the chips will malfunction.

“Air cooling is not enough,” said Christian Belady, vice president of Microsoft’s datacenter advanced development group in Redmond, in an interview for the company’s internal blog. “That’s what’s driving us to immersion cooling, where we can directly boil off the surfaces of the chip.”

For Belady, the use of liquid cooling technology brings the density and compression of Moore’s Law up to the datacenter level.

The results, from an energy consumption perspective, are impressive. Microsoft investigated liquid immersion as a cooling solution for high-performance computing applications such as AI, and found that two-phase immersion cooling reduced power consumption for any given server by 5% to 15% (every little bit helps).

Meanwhile, companies like Submer claim they reduce energy consumption by 50%, water use by 99%, and take up 85% less space.

For cloud computing companies, the ability to keep these servers up and running even during spikes in demand, when they’d consume even more power, adds flexibility and ensures uptime even when servers are overtaxed, according to Microsoft.

“[We] know that with Teams when you get to 1 o’clock or 2 o’clock, there is a huge spike because people are joining meetings at the same time,” Marcus Fontoura, a vice president on Microsoft’s Azure team, said on the company’s internal blog. “Immersion cooling gives us more flexibility to deal with these burst-y workloads.”

At this point, data centers are a critical component of the internet infrastructure that much of the world relies on for… well… pretty much every tech-enabled service. That reliance however has come at a significant environmental cost.

“Data centers power human advancement. Their role as a core infrastructure has become more apparent than ever and emerging technologies such as AI and IoT will continue to drive computing needs. However, the environmental footprint of the industry is growing at an alarming rate,” Alexander Danielsson, an investment manager at Norrsken VC noted last year when discussing that firm’s investment in Submer.

Solutions under the sea

If submerging servers in experimental liquids offers one potential solution to the problem — then sinking them in the ocean is another way that companies are trying to cool data centers without expending too much power.

Microsoft has already been operating an undersea data center for the past two years. The company actually trotted out the tech as part of a push from the tech company to aid in the search for a COVID-19 vaccine last year.

These pre-packed, shipping container-sized data centers can be spun up on demand and run deep under the ocean’s surface for sustainable, high-efficiency and powerful compute operations, the company said.

The liquid cooling project is most similar to Microsoft’s Project Natick, which is exploring the potential of underwater datacenters that are quick to deploy and can operate for years on the seabed sealed inside submarine-like tubes without any onsite maintenance by people.

In those data centers nitrogen air replaces an engineered fluid and the servers are cooled with fans and a heat exchanger that pumps seawater through a sealed tube.

Startups are also staking claims to cool data centers out on the ocean (the seaweed is always greener in somebody else’s lake).

Nautilus Data Technologies, for instance, has raised over $100 million (according to Crunchbase) to develop data centers dotting the surface of Davy Jones’ locker. The company is currently developing a data center project co-located with a sustainable energy project in a tributary near Stockton, Calif.

With its two-phase immersion cooling tech, Microsoft is hoping to bring the benefits of ocean cooling onto the shore. “We brought the sea to the servers rather than put the datacenter under the sea,” Microsoft’s Alissa said in a company statement.

Ioannis Manousakis, a principal software engineer with Azure (left), and Husam Alissa, a principal hardware engineer on Microsoft’s team for datacenter advanced development (right), walk past a container at a Microsoft datacenter where computer servers in a two-phase immersion cooling tank are processing workloads. Photo by Gene Twedt for Microsoft.

Posted Under: Tech News
Quiq acquires Snaps to create a combined customer messaging platform

Posted on 8 April, 2021

At first glance, Quiq and Snaps might sound like similar startups — they both help businesses talk to their customers via text messaging and other messaging apps. But Snaps CEO Christian Brucculeri said “there’s almost no overlap in what we do” and that the companies are “almost complete complements.”

That’s why Quiq (based in Bozeman, Montana) is acquiring Snaps (based in New York). The entire Snaps team is joining Quiq, with Brucculeri becoming senior vice president of sales and customer success for the combined organization.

Quiq CEO Mike Myer echoed Brucculeri’s point, comparing the situation to dumping two pieces of a jigsaw puzzle on the floor and discovering “the two pieces fit perfectly.” More specifically, he told me that Quiq has generally focused on customer service messaging, with a “do it yourself, toolset approach.” After all, the company was founded by two technical co-founders, and Myer joked, “We can’t understand why [a customer] can’t just call an API.”

Snaps, meanwhile, has focused more on marketing conversations, and on a managed service approach where it handles all of the technical work for its customers. In addition, Myer said that while Quiq has “really focused on platform aspect from beginning” — building integrations with more than a dozen messaging channels including Apple Business Chat, Google’s Business Messages, Instagram, Facebook Messenger and WhatsApp — it doesn’t have “a deep natural language or conversational AI capability” the way Snaps does.

Myer added that demand for Quiq’s offering has been growing dramatically, with revenue up 300% year-over-year in the last six months of 2020. At the same time, he suggested that the divisions between marketing and customer service are beginning to dissolve, with service teams increasingly given sales goals, and “at younger, more commerce-focused organizations, they don’t have this differentiation between marketing and customer service” at all.

Apparently the two companies were already working together to create a combined offering for direct messaging on Instagram, which prompted broader discussions about how to bring the two products together. Moving forward, they will offer a combined platform for a variety of customers under the Quiq brand. (Quiq’s customers include Overstock.com, West Elm, Men’s Wearhouse and Brinks Home Security, while Snaps’ customers include Lane Bryant, Live Nation, General Assembly, Clairol and Nioxin.) Brucculeri said this will give businesses one product to manage their conversations across “the full customer journey.”

“The key term you’re hearing is conversation,” Myer added. “It’s not about a ticket or a case or a question […] it’s an ongoing conversation.”

Snaps had raised $11.3 million in total funding from investors including Signal Peak Ventures. The financial terms of the acquisition were not disclosed.

Posted Under: Tech News