Google makes converting VMs to containers easier with the GA of Migrate for Anthos

Posted on 20 November, 2019

At its Cloud Next event in London, Google today announced a number of product updates around its managed Anthos platform, as well as Apigee and its Cloud Code tools for building modern applications that can then be deployed to Google Cloud or any Kubernetes cluster.

Anthos is one of the most important recent launches for Google, as it expands the company’s reach outside of Google Cloud and into its customers’ data centers and, increasingly, edge deployments. At today’s event, the company announced that it is taking Anthos Migrate out of beta and into general availability. The overall idea behind Migrate is that it allows enterprises to take their existing, VM-based workloads and convert them into containers. Those machines could come from on-prem environments, AWS, Azure or Google’s Compute Engine, and — once converted — can then run in Anthos GKE, the Kubernetes service that’s part of the platform.

“That really helps customers think about a leapfrog strategy, where they can maintain the existing VMs but benefit from the operational model of Kubernetes,” Google VP of product management Jennifer Lin told me. “So even though you may not get all of the benefits of a cloud-native container day one, what you do get is consistency in the operational paradigm.”

As for Anthos itself, Lin tells me that Google is seeing some good momentum. The company is highlighting a number of customers at today’s event, including Germany’s Kaeser Kompressoren and Turkey’s Denizbank.

Lin noted that a lot of financial institutions are interested in Anthos. “A lot of the need to do data-driven applications, that’s where Kubernetes has really hit that sweet spot because now you have a number of distributed datasets and you need to put a web or mobile front end on [them],” she explained. “You can’t do it as a monolithic app, you really do need to tap into a number of datasets — you need to do real-time analytics and then present it through a web or mobile front end. This really is a sweet spot for us.”

Also new today is the general availability of Cloud Code, Google’s set of extensions for IDEs like Visual Studio Code and IntelliJ that helps developers build, deploy and debug their cloud-native applications more quickly. The idea here, of course, is to remove friction from building containers and deploying them to Kubernetes.

In addition, Apigee hybrid is now also generally available. The tool helps developers and operators manage their APIs across hybrid and multi-cloud environments, a challenge that is becoming increasingly common for enterprises. It lets them deploy Apigee’s API runtimes in those hybrid environments while still getting the benefits of Apigee’s monitoring and analytics tools in the cloud. Apigee hybrid, of course, can also be deployed to Anthos.

Posted Under: Tech News
Clumio raises $135M Series C for its backup as a service platform

Posted on 19 November, 2019

Clumio, a 100-person startup that offers a SaaS-like service for enterprise backup, today announced that it has raised a $135 million Series C round, led by existing investor Sutter Hill Ventures and new investor Altimeter Capital. The announcement comes shortly after the company’s disclosure in August that it had quietly raised a total of $51 million in Series A and B rounds in 2017 and 2018. The company says it plans to use this new funding to “accelerate its vision to deliver a globally consolidated data protection service in and for the public cloud.”

Given the amount of money invested in the company, chances are Clumio is getting close to a $1 billion valuation, but the company is not disclosing its valuation at this point.

The overall mission of Clumio is to build a platform on public clouds that gives enterprises a single data protection service that can handle backups of their data in on-premises, cloud and SaaS applications. When it came out of stealth, the company’s focus was on VMware on premises. Since then, the team has expanded this to include VMware running on public clouds.

“When somebody moves to the cloud, they don’t want to be in the business of managing software or infrastructure and all that, because the whole reason to move to the cloud was essentially to get away from the mundane,” explained Clumio CEO and co-founder Poojan Kumar.

The next step in this process, as the company also announced today, is to make it easier for enterprises to protect the cloud-native applications they are building now. The company today launched this service for AWS and will likely expand it soon to other clouds, like Microsoft Azure.

The market for enterprise backup is only going to expand in the coming years. We’ve now reached a point, after all, where it’s not unheard of to talk about enterprises that run thousands of different applications. For them, Clumio wants to become the one-stop-shop for all things data protection — and its investors are obviously buying into the company’s vision and momentum.

“When there’s a foundational change, like the move to the cloud, which is as foundational a change, at least, as the move from mainframe to open systems in the 80s and 90s,” said Mike Speiser, Managing Director at Sutter Hill Ventures. “When there’s a change like that, you have to re-envision, you have to refactor and think of the world — the new world — in a new way and start from scratch. If you don’t, what’s gonna end up happening is people make decisions that are short term decisions that seem like they will work but end up being architectural dead ends. And those companies never ever end up winning. They just never end up winning and that’s the opportunity right now on this big transition across many markets, including the backup market for Clumio.”

Speiser also noted that SaaS allows for a dramatically larger market opportunity for companies like Clumio. “What SaaS is doing, is it’s not only allowing us to go after the traditional Silicon Valley, high end, direct selling, expensive markets that were previously buying high-end systems and data centers. But what we’re seeing — and we’re seeing this with Snowflake and […] we will see it with Clumio — is there’s an opportunity to go after a much broader market opportunity.”

Starting next year, Clumio will expand that market by adding support for data protection for a first SaaS app, with more to follow, as well as support for backup in more regions and clouds. Right now, the service’s public cloud tool focuses on AWS — and only in the United States. Next year, it plans to support international regions as well.

Kumar stressed that he wants to build Clumio for the long run, with an IPO as part of that roadmap. His investors probably wouldn’t mind that, either.

Posted Under: Tech News
Salesforce, AWS expand partnership to bring Amazon Connect to Service Cloud

Posted on 19 November, 2019

Salesforce and AWS announced an expansion of their ongoing partnership, one that goes back to a $400 million infrastructure services agreement in 2016 and was expanded last year to include data integration between the two companies. This time, Salesforce announced it will be offering AWS telephony and call transcription services with Amazon Connect as part of its Service Cloud call center solution.

“We have a strategic partnership with Amazon Web Services, which will allow customers to purchase Amazon Connect from us, and then it will be pre-integrated and out of the box to provide a full transcription of the call, and of course that’s alongside of an actual call recording of the call,” Bill Patterson, executive vice president for Service Cloud explained.

It’s worth noting that the company will be partnering with other telephony vendors as well, so that customers can choose the Amazon solution or another from Cisco, Avaya or Genesys, Patterson said.

These telephony partnerships fill a gap in the Service Cloud call center offering and give Salesforce direct access to the call itself. The telephony vendors will handle call transcription and hand the transcript off to Salesforce, which can then use its intelligence layer, Einstein, to “read” it and offer the CSR next best actions in real time. That is something the company has been able to do with interactions from chat and other channels, but couldn’t do with voice.

“As this conversation evolves, the consumer is explaining what their problem is, and Einstein is [monitoring] that conversation. As the conversation gets to a critical mass, Einstein begins to understand what the content is about and suggests a specific solution to the agent,” Patterson said.

Salesforce will begin piloting this new Service Cloud capability in the spring with general availability expected next summer.

Only last week, Salesforce announced a major partnership with Microsoft to move Salesforce Marketing Cloud to Azure. These announcements show Salesforce will continue to use multiple cloud partners when it makes sense for the business. Today, it’s Amazon’s turn.

Posted Under: Tech News
SocialRank sells biz to Trufan, pivots to a mobile LinkedIn

Posted on 19 November, 2019

What do you do when your startup idea doesn’t prove big enough? Run it as a scrawny but profitable lifestyle business? Or sell it to a competitor and take another swing at the fences? Social audience analytics and ad targeting startup SocialRank chose the latter and is going for glory.

Today, SocialRank announced it has sold its business, brand, assets, and customers to influencer marketing campaign composer and distributor Trufan, which will run it as a standalone product. But SocialRank’s team isn’t joining up. Instead, the full six-person staff is sticking together to work on a mobile-first professional social network called Upstream, aiming to nip at LinkedIn.

SocialRank co-founder and CEO Alex Taub

Started in 2014 amidst a flurry of marketing analytics tools, SocialRank had raised $2.1 million from Rainfall Ventures and others before hitting profitability in 2017. But as the business plateaued, the team saw potential to use data science about people’s identity to get them better jobs.

“A few months ago we decided to start building a new product (what has become Upstream). And when we came to the conclusion to go all-in on Upstream, we knew we couldn’t run two businesses at the same time,” SocialRank co-founder and CEO Alex Taub tells me. “We decided then to run a bit of a process. We ended up with a few offers but ultimately felt like Trufan was the best one to continue the business into the future.”

The move lets SocialRank avoid stranding its existing customers like the NFL, Netflix, and Samsung that rely on its audience segmentation software. Instead, they’ll continue to be supported by Trufan where Taub and fellow co-founder Michael Schonfeld will become advisors.

“While we built a sustainable business, we essentially knew that if we wanted to go real big, we would need to go to the drawing board,” Taub explains.

SocialRank

Two-year-old Trufan has raised $1.8 million Canadian from Round13 Capital, local Toronto startup Clearbanc’s founders, and several NBA players. Trufan helps brands like Western Union and Kay Jewellers design marketing initiatives that engage their customer communities through social media. It’s raising an extra $400,000 USD in venture debt from Round13 to finance the acquisition, which should make Trufan cash-flow positive by the end of the year.

Why isn’t the SocialRank team going along for the ride? Taub said LinkedIn was leaving too much opportunity on the table. While it’s good for putting resumes online and searching for people, “all the social stuff are sort of bolt-ons that came after Facebook and Twitter arrived. People forget but LinkedIn is the oldest active social network out there,” Taub tells me, meaning it’s a bit outdated.

Trufan’s team

Rather than attack head-on, the newly forged Upstream plans to pick the Microsoft-owned professional network apart with better approaches to certain features. “I love the idea of ‘the unbundling of LinkedIn,’ à la what’s been happening with Craigslist for the past few years,” says Taub. “The first foundational piece we are building is a social professional network around giving and getting help. We’ll also be focused on the unbundling of the groups aspect of LinkedIn.”

Taub concludes that entrepreneurs can shackle themselves to impossible goals if they take too much venture capital for the wrong business. As we’ve seen with SoftBank, investors demand huge returns that can require pursuing risky and unsustainable expansion strategies.

“We realized that SocialRank had potential to be a few-hundred-million-dollar-in-revenue business, but venture growth wasn’t exactly the model for it,” Taub says. “You need the potential of billions in revenue and a steep growth curve.” A professional network for the smartphone age has that kind of addressable market. And the team might feel better getting out of bed each day knowing they’re unlocking career paths for people instead of just getting them to click ads.

Posted Under: Tech News
Eden office management platform raises $25 million Series B

Posted on 19 November, 2019

Eden, the workplace management platform that connects office managers with service providers, today announced the close of a $25 million Series B round led by Reshape. Participants in the round also include Fifth Wall Ventures, Mitsui Fudosan, RXR Realty, Thor Equities, Bessemer Venture Partners, Alate Partners, Quiet Capital, S28 Capital, Canvas Ventures, Comcast Ventures, Upshift Partners, Impala Ventures, ENIAC Ventures, and Crystal Towers, among others.

Eden was founded by Joe DuBey and Kyle Wilkinson back in 2015 and launched out of Y Combinator as an on-demand tech repair and support service, sending IT specialists to consumers’ homes to help set up a printer or repair a cracked phone screen. Within the first year, Eden had pivoted its business entirely to the enterprise, helping B2B clients with their IT issues at a much lower cost than employing an IT specialist full time.

By 2017, Eden had expanded well beyond IT support into other office management categories, like inventory management around supplies, cleaning, handiwork and more. Indeed, revenue shifted dramatically from Eden’s W2 wizards toward third-party vendors and service providers, with around 75 percent coming from third parties.

Today, 100 percent of Eden’s revenue comes from connecting offices with third-party providers. The company is live in 25 markets, including a few international cities like Berlin and London. Eden now has more than 2,000 service providers on the platform.

The next phase of the company, according to DuBey, is to focus on the full spectrum of property management, zooming out to landlords and property managers.

“The broader vision we have is that everyone in the workplace will use Eden to have a better day at work, from the landlord of the building to the software engineer to the office manager, who is our primary client,” said DuBey. “One thing we’ve learned is that there is a meaningful part of the world you can serve by working directly with the business or the office or facilities manager. But it might be the majority of our category where you really need to build a relationship with the landlord and the property manager to really be successful.”

To that end, Eden is currently in beta with software aimed at landlords and property managers that could facilitate registered guests and check-ins, as well as building-related maintenance and service issues.

Eden has raised just over $40 million in funding since inception.

Posted Under: Tech News
The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip

Posted on 19 November, 2019

Deep learning is all the rage these days in enterprise circles, and it isn’t hard to understand why. Whether it is optimizing ad spend, finding new drugs to cure cancer, or just offering better, more intelligent products to customers, machine learning — and particularly deep learning models — have the potential to massively improve a range of products and applications.

The key word though is ‘potential.’ While we have heard oodles of words sprayed across enterprise conferences the last few years about deep learning, there remain huge roadblocks to making these techniques widely available. Deep learning models are highly networked, with dense graphs of nodes that don’t “fit” well with the traditional ways computers process information. Plus, a deep learning model can require petabytes of storage and racks upon racks of processors just to hold all of the information it needs to be usable.

There are lots of approaches underway right now to solve this next-generation compute problem, and Cerebras has to be among the most interesting.

As we talked about in August with the announcement of the company’s “Wafer Scale Engine” — the world’s largest silicon chip according to the company — Cerebras’ theory is that the way forward for deep learning is to essentially just get the entire machine learning model to fit on one massive chip. And so the company aimed to go big — really big.

Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.

The CS-1 is a “complete solution” product designed to be added to a data center to handle AI workflows. It includes the Wafer Scale Engine (or WSE, i.e. the actual processing core) plus all the cooling, networking, storage, and other equipment required to operate and integrate the processor into the data center. It’s 26.25 inches tall (15 rack units), and includes 400,000 processing cores, 18 gigabytes of on-chip memory, 9 petabytes per second of on-die memory bandwidth, 12 x 100 gigabit Ethernet connections to move data in and out of the CS-1 system, and sucks just 20 kilowatts of power.

A cross-section look at the CS-1. Photo via Cerebras

Cerebras claims that the CS-1 delivers the performance of more than 1,000 leading GPUs combined — a claim that TechCrunch hasn’t verified, although we are intently waiting for industry-standard benchmarks in the coming months when testers get their hands on these units.

In addition to the hardware itself, Cerebras also announced the release of a comprehensive software platform that allows developers to use popular ML libraries like TensorFlow and PyTorch to integrate their AI workflows with the CS-1 system.
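
To make that concrete, the sketch below is a minimal, generic PyTorch training step of the kind Cerebras says its platform can take as input. It is deliberately hedged: this is plain PyTorch with no Cerebras-specific calls, since the CS-1 compiler and launch APIs are not detailed in the announcement, and the model and random data are purely illustrative.

```python
# Ordinary PyTorch; per Cerebras, the company's software stack maps models like
# this one onto the CS-1's Wafer Scale Engine. No Cerebras APIs are shown here
# because they are not described in the announcement.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    def __init__(self, in_features=784, hidden=256, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_classes),
        )

    def forward(self, x):
        return self.net(x)

model = TinyClassifier()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# One training step on random stand-in data.
x = torch.randn(32, 784)
y = torch.randint(0, 10, (32,))
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```

The pitch is that developers keep writing framework code like this and let the Cerebras software layer worry about mapping it onto the 400,000-core processor.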

In designing the system, CEO and co-founder Andrew Feldman said that “We’ve talked to more than 100 customers over the past year and a bit,” in order to determine the needs for a new AI system and the software layer that should go on top of it. “What we’ve learned over the years is that you want to meet the software community where they are rather than asking them to move to you.”

I asked Feldman why the company was rebuilding so much of the hardware to power their system, rather than using already existing components. “If you were to build a Ferrari engine and put it in a Toyota, you cannot make a race car,” Feldman analogized. “Putting fast chips in Dell or [other] servers does not make fast compute. What it does is it moves the bottleneck.” Feldman explained that the CS-1 was meant to take the underlying WSE chip and give it the infrastructure required to allow it to perform to its full capability.

A diagram of the Cerebras CS-1 cooling system. Photo via Cerebras.

That infrastructure includes a high-performance water cooling system to keep this massive chip and platform operating at the right temperatures. I asked Feldman why Cerebras chose water, given that water cooling has traditionally been complicated in the data center. He said, “We looked at other technologies — freon. We looked at immersive solutions, we looked at phase-change solutions. And what we found was that water is extraordinary at moving heat.”

A side view of the CS-1 with its water and air cooling systems visible. Photo via Cerebras.

Why then make such a massive chip, which, as we discussed back in August, has huge engineering requirements to operate compared to smaller chips that have better yield from wafers? Feldman said that “it massively reduces communication time by using locality.”

In computer science, locality means placing data and compute in the right places within, let’s say, a cloud, so that delays and processing friction are minimized. By having a chip that can theoretically host an entire ML model on it, there’s no need for data to flow through multiple storage clusters or ethernet cables — everything that the chip needs to work with is available almost immediately.

According to a statement from Cerebras and Argonne National Laboratory, Cerebras is helping to power research in “cancer, traumatic brain injury and many other areas important to society today” at the lab. Feldman said that “It was very satisfying that right away customers were using this for things that are important and not for 17-year-old girls to find each other on Instagram or some shit like that.”

(Of course, one hopes that cancer research pays as well as influencer marketing when it comes to the value of deep learning models).

Cerebras itself has grown rapidly, reaching 181 engineers today according to the company. Feldman says that the company is heads down on customer sales and additional product development.

It has certainly been a busy time for startups in the next-generation artificial intelligence workflow space. Graphcore just announced this weekend that it was being installed in Microsoft’s Azure cloud, while I covered the funding of NUVIA, a startup led by the former lead chip designers from Apple who hope to apply their mobile backgrounds to solve the extreme power requirements these AI chips force on data centers.

Expect ever more announcements and activity in this space as deep learning continues to find new adherents in the enterprise.

Posted Under: Tech News
18 months after acquisition, MuleSoft is integrating more deeply into Salesforce

Posted on 18 November, 2019

A year and a half after getting acquired by Salesforce for $6.5 billion, MuleSoft is beginning to resemble a Salesforce company — using its language and its methodologies to describe new products and services. This week at Dreamforce, as the company’s mega customer conference begins in San Francisco, MuleSoft announced a slew of new services as it integrates more deeply into the Salesforce family of products.

MuleSoft creates APIs to connect different systems together. That can be quite useful for Salesforce as a bridge to older software, whether it lives on-prem or in the cloud. It allows Salesforce and its customers to access data wherever it lives, even from different parts of the Salesforce ecosystem itself.

MuleSoft made a number of announcements designed to simplify that process and put it in the hands of more customers. For starters, it’s announcing Accelerators, which are pre-defined integrations that let companies connect more easily to other systems. Not surprisingly, two of the first ones connect data from external products and services to Salesforce Service Cloud and Salesforce Commerce Cloud.

“What we’ve done is we’ve pre-built integrations to common back-end systems like ServiceNow and JIRA in Service Cloud, and we prebuilt those integrations, and then automatically connected that data and services through a Salesforce Lightning component directly in the Service console,” Lindsey Irvine, chief marketing officer at MuleSoft, explained.

What this does is allow the agent to get a more complete view of the customer by getting not just the data that’s stored in Salesforce, but in other systems as well.

The company also wants to put these kinds of integration skills in the hands of more Salesforce customers, so it has designed a set of courses in Trailhead, the company’s training platform, with the goal of helping 100,000 Salesforce admins, developers, integration architects and line-of-business users develop expertise in creating and managing these kinds of integrations.

The company is also putting resources into creating the API Community Manager, a place where people involved in building and managing these integrations can get help from a community of users, all built on Salesforce products and services, says Mark Dao, chief product officer at MuleSoft.

“We’re leveraging Community Cloud, Service Cloud and Marketing Cloud to create a true developer experience platform. And what’s interesting is that it’s targeting both the business users — in other words, business development teams and marketing teams — as well as external developers,” he said. He added that the fact this is working with business users as well as the integration experts is something new, and the goal is to drive increased usage of APIs using MuleSoft inside Salesforce customer organizations.

Finally, the company announced Flow Designer, a new tool fueled by Einstein AI that helps automate the creation of workflows and integrations between systems without requiring coding skills.

MuleSoft Flow Designer requires no coding (Screenshot: MuleSoft)

Dao says this is about putting MuleSoft in reach of more users. “It’s about enabling use cases for less technical users in the context of the MuleSoft Anypoint Platform. This really requires a new way of thinking around creating integrations, and we’ve been making Flow Designer simpler and simpler, and removing that technical layer from those users,” he said.

API Community Manager is available now. Accelerators will be available by the end of the year and Flow Designer updates will be available Q2 2020, according to the company.

These and other features are all designed to take some of the complexity out of using MuleSoft to connect various systems across the organization, including both Salesforce and external programs, and to make use of data wherever it lives. MuleSoft does require a fair bit of technical skill, so if the company is able to simplify integration tasks, it could help put the platform in the hands of more users.

Posted Under: Tech News
Gremlin brings Chaos Engineering as a Service to Kubernetes

Posted on 18 November, 2019

The practice of Chaos Engineering developed at Amazon and Netflix a decade ago to help those web scale companies test their complex systems for worst-case scenarios before they happened. Gremlin was started by a former employee of both these companies to make it easier to perform this type of testing without a team of Site Reliability Engineers (SREs). Today, the company announced that it now supports Chaos Engineering-style testing on Kubernetes clusters.

The company made the announcement at the beginning of KubeCon, the Kubernetes conference taking place in San Diego this week.

Gremlin co-founder and CEO Kolton Andrus says the idea is to be able to test and configure Kubernetes clusters so they will not fail, or at least to reduce the likelihood of failure. He says that to do this, it’s critical to run chaos testing (tests of mission-critical systems under extreme duress) in live environments, whether you’re testing Kubernetes clusters or anything else, but that doing so is also a bit dangerous. To mitigate the risk, he says, best practice is to limit the experiment to the smallest test possible that gives you the most information.

“We can come in and say I’m going to deal with just these clusters. I want to cause failure here to understand what happens in Kubernetes when these pieces fail. For instance, being able to see what happens when you pause the scheduler. The goal is being able to help people understand this concept of the blast radius, and safely guide them to running an experiment,” Andrus explained.
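
Gremlin’s own tooling and API aren’t shown in the announcement, so as a rough, standalone illustration of that “smallest possible experiment” idea, here is a hedged Python sketch that uses the official Kubernetes client to inject a single, narrowly scoped failure; the “staging” namespace and “app=checkout” label are hypothetical stand-ins.

```python
# NOT Gremlin's product API: a bare-bones, blast-radius-limited experiment using
# the Kubernetes Python client. It deletes exactly one pod matching a label in
# one namespace, then leaves it to the Deployment controller to recover.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a cluster
v1 = client.CoreV1Api()

# Scope the experiment as narrowly as possible: one namespace, one label, one pod.
pods = v1.list_namespaced_pod("staging", label_selector="app=checkout").items
if pods:
    target = pods[0].metadata.name
    print(f"chaos experiment: deleting a single pod to observe recovery: {target}")
    v1.delete_namespaced_pod(name=target, namespace="staging")
```

Killing one labeled pod in one namespace keeps the blast radius small while still showing whether the cluster reschedules the workload the way you expect; Gremlin’s product wraps this kind of experiment in safeguards and a halt control.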

In addition, Gremlin is helping customers harden their Kubernetes clusters to help prevent failures with a set of best practices. “We clearly have the tooling that people need [to conduct this type of testing], but we’ve also learned through many, many customer interactions and experiments to help them really tune and configure their clusters to be fault tolerant and resilient,” he said.

The Gremlin interface is designed to facilitate this kind of targeted experimentation. You can check the areas where you want to apply a test, and you can see graphically which parts of the system are being tested. If things get out of control, there is a kill switch to stop the tests.

Gremlin Kubernetes testing screen (Screenshot: Gremlin)

Gremlin launched in 2016 and is headquartered in San Jose. It offers both a freemium and a paid product. The company has raised almost $27 million, according to Crunchbase data.

Posted Under: Tech News
Bill McDermott takes reins as ServiceNow CEO sooner than expected with new CFO

Posted on 18 November, 2019

When former SAP CEO Bill McDermott announced in October that he was stepping down after a decade in the position, it was pretty unexpected. He indicated at that point that he would stay until the end of the year to help with the transition to new leadership — then ServiceNow hired him to be its CEO just a few weeks later. Today, the company announced that McDermott has taken over his duties earlier than expected.

The company also announced it has filled its vacant CFO job, hiring Gina Mastantuono, who previously served in similar roles at Ingram Micro and Revlon and has more than 20 years of experience in finance.

It was a game of CEO musical chairs when ServiceNow announced on October 22nd that current CEO John Donahoe was leaving to be CEO at Nike. The company also announced it was bringing in McDermott, who had spent the previous decade at SAP.

It is a time of transition for ServiceNow, having to replace both a CFO and CEO, but they landed two experienced pros, who should help continue to guide the company into the future. The company has stated that it hopes to eventually achieve a $10 billion revenue goal under the new leadership team.

As I wrote in a piece analyzing his move to ServiceNow, McDermott seemed to fully embrace that challenge, even though he has a ways to go:

McDermott has his work cut out for him. The company’s 2018 revenue was $2.6 billion. Still, he fully embraced the $10 billion challenge. “Well let me answer that very simply, I completely stand by [the $10 billion goal], and I’m looking forward to achieving it,” he said with bravado during today’s call.

Mastantuono has a lot in common with McDermott, who also came from a much larger organization to help lead ServiceNow to the next level. At her previous position at Ingram Micro, she led finance for a company with $50 billion in revenue and more than 200,000 customers.

Mastantuono sees a company with great potential as she takes over to guide the financial side of the organization. “ServiceNow is highly regarded by its customers and has tremendous momentum and opportunity to enable digital transformation and help make work, work better for people,” she said in a statement.

The new leadership duo has its work cut out for it, but it’s a company with lots of room for growth. It will now be up to McDermott and Mastantuono to lead it into that next phase.

Posted Under: Tech News
Ohi raises $2.75M to power same-day delivery for brands that aren’t Amazon

Posted on 18 November, 2019

The world has gotten so much faster. Amazon has made two-day shipping the standard and same- or next-day shipping commonplace. And that doesn’t even include the collection of on-demand players who can get us everything from groceries to alcohol to services like concierge storage and in-home cleaning with the press of a button.

But the logistics around same- or next-day delivery are incredibly complicated, which usually means that only the biggest, most successful brands and platforms can pull it off.

Enter Ohi.

Ohi was founded last year by Ben Jones, with a mission to democratize e-commerce by offering Amazon-level speed to smaller brands. The company today announced the close of a $2.75 million seed round led by Flybridge Capital Partners.

Ohi partners with landlords to turn what would normally be leased as commercial retail property or office space into micro-warehouses within major cities. The company then offers those warehouses on flexible leases that can be as short as three months, which help D2C brands distribute their inventory and power same- or next-day delivery of their products. Ohi employs 1099 workers to handle pick and pack at warehouses, and partners with Postmates and Doordash for last-mile courier services.

Eventually, Ohi has plans to turn this into a full-fledged platform, paying landlords based on volume. For now, however, the startup is doing traditional leases with landlords, taking on more of a financial risk with the spaces, as it scales up the brand side of the platform.

Ohi charges brands a fixed monthly access fee to the platform, which starts at $750/month. More expensive tiers unlock premium intelligence features around matching inventory to warehouse location, as well as access to more spaces. At the transaction level, Ohi asks for a fee of $2.50 for pick and pack.

Jones says that delivery is actually a higher cost for brands than storage, and that same-day shipping can cost upwards of $50/package for a brand, with same-day pick and pack costing about $10/item. The hope is that Ohi can bring down the price of same-day and next-day delivery by using this Ohi network of commercial space, pick and pack, and courier services to compete with Amazon.
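
Using only the figures quoted above, a quick back-of-the-envelope comparison shows how Ohi’s fixed fee and per-item charge stack up against the roughly $10/item brands reportedly pay for same-day pick and pack today. This is a rough sketch: the monthly order volume is a made-up assumption, not a number from Ohi, and delivery itself is billed separately.

```python
# Back-of-the-envelope math using the figures quoted in the article.
monthly_platform_fee = 750.00   # Ohi's entry-level access fee per month
ohi_pick_pack = 2.50            # Ohi's per-order pick-and-pack fee
typical_pick_pack = 10.00       # quoted cost of same-day pick and pack today

orders_per_month = 1_000        # hypothetical volume, not an Ohi figure

ohi_per_order = ohi_pick_pack + monthly_platform_fee / orders_per_month
print(f"Ohi fulfillment cost per order:      ${ohi_per_order:.2f}")
print(f"Quoted same-day pick and pack today: ${typical_pick_pack:.2f}")
# Courier delivery (via Postmates/DoorDash) is separate and, per Jones, is the
# larger line item: up to $50/package for same-day shipping today.
```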

Moreover, Ohi believes that the platform can go well beyond bringing down the price of same-day delivery. The company says its brands are also seeing a decrease in cart abandonment when customers see a same-day or next-day delivery option.

Plus, through the data it collects by handling fulfillment for brands, Ohi expects to be able to use its tech to predict demand based on geography and category, helping brands understand their own customers and customers shopping in their particular category.

“There is a lot of positive momentum behind what we’re doing,” said Jones. “Every brand we talk to knows this is the future.”

Jones came up with the idea for Ohi after suffering a serious back injury that left him unable to get around easily or carry things for more than a year. This forced him into a situation where ecommerce was his only option for just about everything. Many of the orders he placed offered three- to five-day shipping, leaving him waiting for what he needed.

He started to investigate how a service could democratize the convenience of same-day and next-day delivery for brands and their customers. And Ohi was born.

Ohi currently offers its service in Manhattan and Brooklyn in New York City, and is launching in Los Angeles this week.

“The greatest challenge we face is how to scale quickly without making mistakes,” said Jones. “It’s not quite as simple as a piece of software that has one-to-many distribution. We’re actually holding brands’ inventory and there’s a physical aspect to this business that makes it more complex. Making sure we can scale that efficiently without making mistakes is going to be one of the biggest challenges.”

Posted Under: Tech News