Posted by Richy George on 10 July, 2018. This post was originally published on this site.
Foursquare has just hired Liz Ritzcovan as Chief Revenue Officer.
Ritzcovan hails from BazaarVoice, where she also served as CRO. She previously held CRO positions at Sizmek and Parade Media Group, and before that, spent time at Yahoo, Time Inc, and Interbrand.
Though Foursquare has been around since 2009, things have changed a lot for the company. What started as a consumer-facing app to log and share location information has become a SaaS company focused on helping brands understand their customers’ real-world habits and convert those habits into meaningful transactions and experiences.
That started with the unbundling of the legacy Foursquare app into Foursquare (a Yelp competitor centered around recommendations) and Swarm (a social location check-in app). As of 2016, both apps have more than 50 million active users, which has in turn yielded the data necessary to create enterprise tools.
For example, Pinpoint by Foursquare (an ad product) has more than half of the Ad Age 100 as advertisers, and Attribution by Foursquare (a metric tracking product) has doubled its revenue in 2017. And that doesn’t include the Pilgrim SDK and Places API, which helped contribute to Foursquare’s 50 percent revenue growth year over year for the past three years.
Ritzcovan is aware that, despite the growth of e-commerce, 90 percent of consumer spending and memorable experiences happen in the real world. But getting clients, usually internet-facing companies, to understand that is her new great challenge.
Here’s what she had to say in her announcement blog post:
So what is my first priority as CRO? Client centricity. Foursquare needs to deepen our connection with our partners: explaining to business leaders why it’s critical to leverage more than a single Foursquare solution—be it ad campaigns with Pinpoint, measurement with Attribution, or location-based CRM and messaging with our Pilgrim SDK and Places API—by taking all of these parts together and connecting the dots. Foursquare is more and more about bundling technology licensing, mapping capabilities, and marketing optimization in a suite of solutions. It’s the reason I joined, to help lead the team into packaging these broad “solution sets” for leading organizations and brands.
Posted by Richy George on 8 July, 2018. This post was originally published on this site.
While serverless computing isn’t new, it has reached an interesting place in its development. As developers begin to see the value of serverless architecture, a whole new startup ecosystem could begin to develop around it.
Serverless isn’t actually serverless at all, but it does enable a developer to set event triggers and leave the infrastructure requirements completely to the cloud provider. The vendor delivers exactly the right amount of compute, storage and memory, and the developer doesn’t even have to think about it (or code for it).
That sounds ideal on its face, but as with every new technology, for each solution there is a set of new problems and those issues tend to represent openings for enterprising entrepreneurs. That could mean big opportunities in the coming years for companies building security, tooling, libraries, APIs, monitoring and a whole host of tools serverless will likely require as it evolves.
In the beginning we had physical servers, but there was lots of wasted capacity. That led to the development of virtual machines, which enabled IT to take a single physical server and divide it into multiple virtual ones. While that was a huge breakthrough for its time, helped launch successful companies like VMware and paved the way for cloud computing, it was only the beginning.
Then came containers, which really began to take off with the development of Docker and Kubernetes, two open source platforms. Containers enable the developer to break down a large monolithic program into discrete pieces, which helps it run more efficiently. More recently, we’ve seen the rise of serverless or event-driven computing. In this case, the whole idea of infrastructure itself is being abstracted away.
While it’s not truly serverless, since you need underlying compute, storage and memory to run a program, it is removing the need for developers to worry about servers. Today, so much coding goes into connecting the program’s components to run on whatever hardware (virtual or otherwise) you have designated. With serverless, the cloud vendor handles all of that for the developer.
All of the major vendors have launched serverless products with AWS Lambda, Google Cloud Functions and Microsoft Azure Functions all offering a similar approach. But it has the potential to be more than just another way to code. It could eventually shift the way we think about programming and its relation to the underlying infrastructure altogether.
It’s important to understand that we aren’t quite there yet, and a lot of work still needs to happen for serverless to really take hold, but it has enormous potential to be a startup feeder system in coming years and it’s certainly caught the attention of investors looking for the next big thing.
Tim Wagner, general manager for AWS Lambda, says the primary advantage of serverless computing is that it allows developers to strip away all of the challenges associated with managing servers. “So there is no provisioning, deploying, patching or monitoring — all those details at the server and operating system level go away,” he explained.
He says this allows developers to reduce the entire coding process to the function level. The programmer defines the event or function and the cloud provider figures out the exact amount of underlying infrastructure required to run it. Mind you, this can be as little as a single line of code.
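To make that concrete, here is a minimal sketch of what a function-level handler looks like in Python. The `handler(event, context)` signature follows the AWS Lambda Python convention; the event payload and response shape here are illustrative, not from the article.

```python
# A minimal serverless-style handler: the developer writes only the
# function; provisioning, scaling and patching are left to the provider.
def handler(event, context=None):
    # 'event' carries the trigger payload (e.g. an HTTP request body or
    # a storage notification); no server setup code appears anywhere.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```

The cloud provider invokes this function in response to the configured event trigger and supplies whatever underlying infrastructure the invocation needs.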
Sarah Guo, a partner at Greylock Partners who invests in early-stage companies, sees serverless computing as offering a way for developers to concentrate on just the code by leaving the infrastructure management to the provider. “If you look at one of the amazing things cloud computing platforms have done, it has just taken a lot of the expertise and cost that you need to build a scalable service and shifted it to [the cloud provider],” she said. Serverless takes that concept and shifts it even further by allowing developers to concentrate solely on the user’s needs without having to worry about what it takes to actually run the program.
Cloud computing company Digital Ocean recently surveyed over 4800 IT pros, of which 55 percent identified themselves as developers. When asked about serverless, nearly half of respondents reported they didn’t fully understand the serverless concept. On the other hand, they certainly recognized the importance of learning more about it with 81 percent reporting that they plan to do further research this year.
When asked if they had deployed a serverless application in the last year, not surprisingly about two-thirds reported they hadn’t. This was consistent across regions with India reporting a slightly higher rate of serverless adoption.
Of those using serverless, Digital Ocean found that AWS was by far the most popular service with 58 percent of respondents reporting Lambda was their chosen tool, followed by Google Cloud Functions with 23 percent and Microsoft Azure Functions further back at 10 percent.
Interestingly enough, one of the reasons that respondents reported a reluctance to begin adopting serverless was a lack of tooling. “One of the biggest challenges developers report when it comes to serverless is monitoring and debugging,” the report stated. That lack of visibility, however, could also represent an opening for startups.
The thing about abstraction is that it simplifies operations on one level, but it also creates a new set of requirements, some expected and some that might surprise as a new way of programming scales. The lack of tooling could hinder adoption in the short term, but more often than not, necessity stimulates the development of a new set of instrumentation.
This is certainly something that Guo recognizes as an investor. “I think there is a lot of promise as we improve a bunch of things around making it easier for developers to access serverless, while expanding the use cases, and concentrating on issues like visibility and security, which are all [issues] when you give more and more control of [the infrastructure] to someone else,” she said.
Ping Li, general partner at Accel, also sees an opportunity here for investors. “I think the reality is that anytime there’s a kind of shift from a developer application perspective, there’s an opportunity to create a new set of tools or products that help you enable those platforms,” he said.
Li says the promise is there, but it won’t happen right away because there needs to be a critical mass of developers using serverless methodologies first. “I would say that we are definitely interested in serverless in that we believe it’s going to be a big part of how applications will be built in the future, but it’s still in its early stages,” Li said.
S. Somasegar, managing director at Madrona Venture Group, says that even as serverless removes complexity, it creates a new set of issues, which in turn creates openings for startups. “It is complicated because we are trying to create this abstraction layer over the underlying infrastructure and telling the developers that you don’t need to worry about it. But that means there are a lot of tools that have to be in place — whether it is development tools, deployment tools, debugging tools or monitoring tools — that enable the developer to know certain things are happening when you’re operating in a serverless environment.”
Having that visibility in a serverless world is a real challenge, but it is not the only opening here. There are also opportunities for trigger or function libraries, or for companies akin to Twilio or Stripe, which offer easy API access to functionality that would otherwise require particular expertise, such as communications or payment gateways. There could be analogous needs in the serverless world.
Companies are beginning to take advantage of serverless computing to find new ways of solving problems. Over time, we should begin to see more developer momentum toward this approach and more tools develop.
While it is early days, as Guo says, it’s not as though developers love running infrastructure; it’s just been a necessity. “I think [it] will be very interesting. I just think we’re still very early in the ecosystem,” she said. Yet the potential is certainly there: if the pieces fall into place and programmer momentum builds around this way of developing applications, it could really take off, with a startup ecosystem to follow.
Posted by Richy George on 3 July, 2018. This post was originally published on this site.
At the end of last November, Google announced that Diane Bryant, who at the time was on a leave of absence from her position as the head of Intel’s data center group, would become Google Cloud’s new COO. This was a major coup for Google, but it wasn’t meant to last. After only seven months on the job, Bryant has left Google Cloud, as Business Insider first reported today.
“We can confirm that Diane Bryant is no longer with Google. We are grateful for the contributions she made while at Google and we wish her the best in her next pursuit,” a Google spokesperson told us when we reached out for comment.
The reasons for Bryant’s departure are currently unclear. It’s no secret that Intel is looking for a new CEO and Bryant would fit the bill. Intel also famously likes to recruit insiders as its leaders, though I would be surprised if the company’s board had already decided on a replacement. Bryant spent more than 25 years at Intel and her hire at Google looked like it would be a good match, especially given that Google’s position behind Amazon and Microsoft in the cloud wars means that it needs all the executive talent it can get.
When Bryant was hired, Google Cloud CEO Diane Greene noted that “Diane’s strategic acumen, technical knowledge and client focus will prove invaluable as we accelerate the scale and reach of Google Cloud.” According to the most recent analyst reports, Google Cloud’s market share has ticked up a bit — and its revenue has increased at the same time — but Google remains a distant third in the competition and it doesn’t look like that’s changing anytime soon.
Posted by Richy George on 2 July, 2018. This post was originally published on this site.
Dell, which went private in one of the largest leveraged buyouts in tech in 2013, announced today that it will return to the public markets through a relatively complex mechanism, with founder Michael Dell and Silver Lake Partners largely in control.
Dell’s leveraged buyout largely marked the final page in the company’s storied history as a PC provider, going back to the old “dude, you’re getting a Dell” commercials. The company rode that wave to dominance, but as computing shifted to laptops and mobile phones, and complex operations were offloaded to cloud services like Amazon Web Services, Azure and Google Cloud, Dell found itself navigating a complex environment while having to make a significant business transition beyond the PC era. Staying public would have left Dell beholden to the whims of public markets, perhaps laden with short-term pessimism over the company’s urgent need to find a transition.
The transaction is actually an offer to buy shares that track the company’s involvement in VMware, converting that tracking stock into Dell Technologies stock and marking its return as a publicly traded company. Those shares will end up traded on the NYSE, around five years after its founder took the company private with Silver Lake Partners in a deal worth roughly $25 billion. Silver Lake Partners owns around 24% of the company, while Dell owns 72% and will continue to serve as chairman and CEO. The move lets the company bypass the IPO process, sparing it the long stretch of scrutiny from potential investors that an IPO would entail (the company has taken on a substantial debt load).
Dell said in its most recent quarter it recorded revenue of $21.4 billion, up 19% year-over-year, and over the past 12 months the company generated $82.4 billion of revenue with a net loss of $2.3 billion. The company said it has also paid down $13 billion of gross debt since its combination with EMC back in 2016. All this has been part of the company’s transition to find new businesses beyond just selling computers, though there’s clearly still demand for those computers in offices around the world. As it has expanded into a broader provider of IT services, it’s potentially positioned itself as a modern enterprise tools provider, which would allow it to more securely navigate public markets while offering investors a way to correctly calibrate its value.
Posted by Richy George on 29 June, 2018. This post was originally published on this site.
Say you have a job with a large company and you want to know how much vacation time you have left, or how to add your new baby to your healthcare. This usually involves emailing or calling HR and waiting for an answer, or it could even involve crossing multiple systems to get what you need.
Leena AI, a member of the Y Combinator Summer 2018 class, wants to change that by building HR bots to answer questions for employees instantly.
The bots can be integrated into Slack or Workplace by Facebook and they are built and trained using information in policy documents and by pulling data from various back-end systems like Oracle and SAP.
Adit Jain, co-founder at Leena AI, says the company has its roots in another startup called Chatteron, which the founders started after they got out of college in India in 2015. That product helped people build their own chatbots. Jain says that along the way, while doing market research, they discovered a particularly strong need in HR, and they started Leena AI last year to address that specific requirement.
Jain says the team learned through its experience with Chatteron that when building bots, it’s better to concentrate on a single subject, because the underlying machine learning model gets better the more it’s used. “Once you create a bot, for it to really add value and be [extremely] accurate, and for it to really go deep, it takes a lot of time and effort, and that can only happen through verticalization,” Jain explained.
What’s more, as the founders have become more knowledgeable about the needs of HR, they have learned that 80 percent of the questions cover similar topics like vacation, sick time and expense reporting. They have also seen companies using similar back-end systems, so they can now build standard integrators for common applications like SAP, Oracle and Netsuite.
Of course, even though people may ask similar questions, the company may have unique terminology or people may ask the question in an unusual way. Jain says that’s where the natural language processing (NLP) comes in. The system can learn these variations over time as they build a larger database of possible queries.
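The variation-matching idea can be sketched with a toy example. This is not Leena AI’s system — their models and intents are not public — but a minimal keyword-overlap matcher that maps differently phrased employee questions onto invented canonical intents; a production system would use trained NLP models instead.

```python
# Hypothetical HR intents, each with a set of characteristic keywords.
INTENTS = {
    "vacation_balance": {"how", "much", "vacation", "time", "left", "remaining", "days"},
    "add_dependent": {"add", "baby", "child", "dependent", "healthcare", "insurance"},
}

def match_intent(question):
    """Map a free-form question to the intent with the most keyword overlap."""
    words = set(question.lower().replace("?", "").split())
    # Score each intent by how many of its keywords appear in the question.
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None
```

Both “How much vacation time do I have left?” and “How many vacation days are remaining?” resolve to the same intent, which is the behavior the article describes: the system absorbing unusual phrasings of the same underlying question.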
The company just launched in 2017 and already has a dozen paying customers. They hope to double that number in just 60 days. Jain believes being part of Y Combinator should help in that regard. The partners are helping the team refine its pitch and making introductions to companies that could make use of this tool.
Their ultimate goal is nothing less than to be ubiquitous, to help bridge multiple legacy systems to provide answers seamlessly for employees to all their questions. If they can achieve that, they should be a successful company.
Posted by Richy George on 28 June, 2018. This post was originally published on this site.
Regardless of what you may think of Facebook as a platform, it runs a massive operation, and when you reach that level of scale, you have to get more creative in how you handle every aspect of your computing environment.
Engineers quickly reach the limits of human ability to track information to the point that checking logs and analytics becomes impractical and unwieldy on a system running thousands of services. This is a perfect scenario to implement machine learning and that is precisely what Facebook has done.
The company published a blog post today about a self-tuning system it has dubbed Spiral. What Spiral does is essentially flip the idea of system tuning on its head: instead of looking at some data and coding what you want the system to do, you teach the system the right way to do it, and it does it for you, using the massive stream of data to continually teach the machine learning models how to push the systems to be ever better.
In the blog post, the Spiral team described it this way: “Instead of looking at charts and logs produced by the system to verify correct and efficient operation, engineers now express what it means for a system to operate correctly and efficiently in code. Today, rather than specify how to compute correct responses to requests, our engineers encode the means of providing feedback to a self-tuning system.”
They say that coding in this way is akin to declarative code, like using SQL statements to tell the database what you want it to do with the data, but the act of applying that concept to systems is not a simple matter.
“Spiral uses machine learning to create data-driven and reactive heuristics for resource-constrained real-time services. The system allows for much faster development and hands-free maintenance of those services, compared with the hand-coded alternative,” the Spiral team wrote in the blog post.
If you consider the sheer number of services running on Facebook, and the number of users trying to interact with those services at any given time, it requires sophisticated automation, and that is what Spiral provides.
The system takes the log data and processes it through Spiral, which is connected with just a few lines of code. Spiral then sends commands back to the server based on the declarative statements written by the team. To ensure those commands are continually fine-tuned, the data is also sent from the server to a model for further adjustment, in a virtuous cycle. This process can be applied locally or globally.
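Spiral itself is internal to Facebook and its code is not public, but the feedback loop described above can be illustrated with a hypothetical sketch: the engineer supplies only a feedback signal saying whether a decision turned out to be correct, and the system adjusts its own decision heuristic from that stream. The cache-admission framing and all names here are invented for illustration.

```python
class SelfTuningThreshold:
    """Toy self-tuning heuristic: decisions are driven by a threshold
    that is adjusted purely from engineer-defined feedback, rather
    than from hand-coded rules."""

    def __init__(self, threshold=0.5, step=0.05):
        self.threshold = threshold
        self.step = step

    def decide(self, score):
        # e.g. "should this item be admitted to the cache?"
        return score >= self.threshold

    def feedback(self, score, was_useful):
        # The engineer encodes what "correct" means; the tuner reacts:
        # loosen the threshold when useful items were being rejected,
        # tighten it when useless items were being admitted.
        if was_useful and not self.decide(score):
            self.threshold = max(0.0, self.threshold - self.step)
        elif not was_useful and self.decide(score):
            self.threshold = min(1.0, self.threshold + self.step)
```

The point of the sketch is the inversion the blog post describes: no one writes “set the threshold to X”; the threshold emerges from a continuous stream of feedback about outcomes.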
The tool was developed by the team operating in Boston, and is only available internally inside Facebook. It took lots of engineering to make it happen, the kind of scope that only Facebook could apply to a problem like this (mostly because Facebook is one of the few companies that would actually have a problem like this).
Posted by Richy George on 27 June, 2018. This post was originally published on this site.
Microsoft today launched two new Azure regions in China. These new regions, China North 2 in Beijing and China East 2 in Shanghai, are now generally available and will complement the existing two regions Microsoft operates in the country (with the help of its local partner, 21Vianet).
As the first international cloud provider in China when it launched its first region there in 2014, Microsoft has seen rapid growth in the country, and there is clearly demand for its services there. Unsurprisingly, many of Microsoft’s customers in China are other multinationals that are already betting on Azure for their cloud strategy. These include the likes of Adobe, Coke, Costco, Daimler, Ford, Nuance, P&G, Toyota and BMW.
In addition to the new China regions, Microsoft also today launched a new availability zone for its region in the Netherlands. While availability zones have long been standard among the big cloud providers, Azure only launched this feature — which divides a region into multiple independent zones — into general availability earlier this year. The regions in the Netherlands, Paris and Iowa now offer this additional safeguard against downtime, with others to follow soon.
In other Azure news, Microsoft also today announced that Azure IoT Edge is now generally available. In addition, Microsoft announced the second generation of its Azure Data Lake Storage service, which is now in preview, and some updates to the Azure Data Factory, which now includes a web-based user interface for building and managing data pipelines.
Posted by Richy George on 27 June, 2018. This post was originally published on this site.
For any company built on top of machine learning operations, the more data they have, the better off they are — as long as they can keep it all under control. But as more and more information pours in from disparate sources, gets logged in obscure databases and is generally hard (or slow) to query, the process of getting it all into one neat place where a data scientist can actually start running the statistics is quickly becoming one of machine learning’s biggest bottlenecks.
That’s a problem Intermix.io and its founders, Paul Lappas and Lars Kamp, hope to solve. Their product gives engineers a granular look at all of the different nuances behind what’s happening with some specific function, from the query all the way through all of the paths it takes to get to its end result. The end product helps data engineers monitor the flow of information going through their systems, regardless of the source, to isolate bottlenecks early and see where processes are breaking down. The company also said it has raised seed funding from Uncork Capital, S28 Capital and PAUA Ventures, along with Bastian Lehmann, CEO of Postmates, and Hasso Plattner, founder of SAP.
“Companies realize being data driven is a key to success,” Kamp said. “The cloud makes it cheap and easy to store your data forever, machine learning libraries are making things easy to digest. But a company that wants to be data driven wants to hire a data scientist. This is the wrong first hire. To do that they need access to all the relevant data, and have it be complete and clean. That falls to data engineers who need to build data assembly lines where they are creating meaningful types to get data usable to the data scientist. That’s who we serve.”
Intermix.io works in a couple of ways: first, it tags all of that data, giving the service a meta-layer of understanding what does what, and where it goes; second, it taps every input in order to gather metrics on performance and help identify those potential bottlenecks; and lastly, it’s able to track that performance all the way from the query to the thing that ends up on a dashboard somewhere. The idea here is that if, say, some server is about to run out of space somewhere or is showing some performance degradation, that’s going to start showing up in the performance of the actual operations pretty quickly — and needs to be addressed.
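The tag-and-measure idea can be shown with a toy sketch. Intermix.io’s actual implementation is not public, so everything here — the function names, the tags, the aggregation — is invented: each query is wrapped with a tag, its latency is recorded, and per-tag averages surface the slow paths.

```python
import time
from collections import defaultdict

# Per-tag latency samples, collected as queries run.
latencies = defaultdict(list)

def run_tagged(tag, query_fn, *args):
    """Run a query under a tag, recording how long it took."""
    start = time.perf_counter()
    result = query_fn(*args)
    latencies[tag].append(time.perf_counter() - start)
    return result

def slowest_tags(top_n=3):
    """Rank tags by average latency so bottlenecks stand out early."""
    averages = {tag: sum(v) / len(v) for tag, v in latencies.items()}
    return sorted(averages, key=averages.get, reverse=True)[:top_n]
```

The payoff described in the article follows from this shape: because every input is tapped, a server that starts degrading shows up in the per-tag numbers before it breaks the dashboard at the end of the pipeline.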
All of this is an efficiency play that might not seem to make sense at a smaller scale. But with the waterfall of new devices coming online every day, as well as more and more ways of understanding how people use tools online, even the smallest companies can quickly start building massive data sets. And if a company’s business depends on some machine learning happening in the background, it’s dependent on all that training and tracking happening as quickly and smoothly as possible, with any hiccups leading to real repercussions for its own business.
Intermix.io isn’t the first company to build application performance management software. There are others, like Datadog and New Relic, though Lappas says that the primary competition from them comes in the form of traditional APM software with some additional scripts tacked on. Data flows, however, are a different layer altogether, which means they require a more custom approach to the problem.
Posted by Richy George on 27 June, 2018. This post was originally published on this site.
The rapid pace of technology innovation and applications in recent decades — you could argue that just about every kind of business is a “tech” business these days — has spawned a sea of tech startups and larger businesses that are focused on serving that market, and equally demanding consumers, on a daily basis. Today, a venture capital firm in the UK is announcing a fund aimed at helping to grow the technologies that will underpin a lot of those daily applications.
Cambridge-based IQ Capital is raising £125 million ($165 million) that it will use specifically to back UK startups that are building “deep tech” — the layer of research and development, and potentially commercialised technology, that is considered foundational to how a lot of technology will work in the years and decades to come. So far, some £92 million has been secured, and partner Kerry Baldwin said that the rest is coming “without question” — pointing to strong demand.
There was a time when it was more challenging to raise money for very early stage companies working at the cusp of new technologies, even more so in smaller tech ecosystems like the UK’s. As Ed Stacey, another partner in the firm, acknowledges, there is often a very high risk of failure at multiple stages of the process, with the tech in some cases not even fully developed, let alone rolled out to see what kind of commercial interest there might be in the product.
However, there has been a clear shift in the last several years.
There is a lot more money floating around in tech these days — so much so that it’s created a stronger demand for projects to invest in. (Another consequence is that when you do get a promising startup, funds are potentially giving them hundreds of millions and causing other disruptions in how they grow and exit, which is another story…)
And while there are definitely a lot of startups out there in the world today, a lot of them are what you might describe as “me too”, or at least making something that is easily replicated by another startup, making the returns and the wins harder to find among them.
A new focus that we are seeing on “deep tech” is a consequence of both of those trends.
“The low-hanging fruit has been discovered… Shallow tech is a solved problem,” Stacey said, in reference to areas like the basics of e-commerce services and mobile apps. “These are easy to build with open source components, for example. It’s shallow when it can be copied very quickly.”
In contrast, deep tech “by definition is something that can’t easily be copied,” he continued. “The underlying algorithm is deep, with computational complexity.”
But the challenges run deep in deep tech: not only might a product or technology never come together, or find a customer, but it might face problems scaling if it does take off. IQ Capital’s focus on deep tech is coupled with the company trying to determine which ideas will scale, not just work or find a customer. As we see more deep tech companies emerging and growing, I’m guessing scalability will become an ever more prominent factor in deciding whether a startup gets backing.
IQ Capital’s investments to date span areas like security (Privitar), marketing tech (Grapeshot, which was acquired by Oracle earlier this year), AI (such as speech recognition API developer Speechmatics) and biotechnology (Fluidic Analytics, which measures protein concentrations), all areas that will be the focus of this fund, along with IoT and other emerging technologies and gaps in the current market.
IQ Capital is not the only fund starting to focus on deep tech, nor is its portfolio the only range of startups focusing on this (Allegro.AI and deep-learning chipmaker Hailo are others, to name just two).
LPs in this latest fund include family offices, wealth managers, tech entrepreneurs and CEOs from IQ’s previous investments, as well as British Business Investments, the commercial arm of the British Business Bank, the firm said.
Posted by Richy George on 26 June, 2018. This post was originally published on this site.
Google is giving developers a new storage option in its cloud. Cloud Filestore, which will launch into beta next month, essentially offers a fully managed network attached storage (NAS) service in the cloud. This means that companies can now easily run applications that need a traditional file system interface on the Google Cloud Platform.
Traditionally, developers who wanted access to a standard file system over the kind of object storage and database options that Google already offered had to rig up a file server with a persistent disk. Filestore does away with all of this and simply allows Google Cloud users to spin up storage as needed.
The promise of Filestore is that it offers high throughput, low latency and high IOPS. The service will come in two tiers: premium and standard. The premium tier will cost $0.30 per GB per month and promises throughput of 700 MB/s and 30,000 IOPS, no matter the storage capacity. Standard-tier Filestore storage will cost $0.20 per GB per month, but performance scales with capacity and doesn’t hit peak performance until you store more than 10TB of data in Filestore.
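Using the per-GB prices quoted above, a quick back-of-envelope comparison of the two tiers at 10 TB (10,240 GB, roughly where the standard tier reaches peak performance) looks like this; the helper function is illustrative, not a Google API.

```python
# Monthly Cloud Filestore cost at the prices quoted in the article:
# premium $0.30/GB/month, standard $0.20/GB/month.
def monthly_cost(gb, tier):
    rates = {"premium": 0.30, "standard": 0.20}
    return gb * rates[tier]

premium = monthly_cost(10_240, "premium")    # $3,072.00 per month
standard = monthly_cost(10_240, "standard")  # $2,048.00 per month
```

At that capacity the premium tier costs about 50 percent more per month, in exchange for guaranteed peak throughput and IOPS regardless of how much data is stored.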
Google launched Filestore at an event in Los Angeles that mostly focused on the entertainment and media industry. There are plenty of enterprise applications in those verticals that need a shared filesystem, but the same can be said for many other industries that rely on similar enterprise applications.
The Filestore beta will launch next month. Since it’s still in beta, Google isn’t making any uptime promises right now and there is no ETA for when the service will come out of beta.