SiFive gets $50.6M to help companies get their custom chip designs out the door

Posted by on 2 April, 2018


With the race to next-generation silicon in full swing, venture money is pouring into custom silicon startups, and that wave is already showing enormous potential for more flexible hardware in a rapidly changing technology landscape. Naveed Sherwani hopes to tap that potential for everyone else.

That’s the premise of SiFive, a startup designed to help entrepreneurs — or any company — come up with a custom-designed chip for their needs. But rather than having to raise tens of millions of dollars from a venture firm or put a massive production system in place, SiFive’s goal is to get that piece of silicon into the developer’s hands quickly, based on a set of basic hardware and IP it offers, so they can see whether it actually works and then figure out when and how to move it into full-scale production. The company starts by offering templates, then lets customers make modifications to what eventually ends up as a piece of RISC-V silicon in their hands. SiFive today said it has raised $50.6 million in venture financing in a round led by Sutter Hill Ventures, Spark Capital, and Osage University Partners.

“The way we view it is that we think we should not depend on people learning special languages and things of that nature to be able to modify the architecture and enhance the architecture,” Sherwani said. “What we believe is there could be a high-level interface, which is what we’re building, which will allow people to take existing cores, bring them into their design space, and then apply a configuration. Moving those configurations, you can modify the core, and then you can get the new modified core. That’s the approach we take; we don’t have to learn a special language or be an expert, it’s the way we present the core. We’d like to start with cores that are verified, and each of these modifications does not cause [the core] to become non-verifiable.”

SiFive is based on a design framework for silicon called RISC-V. You could consider it a kind of open-source analog to the designs from major chip firms, but the goal for RISC-V chips is to lean on the decades of experience accumulated since the original silicon came out of Intel and develop something less messy that still gets the right tasks done. Sherwani says that RISC-V chips have more than 50 instructions, while common chips have more than 1,000. By their nature, RISC-V designs aren’t produced at anything like Intel’s scale, so the efficiencies those firms enjoy don’t exist. But SiFive hopes to serve a wide array of varying needs rather than mass-producing a single style of silicon.

There are two flows for developers looking to build out silicon using SiFive. First is the prototype flow, where developers will get a chance to spec out their silicon and figure out their specific needs. The goal there is to get something into the hands of the developer they can use to showcase their ideas or technology, and SiFive works with IP vendors and other supply chain partners — during this time, developers aren’t paying for IP. Once the case is proved out (and the startup has, perhaps, raised money based on that idea) they can switch to a production flow with SiFive where they will start paying for IP and services. There’s also a potential marketplace element as more and more people come up with novel ideas for operational cores.

“For any segment in the market there will be a few templates available,” Sherwani said. “We’ll have some tools and methodologies there, and among all the various templates [that] are available, show what would be the best for [that customer]. We also have an app store — we are expecting people who have designed cores who are willing to share it, because they don’t need it to be proprietary. If anyone uses that template, then whatever price they can put on it, they can make some money doing that. This whole idea of marketplaces will get more people excited.”

As there is an intense rush to develop new customized silicon, it may be that services like the ones offered by SiFive become more and more necessary. But there’s another element to the bet behind SiFive: making the chip itself less ambiguous and trying to remove black boxes. That doesn’t necessarily make it wildly more secure than the one next to it, but at the very least, it means when there is a major security flaw like Intel’s Spectre problems, there may be a bit more tolerance from the developer community because there are fewer black boxes.

“All these complications are there and unless you have all this expertise, you can’t do a chip,” Sherwani said. “Our vision is that we deliver the entire chip experience to that platform and people can be able to log in. They don’t need a team, any tools, they don’t need FPGAs because all those will be available on the web. As a result the cost goes down because it’s a shared economy, they’re sharing tools, and that is how we think dramatically you can do chips at much lower cost.”

While there is a lot of venture money flowing into the AI chip space — with many different interpretations of what that hardware looks like — Sherwani said the benefit of working with SiFive is to be able to rapidly adapt an idea to a changing algorithm. Developers have already proven out a lot of different tools and frameworks, but once a piece of silicon is in production it’s not easy to change on the fly. Should those best practices or algorithms change, developers will have an opportunity to reassess and redesign the chip as quickly as possible.

The idea of custom silicon is going to be a big theme going forward as more and more use cases emerge that could be easier with a customized piece of hardware. Already there are startups like Mythic and SambaNova Systems, which have raised tens of millions of dollars and specialize in the rapid-fire calculations typical of AI workloads. But this kind of technology is now showing up in devices ranging from autonomous vehicles to refrigerators, and each use case may carry different needs. Intel and other chip design firms probably can’t hit every niche, and a one-size-fits-all part (or even something more modular, like an FPGA from Intel) might not hit each sweet spot. That, in theory, is the hole that a company like SiFive could fill.

Posted Under: Tech News
Atlassian’s two-year cloud journey

Posted by on 2 April, 2018


A couple of years ago, Dropbox shocked a lot of people when it decided to mostly drop the public cloud, and built its own datacenters. More recently, Atlassian did the opposite, closing most of its datacenters and moving to the cloud. Companies make these choices for a variety of reasons. When Atlassian CTO Sri Viswanath came on board in 2016, he made the decision to move the company’s biggest applications to AWS.

In part, this is a story of technical debt — the idea that, over time, your applications become encumbered by layers of crusty code, making them harder to update and even harder to maintain. For Atlassian, which was founded in 2002, that bill came due in 2016 when Viswanath came to work for the company.

Atlassian already knew they needed to update the code to move into the future. One of the reasons they brought Viswanath on board was to lead that charge, but the thinking was already in place even before he got there. A small team was formed back in 2015 to work out the vision and the architecture for the new cloud-based approach, but they wanted to have their first CTO in place to carry it through to fruition.

Shifting to microservices

He put the plan into motion, giving it the internal code name Vertigo — maybe because the thought of moving most of their software stack to the public cloud made the engineering team dizzy even to consider. The goal of the project was to rearchitect the software, starting with their biggest products, Jira and Confluence, in such a way that it would lay the foundation for the company for the next decade — no pressure or anything.


They spent a good part of 2016 rewriting the software and getting it set up on AWS. They concentrated on turning their 15-year-old code into microservices, which in the end resulted in a smaller code base. He said the technical debt issues were very real, but they had to be careful not to reinvent the wheel, changing only what needed to be changed wherever possible.

“The code base was pretty large and we had to go in and do two things. We wanted to build it for multi-tenant architecture and we wanted to create microservices,” he said. “If there was a service that could be pulled out and made self-contained we did that, but we also created new services as part of the process.”

Migrating customers on the fly

Last year was the migration year, and it was indeed a full year-long project to migrate every last customer over to the new system. It started in January and ended in December and involved moving tens of thousands of customers.


First of all, they automated whatever they could and they also were very deliberate in terms of the migration order, being conscious of migrations that might be more difficult. “We were thoughtful in what order to migrate. We didn’t want to do easiest first and hardest at the end. We didn’t want to do just the harder ones and not make progress. We had to blend [our approaches] to fix bugs and issues throughout the project,” he said.

Viswanath stated that the overarching goal was to move the customers without a major incident. “If you talk to anyone who does migration, that’s a big thing. Everyone has scars doing migrations. We were conscious to do this pretty carefully.” Surprisingly, although it wasn’t perfect, they did manage to complete the entire exercise without a major outage, a point of which the team is justifiably proud. That doesn’t mean that it was always smooth or easy.

“It sounds super easy: ‘we were thoughtful and we migrated,’ but there was warfare every day. When you migrate, you hit a wall and react. It was a daily thing for us throughout the year,” he explained. It took a total team effort involving engineering, product and support. That included having a customer support person involved in the daily scrum meetings so they could get a feel for any issues customers were having and fix them as quickly as possible.

What they gained

As in any cloud project, there are some general benefits to moving an application to the cloud around flexibility, agility and resource elasticity, but there was more than that when it came to this specific project.


First of all it has allowed faster deployment with multiple deployments at the same time, due in large part to the copious use of microservices. That means they can add new features much faster. During the migration year, they held off on new features for the most part because they wanted to keep things as static as possible for the shift over, but with the new system in place they can move much more quickly to add new features.

They get much better performance, and if they hit a performance bottleneck, they can simply add more resources because it’s the cloud. What’s more, they were able to establish a local presence in the EU, which improves performance by putting the applications closer to the end users located there.

Finally, they actually found the cloud to be a more economical option, something that not every company that moves to the cloud finds. By closing the datacenters and reducing the capital costs associated with buying hardware and hiring IT personnel to maintain it, they were able to reduce costs.

Managing the people parts

It was a long, drawn-out project, and as such, they really needed to think about the human aspect of it too. They would swap people in and out to make sure the engineers stayed fresh and didn’t burn out helping with the transition.

One thing that helped was the company culture in general, which Viswanath candidly describes as one with open communication and a general “no bullshit” policy. “We maintained open communication, even when things weren’t going well. People would raise their hand if they couldn’t keep up and we would get them help,” he said.

He admitted that there was some anxiety within the company and for him personally implementing a project of this scale, but they knew they needed to do it for the future of the organization. “There was definitely nervousness on what if this project doesn’t go well. It seemed the obvious right direction and we had to do it. The risk was what if we screwed up in execution and we didn’t realize benefits we set out to do.”

In the end, it was a lot of work, but it worked out just fine and they have the system in place for the future. “Now we are set up for the next 10 years,” he said.

Posted Under: Tech News
Red Hat looks beyond Linux

Posted by on 31 March, 2018


The Red Hat Linux distribution is turning 25 years old this week. What started as one of the earliest Linux distributions is now the most successful open-source company, and its success was a catalyst for others to follow its model. Today’s open-source world is very different from those heady days in the mid-1990s when Linux looked to be challenging Microsoft’s dominance on the desktop, but Red Hat is still going strong.

To put all of this into perspective, I sat down with the company’s current CEO (and former Delta Air Lines COO) Jim Whitehurst to talk about the past, present and future of the company, and open-source software in general. Whitehurst took the Red Hat CEO position 10 years ago, so while he wasn’t there in the earliest days, he definitely witnessed the evolution of open source in the enterprise, which is now more widespread than ever.

“Ten years ago, open source at the time was really focused on offering viable alternatives to traditional software,” he told me. “We were selling layers of technology to replace existing technology. […] At the time, it was open source showing that we can build open-source tech at lower cost. The value proposition was that it was cheaper.”

At the time, he argues, the market was about replacing Windows with Linux or IBM’s WebSphere with JBoss. And that defined Red Hat’s role in the ecosystem, too, which was less about technological innovation than about packaging. “For Red Hat, we started off taking these open-source projects and making them usable for traditional enterprises,” said Whitehurst.

Jim Whitehurst, Red Hat president and CEO (photo by Joan Cros/NurPhoto via Getty Images)

About five or six years ago, something changed, though. Large corporations, including Google and Facebook, started open-sourcing their own projects because they didn’t look at some of the infrastructure technologies they opened up as competitive advantages. Instead, having them out in the open allowed them to profit from the ecosystems that formed around those projects. “The biggest part is it’s not just Google and Facebook finding religion,” said Whitehurst. “The social tech around open source made it easy to make projects happen. Companies got credit for that.”

He also noted that developers now look at their open-source contributions as part of their resumé. With an increasingly mobile workforce that regularly moves between jobs, companies that want to compete for talent are almost forced to open source at least some of the technologies that don’t give them a competitive advantage.

As the open-source ecosystem evolved, so did Red Hat. As enterprises started to understand the value of open source (and stopped being afraid of it), Red Hat shifted from simply talking to potential customers about savings to talking about how open source can help them drive innovation. “We’ve gone from being commoditizers to being innovators. The tech we are driving is now driving net new innovation,” explained Whitehurst. “We are now not going in to talk about saving money but to help drive innovation inside a company.”

Over the last few years, that included making acquisitions to help drive this innovation. In 2015, Red Hat bought IT automation service Ansible, for example, and last month, the company closed its acquisition of CoreOS, one of the larger independent players in the Kubernetes container ecosystem — all while staying true to its open-source roots.

There is only so much innovation you can do around a Linux distribution, though, and as a public company, Red Hat also had to look beyond that core business and build on it to better serve its customers. In part, that’s what drove the company to launch services like OpenShift, for example, a container platform that sits on top of Red Hat Enterprise Linux and — not unlike the original Linux distribution — integrates technologies like Docker and Kubernetes and makes them more easily usable inside an enterprise.

The reason for that? “I believe that containers will be the primary way that applications will be built, deployed and managed,” he told me, and argued that his company, especially after the CoreOS acquisition, is now a leader in both containers and Kubernetes. “When you think about the importance of containers to the future of IT, it’s a clear value for us and for our customers.”

The other major open-source project Red Hat is betting on is OpenStack. That may come as a bit of a surprise, given that popular opinion in the last year or so has shifted against the massive project, which aims to give enterprises an open-source, on-premises alternative to AWS and other cloud providers. “There was a sense among big enterprise tech companies that OpenStack was going to be their savior from Amazon,” Whitehurst said. “But even OpenStack, flawlessly executed, put you where Amazon was five years ago. If you’re Cisco or HP or any of those big OEMs, you’ll say that OpenStack was a disappointment. But from our view as a software company, we are seeing good traction.”

Because OpenStack is especially popular among telcos, Whitehurst believes it will play a major role in the shift to 5G. “When we are talking to telcos, […] we are very confident that OpenStack will be the platform for 5G rollouts.”

With OpenShift and OpenStack, Red Hat believes that it has covered both the future of application development and the infrastructure on which those applications will run. Looking a bit further ahead, though, Whitehurst also noted that the company is starting to look at how it can use artificial intelligence and machine learning to make its own products smarter and more secure, but also at how it can use its technologies to enable edge computing. “Now that large enterprises are also contributing to open source, we have a virtually unlimited amount of material to bring our knowledge to,” he said.

 

Posted Under: Tech News
As marketing data proliferates, consumers should have more control

Posted by on 30 March, 2018


At the Adobe Summit in Las Vegas this week, privacy was on many people’s minds. It was no wonder with social media data abuse dominating the headlines, GDPR just around the corner, and Adobe announcing the concept of a centralized customer experience record.

With so many high-profile breaches in recent years, putting your customer data in a central record-keeping system would seem to be a dangerous proposition, yet Adobe sees so many positives for marketers that it likely considers this a worthy trade-off.

Which is not to say that the company doesn’t see the risks. Executives speaking at the conference continually insisted that privacy is always part of the conversation at Adobe as they build tools, and that they have built security and privacy safeguards into the customer experience record.

Offering better experiences

The point of the exercise isn’t simply to collect data for data’s sake, it’s to offer consumers a more customized and streamlined experience. How does that work? There was a demo in the keynote illustrating a woman’s experience with a hotel brand.

Brad Rencher, EVP and GM at Adobe Experience Cloud, explains Adobe’s cloud offerings. Photo: Jeff Bottari/Invision for Adobe/AP Images

The mythical woman started a reservation for a trip to New York City, got distracted in the middle and was later “reminded” to return to it via a Facebook ad. She completed the reservation and was later issued a digital key to her room, allowing her to bypass the front-desk check-in. Further, there was a personal greeting on the television in her room with a custom message and suggestions for entertainment based on her known preferences.

As one journalist pointed out at the press event, this level of detail from the hotel is not something that would thrill him (beyond the electronic check-in). Yet there doesn’t seem to be a way to opt out of that data collection (unless you live in the EU and are subject to GDPR rules).

Consumers may want more control

As it turns out, that reporter wasn’t alone. According to a survey conducted last year by The Economist Intelligence Unit in conjunction with ForgeRock, an identity management company, consumers are not just the willing sheep that tech companies may think we are.

The survey was conducted last October with 1,629 consumers participating from eight countries including Australia, China, France, Germany, Japan, South Korea, the UK and the US. It’s worth noting that survey questions were asked in the context of Internet of Things data, but it seems that the results could be more broadly applied to any types of data collection activities by brands.

There are a couple of interesting data points that brands should perhaps heed as they collect customer data in the fashion outlined by Adobe. In particular, as it relates to the central customer profile that Adobe and other marketing software companies are trying to build, respondents were asked to rate the statement, “I am uncomfortable with companies building a ‘profile’ of me to predict my consumer behaviour.” Thirty-nine percent strongly agreed, and another 35 percent somewhat agreed. That would suggest that consumers aren’t necessarily thrilled with this idea.

When presented with the statement, “Providing my personal information may have more drawbacks than benefits,” 32 percent strongly agreed and 41 percent somewhat agreed.

That would suggest that it is on the brand to make it clearer to consumers that they are collecting that data to provide a better overall experience, because it appears that consumers who answered this survey are not necessarily making that connection.

Perhaps it wasn’t a coincidence that at a press conference after the Day One keynote announcing the unified customer experience record, many questions from analysts and journalists focused on notions of privacy. If Adobe is helping companies gather and organize customer data, what role does it have in how its customers use that data, what role does the brand have, and how much control should consumers have over their own data?

These are questions we seem to be answering on the fly. The technology is here now or very soon will be, and wherever the data comes from, whether the web, mobile devices or the Internet of Things, we need to get a grip on the privacy implications — and we need to do it quickly. If consumers want more control as this survey suggests, maybe it’s time for companies to give it to them.

Posted Under: Tech News
Azure’s availability zones are now generally available

Posted by on 30 March, 2018


No matter what cloud you build on, if you want to build something that’s highly available, you’re always going to opt to put your applications and data in at least two physically separated regions. Otherwise, if a region goes down, your app goes down, too. All of the big clouds also offer a concept called ‘availability zones’ in their regions, giving developers the option to host their applications in two separate data centers within the same region for a bit of extra resilience. All big clouds, that is, except for Azure, which is only launching its availability zones feature into general availability today, after first announcing a beta last September.

Ahead of today’s launch, Julia White, Microsoft’s corporate VP for Azure, told me that the company’s design philosophy behind its data center network was always about servicing commercial customers with the widest possible range of regions to allow them to be close to their customers and to comply with local data sovereignty and privacy laws. That’s one of the reasons why Azure today offers more regions than any of its competitors, with 38 generally available regions and 12 announced ones.

“Microsoft started its infrastructure approach focused on enterprise organizations and built lots of regions because of that,” White said. “We didn’t pick this regional approach because it’s easy or because it’s simple, but because we believe this is what our customers really want.”

Every availability zone has its own network connection and power backup, so if one zone in a region goes down, the others should remain unaffected. A regional disaster could shut down all of the zones in a single region, though, so most businesses will surely want to keep their data in at least one additional region.
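For developers, zone placement is something you opt into explicitly when resources are created, so it is worth auditing where your instances actually live. Below is a minimal sketch — not from the article, and assuming the azure-identity and azure-mgmt-compute Python SDKs with a placeholder subscription ID — that lists each virtual machine in a subscription along with the availability zone it is pinned to, which makes it easy to check whether replicas of a service really span more than one zone:

# Minimal sketch: list every VM in an Azure subscription with its availability zone.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# DefaultAzureCredential can authenticate (for example via an earlier az login).
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"  # placeholder

compute = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for vm in compute.virtual_machines.list_all():
    # vm.zones is a list such as ["1"] for a zonal VM, or None for a regional VM
    zones = ", ".join(vm.zones) if vm.zones else "regional (no zone)"
    print(f"{vm.name:<30} {vm.location:<15} {zones}")

If every replica of a service reports the same zone (or no zone at all), it is still exposed to a single data center failure, which is exactly the scenario availability zones are meant to guard against.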

Posted Under: Tech News
Asana introduces Timeline, lays groundwork for AI-based monitoring as the “team brain” for productivity

Posted by on 30 March, 2018


When workflow management platform Asana announced a $75 million round of funding in January led by former Vice President Al Gore’s Generation Investment Management, the startup didn’t give much of an indication of what it planned to do with the money, or what it was that won over investors to a new $900 million valuation (a figure we’ve now confirmed with the company).

Now, Asana is taking the wraps off the next phase of its strategy. This week, the company announced a new feature it’s calling Timeline — composite, visual, and interactive maps of the various projects assigned to different people within a team, giving the group a wider view of all the work that needs to be completed and how the projects fit together, mapped out in a timeline format.

Timeline is a new premium product: Asana’s 35,000 paying users will be able to access it for no extra charge. Those who are among Asana’s millions of free users will have to upgrade to the premium tier to access it.

The Timeline that Asana is making is intended to be used in scenarios like product launches, marketing campaigns and event planning. It’s not a new piece of software where you have to duplicate work; each project automatically becomes a new segment on a team’s Timeline. Viewing projects through the Timeline allows users to identify if different segments are overlapping and adjust them accordingly.

Perhaps one of the most interesting aspects of the Timeline, however, is that it’s the first instalment of a bigger strategy that Asana plans to tackle over the next year to supercharge and evolve its service, making it the go-to platform for helping keep you focused on work, when you’re at work.

While Asana started out as a place where people go to manage the progress of projects, its ambition going forward is to become a platform that, with a machine-learning engine at the back end, will aim to manage a team’s and a company’s wider productivity and workload, regardless of whether they are actively in the Asana app or not.

“The long term vision is to marry computer intelligence with human intelligence to run entire companies,” Asana co-founder Justin Rosenstein said in an interview. “This is the vision that got investors excited.”

The bigger product — the name has not been revealed — will include a number of different features. Some that Rosenstein has let me see in preview include the ability for people to have conversations about specific projects — think messaging channels but less dynamic and more contained. And it seems that Asana also has designs to move into the area of employee monitoring: it has also been working on a widget of sorts that installs on your computer and watches you work, with the aim of making you more efficient.

“Asana becomes a team brain to keep everyone focused,” said Rosenstein.

Given that Asana’s two co-founders, Dustin Moskovitz and Rosenstein, previously had close ties to Facebook — Moskovitz as a co-founder and Rosenstein as its early engineering lead — you might wonder if Timeline and the rest of its new company productivity engine might be bringing more social elements to the table (or desk, as the case may be).

In fact, it’s quite the opposite.

Rosenstein may have to his credit the creation of the “like” button and other iconic parts of the world’s biggest social network, but he has in more recent times become a very outspoken critic of the distracting effects of services like Facebook’s. It’s part of a bigger trend hitting Silicon Valley, where a number of leading players have, in a wave of mea culpa, turned against some of the bigger innovations particularly in social media.

Some have even clubbed together to form a new organization called the Center for Humane Technology, whose motto is “Reversing the digital attention crisis and realigning technology with humanity’s best interests.” Rosenstein is an advisor, although when I tried to raise the issue of the backlash that has hit Facebook on multiple fronts, he responded pretty flatly, “It’s not something I want to talk about right now.” (That’s what keeping focussed is all about, I guess.)

Asana, essentially, is taking the belief that social can become counterproductive when you have to get something done, and applying it to the enterprise environment.

This is an interesting twist, given that one of the bigger themes in enterprise IT over the last several years has been how to turn business apps and software more “social” — tapping into some of the mechanics and popularity of social networking to encourage employees to collaborate and communicate more with each other even when (as is often the case) they are not in the same physical space.

But social working might not be for everyone, all the time. Slack, the wildly popular workplace chat platform that interconnects users with each other and just about every enterprise and business app, is notable for producing “a gazillion notifications”, in Rosenstein’s words, leading to distraction from actually getting things done. “I’m not saying services like Slack can’t be useful,” he explained. (Slack is also an integration partner of Asana’s.) “But companies are realising that, to collaborate effectively, they need more than communication. They need content and work management. I think that Slack has a lot of useful purposes but I don’t know if all of it is good all the time.”

The “team brain” role that Asana envisions may be all about boosting productivity by learning about you and reducing distraction — you will get alerts, but you (and presumably the brain) prioritise which ones you get, if any at all — but interestingly it has kept another feature characteristic of a lot of social networking services: amassing data about your activities and using that to optimise engagement. As Rosenstein described it, Asana will soon be able to track what you are working on, and how you work on it, to figure out your working patterns.

The idea is that, by using machine learning algorithms, you can learn what a person does quickly, and what might take longer, to help plan that person’s tasks better, and ultimately make that person more productive. Eventually, the system will be able to suggest to you what you should be working on and when.

All of that might sound like music to managers’ ears, but for some, employee monitoring programs sound a little alarming for how closely they monitor your every move. Given the recent wave of attention that social media services have had for all the data they collect, it will be interesting to see how enterprise services like this get adopted and viewed. It’s also not at all clear how these sorts of programs will sit in respect of new directives like GDPR in Europe, which put into place a new set of rules for how any provider of an internet service needs to inform users of how their data is used, and any data collecting needs to have a clear business purpose.

Still, with clearly a different aim in mind — helping you work better — the end could justify the means for some, not just for bosses, but for people who might feel overwhelmed with what is on their work plate every day. “When you come in in the morning, you might have a list [many things] to do today,” Rosenstein said. “We take over your desktop to show the one thing you need to do.”

Posted Under: Tech News
IoT devices could be next customer data frontier

Posted by on 30 March, 2018


At the Adobe Summit this week in Las Vegas, the company introduced what could be the ultimate customer experience construct, a customer experience system of record that pulls in information, not just from Adobe tools, but wherever it lives. In many ways it marked a new period in the notion of customer experience management, putting it front and center of the marketing strategy.

Adobe was not alone, of course. Salesforce, with its three-headed monster, the sales, marketing and service clouds, was also thinking of a similar idea. In fact, it spent $6.5 billion last week to buy MuleSoft to act as a data integration layer to access customer information from across the enterprise software stack, whether on prem, in the cloud, or inside or outside of Salesforce. And it announced the Salesforce Integration Cloud this week to make use of its newest company.

As data collection takes center stage, we actually could be on the edge of yet another data revolution, one that could be more profound than even the web and mobile were before it. That is…the Internet of Things.

Here comes IoT

There are three main pieces to that IoT revolution at the moment from a consumer perspective. First of all, there is the smart speaker like the Amazon Echo or Google Home. These provide a way for humans to interact verbally with machines, a notion that is only now possible through the marriage of all this data, sheer (and cheap) compute power and the AI algorithms that fuel all of it.

Next, we have the idea of a connected car, one separate from the self-driving car. Much like the smart speaker, humans can interact with the car to find directions and recommendations, and that leaves a data trail in its wake. Finally, we have sensors like iBeacons sitting in stores, providing retailers with a world of information about a customer’s journey through the store — what they like or don’t like, what they pick up, what they try on and so forth.

There are very likely a host of other categories too, and all of this information is data that needs to be processed and understood just like any other signals coming from customers, but it also has unique characteristics around the volume and velocity of this data — it is truly big data with all of the issues inherent in processing that amount of data.

That means it needs to be ingested, digested and incorporated into that central customer record-keeping system to drive the content and experiences you need to create to keep your customers happy — or so the marketing software companies tell us, at least. (We also need to consider the privacy implications of such a record, but that is the subject for another article.)

Building a better relationship

Regardless of the vendor, all of this is about understanding the customer better, providing a central data-gathering system with the hope of giving people exactly what they want. We are no longer a generic mass of consumers. We are instead individuals with different needs, desires and requirements, and the best way to please us, they say, is to understand us so well that the brand can deliver the perfect experience at exactly the right moment.


That involves listening to the digital signals we give off without even thinking about it. We carry mobile, connected computers in our pockets and they send out a variety of information about our whereabouts and what we are doing. Social media acts as a broadcast system that brands can tap into to better understand us (or so the story goes).

Part of what Adobe, Salesforce and others can deliver is a way to gather that information, pull it together into this uber record-keeping system and apply a level of machine learning and intelligence to help further the brand’s ultimate goals of serving a customer of one and delivering an efficient (and perhaps even pleasurable) experience.

Getting on board

At an Adobe Summit session this week on IoT (which I moderated), the audience was polled a couple of times. In one show of hands, they were asked how many owned a smart speaker, and about three quarters indicated they owned at least one; but when asked how many were developing applications for these same devices, only a handful of hands went up. This was in a room full of marketers, mind you.


That suggests that there is a disconnect between usage and tools to take advantage of them. The same could be said for the other IoT data sources, the car and sensor tech, or any other connected consumer device. Just as we created a set of tools to capture and understand the data coming from mobile apps and the web, we need to create the same thing for all of these IoT sources.

That means coming up with creative ways to take advantage of another interaction (and data collection) point. This is an entirely new frontier with all of the opportunity involved in that, and that suggests startups and established companies alike need to be thinking about solutions to help companies do just that.

Posted Under: Tech News
Hewlett Packard Enterprise to move HQ to San Jose

Posted by on 28 March, 2018


Hewlett Packard Enterprise is moving north from Palo Alto to San Jose. The company will relocate 1,000 employees to a 220,000-square-foot space in late 2018. HPE was spun off from Hewlett-Packard in 2015 and is focused on servers and storage.

This news comes months after HPE announced a different plan in which the company was moving to Santa Clara, where Aruba Networks, a company it previously acquired, is headquartered.

HPE is going to occupy six floors in San Jose’s America Center, which is located near a forthcoming Berryessa BART station.

This move is the latest win for San Jose. Google recently announced it would be moving into the city in the coming years. According to a report in The Mercury News, the city of San Jose did not offer HPE any financial incentives.

Posted Under: Tech News
Microsoft can ban you for using offensive language

Posted by on 28 March, 2018


A report by CSOOnline presented the possibility that Microsoft would be able to ban “offensive language” from Skype, Xbox and, inexplicably, Office. The post, which cites Microsoft’s new terms of use, said that the company would not allow users to “publicly display or use the Services to share inappropriate content or material (involving, for example, nudity, bestiality, pornography, offensive language, graphic violence, or criminal activity)” and that you could lose your Xbox Live membership if you curse out a kid in Overwatch.

“We are committed to providing our customers with safe and secure experiences while using our services. The recent changes to the Microsoft Service Agreement’s Code of Conduct provide transparency on how we respond to customer reports of inappropriate public content,” said a Microsoft spokesperson. The company notes that “Microsoft Agents” do not watch Skype calls and that they can only respond to complaints with clear evidence of abuse. The changes, which go into effect May 1, allow Microsoft to ban you from its services if you’re found passing “inappropriate content” or using “offensive language.”

These new rules give Microsoft more power over abusive users and it seems like Microsoft is cracking down on bad behavior on its platforms. This is good news for victims of abuse in private communications channels on Microsoft products and may give trolls pause before they yell something about your mother on Xbox. We can only dare to dream.

Posted Under: Tech News
GoDaddy to move most of its infrastructure to AWS, not including domain management for its 75M domains

Posted by on 28 March, 2018


It really is Go Time for GoDaddy. Amazon’s cloud services arm AWS and GoDaddy, the domain registration and management giant, may have competed in the past when it comes to providing small businesses with web services, but today the two took a step closer together. AWS said that GoDaddy is now migrating “the majority” of its infrastructure to AWS in a multi-year deal that will also see AWS become a partner in selling some of GoDaddy’s products — namely Managed WordPress and GoCentral, for managing domains and building and running websites.

The deal — financial terms of which are not being disclosed — is wide-ranging, but it will not include taking on domain management for GoDaddy’s 75 million domains currently under management, a spokesperson for the company confirmed to me.

“GoDaddy is not migrating the domains it manages to AWS,” said Dan Race, GoDaddy’s VP of communications. “GoDaddy will continue to manage all customer domains. Domain management is obviously a core business for GoDaddy.”

The move underscores Amazon’s continuing expansion as a powerhouse in cloud hosting and related services, providing a one-stop shop for customers who come for one product and stay for everything else (not unlike its retail strategy in that regard). Also, it is a reminder of how the economies of scale in the cloud business make it financially challenging to compete if you are not already one of the big players, or lack deep pockets to sustain your business as you look to grow. GoDaddy has been a direct victim of those economics: just last summer, GoDaddy killed off Cloud Servers, its AWS-style business for building, testing and scaling cloud services on GoDaddy infrastructure.

The AWS deal also highlights how GoDaddy is trimming operational costs to improve its overall balance sheet under Scott Wagner, the COO who took over as CEO from Blake Irving at the beginning of this year. 

“As a technology provider with more than 17 million customers, it was very important for GoDaddy to select a cloud provider with deep experience in delivering a highly reliable global infrastructure, as well as an unmatched track record of technology innovation, to support our rapidly expanding business,” said Charles Beadnall, CTO at GoDaddy, in a statement.

“AWS provides a superior global footprint and set of cloud capabilities, which is why we selected them to meet our needs today and into the future. By operating on AWS, we’ll be able to innovate at the speed and scale we need to deliver powerful new tools that will help our customers run their own ventures and be successful online,” he continued.

AWS said that GoDaddy will be using AWS’s Elastic Container Service for Kubernetes and Elastic Compute Cloud P3 instances, as well as machine learning, analytics, and other database-related and container technology. Race told TechCrunch that the infrastructure components that the company is migrating to AWS currently run at GoDaddy but will be gradually moved away as part of its multi-year migration.

“As a large, high-growth business, GoDaddy will be able to leverage AWS to innovate for its customers around the world,” said Mike Clayville, VP, worldwide commercial sales at AWS, in a statement. “Our industry-leading services will enable GoDaddy to leverage emerging technologies like machine learning, quickly test ideas, and deliver new tools and solutions to their customers with greater frequency. We look forward to collaborating with GoDaddy as they build anew in the cloud and innovate new solutions to help people turn their ideas into reality online.”

 

Posted Under: Tech News