Monthly Archives: June 2018

Microsoft program provides a decade of updates for Windows IoT devices

Posted by on 6 June, 2018

This post was originally published on this site

If you have an essential Internet of Things device running Windows 10 IoT Core, you don’t want to be worrying about security and OS patches years down the road. Microsoft wants to help customers running these kinds of devices with a new program that guarantees 10 years of updates.

The idea is that as third-party partners build applications on top of the Windows 10 IoT Core Services, these OEMs, who create the apps, can pay Microsoft to guarantee updates for these devices for a decade. This can help assure customers that they won’t be vulnerable to attack on these critical systems from unpatched applications.

The service does more than provide updates, though. It also gives OEMs the ability to manage those updates and assess device health.

“The Windows IoT Core service offering is enabling partners to commercialize secure IoT devices backed by industry-leading support. And so device makers will have the ability to manage updates for the OS, for the apps and for the settings for OEM-specific files,” explained Dinesh Narayanan, Microsoft’s director of business development for emerging markets.

It gives OEMs creating Windows-powered applications on machines like healthcare devices or ATMs the ability to manage them over an extended period. That’s particularly important as these devices tend to have a longer usage period than, say, a PC or tablet. “We want to extend support and commit to that support over the long haul for these devices that have a longer life cycle,” Narayanan said.

Beyond the longevity, the service also provides customers with access to the Device Update Center where they can control and customize how and when the devices get updated. It also includes another level of security called Device Health Attestation that allows the OEMs to evaluate the trustworthiness of the devices before they update them using a third party service.

All of this is designed to give Microsoft a foothold in the growing IoT space and to provide an operating system for these devices as they proliferate. While predictions vary dramatically, Gartner has predicted that at least 20 billion connected devices will be online in 2020.

While not all of these will be powered by Windows, or require advanced management capabilities, customers whose vendors use this program can be assured that their devices can be managed and kept up to date. And when it comes to the Internet of Things, chances are that’s going to be critical.

Posted Under: Tech News
SAP gives CRM another shot with new cloud-based suite

Posted by on 5 June, 2018

This post was originally published on this site

Customer Relationship Management (CRM) is a mature market with a clear leader in Salesforce and a bunch of other enterprise players like Microsoft, Oracle and SAP vying for position. SAP decided to take another shot today when it released a new business products suite called SAP C/4HANA. (Yeah, catchy, I know.)

SAP C/4HANA pulls together several acquisitions from the last several years. It started in 2013 when SAP bought Hybris for around a billion dollars, which gave it a logistics tracking piece. Then last year it got Gigya for $350 million, giving it a way to track customer identity. This year it bought the final piece when it paid $2.4 billion for CallidusCloud, adding a configure, price, quote (CPQ) capability.

SAP has taken these three pieces and packaged them together into a customer relationship management suite. The company sees the term much more broadly than simply tracking a database of names and vital information on customers. It hopes these products will give its customers a way to handle consumer data protection, marketing, commerce, sales and customer service.

They see this approach as different, but it’s really more of what the other players are doing by packaging sales, service and marketing into a single platform. “The legacy CRM systems are all about sales; SAP C/4HANA is all about the consumer. We recognize that every part of a business needs to be focused on a single view of the consumer. When you connect all SAP applications together in an intelligent cloud suite, the demand chain directly fuels the behaviors of the supply chain,” CEO Bill McDermott said in a statement.

It’s interesting that McDermott goes after legacy CRM tools because his company has offered its share of them over the years, but its market share has been headed in the wrong direction. This new cloud-based package is designed to change that. If you can’t build it, you can buy it, and that’s what SAP has done here.

Brent Leary, owner at CRM Essentials, who has been watching this market for many years, says that while SAP has a big back-office customer base in ERP, it’s going to be tough to pull customers back to SAP as a CRM provider. “I think their huge base of ERP customers provides them with an opportunity to begin making inroads, but it will be tough as mindshare for CRM/Customer Engagement has moved away from SAP,” he told TechCrunch.

He says that it will be important for this new product to find its niche in a defined market. “It will be imperative going forward for SAP to find spots to ‘own’ in the minds of corporate buyers in order to optimize their chances of success against their main competitors,” he said.

It’s obviously not going to be easy, but SAP has used its cash to buy some companies and give it another shot. Time will tell if it was money well spent.

Posted Under: Tech News
Hotel management platform Mews closes €6m Series A

Posted by on 5 June, 2018

This post was originally published on this site

Before we automate hotels with AI and robots (which will almost certainly happen), the first wave of this revolution will be brought by the software that runs hotels still staffed by humans.

Thus it is that Mews, the hotel property management platform, has closed a €6m Series A funding round. The round was led by Notion Capital, with participation from HenQ and Thayer Ventures.

The funding will be used to accelerate the business and open new offices around the world to support its global customer base.

Mews’ platform automates check-ins and payments, and also covers booking management and staff training. It’s designed to be an open platform, allowing other tools and apps to connect through its API. So, think ‘Slack for hotels’, perhaps.

Mews was founded in 2012 by entrepreneur and ex-hotelier Richard Valtr. Customers include Different Hotels, Machefert, Clink and Wombats, representing 43,000 beds across 350 properties.

Valtr said: “Mews’ mission is to help hotels and hostels automate their operations so they can focus on their guests. We want to build the nervous system for hotels that all apps and tools for both guests and hosts can be plugged into. Until recently hoteliers were forced to rely upon a closed one-stop-shop PMS offered up by incumbent players who have held a luddite attitude towards the hospitality industry for years.”

Jos White, General Partner at Notion commented: “We think the hotel industry is at a tipping point in terms of the way it uses technology to better manage their operations and transform the guest experience.”

Posted Under: Tech News
Egnyte releases one-step GDPR compliance solution

Posted by on 4 June, 2018

This post was originally published on this site

Egnyte has always had the goal of protecting data and files wherever they live, whether on-premises or in the cloud. Today, the company announced a new feature designed to help customers comply, in a straightforward fashion, with the GDPR privacy regulations that went into effect in Europe last week.

You start by simply telling Egnyte that you want to turn on “Identify sensitive content.” You then select which sets of rules you want to check against for compliance, including GDPR. Once you do this, the system scans all of your repositories to find content deemed sensitive under GDPR rules (or whichever other rules you have selected).


It then gives you a list of files and marks each with a risk factor from 1 to 9, with 1 being the lowest level of risk and 9 the highest. You can configure the program to expose whichever files you wish based on your own level of compliance tolerance. So, for instance, you could ask to see any files with a risk level of seven or higher.

“In essence, it’s a data security and governance solution for unstructured data, and we are approaching that at the repository level. The goal is to provide visibility, control and protection of that information in any unstructured repository,” Jeff Sizemore, VP of governance for Egnyte Protect, told TechCrunch.


Sizemore says that Egnyte weighs the sensitivity of the data against the danger that it could be exposed and leave a customer in violation of GDPR rules. “We look at things like public links into groups, which is basically just governance of the data, making sure nothing is wide open from a file share perspective. We also look at how the information is being shared,” Sizemore said. A social security number being shared internally is a lot less risky than a thousand social security numbers being shared in a public link.
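
Egnyte hasn’t published its scoring model, but the mechanics are easy to picture. Below is a minimal sketch in which the field names, weights and scale mapping are assumptions rather than Egnyte’s actual logic; it combines content sensitivity with exposure into a 1-9 risk factor and then filters by a threshold such as seven:

```python
# Illustrative only: field names and weights are assumptions, not Egnyte's model.

# A public link is far riskier than content shared internally.
EXPOSURE_WEIGHT = {"internal": 1, "external_group": 2, "public_link": 3}

def risk_factor(sensitive_hits: int, exposure: str) -> int:
    """Combine how much sensitive content a file holds with how widely it is shared."""
    volume = min(sensitive_hits, 3)             # one SSN vs. a thousand SSNs
    score = volume * EXPOSURE_WEIGHT[exposure]
    return max(1, min(score, 9))                # clamp to the 1-9 scale

scan_results = [
    {"path": "hr/payroll.xlsx", "sensitive_hits": 1000, "exposure": "public_link"},
    {"path": "notes/meeting.txt", "sensitive_hits": 1, "exposure": "internal"},
]

THRESHOLD = 7  # "show me any file with a risk level of seven or higher"
flagged = [r["path"] for r in scan_results
           if risk_factor(r["sensitive_hits"], r["exposure"]) >= THRESHOLD]
print(flagged)  # ['hr/payroll.xlsx']
```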

The service covers 28 nations and 24 languages and it’s pre-configured to understand what data is considered sensitive by country and language. “We already have all the mapping and all the languages sitting underneath these policies. We are literally going into the data and actually scanning through and looking for GDPR-relevant data that’s in the scope of Article 40.”

The new service is generally available on Tuesday morning. The company will be making an announcement at the InfoSecurity Conference in London. The service had been in beta prior to this.

Posted Under: Tech News
The new Gmail will roll out to all users next month

Posted by on 4 June, 2018

This post was originally published on this site

Google today announced that the new version of Gmail will launch into general availability and become available to all G Suite users next month. The exact date remains up in the air, but my guess is that it’ll be sooner rather than later.

The new Gmail offers features like message snoozing, attachment previews, a sidebar for both Google apps like Calendar and third-party services like Trello, offline support, confidential messages that self-destruct after a set time, and more. It’s also the only edition of Gmail that currently allows you to try out Smart Compose, which tries to complete your sentences for you.

Here is what the rollout will look like for G Suite users. Google didn’t detail what the plan for regular users will look like, but if you’re not a G Suite user, you can already try the new Gmail today anyway, and chances are stragglers will be switched over to the new version at a similar pace as G Suite users.

Starting in July, G Suite admins will be able to immediately transition all of their users to the new Gmail, but users can still opt out for another twelve weeks. After that time is up, all G Suite users will move to the new Gmail experience.

Admins can also give users the option to try the new Gmail at their own pace or — and this is the default setting — they can just wait another four weeks and then Google will automatically give users the option to opt in.

Eight weeks after general availability, so sometime in September, all users will be migrated automatically but can still opt out for another four weeks.
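
Laid out as simple date arithmetic, the schedule is easier to follow. Here’s a rough sketch that assumes a hypothetical GA date of July 16, since Google has only said “next month”:

```python
from datetime import date, timedelta

ga = date(2018, 7, 16)  # assumed GA date; Google hasn't named the exact day

milestones = {
    "GA: admins can move users over (users may opt out)": ga,
    "Default: Google offers users the opt-in":            ga + timedelta(weeks=4),
    "All G Suite users migrated automatically":           ga + timedelta(weeks=8),
    "Opt-out window closes for everyone":                 ga + timedelta(weeks=12),
}

for label, when in milestones.items():
    print(f"{when:%d %B %Y}  {label}")
```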

That all sounds a bit more complicated than necessary, but the main gist here is: chances are you’ll get access to the new Gmail next month and if you hate it, you can still opt out for a bit longer. Then, if you still hate it, you are out of luck because come October, you will be using the new Gmail no matter what.

Posted Under: Tech News
Microsoft Azure will soon offer machines with up to 12 TB of memory

Posted by on 4 June, 2018

This post was originally published on this site

Do you have an application that needs a lot of memory? Maybe as much as 12 terabytes of memory? Well, you’re in luck because Microsoft Azure will soon offer virtual machines with just that much RAM, based on Intel’s Xeon Scalable servers.

The company made this announcement in concert with the launch of a number of other virtual machine (VM) types that are specifically geared toward running high-memory workloads — and the standard use case for these is running the SAP HANA in-memory database service.

So in addition to this massive new 12 TB VM, Microsoft is also launching a new 192 GB machine that extends the lower end of HANA-optimized machines on Azure, as well as a number of other HANA options that scale across multiple VMs and can offer combined memory sizes of up to 18 TB.

Another new feature of Azure launching today is Standard SSDs. These will offer Azure users a new option for running entry-level production workloads that require consistent disk performance and throughput without the full price of what are now called “premium SSDs.” The Standard SSDs won’t offer the same level of performance, but Microsoft promises that developers will still get improved latency, reliability and scalability compared to standard hard disks in its cloud.

Posted Under: Tech News
Microsoft promises to keep GitHub independent and open

Posted by on 4 June, 2018

This post was originally published on this site

Microsoft today announced its plans to acquire GitHub for $7.5 billion in stock. Unsurprisingly, that sent a few shock waves through the developer community, which still often eyes Microsoft with considerable unease. During a conference call this morning, Microsoft CEO Satya Nadella, incoming GitHub CEO (and Xamarin founder) Nat Friedman and GitHub co-founder and outgoing CEO Chris Wanstrath laid out the plans for GitHub’s future under Microsoft.

The core message everybody on today’s call stressed was that GitHub will continue to operate as an independent company. That’s very much the approach Microsoft took with its acquisition of LinkedIn, but to some degree, it’s also an admission that Microsoft is aware of its reputation among many of the developers who call GitHub their home. GitHub will remain an open platform that any developer can plug into and extend, Microsoft promises. It’ll support any cloud and any device.

Unsurprisingly, while the core of GitHub won’t change, Microsoft does plan to extend GitHub’s enterprise services and integrate them with its own sales and partner channels. And Nadella noted that the company will use GitHub to bring Microsoft’s developer tools and services “to new audiences.”

With Nat Friedman taking over as CEO, GitHub will have a respected technologist at the helm. Microsoft’s acquisition and integration of Xamarin has, at least from the outside, been a success (and Friedman himself always seems very happy about the outcome when I talk to him), so I think this bodes quite well for GitHub. After joining Microsoft, Friedman ran the developer services team at the company. Wanstrath, who only took over the CEO role again after GitHub’s last CEO was ousted following a harassment scandal at the company, had long said that he wanted to step down and take a more active product role. And that’s what’s happening now that Friedman is taking over. Wanstrath will become a technical fellow and work on “strategic software initiatives” at Microsoft.

Indeed, during an interview after the acquisition was announced, Friedman repeatedly noted that he thinks GitHub is the most important developer company today — and it turns out that he started advocating for a closer relationship between the two companies right after he joined Microsoft two years ago.

During today’s press call, Friedman also stressed Microsoft’s commitment to keeping GitHub as open as it is today — but he also plans to expand the service and its community. “We want to bring more developers and more capabilities to GitHub,” he said. “Because as a network and as a group of people in a community, GitHub is stronger, the bigger it is.”

Friedman echoed that in our interview later in the day and noted that he expected the developer community to be skeptical of the mashup of these two companies. “There is always healthy skepticism in the developer community,” he told me. “I would ask developers to look at the last few years of Microsoft history and really honestly Microsoft’s transformation into an open source company.” He asked developers to judge Microsoft by that and noted that what really matters, of course, is that the company will follow through on the promises it made today.

As for the product itself, Friedman noted that everything GitHub does should be about making a developer’s life easier. And to get started, that’ll mean making developing in the cloud easier. “We think broadly about the new and compelling types of ways that we can integrate cloud services into GitHub,” he noted. “And this doesn’t just apply to our cloud. GitHub is an open platform. So we have the ability for anyone to plug their cloud services into GitHub, and make it easier for you to go from code to cloud. And it extends beyond the cloud as well. Code to cloud, code to mobile, code to edge device, code to IoT. Every workflow that a developer wants to pursue, we will support.”
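
As a concrete illustration of what “plugging into” GitHub looks like in practice, here is a minimal sketch that registers a webhook through GitHub’s public REST API so an outside service gets notified on every push. The repository, receiver URL and token are placeholders, not anything Microsoft or GitHub announced:

```python
import requests

OWNER, REPO = "example-org", "example-repo"   # hypothetical repository
TOKEN = "<personal-access-token>"             # a real token is required

resp = requests.post(
    f"https://api.github.com/repos/{OWNER}/{REPO}/hooks",
    headers={"Authorization": f"token {TOKEN}"},
    json={
        "name": "web",                         # the generic webhook type
        "events": ["push"],                    # fire on every push
        "config": {
            "url": "https://ci.example.com/github-events",  # hypothetical receiver
            "content_type": "json",
        },
    },
)
resp.raise_for_status()
print("webhook id:", resp.json()["id"])
```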

Another area the company will work on is the GitHub Marketplace. Microsoft says that it will offer all of its developer tools and services in the GitHub Marketplace.

And unsurprisingly, VS Code, Microsoft’s free and open source code editor, will get deeply integrated GitHub support.

“Our vision is really all about empowering developers and creating a home where you can use any language, any operating system, any cloud, any device for every developer, whether you’re a student, a hobbyist, a large company, a startup or anything in between. GitHub is the home for all developers,” said Friedman. In our interview, he also stressed that his focus will be on making “GitHub better at making GitHub” and that he plans to do so by bringing Microsoft’s resources and infrastructure to the code hosting service, while at the same time leaving it to operate independently.

It’s unclear whether all of these commitments made today will ease developers’ fears of losing GitHub as a relatively neutral third party in the ecosystem.

Nadella, who is surely aware of this, addressed it directly today. “We recognize the responsibility we take on with this agreement,” he said. “We are committed to being stewards of the GitHub community, which will retain its developer-first ethos, operate independently and remain an open platform. We will always listen to developer feedback and invest in both fundamentals as well as new capabilities once the acquisition closes.”

In his prepared remarks, Nadella also stressed Microsoft’s heritage as a developer-centric company and noted that it is already the most active organization on GitHub. But more importantly, he addressed Microsoft’s role in the open source community, too. “We have always loved developers, and we love open source developers,” he said. “We’ve been on a journey ourselves with open source and the open source community. Today, we are all in with open source. We are active in the open source ecosystem. We contribute to open source projects, and some of our most vibrant developer tools and frameworks are open source. When it comes to our commitment to open source, judge us by the actions we have taken in the recent past, our actions today and in the future.”

Posted Under: Tech News
How Yelp (mostly) shut down its own data centers and moved to AWS

Posted by on 4 June, 2018

This post was originally published on this site

Back in 2013, Yelp was a 9-year-old company built on a set of internal systems. It was coming to the realization that running its own data centers might not be the most efficient way to run a business that was continuing to scale rapidly. At the same time, the company understood that the tech world had changed dramatically since its 2004 launch, and that it needed to transform the underlying technology to a more modern approach.

That’s a lot to take on in one bite, but it wasn’t something that happened willy-nilly or overnight, says Jason Yellen, SVP of engineering at Yelp. The vast majority of the company’s data was being processed in a massive Python repository that was getting bigger all the time. The conversation about shifting to a microservices architecture began in 2012.

The company was also running the massive Yelp application inside its own data centers, and as it grew it was increasingly limited by the long lead times required to procure new hardware and get it online. It saw this as an unsustainable situation over the long term and began a process of transforming from running a huge monolithic application on-premises to one built on microservices running in the cloud. It was quite a journey.

The data center conundrum

Yellen described the classic scenario of a company that could benefit from a shift to the cloud. Yelp had a small operations team dedicated to setting up new machines. When engineering anticipated a new resource requirement, it had to give the operations team sufficient lead time to order new servers and get them up and running. That’s certainly not the most efficient way to deal with a resource problem, and one the cloud would have solved easily.

“We kept running into a bottleneck, I was running a chunk of the search team [at the time] and I had to project capacity out to 6-9 months. Then it would take a few months to order machines and another few months to set them up,” Yellen explained. He emphasized that the team charged with getting these machines going was working hard, but there were too few people and too many demands and something had to give.

“We were on this cusp. We could have scaled up that team dramatically and gotten [better] at building data centers and buying servers and doing that really fast, but we were hearing a lot about AWS and the advantages there,” Yellen explained.

To the cloud!

They looked at the cloud market landscape in 2013 and AWS was the clear leader technologically. That meant moving some part of their operations to EC2. Unfortunately, that exposed a new problem: how to manage this new infrastructure in the cloud. This was before the notion of cloud-native computing even existed. There was no Kubernetes. Sure, Google was operating in a cloud-native fashion in-house, but it was not really an option for most companies without a huge team of engineers.

Yelp needed to explore new ways of managing operations in a hybrid cloud environment where some of the applications and data lived in the cloud and some lived in their data center. It was not an easy problem to solve in 2013 and Yelp had to be creative to make it work.

That meant remaining with one foot in the public cloud and the other in a private data center. One tool that helped ease the transition was AWS Direct Connect, which was released the prior year and enabled Yelp to directly connect from their data center to the cloud.

Laying the groundwork

About this time, as they were figuring out how AWS works, another revolutionary technological change was occurring when Docker emerged and began mainstreaming the notion of containerization. “That’s another thing that’s been revolutionary. We could suddenly decouple the context of the running program from the machine it’s running on. Docker gives you this container, and is much lighter weight than virtualization and running full operating systems on a machine,” Yellen explained.
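
That decoupling is easy to demonstrate with Docker’s Python SDK: the same image runs identically whether the host is a machine in Yelp’s data center or an EC2 instance, because the program’s whole context travels with the image. A minimal sketch follows, with an image and command that are illustrative rather than anything Yelp actually ran:

```python
import docker  # pip install docker

# Talk to whichever Docker daemon is local: a data-center host or an EC2 VM.
client = docker.from_env()

# The interpreter, libraries and OS userland are all inside the image, so the
# host only needs a Docker daemon -- no per-machine setup.
output = client.containers.run(
    "python:3.6-slim",                                    # illustrative image
    ["python", "-c", "print('hello from a container')"],
    remove=True,                                          # delete the container when it exits
)
print(output.decode())
```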

Another thing that was happening was the emergence of the open source data center operating system called Mesos, which offered a way to treat the data center as a single pool of resources. They could apply this notion to wherever the data and applications lived. Mesos also offered a container orchestration tool called Marathon in the days before Kubernetes emerged as a popular way of dealing with this same issue.

“We liked Mesos as a resource allocation framework. It abstracted away the fleet of machines. Mesos abstracts many machines and controls programs across them. Marathon holds guarantees about what containers are running where. We could stitch it all together into this clear opinionated interface,” he said.
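
Marathon’s “clear opinionated interface” boils down to a small REST API: you POST an app definition saying which container image to run and how many copies, and Marathon keeps that many running somewhere on the Mesos cluster. Here is a hedged sketch against a hypothetical Marathon endpoint; the service name, image and resource numbers are made up:

```python
import requests

MARATHON = "http://marathon.example.com:8080"   # hypothetical endpoint

# A minimal Marathon app definition: run three copies of a Docker image,
# each with half a CPU and 256 MB of memory.
app = {
    "id": "/search/query-service",               # illustrative service name
    "instances": 3,
    "cpus": 0.5,
    "mem": 256,
    "container": {
        "type": "DOCKER",
        "docker": {"image": "example/query-service:1.0"},
    },
}

resp = requests.post(f"{MARATHON}/v2/apps", json=app)
resp.raise_for_status()
print("deployment accepted:", resp.json().get("deployments"))
```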

Pulling it all together

While all this was happening, Yelp began exploring how to move to the cloud and use a Platform as a Service approach for the software layer. The problem was that, at the time they started, there wasn’t really any viable way to do this. In the buy-versus-build decision making that goes on in large transformations like this one, they felt they had little choice but to build that platform layer themselves.

In late 2013 they began to pull together the idea of building this platform on top of Mesos and Docker, giving it the name PaaSTA, an internal joke that stood for Platform as a Service, Totally Awesome. It became simply known as Pasta.


The project had the ambitious goal of making Yelp’s infrastructure work as a single fabric, in a cloud-native fashion, before most anyone outside of Google was using that term. Pasta developed slowly, with the first developer piece coming online in August 2014 and the first production service later that year, in December. The company open sourced the technology the following year.

“Pasta gave us the interface between the applications and development teams. Operations had to make sure Pasta is up and running, while Development was responsible for implementing containers that implemented the interface,” Yellen said.

Moving deeper into the public cloud

While Yelp was busy building these internal systems, AWS wasn’t sitting still. It was also improving its offerings with new instance types, new functionality and better APIs and tooling. Yellen reports this helped immensely as Yelp began a more complete move to the cloud.

He says there were a couple of tipping points as they moved more and more of the application to AWS — including, eventually, the master database. This all happened in more recent years as they understood better how to use Pasta to control processes wherever they lived. What’s more, he said that adoption of other AWS services became possible thanks to tighter integration between the in-house data centers and AWS.


The first tipping point came around 2016 as all new services were configured for the cloud. He said they began to get much better at managing applications and infrastructure in AWS and their thinking shifted from how to migrate to AWS to how to operate and manage it.

Perhaps the biggest step in this years-long transformation came last summer when Yelp moved its master database from its own data center to AWS. “This was the last thing we needed to move over. Otherwise it’s clean up. As of 2018, we are serving zero production traffic through physical data centers,” he said. While they still have two data centers, they are getting to the point where they have only the minimum hardware required to run the network backbone.

Yellen said that before all of this was in place it took anywhere from two weeks to a month to get a service up and running; now it takes just a couple of minutes. He says any loss of control from moving to the cloud has been easily offset by the convenience of using cloud infrastructure. “We get to focus on the things where we add value,” he said — and that’s the goal of every company.

Posted Under: Tech News
Helm moves out of Kubernetes’ shadow to become stand-alone project

Posted by on 1 June, 2018

This post was originally published on this site

Helm is an open source project that enables developers to create packages of containerized apps to make installation much simpler. Up until now, it was a sub-project of Kubernetes, the popular container orchestration tool, but as of today it is a stand-alone project.

Both Kubernetes and Helm are projects managed by the Cloud Native Computing Foundation (CNCF). The CNCF’s Technical Oversight Committee approved the move earlier this week. Dan Kohn, executive director of the CNCF, says the two projects are closely aligned, so it made sense for Helm to be a sub-project up until now.

“What’s nice about Helm is that it’s just an application on top of Kubernetes. Kubernetes is an API and Helm accesses that API. If you want to install this [package], you access the Kubernetes API, and it pulls this many containers and pods and [it handles] all of the steps involved to do that,” Kohn explained.

This ability to package up a set of requirements allows you to repeat the installation process in a consistent way. “Helm addresses a common user need of deploying applications to Kubernetes by making their configurations reusable,” Brian Grant, principal engineer at Google and Kubernetes (and a member of the TOC) explained in a statement.

Packages are known as “charts,” which consist of one or more containers. Kohn says, for example, you might want to deploy a chart that includes WordPress and MariaDB in a single container. The chart defines the installation process and which pieces need to go in which order to install correctly across a cluster.
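
In practice that means pointing Helm at a chart and letting it drive the Kubernetes API for you. A quick sketch using the Helm CLI of that era, driven from Python; the release name is illustrative, and the community stable/wordpress chart is used here because it bundles WordPress with its MariaDB dependency, so one command installs the whole stack:

```python
import subprocess

# Install the community WordPress chart (which pulls in MariaDB) into whatever
# cluster helm/kubectl are currently pointed at. Helm renders the chart's
# templates into Kubernetes manifests and submits them via the Kubernetes API.
subprocess.run(
    ["helm", "install", "stable/wordpress", "--name", "my-blog"],
    check=True,
)

# Report what the release created (deployments, services, pods, ...).
subprocess.run(["helm", "status", "my-blog"], check=True)
```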

Kohn said they decided to pull it out as a separate program because it doesn’t always follow the Kubernetes release schedule, and as such they wanted to make it stand-alone so it wouldn’t necessarily have to be linked to every Kubernetes release.

It also allows developers to benefit from the community, which can build charts for common installation scenarios. “By joining CNCF, we’ll benefit from the input and participation of the community, and conversely Kubernetes will benefit when a community of developers provides a vast repository of ready-made charts for running workloads on Kubernetes,” Matt Butcher, co-creator of Helm and principal engineer at Microsoft, said in a statement.

Besides Microsoft and Google, other project sponsors include Codefresh, Bitnami, Ticketmaster and Codecentric. The project website states there are currently 250 developers contributing to the project, and by becoming part of the CNCF that number will very likely increase soon.

Posted Under: Tech News
Box acquires Progressly to expand workflow options

Posted by on 1 June, 2018

This post was originally published on this site

Box announced today that it has purchased Progressly, a Redwood City startup that focuses on workflow. All 12 Progressly employees will be joining Box immediately. The companies did not disclose the purchase price.

If you follow Box, you probably know the company announced a workflow tool in 2016 called Box Relay, along with a partnership with IBM to sell it inside large enterprises. Jeetu Patel, chief product officer at Box, says Relay is great for well-defined processes inside a company, like contract management or employee onboarding, but Box wanted to expand on that initial vision to build more complex workflows. The Progressly team will help it do that.

Patel said that the company has heard from customers, especially in larger, more complex organizations, that they need a similar level of innovation on the automation side that they’ve been getting on the content side from Box.

“One of the things that we’ve done is to continue investing in partnerships around workflow with third parties. We have actually gone out and built a product with Relay. But we wanted to continue to make sure that we have an enhancement to our internal automation engine within Box itself. And so we just made an acquisition of a company called Progressly,” Patel told TechCrunch.

That should allow Box to build workflows that not only run within Box, but also integrate and intersect with external workflow engines like Pega and Nintex to create more complex automation in conjunction with the Box set of tools and services. This could involve both internal employees and external organizations, and it could move content through a much more sophisticated workflow than Box Relay provides.

“What we wanted to do is just make sure that we double down in the investment in workflow, given the level of appetite we’ve seen from the market for someone like Box providing a solution like this,” Patel explained.

By buying Progressly, Box was able to acquihire a set of employees who have a focused understanding of workflow and can help continue to build out that automation engine and incorporate it into the Box platform. Patel says how they could monetize all of this is still open to discussion. For now, the Progressly team is already in the fold, and product announcements based on this acquisition could come later this year.

Progressly was founded in 2014 and was headquartered right down the street from Box in Redwood City. The company has raised $6 million, according to data on Crunchbase.

Posted Under: Tech News