All posts by Richy George

Salesforce, AWS expand partnership with secure data sharing between platforms

Posted by on 25 September, 2018

This post was originally published on this site

Salesforce and Amazon’s cloud arm, AWS, have had a close relationship for some time, signing a $400 million deal for infrastructure cloud services in 2016. Today at Dreamforce, Salesforce’s massive customer conference taking place this week in San Francisco, they took that relationship to another level, announcing a new set of data integration services between the two cloud platforms for common customers.

Matt Garman, vice president of Amazon Elastic Compute Cloud, says customers looking to transform digitally are still primarily concerned about security when moving data between cloud vendors. More specifically, they were asking for a way to move data more securely between the Salesforce and Amazon platforms. “Customers talked to us about sensitive data in Salesforce and using deep analytics and data processing on AWS and moving them back and forth in a secure way,” he said. Today’s announcements let them do that.

In practice, Salesforce customers can set up a direct connection using AWS PrivateLink to reach private Salesforce APIs and move data from Salesforce to an Amazon service such as Redshift, the company’s data warehouse product, without ever exposing the data to the open internet.
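
On the AWS side, that kind of private connection is typically realized as an interface VPC endpoint. The sketch below is a minimal illustration using boto3 of what creating one might look like; the VPC, subnet and security group IDs and the endpoint service name are placeholders, and the real service name would be published by Salesforce for your org.

```python
# Minimal sketch: create an interface VPC endpoint (AWS PrivateLink) so traffic
# to a partner-published service stays on the AWS network instead of the
# public internet. All identifiers below are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                                # your VPC
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-EXAMPLE",  # hypothetical partner service
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
)

print(response["VpcEndpoint"]["VpcEndpointId"])
```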

Further, Salesforce customers can set up Lambda functions so that when certain conditions are met in Salesforce, an action is triggered, such as moving data (or vice versa). This pattern is commonly known as serverless computing, and developers are increasingly using event triggers like these to drive business processes.
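
As a purely illustrative example (the event payload shape and bucket name here are invented, not Salesforce’s actual format), a Lambda handler that reacts to such a trigger by landing the changed record in S3 for downstream analytics could look roughly like this:

```python
# Hypothetical Lambda handler: invoked when an upstream integration reports a
# changed Salesforce record, and copies that record to S3 for analytics.
# The event structure and bucket name are illustrative placeholders.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    record = event.get("record", {})           # assumed payload key
    record_id = record.get("Id", "unknown")
    s3.put_object(
        Bucket="example-analytics-landing",    # placeholder bucket name
        Key=f"salesforce/records/{record_id}.json",
        Body=json.dumps(record).encode("utf-8"),
    )
    return {"status": "copied", "id": record_id}
```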

Finally, the two companies are integrating more directly with Amazon Connect, the contact center software Amazon launched in 2017. This is where it gets more interesting, because Salesforce of course offers its own contact center services with Salesforce Service Cloud. The two companies found a way for common customers to build what they are calling AI-driven self-service applications using Amazon Connect on the Salesforce mobile Lightning development platform.

This could involve, among other things, building mobile applications that take advantage of Amazon Lex, AWS’s bot-building service, and Salesforce Einstein, Salesforce’s artificial intelligence platform. Common customers can download the Amazon Connect CTI Adapter on the Salesforce AppExchange.

Make no mistake, this is a significant announcement in that it involves two of the most successful cloud companies on the planet working directly together to offer products and services that benefit their common customers. This was not lost on Bret Taylor, president and chief product officer at Salesforce. “We’re enabling something that wouldn’t have been possible. It’s really exciting because it’s something unique in the marketplace,” he said.

What’s more, it comes on the heels of yesterday’s partnership news with Apple, giving Salesforce two powerful partners to work with moving forward.

While the level of today’s news is unprecedented between the two companies, they have been working together for some time. As Garman points out, Heroku, which Salesforce bought in 2010, and Quip, which it bought in 2016, were both built on AWS from the get-go. Salesforce, which mostly runs its own data centers in the U.S., runs most of its public cloud workloads on AWS, especially outside the U.S. Conversely, Amazon uses Salesforce tools internally.

Posted Under: Tech News
Snyk raises $22M on a $100M valuation to detect security vulnerabilities in open source code

Posted by on 25 September, 2018

This post was originally published on this site

Open source software is now a $14 billion+ market and growing fast, in use in one way or another in 95 percent of all enterprises. But that expansion comes with a shadow: open source components can come with vulnerabilities, and their widespread use in apps becomes a liability for a company’s cybersecurity.

Now, a startup out of the UK called Snyk, which has built a way to detect when those apps or components are compromised, is announcing a $22 million round of funding to meet the demand from enterprises wanting to tackle the issue head on.

Led by Accel, with participation from GV plus previous investors Boldstart Ventures and Heavybit, this Series B notably is the second round raised by Snyk within seven months — it raised a $7 million Series A in March. That’s a measure of how the company is growing (and how enthusiastic investors are about what it has built so far). The startup is not disclosing its valuation but a source close to the deal says it is around $100 million now (it’s raised about $33 million to date).

As a measure of Snyk’s growth, the company says it now has over 200 paying customers and 150,000 users, with revenues growing five-fold in the last nine months. In March, it had 130 paying customers.

(Current clients include ASOS, Digital Ocean, New Relic and Skyscanner, the company said.)

Snyk fits squarely into how enterprise services are delivered today. It provides options for organisations to use it on-premises, via the cloud, or in a hybrid version of the two, with a range of paid and free tiers to get users acquainted with the service.

Guy Podjarny, the company’s CEO who co-founded Snyk with Assaf Hefetz and Danny Grander, explained that Snyk works in two parts. First, the startup has built a threat intelligence system “that listens to open source activity.” Tapping into open-conversation platforms — for example, GitHub commits and forum chatter — Snyk uses machine learning to detect potential mentions of vulnerabilities. It then funnels these to a team of human analysts, “who verify and curate the real ones in our vulnerability DB.”

Second, the company analyses source code repositories — including, again, GitHub as well as BitBucket — “to understand which open source components each one uses, flag the ones that are vulnerable, and then auto-fix them by proposing the right dependency version to use and through patches our security team builds.”
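
Conceptually, that matching step amounts to checking a project’s declared dependencies against a curated vulnerability database. A toy version, which is not Snyk’s actual implementation and ignores transitive dependencies and version ranges, might look like this:

```python
# Toy illustration of dependency-vulnerability matching (not Snyk's code).
# A real tool parses lockfiles, resolves transitive dependencies and compares
# semantic version ranges rather than exact versions.
VULN_DB = {
    ("lodash", "4.17.4"): "prototype pollution (example entry)",
    ("struts2-core", "2.3.31"): "remote code execution (example entry)",
}

def scan(dependencies):
    """dependencies: iterable of (name, version) tuples from a manifest."""
    return [
        (name, version, VULN_DB[(name, version)])
        for name, version in dependencies
        if (name, version) in VULN_DB
    ]

if __name__ == "__main__":
    manifest = [("lodash", "4.17.4"), ("express", "4.16.3")]
    for name, version, advisory in scan(manifest):
        print(f"{name}@{version}: {advisory}")
```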

Open source components don’t have more vulnerabilities than closed source ones, he added, “but their heavy reuse makes those vulnerabilities more impactful.” Components can be used in thousands of applications, and by Snyk’s estimation, some 77 percent of those applications will end up with components that have security vulnerabilities. “As a result, the chances of an organisation being breached through a vulnerable open source component are far greater than a security flaw purely in their code.”

Podjarny says there are no plans to tackle proprietary code longer term; instead, the focus is on expanding how Snyk can monitor apps built on open source.

“Our focus is on two fronts – building security tools developers love, and fixing open source security,” he said. “We believe the risk from insecure use of open source code is far greater than that of your own code, and is poorly addressed in the industry. We do intend to expand our protection from fixing known vulnerabilities in open source components to monitoring and securing them in runtime, flagging and containing malicious and compromised components.”

While this is a relatively new area for security teams to monitor and address, he added that the Equifax breach highlighted what might happen in the worst-case scenario if such issues go undetected. Snyk is not the only company that has identified the gap in the market. Black Duck focuses on flagging non-compliant open source licences, and offers some security features as well.

However, it is Snyk — whose name derives from a play on the word “sneak”, combined with the acronym meaning “so now you know” — that seems to be catching the most attention at the moment.

“Some of the largest data breaches in recent years were the result of unfixed vulnerabilities in open source dependencies; as a result, we’ve seen the adoption of tools to monitor and remediate such vulnerabilities grow exponentially,” said Philippe Botteri, partner at Accel, who is joining the board with this round. “We’ve also seen the ownership of application security shifting towards developers. We feel that Snyk is uniquely positioned in the market given the team’s deep security domain knowledge and developer-centric mindset, and are thrilled to join them on this mission of bringing security tools to developers.”

Posted Under: Tech News
The 7 most important announcements from Microsoft Ignite today

Posted by on 24 September, 2018

This post was originally published on this site

Microsoft is hosting its Ignite conference in Orlando, Florida this week. And although Ignite isn’t the household name that Microsoft’s Build conference has become over the course of the last few years, it’s a massive event with over 30,000 attendees and plenty of news. Indeed, there was so much news this year that Microsoft provided the press with a 27-page booklet with all of it.

We wrote about quite a few of these today, but here are the most important announcements, including one that wasn’t in Microsoft’s booklet but was featured prominently on stage.

1. Microsoft, SAP and Adobe take on Salesforce with their new Open Data Initiative for customer data

What was announced: Microsoft is teaming up with Adobe and SAP to create a single model for representing customer data that businesses will be able to move between systems.

Why it matters: Moving customer data between different enterprise systems is hard, especially because there isn’t a standardized way to represent this information. Microsoft, Adobe and SAP say they want to make it easier for this data to flow between systems. But it’s also a shot across the bow of Salesforce, the leader in the CRM space. It also represents a chance for these three companies to enable new tools that can extract value from this data — and Microsoft obviously hopes that these businesses will choose its Azure platform for analyzing the data.


2. Microsoft wants to do away with more passwords

What was announced: Businesses that use Microsoft Azure Active Directory (AD) will now be able to use the Microsoft Authenticator app on iOS and Android in place of a password to log into their business applications.

Why it matters: Passwords are annoying and they aren’t very secure. Many enterprises are starting to push their employees to use a second factor to authenticate. With this, Microsoft now replaces the password/second factor combination with a single tap on your phone — ideally without compromising security.


3. Microsoft’s new Windows Virtual Desktop lets you run Windows 10 in the cloud

What was announced: Microsoft now lets businesses rent a virtual Windows 10 desktop in Azure.

Why it matters: Until now, virtual Windows 10 desktops were the domain of third-party service providers. Now, Microsoft itself will offer these desktops. The company argues that this is the first time you can get a multiuser virtualized Windows 10 desktop in the cloud. As employees become more mobile and don’t necessarily always work from the same desktop or laptop, this virtualized solution will allow organizations to offer them a full Windows 10 desktop in the cloud, with all the Office apps they know, without the cost of having to provision and manage a physical machine.


4. Microsoft Office gets smarter

What was announced: Microsoft is adding a number of new AI tools to its Office productivity suite. Those include Ideas, which aims to take some of the hassle out of using these tools. Ideas may suggest a layout for your PowerPoint presentation or help you find interesting data in your spreadsheets, for example. Excel is also getting a couple of new tools for pulling in rich data from third-party sources. Microsoft is also building a new unified search tool for finding data across an organization’s network.

Why it matters: Microsoft Office remains the most widely used suite of productivity applications. That makes it the ideal surface for highlighting Microsoft’s AI chops, and anything that can improve employee productivity will surely drive a lot of value to businesses. If that means sitting through fewer badly designed PowerPoint slides, then this whole AI thing will have been worth it.


5. Microsoft’s massive Surface Hub 2 whiteboards will launch in Q2 2019

What was announced: The next version of the Surface Hub, Microsoft’s massive whiteboard displays, will launch in Q2 2019. The Surface Hub 2 is both lighter and thinner than the original version. Then, in 2020, an updated version, the Surface Hub 2X, will launch that will offer features like tiling and rotation.

Why it matters: We’re talking about a 50-inch touchscreen display here. You probably won’t buy one, but you’ll want one. It’s a disappointment to hear that the Surface Hub 2 won’t launch until next year and that some of the advanced features most users are waiting for won’t arrive until the refresh in 2020.


6. Microsoft Teams gets bokeh and meeting recordings with transcripts

What was announced: Microsoft Teams, its Slack competitor, can now blur the background when you are in a video meeting and it’ll automatically create transcripts of your meetings.

Why it matters: Teams has emerged as a competent Slack competitor that’s quite popular with companies that are already betting on Microsoft’s productivity tools. Microsoft is now bringing many of its machine learning smarts to Teams to offer features that most of its competitors can’t match.


7. Microsoft launches Azure Digital Twins

What was announced: Azure Digital Twins allows enterprises to model their real-world IoT deployments in the cloud.

Why it matters: IoT presents a massive new market for cloud services like Azure. Many businesses were already building their own version of Digital Twins on top of Azure, but those homegrown solutions didn’t always scale. Now, Microsoft is offering this capability out of the box, and for many businesses, this may just be the killer feature that will make them decide on standardizing their IoT workloads on Azure. And as they use Azure Digital Twins, they’ll also want to use the rest of Azure’s many IoT tools.


Posted Under: Tech News
Walmart is betting on the blockchain to improve food safety

Posted by on 24 September, 2018

This post was originally published on this site

Walmart has been working with IBM on a food safety blockchain solution, and today it announced it’s requiring that all suppliers of leafy green vegetables for Sam’s Club and Walmart upload their data to the blockchain by September 2019.

Most supply chains are bogged down in manual processes. This makes it difficult and time-consuming to track down an issue should one like the E. coli romaine lettuce problem from last spring rear its head. Placing a supply chain on the blockchain makes the process more traceable, transparent and fully digital. Each node on the blockchain could represent an entity that has handled the food on the way to the store, making it much easier and faster to see, with much greater precision, whether one of the affected farms sold infected supply to a particular location.
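
To make the traceability idea concrete, here is a deliberately simplified sketch, not IBM Food Trust’s data model, of a hash-chained custody log for a single lot that can be walked backwards from the store to the farm:

```python
# Simplified illustration of chained custody records for one food lot.
# A real blockchain network distributes and replicates this ledger; the hash
# chaining here only shows why tampering with an earlier entry is detectable.
import hashlib
import json

def add_event(chain, handler, action):
    prev_hash = chain[-1]["hash"] if chain else "genesis"
    event = {"handler": handler, "action": action, "prev": prev_hash}
    event["hash"] = hashlib.sha256(
        json.dumps(event, sort_keys=True).encode()
    ).hexdigest()
    chain.append(event)

lot = []
add_event(lot, "Example Farm Co.", "harvested romaine")            # hypothetical names
add_event(lot, "Example Logistics", "shipped to distribution center")
add_event(lot, "Walmart store", "received and shelved")

# Trace backwards from the store to the farm.
for event in reversed(lot):
    print(event["handler"], "-", event["action"])
```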

Walmart has been working with IBM for over a year on using the blockchain to digitize the food supply chain process. In fact, supply chain is one of the premier business use cases for blockchain (beyond digital currency). Walmart is using the IBM Food Trust Solution, specifically developed for this use case.

“We built the IBM Food Trust solution using IBM Blockchain Platform, which is a tool or capability that IBM has built to help companies build, govern and run blockchain networks. It’s built using Hyperledger Fabric (the open source digital ledger technology) and it runs on IBM Cloud,” Bridget van Kralingen, IBM’s senior VP for Global Industries, Platforms and Blockchain, explained.

Before moving the process to the blockchain, it typically took approximately 7 days to trace the source of food. With the blockchain, it’s been reduced to 2.2 seconds. That substantially reduces the likelihood that infected food will reach the consumer.

Photo: Shana Novak/Getty Images

One of the issues in requiring the suppliers to put their information on the blockchain is that they take a range of approaches, from paper to Excel spreadsheets to sophisticated ERP systems, all of which now need to upload data to the blockchain. Walmart spokesperson Molly Blakeman says this is something they worked hard on with IBM to account for. Suppliers don’t have to be blockchain experts by any means. They simply have to know how to upload data to the blockchain application.

“IBM will offer an onboarding system that orients users with the service easily. Think about when you get a new iPhone – the instructions are easy to understand and you’re quickly up and running. That’s the aim here. Essentially, suppliers will need a smart device and internet to participate,” she said.

After working with it for a year, the company thinks it’s ready for broader implementation. The ultimate goal is making sure that the food sold at Walmart is safe for consumption and, if there is a problem, making auditing the supply chain a trivial activity.

“Our customers deserve a more transparent supply chain. We felt the one-step-up and one-step-back model of food traceability was outdated for the 21st century. This is a smart, technology-supported move that will greatly benefit our customers and transform the food system, benefitting all stakeholders,” Frank Yiannas, vice president of food safety for Walmart, said in a statement.

In addition to the blockchain requirement, the company is also requiring that suppliers adhere to one of the Global Food Safety Initiative (GFSI) standards, which are internationally recognized food safety standards, according to the company.

Posted Under: Tech News
Adobe introduces AI assistant to help Analytics users find deeper insights

Posted by on 24 September, 2018

This post was originally published on this site

Adobe Analytics is a sophisticated product, so much so that users might focus on a set of known metrics at the cost of missing key insights. Adobe introduced an AI-fueled virtual assistant called Intelligent Alerts today to help users find deeper insights they might have otherwise missed.

John Bates, director of product management for Adobe Analytics, says that in the past the company has used artificial intelligence and machine learning under the hood of Analytics to help users better understand their customers’ behavior. This marks the first time Adobe will use this technology to understand how the user works with Analytics and to offer new data they might not have considered.

“Historically we’ve analyzed the data that we collect on behalf of our customers, on behalf of brands and help provide insights. Now we’re analyzing our users’ behavior within Adobe Analytics, and then mashing them up with those insights that are most relevant and personalized for that individual, based on the signals that we see and how they use our tool,” Bates explained.

Adobe Intelligent Alerts. Screenshot: Adobe

Bates says this isn’t unlike Netflix recommendations, which suggest content based on other shows and movies you’ve watched before, but applied to the enterprise user, especially someone who really knows their way around Adobe Analytics. That’s because these power users provide the artificial intelligence engine with the strongest signals.

The way it works is that the analyst receives alerts they can dig into for additional insights. If they don’t like what they’re seeing, they can tune the system, and it should learn over time what the analyst needs in terms of data.

Intelligent Alert Settings. Screenshot: Adobe

They can configure how often they see the alerts and how many they want to see. This all falls within the realm of Sensei, Adobe’s artificial intelligence platform. Adobe built Sensei with the idea of injecting intelligence across the Adobe product line.

“It’s really a vision and strategy around how do we take things that data scientists do, and how we inject that into our technology such that an everyday user of Adobe Analytics can leverage the power of these advanced algorithms to help them better understand their customers and better perform in their jobs,” he said.

Posted Under: Tech News
Microsoft, SAP and Adobe take on Salesforce with their new Open Data Initiative for customer data

Posted by on 24 September, 2018

This post was originally published on this site

Microsoft, SAP and Adobe today announced a new partnership: the Open Data Initiative. This alliance, which is a clear attack against Salesforce, aims to create a single data model for consumer data that is then portable between platforms. That, the companies argue, will provide more transparency and privacy controls for consumers, but the core idea here is to make it easier for enterprises to move their customers’ data around.

That data could be standard CRM data, but also information about purchase behavior and other information about customers. Right now, moving that data between platforms is often hard, given that there’s no standard way for structuring it. That’s holding back what these companies can do with their data, of course, and in this age of machine learning, data is everything.

“We want this to be an open framework,” Microsoft CEO Satya Nadella said during his keynote at the company’s annual Ignite conference. “We are very excited about the potential here about what truly putting customers in control of their own data for our entire industry,” he added.

The exact details of how this is meant to work are a bit vague right now, though. Unsurprisingly, Adobe plans to use this model for its Customer Experience Platform, while Microsoft will build it into its Dynamics 365 CRM service and SAP will support it on its Hana database platform and CRM platforms, too. Underneath all of this is a single data model and then, of course, Microsoft Azure — at least on the Microsoft side.

“Adobe, Microsoft and SAP are partnering to reimagine the customer experience management category,” said Adobe CEO Shantanu Narayen. “Together we will give enterprises the ability to harness and action massive volumes of customer data to deliver personalized, real-time customer experiences at scale.”

Together, these three companies have the footprint to challenge Salesforce’s hold on the CRM market and create a new standard. SAP especially has put a lot of emphasis on the CRM market lately, and while its CRM business is growing fast, it’s still far behind Salesforce.


Posted Under: Tech News
Salesforce partners with Apple to roll deeper into mobile enterprise markets

Posted by on 24 September, 2018

This post was originally published on this site

Apple and Salesforce are both highly successful, iconic brands that like to put on a big show when they make product announcements. Today, the two companies announced they were forming a strategic partnership with an emphasis on mobile strategy ahead of Salesforce’s enormous customer conference, Dreamforce, which starts tomorrow in San Francisco.

For Apple, which has been establishing partnerships with key enterprise brands for the last several years, today’s news is another big step toward solidifying its enterprise strategy by involving the largest enterprise SaaS vendor in the world.

“We’re forming a strategic partnership with Salesforce to change the way people work and to empower developers of all abilities to build world-class mobile apps,” Susan Prescott, vice president of markets, apps and services at Apple told TechCrunch.

Tim Cook at Apple event on September 12, 2018 Photo: Justin Sullivan/Getty Images

Bret Taylor, president and chief product officer at Salesforce, who came over in the Quip deal a couple of years ago, says that by working together the two companies can streamline mobile development for customers. “Every single one of our customers is on mobile. They all want world-class mobile experiences, and this enables us when we’re talking to a customer about their mobile strategy, that we can be in that conversation together,” he explained.

For starters, the partnership is going to involve three main components: The two companies are going to work together to bring key iOS features, such as Siri Shortcuts and integration with Apple’s Business Chat, into the Salesforce mobile app. Much like the partnership between Apple and IBM, Apple and Salesforce will also work together to build industry-specific iOS apps on the Salesforce platform.

The companies are also working together on a new mobile SDK built specifically for Swift, Apple’s popular programming language. The plan is to provide a way to build Swift apps for iOS and deploy them natively on Salesforce’s Lightning platform.

The final component involves deeper integration with Trailhead, Salesforce’s education platform. That will involve a new Trailhead Mobile app on iOS as well as adding Swift education courses to the Trailhead catalogue to help drive adoption of the mobile SDK.

While Apple has largely been perceived as a consumer-focused organization, it has benefited from the shift over the last six or seven years toward companies encouraging employees to bring their own devices to work. As that has happened, it has been able to sell more products and services into the enterprise and has partnered with a number of other well-known enterprise brands, including IBM, Cisco, SAP and GE, along with systems integrators Accenture and Deloitte.

The move gives Salesforce a formidable partner to continue its incredible growth trajectory. Just last year the company passed a $10 billion run rate, putting it in rarefied company with some of the most successful software companies in the world. In its most recent earnings call at the end of August, it reported $3.28 billion for the quarter, placing it on a run rate of over $13 billion. Connecting with Apple could help keep that momentum growing.

The two companies will show off the partnership at Dreamforce this week. It’s a deal that has the potential to work out well for both companies, giving Salesforce a more integrated iOS experience and helping Apple increase its reach into the enterprise.

Posted Under: Tech News
Microsoft Azure gets new high-performance storage options

Posted by on 24 September, 2018

This post was originally published on this site

Microsoft Azure is getting a number of new storage options today that mostly focus on use cases where disk performance matters.

The first of these is Azure Ultra SSD Managed Disks, which are now in public preview. Microsoft says that these drives will offer “sub-millisecond latency,” which unsurprisingly makes them ideal for workloads where latency matters.

Earlier this year, Microsoft launched its Premium and Standard SSD Managed Disks offerings for Azure into preview. These ‘ultra’ SSDs represent the next tier up from the Premium SSDs, with even lower latency and higher throughput. They’ll offer up to 160,000 IOPS with less than a millisecond of read/write latency. These disks will come in sizes ranging from 4GB to 64TB.

And talking about Standard SSD Managed Disks, this service is now generally available after only three months in preview. To top things off, all of Azure’s storage tiers (Premium and Standard SSD, as well as Standard HDD) now offer 8, 16 and 32 TB storage capacity.

Also new today is Azure Premium Files, which is now in preview. This, too, is an SSD-based service. Azure Files itself isn’t new, though. It offers users access to cloud storage using the standard SMB protocol. This new premium offering promises higher throughput and lower latency for these kinds of SMB operations.


Posted Under: Tech News
Microsoft hopes enterprises will want to use Cortana

Posted by on 24 September, 2018

This post was originally published on this site

In a world dominated by Alexa and the Google Assistant, Cortana suffers the fate of a perfectly good alternative that nobody uses and everybody forgets about. But Microsoft wouldn’t be Microsoft if it just gave up on its investment in this space, so it’s now launching the Cortana Skills Kit for Enterprise to see if that’s a niche where Cortana can succeed.

This new kit is an end-to-end solution for enterprises that want to build their own skills and agents. Of course, they could have done this before with the existing developer tools; this kit isn’t all that different from those, after all. Microsoft notes that it is designed for deployment inside an organization and represents a new platform for building these experiences.

The Skills Kit platform is based on the Microsoft Bot Framework and the Azure Cognitive Services Language Understanding feature.

Overall, this is probably not a bad bet on Microsoft’s part. I can see how some enterprises would want to build their own skills for their employees and customers to access internal data, for example, or to complete routine tasks.

For now, this tool is only available in private preview. No word on when we can expect a wider launch.


Posted Under: Tech News
Microsoft updates its planet-scale Cosmos DB database service

Posted by on 24 September, 2018

This post was originally published on this site

Cosmos DB is undoubtedly one of the most interesting products in Microsoft’s Azure portfolio. It’s a fully managed, globally distributed multi-model database that offers throughput guarantees, a number of different consistency models and high read and write availability guarantees. Now that’s a mouthful, but basically, it means that developers can build a truly global product, write database updates to Cosmos DB and rest assured that every other user across the world will see those updates within 20 milliseconds or so. And to write their applications, they can pretend that Cosmos DB is a SQL- or MongoDB-compatible database, for example.

CosmosDB officially launched in May 2017, though in many ways it’s an evolution of Microsoft’s existing DocumentDB product, which was far less flexible. Today, a lot of Microsoft’s own products run on CosmosDB, including the Azure Portal itself, as well as Skype, Office 365 and Xbox.

Today, Microsoft is extending Cosmos DB by launching its multi-master replication feature into general availability, as well as adding support for the Cassandra API, giving developers yet another option for bringing existing products, in this case those written for Cassandra, to CosmosDB.

Microsoft now also promises 99.999 percent read and write availability. Previously, its read availability promise was 99.99 percent. And while that may not seem like a big difference, it does show that after more than a year of operating Cosmos DB with customers, Microsoft now feels more confident that it’s a highly stable system. In addition, Microsoft is also updating its write latency SLA and now promises less than 10 milliseconds at the 99th percentile.

“If you have write-heavy workloads, spanning multiple geos, and you need this near real-time ingest of your data, this becomes extremely attractive for IoT, web, mobile gaming scenarios,” Microsoft CosmosDB architect and product manager Rimma Nehme told me. She also stressed that she believes Microsoft’s SLA definitions are far more stringent than those of its competitors.

The highlight of the update, though, is multi-master replication. “We believe that we’re really the first operational database out there in the marketplace that runs on such a scale and will enable globally scalable multi-master available to the customers,” Nehme said. “The underlying protocols were designed to be multi-master from the very beginning.”

Why is this such a big deal? With this, developers can designate every region they run Cosmos DB in as a master in its own right, making for a far more scalable system in terms of being able to write updates to the database. There’s no need to first write to a single master node, which may be far away, and then have that node push the update to every other region. Instead, applications can write to the nearest region, and Cosmos DB handles everything from there. If there are conflicts, the user can decide how those should be resolved based on their own needs.
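
The idea can be sketched without any SDK at all. The simulation below, an illustration of the concept rather than the Cosmos DB client API, lets each region accept writes locally and reconciles conflicting versions of the same key with a last-writer-wins policy, one of several resolution strategies a real deployment could choose:

```python
# Simulation of multi-master writes: every region accepts writes for any key,
# and conflicting versions are reconciled by last-writer-wins on timestamp.
# This illustrates the concept only; it is not the Cosmos DB SDK.
import time

class Region:
    def __init__(self, name):
        self.name = name
        self.store = {}                 # key -> (timestamp, value)

    def write(self, key, value):
        self.store[key] = (time.time(), value)

def resolve(regions, key):
    """Return the value of the most recent write for `key` across regions."""
    versions = [r.store[key] for r in regions if key in r.store]
    return max(versions, key=lambda v: v[0])[1] if versions else None

us = Region("us-west")
eu = Region("europe-north")
us.write("order-42", {"status": "created"})
eu.write("order-42", {"status": "shipped"})   # concurrent write in another region

print(resolve([us, eu], "order-42"))          # the later write wins
```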

Nehme noted that all of this still plays well with CosmosDB’s existing set of consistency models. If you don’t spend your days thinking about database consistency models, then this may sound arcane, but there’s a whole area of computer science that focuses on little else but how to best handle a scenario where two users virtually simultaneously try to change the same cell in a distributed database.

Unlike other databases, Cosmos DB allows for a variety of consistency models, ranging from strong to eventual, with three intermediary models. And it actually turns out that most CosmosDB users opt for one of those intermediary models.
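
One way to see the trade-off is to model a replica that lags the primary: an eventual read returns immediately with whatever the replica currently has, while a strong read has to wait for replication to catch up first. This is a toy model, not how Cosmos DB implements its five consistency levels:

```python
# Toy model of the consistency trade-off between a primary and one lagging replica.
class Replica:
    def __init__(self):
        self.value = None

class Primary:
    def __init__(self, replica):
        self.value = None
        self.replica = replica

    def write(self, value):
        self.value = value              # replication happens asynchronously

    def replicate(self):
        self.replica.value = self.value

def eventual_read(replica):
    return replica.value                # possibly stale, but returns immediately

def strong_read(primary):
    primary.replicate()                 # stand-in for waiting on replication
    return primary.replica.value

replica = Replica()
primary = Primary(replica)
primary.write("v1")
print(eventual_read(replica))           # None: the replica has not caught up yet
print(strong_read(primary))             # "v1": waited for replication first
```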

Interestingly, when I talked to Leslie Lamport, the Turing award winner who developed some of the fundamental concepts behind these consistency models (and the popular LaTeX document preparation system), he wasn’t all that sure that the developers are making the right choice. “I don’t know whether they really understand the consequences or whether their customers are going to be in for some surprises,” he told me. “If they’re smart, they are getting just the amount of consistency that they need. If they’re not smart, it means they’re trying to gain some efficiency and their users might not be happy about that.” He noted that when you give up strong consistency, it’s often hard to understand what exactly is happening.

But strong consistency comes with its drawbacks, too, which leads to higher latency. “For strong consistency there are a certain number of roundtrip message delays that you can’t avoid,” Lamport noted.

The CosmosDB team isn’t just building on some of the fundamental work Lamport did around databases, but it’s also making extensive use of TLA+, the formal specification language Lamport developed in the late 90s. Microsoft, as well as Amazon and others, are now training their engineers to use TLA+ to describe their algorithms mathematically before they implement them in whatever language they prefer.

“Because [CosmosDB is] a massively complicated system, there is no way to ensure the correctness of it because we are humans, and trying to hold all of these failure conditions and the complexity in any one person’s — one engineer’s — head, is impossible,” Microsoft Technical Fellow Dharma Shukla noted. “TLA+ is huge in terms of getting the design done correctly, specified and validated using the TLA+ tools even before a single line of code is written. You cover all of those hundreds of thousands of edge cases that can potentially lead to data loss or availability loss, or race conditions that you had never thought about, but that two or three years after you have deployed the code can lead to some data corruption for customers. That would be disastrous.”

“Programming languages have a very precise goal, which is to be able to write code. And the thing that I’ve been saying over and over again is that programming is more than just coding,” Lamport added. “It’s not just coding, that’s the easy part of programming. The hard part of programming is getting the algorithms right.”

Lamport also noted that he deliberately chose to make TLA+ look like mathematics, not like another programming language. “It really forces people to think above the code level,” Lamport noted, adding that engineers often tell him that it changes the way they think.

As for those companies that don’t use TLA+ or a similar methodology, Lamport says he’s worried. “I’m really comforted that [Microsoft] is using TLA+ because I don’t see how anyone could do it without using that kind of mathematical thinking — and I worry about what the other systems that we wind up using built by other organizations — I worry about how reliable they are.”


Posted Under: Tech News