Category Archives: Cloud Computing

Meet Microsoft’s ‘planet scale’ NoSQL database

Posted on 13 June, 2016

Given the strength of SQL Server in business, you might be surprised to learn that Microsoft has spent the last five years building a distributed NoSQL database — until you remember that services like Power BI, Bing, and the Office web apps face the same challenges as Netflix. They’re problems more and more enterprises have to deal with too: the deluge of data, the demands of mobility, and the need for low latency even when you’re relying on cloud services.

Cloud Foundry stages a comeback

Posted on 13 June, 2016

Practically no developer in their right mind wants to write an application from scratch and run it on a single server anymore. Instead, they want to tap existing services, keep the original coding to a minimum, and test/deploy the finished application in as automated a fashion as possible on scalable infrastructure.

At the start of the cloud era, PaaS (platform as a service) was intended to handle that process, much as the application server handled it in the client/server era. But a funny thing happened: PaaS never really took off. People opted for home-built solutions, with their own scripts and sets of services, and when Docker came along they began figuring out ways to develop and test using Linux containers and various container management solutions.

But all that is pretty complicated to set up yourself, which may be one reason why PaaS seems to be making a comeback. From the start, the two big PaaS plays have been Cloud Foundry and OpenShift — and they still are (though you can now add Microsoft Azure Service Fabric to the mix). All three now support Docker containers and adhere to the standardized container format developed by the Open Container Initiative.

An open source project with its own foundation, Cloud Foundry is arguably the leading PaaS — though as I discovered in an InfoWorld interview last week, Cloud Foundry Foundation CEO Sam Ramji dislikes the PaaS moniker. “I’m not going to use the PaaS word because it has a historical and failed market,” he says. “[It was] slapped on by analysts who didn’t have anything better to call it and it came out of the NIST definition of cloud computing [PDF]. It’s not a very good definition. It’s very foggy and it’s constraining.”

Instead, Ramji describes Cloud Foundry as a multi-cloud, multi-language application platform that supports continuous delivery. Rather than parse terms, though, Ramji prefers to focus on Cloud Foundry’s growth, with 2,100 contributors and 25 releases in the past year along with a growing list of big customers including Allstate, BNY Mellon, Citigroup, Daimler, Ford, GE Digital, JPMorgan Chase, Kaiser Permanente, and Volkswagen. Cloud Foundry’s commercial partners include HPE, IBM, Pivotal, and SAP (the Foundation was spun out of Pivotal in 2014).

So what accounts for the uptake? In part it’s industry-driven. As with OpenStack, vendors have coalesced around a shared open source code base and are differentiating with their own implementations. In theory, a standardized platform across vendors and clouds reduces fear of lock-in and fosters a bigger talent pool. Ramji also maintains that the ability to set up Cloud Foundry on all the major public clouds as well as on premises helps support customers’ multi-cloud hedges.

But Ramji thinks the main reason behind the rise in Cloud Foundry adoption stems from a familiar chicken-and-egg cycle:

There’s a real record of enterprise maturity that has made all of these companies now start to get much more comfortable. There is sort of a bowling alley metaphor here, which is it’s the lead pin that has to go down — typically, a very aggressive, technology-centric industry leader like JPMorgan Chase or VW or Ford. They’re quite demanding. But once they go public about it, which is typically 9 to 12 months after it’s actually deployed, then everybody else in the category starts paying attention.

Recently, I spoke with Mike Mason, head of technology for ThoughtWorks, who had a slightly different take on the subject: “I think PaaS-in-a-box offerings like Cloud Foundry have reached a tipping point. Why weren’t they adopted earlier? I think it’s the fact that the capability that they provide is now mature enough.”

When asked about specific feature improvements, Ramji cites BOSH, an open source tool chain for release engineering, deployment, and lifecycle management. Ramji maintains that:

BOSH has gotten radically easier to use. You used to have to tell BOSH every detail of your network in order for it to be able to run Cloud Foundry effectively. Enterprise networks have a lot of details so to just put it in numbers, a BOSH manifest a year ago used to be over 2000 lines. … In the newest version of BOSH, which has been out for a few months, we have something called dynamic routing. Basically the network routes don’t have to be hard-coded. They can be written much more simply. We’ve seen 2000-line manifests drop to 50-line manifests.

Another advance relates to Diego, Cloud Foundry’s container management system, which Ramji says has improved dramatically. But he also points to a qualitative change brought about by the support for containers:

Cloud Foundry is becoming a bit less opinionated over time. It used to be you could only give us code. One of the things that makes it easier is not everything is a 12-factor app. You might need to take an old Java app and reroll it slightly and run it on Cloud Foundry for the scalability and management, but you want to just … build a Docker-based container for that. I think that releases the frustration because there’s not only one true way.

Which highlights a crucial developer objection to PaaS solutions: They tend to enforce a certain way of doing things that may or may not suit a given developer — or a given task. The tradeoff is that developers who accept those constraints enjoy automation and prebuilt services that in many cases can make them much more productive.

And make no mistake, enterprises like a little governance with their app dev. And PaaS, or whatever you want to call it, offers a way to enforce policies and procedures — though if you’re too heavy-handed with that you can lose your best people. For Cloud Foundry and its customers, success will be all about finding the delicate balance between standardization and flexibility.

Is Microsoft publishing its own FreeBSD? Yes and no

Posted on 10 June, 2016

It sounds like another one for the Hell Freezes Over file: Microsoft has released its own edition of FreeBSD 10.3, the liberally licensed Unix-like OS.

But as with previous Microsoft dalliances in the world of open source-licensed OSes, this isn’t a case of Microsoft admitting Windows is a technological and philosophical dead end. Instead, it’s another case of Microsoft investing effort in making Azure more appealing as an environment to run such OSes.

Azure-izing FreeBSD

The details are simple: FreeBSD 10.3, the latest production version of the OS, is available as a download-and-go VM image in the Azure Marketplace. This particular image, however, lists Microsoft, not the FreeBSD Foundation (the organization that supports FreeBSD development), as the publisher.

So what’s new about Microsoft’s particular spin of FreeBSD? A post on the Microsoft Azure blog notes that it sports kernel-level changes that improve network and storage performance, as well as the “Azure VM Guest Agent” that allows FreeBSD to talk to Azure Fabric and vice versa. Microsoft has made Linux kernel contributions in the same vein, designed to allow Linux to run well on Hyper-V.

A slightly new wrinkle is Microsoft’s non-Azure-centric contributions to FreeBSD. Those changes, according to Microsoft, are being upstreamed back into FreeBSD, “so anyone who downloads a FreeBSD 10.3 image from the FreeBSD Foundation will get those investments from Microsoft built in to the OS.” In other words, the changes in the Microsoft-published, Azure-hosted FreeBSD aren’t an Azure exclusive — all FreeBSD users will benefit in time.

Offering a helping hand

The other question people are likely to ask: Why, kernel contributions notwithstanding, is Microsoft listed as the publisher of the distro? The short answer: support.

According to Microsoft’s blog post, the FreeBSD Foundation is a community of mutually supportive users, “not a solution provider or an ISV with a support organization.” The kinds of customers who run FreeBSD on Azure want to have service-level agreements of some kind, and the FreeBSD Foundation isn’t in that line of work.

The upshot is, if you have problems with FreeBSD on Azure, you can pick up the phone and get Microsoft to help out — but only if you’re running its version of FreeBSD.

Another incentive for Microsoft is that FreeBSD is used as the substrate for virtual appliances from a number of big-name vendors — e.g., Citrix and Gemalto. Microsoft wants those products to run on Azure, too, and has worked closely with their vendors to ensure that. Microsoft is also hinting this is just a prelude to not only more Hyper-V features in FreeBSD, but also more kernel-level performance contributions generally.

Its own spin on things

Microsoft has so far produced only one thing resembling a distribution of an open source OS: Azure Cloud Switch, a Linux distro designed for ASIC hardware to run Microsoft’s network management software. It hasn’t been made available for public use (it was built mainly for Microsoft’s own internal use at Azure), so don’t hold your breath waiting for it to appear on GitHub.

Microsoft’s direct contributions to other operating systems have invariably revolved around making them more compatible with its own ecosystem. Even the new, Nadella-driven Microsoft, which is far friendlier to open source, isn’t likely to veer far from that course. But if it means an incrementally better FreeBSD for all, it’s hard to complain.

The enterprise cloud’s missing piece: Autosizing

Posted on 10 June, 2016

Have you moved into a public cloud lately? The first step is to choose a machine instance size from the provider’s standard configurations — one with enough vCPUs and memory. Of course, cloud providers also offer custom machine instances, so you can pick exactly the right amount of vCPUs and memory.

But whether it’s a standard or a custom machine instance, enterprises simply guess at the correct size, using on-premises systems as a guide. It’s a logical approach, but it’s not realistic. You rarely run the same workloads on the same server types in the cloud. Moreover, most applications will undergo some refactoring before they end up in the cloud. It’s apples and oranges.

As a result, many enterprises overestimate the resources they need, so they waste money. Some underestimate the resources they need and, thus, suffer performance and stability problems.

Cloud providers will tell you that their standard machine instances let cloud users select the best configurations for their workloads. Clearly, that’s not true. What the public cloud providers should do is build mechanisms that automatically configure the machine for the exact right amount of resources for the workload at hand: autosizing. If a platform is running a workload, it should be able to automatically profile that workload and configure that machine for the workload’s exact needs.

Yes, cloud providers already offer autoscaling and autoprovisioning, and that’s great. But they don’t address machine sizing.

The cloud providers should be able to offer autosizing of machine instances, with a little work. We already have infrastructure as code, where the applications themselves dynamically configure the resources they need. The same concept should be applied to machine instances, so users don’t have to guess. After all, they’re not the cloud infrastructure experts — the providers are.
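To make the idea concrete, here’s a minimal sketch of the kind of selection logic autosizing implies: profile a workload’s peak usage, then pick the cheapest instance that covers it with some headroom. The instance catalog, prices, and headroom factor below are all hypothetical:

```python
# Hypothetical autosizing sketch: choose the cheapest instance type whose
# resources cover a workload's observed peak usage, plus some headroom.

INSTANCE_CATALOG = [
    # (name, vCPUs, memory in GB, hourly price in USD) -- illustrative numbers
    ("small",  2,  4,  0.10),
    ("medium", 4,  16, 0.30),
    ("large",  8,  32, 0.55),
    ("xlarge", 16, 64, 1.10),
]

def autosize(peak_vcpus, peak_mem_gb, headroom=1.2):
    """Return the cheapest instance that fits the profiled peak, with headroom."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    candidates = [inst for inst in INSTANCE_CATALOG
                  if inst[1] >= need_cpu and inst[2] >= need_mem]
    if not candidates:
        raise ValueError("workload exceeds the largest available instance")
    return min(candidates, key=lambda inst: inst[3])

# A workload that peaked at 3.1 vCPUs and 10GB lands on "medium", not "large".
print(autosize(peak_vcpus=3.1, peak_mem_gb=10.0))
```

The hard part, of course, is the profiling itself — but the providers already collect exactly that telemetry.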

If customers ask, maybe it will happen.

The next steps for Spark in the cloud

Posted on 8 June, 2016

Over the course of the last couple of years, Apache Spark has enjoyed explosive growth in both usage and mind share. These days, any self-respecting big data offering is obliged to either connect to or make use of it.

Now comes the hard part: Turning Spark into a commodity. More than that, it has to live up to its promise of being the most convenient, versatile, and fast-moving data processing framework around.

There are two obvious ways to do that in this cloud-centric world: Host Spark as a service or build connectivity to Spark into an existing service. Several such approaches were unveiled this week at Spark Summit 2016, and they say as much about the companies offering them as they do about Spark’s meteoric ascent.

Microsoft

Microsoft has pinned a growing share of its future on the success of Azure, and in turn on the success of Azure’s roster of big data tools. Therefore, Spark has been made a first-class citizen in Power BI, Azure HDInsight, and the Azure-hosted R Server.

Power BI is Microsoft’s attempt — emphasis on “attempt” — at creating a Tableau-like data visualization service, while Azure HDInsight is an Azure-hosted Hadoop/R/HBase/Storm-as-a-service offering. For tools like those, going without Spark support would be like shipping a bike without pedals.

Microsoft is also rolling the dice on a bleeding-edge Spark feature: the recently revamped Structured Streaming component, which allows data to stream directly into Power BI. Structured Streaming is not only a significant upgrade to Spark’s streaming framework but also a competitor to other data streaming technologies (such as Apache Storm). So far it’s relatively unproven in production and already faces competition from the likes of Project Apex.
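For a sense of what Structured Streaming looks like to a developer, here is a minimal PySpark sketch in the Spark 2.0-style API — the canonical running word count over text lines arriving on a socket, printed to the console. It illustrates the component itself, not Microsoft’s Power BI integration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import explode, split

spark = SparkSession.builder.appName("StreamingWordCount").getOrCreate()

# Structured Streaming treats an unbounded stream of input lines as a
# continuously growing table. (Feed it with, e.g., `nc -lk 9999`.)
lines = (spark.readStream
         .format("socket")
         .option("host", "localhost")
         .option("port", 9999)
         .load())

# The same DataFrame operations used in batch jobs apply to the stream.
words = lines.select(explode(split(lines.value, " ")).alias("word"))
counts = words.groupBy("word").count()

# "complete" output mode re-emits the full, updated result table each trigger.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```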

This is more a reflection of Microsoft’s confidence in Spark generally than in Structured Streaming specifically. The sheer amount of momentum around Spark ought to ensure that any issues with Structured Streaming are ironed out in time — whether or not Microsoft contributes any direct work to such a project.

IBM

IBM’s bet on Spark has been nothing short of massive. Not only has Big Blue re-engineered some of its existing data apps with Spark as the engine, it’s made Spark a first-class citizen on its Bluemix PaaS and will be adding its SystemML machine learning algorithms to Spark as open source. This is all part of IBM’s strategy to shed its mainframe-to-PC era legacy and become a cloud, analytics, and cognitive services giant.

Until now, IBM has leveraged Spark by making it a component of already established services — e.g., Bluemix. IBM’s next step, though, will be to provide Spark and a slew of related tools in an environment that is more free-form and interactive: the IBM Data Science Experience. It’s essentially an online data IDE, where a user can interactively manipulate data and code — Spark for analytics, Python/Scala/R for programming — add in data sources from Bluemix, and publish the results for others to examine.

If this sounds a lot like Jupyter for Python, that is one of the metaphors IBM had in mind — and in fact, Jupyter notebooks are a supported format. What’s new is that IBM is trying to expose Spark (and the rest of its service mix) in a way that complements Spark’s vaunted qualities: its overall ease of use and its lowered barrier to entry for prospective data scientists.
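The workflow being described is essentially interactive Spark: load a data source, shape it with DataFrame operations, and inspect the result inline. A generic notebook cell of that kind might look like the following — the file and column names are illustrative, and hosted notebooks typically predefine the `spark` session:

```python
from pyspark.sql import SparkSession

# Hosted notebooks usually supply a ready-made session; built here for completeness.
spark = SparkSession.builder.appName("NotebookSketch").getOrCreate()

# Load a CSV (hypothetical file), letting Spark infer column types.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("sales.csv"))

# Aggregate and eyeball the result inline -- the notebook loop in miniature.
(df.groupBy("region")
   .sum("revenue")
   .orderBy("sum(revenue)", ascending=False)
   .show())
```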

Snowflake

Cloud data warehouse startup Snowflake is making Spark a standard-issue component as well. Its original mission was to provide analytics and data warehousing that spared the user from the hassle of micromanaging setup and management. Now, it’s giving Spark the same treatment: Skip the setup hassles and enjoy a self-managing data repository that can serve as a target for, or recipient of, Spark processing. Data can be streamed into Snowflake by way of Spark or extracted from Snowflake and processed by Spark.

Spark lets Snowflake users interact with their data in the form of a software library rather than a specification like SQL. This plays to Snowflake’s biggest selling point — automated management of scaling data infrastructure — rather than merely providing another black-box SQL engine.
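That “library rather than a specification” distinction is concrete: the same question can be put to Spark either through its programmatic DataFrame API — composable calls the host language can build up dynamically — or as declarative SQL. A small sketch of the contrast, using a throwaway in-memory table:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ApiVsSql").getOrCreate()
df = spark.createDataFrame(
    [("books", 120), ("games", 85), ("books", 45)], ["category", "amount"])

# As a library: chained method calls, checked and composed in the host language.
by_api = df.groupBy("category").sum("amount")

# As a specification: the equivalent SQL over a registered temporary view.
df.createOrReplaceTempView("orders")
by_sql = spark.sql("SELECT category, SUM(amount) FROM orders GROUP BY category")

by_api.show()
by_sql.show()
```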

Databricks

With Databricks, the commercial outfit that spearheads Spark development and offers its own hosted platform, the question has always been how it can distinguish itself from other platforms where Spark is a standard-issue element. The current strategy: Hook ’em with convenience, then sell ’em on sophistication.

Thus, Databricks recently rolled out the Community Edition, a free tier for those who want to get to know Spark but don’t want to monkey around with provisioning clusters or tracking down a practice data set. Community Edition provides a 6GB microcluster (it times out after a certain period of inactivity), a notebook-style interface, and several sample data sets.

Once people feel like they have a leg up on Spark’s workings, they can graduate to the paid version and continue using whatever data they’ve already migrated into it. In that sense, Databricks is attempting to capture an entry-level audience — a pool of users likely to grow with Spark’s popularity. But the hard part, again, is fending off competition. And as Spark is open source, it’s inherently easier for someone with far more scale and a far greater existing customer base to take all that away.

If there’s one consistent theme among these moves, especially as Spark 2.0 looms, it’s that convenience matters. Spark caught on because it made working with gobs of data far less ornery than the MapReduce systems of yore. The platforms that offer Spark as a service all have to assume their mission is twofold: Realize Spark’s promise of convenience in new ways — and assume someone else is also trying to do the same, only better.

Salesforce targets ‘citizen developers’ with new tools and training

Posted on 8 June, 2016

If there’s an overriding trend in the world of enterprise software lately, it’s democratization, as tools previously reserved for experts are put in the hands of average users. On Tuesday, Salesforce.com climbed on board with new software, training, and support services that aim to help more users — not just professional developers — build applications for the Salesforce platform.

There aren’t enough trained developers to create apps for the business world, the company says, so it wants to help users in all parts of the organization make their own. More than 2.8 million developers have already built some 5.5 million apps based on the company’s customer relationship management software, it says, and at its TrailheaDX developer event in San Francisco, it made several announcements to expand that further.

The Summer ’16 release of Salesforce’s Lightning development platform, for instance, includes more than 200 new features for both expert and nontechnical developers.

Lightning’s component-based architecture makes it easier to combine app elements using drag-and-drop software. Now, the new Lightning LockerService, Lightning Inspector, and Lightning CLI tools give professional developers broader capabilities for building custom components. Other tools make it easier for anyone, including “citizen developers,” to create apps without writing code, by mixing and matching custom and standard components. Within Lightning App Builder, for example, users can create and edit Lightning Record Pages, customize Home Pages for different user profiles, and assign filters to a report chart component using visual tools.

In Salesforce’s free gaming-style online learning environment, called Trailhead, new “superbadges” aim to encourage users to learn more Salesforce skills. Superbadges are credentials that can be proudly displayed on a resume or on LinkedIn. They’re awarded for solving real-world business challenges such as scaling the infrastructure of a growing business or automating a sales process. Superbadges in Lightning Experience, Apex, Reports & Dashboards and Security are available now; others are on the way.

Aiming to nurture entrepreneurs who build on the Salesforce platform, the CRM company’s new ventures arm launched a $50 million investment fund for those who create Lightning apps and components. Similarly, a new incubator program focuses on early-stage companies that use Salesforce Lightning to create products. Due to open in the San Francisco Bay Area by year’s end, it will provide a physical space within Salesforce offices along with access to Salesforce technology and mentorship.

Office 365’s brand-new apps: Planner and advanced security

Posted on 8 June, 2016

Microsoft continues to evolve Office 365. The latest additions include the new Planner tool, which had been in beta since December, and the Advanced Security Management tool, which brings hot new features.

Planner is a team-planning and collaboration tool that uses visual dashboards to let teams create new plans; organize, assign, and collaborate on tasks; set due dates; update status; and share files. Users with an eligible subscription plan (E1 through E5, Business Essentials, Premium, and Education) will automatically see the Planner tile appear in the Office 365 app launcher in their browser; Office 365 admins don’t need to enable this new online app.

For organizations that don’t already have such tools in place, Planner can drive better productivity by providing a clear way to see the progress of your team.

On the security side, Advanced Security Management (ASM) offers some user behavior analysis through anomaly-detection policies. Office 365 can watch your users for any odd behavior based on 70 indicators — for example, if a user account is signed in from locations thousands of miles apart at about the same time, or if its document download rate has suddenly jumped from a handful a day to hundreds.

It can also check for failed login attempts, third-party connections, and other red flags that you configure through policies. It generates a risk score that IT can use to determine what (if any) action should be taken.
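Microsoft doesn’t publish the scoring algorithm, but the intuition behind one of those indicators — the “impossible travel” check — is easy to sketch. Here’s a toy version in Python; the speed threshold is a hypothetical stand-in for whatever ASM actually uses:

```python
import math

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    r = 6371  # Earth's radius in km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def impossible_travel(sign_in_a, sign_in_b, max_speed_kmh=1000):
    """Flag two sign-ins the same account could not plausibly have made.

    Each sign-in is (timestamp in hours, latitude, longitude); the speed cap
    is a hypothetical threshold, roughly airliner speed.
    """
    km = distance_km(sign_in_a[1], sign_in_a[2], sign_in_b[1], sign_in_b[2])
    hours = abs(sign_in_b[0] - sign_in_a[0]) or 1e-9  # guard division by zero
    return km / hours > max_speed_kmh

# A sign-in from New York and one from Moscow 30 minutes apart: flagged.
print(impossible_travel((0.0, 40.71, -74.01), (0.5, 55.76, 37.62)))
```

A real implementation would combine dozens of such signals into the single risk score IT sees.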

ASM also has an app discovery dashboard, which can detect about 1,000 applications in your environment without IT needing to install anything on users’ systems. This feature will let you identify the apps that users have installed outside your sanctioned app portfolio, so you can bring some of the shadow IT action into the light.

ASM is included at no extra cost in the E5 plan, Office 365’s priciest plan. It costs $3 per user per month if you want to add it to other plans. Keep in mind that ASM is still a work in progress, so more features are coming.

Cloud washing goes beyond the Oracle lawsuit

Posted on 7 June, 2016

“Cloud washing” is back. It’s the practice of inflating financial results for a company’s cloud business, usually by redefining existing services and products to fit the cloud umbrella. The SEC accused IBM of cloud washing its financial results several years ago. And some have criticized Microsoft for nebulous cloud financial reporting. Now Oracle is facing a lawsuit from a former senior finance manager, Svetlana Blackburn, who claims she was fired for not going along with its cloud washing. (Oracle denies any cloud washing and has countersued Blackburn for malicious prosecution.)

Who knows what happened between Blackburn and Oracle? I’ll let the courts make the call on that one. But I can tell you that cloud washing has been a systemic problem for the last several years. 

At issue is what exactly constitutes a cloud. Big iron providers have been spinning their hardware and software as “cloud based” for years. Why not? Private clouds (meaning on-premises hardware and/or software) are clouds, so these providers can say they are selling a private cloud.

Of course, there is no multitenancy, nor auto- or self-provisioning in these private clouds. But those are concepts that not too many people understand. Let me put them in perspective: It’s like buying a car, then learning it can’t go faster than 10mph and only gets 3mpg. It’s technically still a car, right?

It’s obvious what’s going on: Companies say, “The cloud is hot, so we sell cloud.” If customers and investors believe it, the company becomes more valuable. Of course, providers position any or all their software or hardware as a cloud service, no matter whether it really is.

But there’s a cost to the deception. First, it confuses buyers, who can’t be sure what they’re actually purchasing. Second, customers risk not getting the value they seek from the cloud, because they may not receive the cloud advantages they were expecting.

I hope cases like the one between Oracle and Blackburn create useful precedents that provide a legal definition of cloud computing and stop all the cloud washing. Unchecked, it will hurt both customers and their providers.

How the latest enterprise tech surge changed everything

Posted on 6 June, 2016

Technology development comes in waves. Judging by the recent slippage in venture capital investment, the most recent wave of new enterprise tech has crested and is now tapering off. And it’s leaving a wealth of useful new stuff in its wake, from containers to NoSQL databases to streaming analytics to machine learning APIs.

The most striking thing is not just the quantity of innovation, but how interrelated so many of those new technologies have turned out to be. Put them together and you have the makings of the new enterprise architecture, built on these four principles:

1. Self-service: In today’s enterprise, lines of business must be able to build what they need quickly without having to file requests in triplicate to a central IT bureaucracy. Either IT provides the automation necessary to deliver self-service internally, or stakeholders look outside the organization to get what they need.

2. Scalability: The abstraction of hardware as software-defined compute, storage, and network resources — call it “the cloud” if you like — enables capacity to be applied where it’s needed almost instantaneously. It establishes an all-purpose platform for a multitude of new applications that can scale on a dime.

3. Service-based development: Big, monolithic applications can be broken down into API-accessible microservices, each independently scalable and updatable. With microservices, applications can be built or changed much faster than before — and developers can easily incorporate external services when needed.

4. Continuous change: Internet-based applications redefine the nature of software, so that enhancements and additions can be applied on an ongoing basis rather than frozen in periodic releases. Constant monitoring of applications and how customers use them provides the guidance for rolling improvements.

So what sort of architecture does this all add up to? The consultancy ThoughtWorks (home to Martin Fowler of Agile Manifesto fame) calls it “evolutionary architecture” — a phrase coined by ThoughtWorks CTO Rebecca Parsons. Recently I spoke with Mike Mason, head of technology at ThoughtWorks, who offered this insight:

Evolutionary architecture is really about how you do agile software architecture. It’s about how do you do enough enterprise architecture that you are being responsible, yet you are embracing the fact that you need to incorporate new technologies and new decisions as you go and that you will learn about the stuff that you’re building over time.

In digital businesses, the classic, top-down approach to enterprise architecture, with its vast flowcharts mapping business processes with their associated technologies, becomes a straitjacket. Evolutionary architecture, by contrast, provides a framework for continuous change, where stakeholders can build what they need without stepping on each other or making poor decisions. Microservices can play a central role in that framework, says Mason:

I think that microservices are quite compatible with evolutionary architecture. I have a colleague at ThoughtWorks who actually described microservices as the first cloud-native architecture, because one of the things about microservices is that if they’re micro enough you can throw one away and start again. That’s radical thinking: “Oh my God, we’d throw away a piece of software? Why would we do that?”

The main reason, of course, is that you need to continually replace old, subpar functionality with better stuff to attract and retain customers. And as long as the API to that microservice is backward-compatible, you can develop a new microservice (using any language or framework you like) and swap it in without disruption.
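To make that tangible, here’s a minimal sketch of such a service, using Flask as an illustrative framework; the endpoint, version prefix, and fields are all hypothetical:

```python
# A tiny API-accessible microservice. Nothing about it is precious: as long
# as the /api/v1 contract holds, the whole thing can be rewritten and swapped.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/v1/prices/<sku>")
def get_price(sku):
    # A real service would consult a datastore; hard-coded for the sketch.
    return jsonify({"sku": sku, "price_cents": 1099})

if __name__ == "__main__":
    app.run(port=8080)
```

As long as consumers keep getting the same v1 response shape, the replacement behind that route can be written in any language and deployed independently.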

In the process, you may not need to do much coding at all. If external services have the functionality you need, you can ping them instead of building your own. Plus, we live in the GitHub era, so if you need a new microservice you’re likely to find open source code for it in a repo somewhere. Microservices, like today’s physical servers, are disposable commodities rather than precious assets.

We still face technology challenges on this new frontier. Docker containers may be ideal for microservices, for example, but container orchestration and management solutions like Kubernetes and Mesos are still evolving. We also need better tools for logging, monitoring, testing, and debugging decentralized, loosely coupled applications.

Those solutions will come in time, perhaps sooner than we think. But we’re lucky to live in an era when, by accident or design, a surprisingly coherent collection of new technology has already arrived to transform the enterprise.
