Monday, January 24, 2011

Why Utility Computing Failed (But Cloud Computing Didn't)

Since about 2006 I’ve been involved with data center IT automation. I started with Cassatt, one of the first companies trying to automate data center infrastructure. Rob Gingell, the CTO, had a design principle of “service-level automation,” in which the variable monitored and maintained was the service, not the server. That was a revolutionary thought.

The technology behind this orchestrated physical and virtual devices, automatically composing the appropriate infrastructure stacks to keep each service’s SLA within pre-defined bounds. And it absolutely worked! The best market description we had for this technology was “Utility Computing,” which drew on the analogy of electrical utilities: no matter the draw (load), the supply would always be generated or retired (elasticity) to keep up with it.
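
To make the idea concrete, here’s a minimal sketch of what a service-level automation loop looks like in principle. This is Python with entirely hypothetical names (measure_latency_ms, provision_server, retire_server are placeholders, not Cassatt APIs): observe the service, compare against SLA bounds, and generate or retire capacity accordingly.

```python
import time

# Illustrative sketch of a service-level automation loop.
# All names on `pool` are hypothetical placeholders.

SLA_MAX_LATENCY_MS = 200    # upper bound: the service is too slow
SLA_MIN_UTILIZATION = 0.30  # lower bound: the fleet is over-provisioned

def control_loop(pool):
    while True:
        latency = pool.measure_latency_ms()       # observe the *service*...
        utilization = pool.measure_utilization()  # ...not any one server

        if latency > SLA_MAX_LATENCY_MS:
            pool.provision_server()   # supply "generated" to meet demand
        elif utilization < SLA_MIN_UTILIZATION and len(pool) > 1:
            pool.retire_server()      # supply "retired" when demand falls

        time.sleep(60)  # re-evaluate once a minute
```

The point of the sketch is the inversion it captures: no human decides which server to touch; the SLA on the service drives the infrastructure underneath it.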
 
But selling the Utility Computing model, and the service-level automation technology behind it, was hard, if not impossible. We’d frequently run successful POCs and demonstrate the product, but the sales inevitably stalled. The reasons were many and varied, often tied to the ‘psychographics’ of the buyers. But overall, we could point to a few recurring problems:
  • Automation was scary: The word “automation” frequently scared off IT administrators. They were accustomed to complete control of their hand-crafted infrastructure, and visibility into every layer. If they couldn’t make and see the change themselves, they didn’t trust that the system actually worked.
  • Lack of market reference points: Peers in the market hadn’t tried this stuff either – and there was no broad acceptance that utility computing was being adopted.
  • Inflexible process: ITIL and ITSM procedures were designed to govern manual IT control, and had no way to incorporate automated approaches to (for example) configuration management.
  • Organizational fear: There was usually the unstated fear that utility computing automation would obviate the need for certain jobs, if not entire IT organizations. Plus, the systems spanned multiple IT organizations, and it was never clear which existing organization should be put in charge of the new automation.
  • Multiple buyers: Because utility computing touched so many IT organizations, the approval process necessarily included many of them. Getting the thumbs-up from a half-dozen scared organizations was hopeless. Even when a CxO mandated utility computing, implementation was inevitably hog-tied.

Enter Virtualization

Somewhere around 2007, OS virtualization began to go mainstream. Its value proposition was simple: consolidate applications, reduce hardware sprawl. It was a no-brainer.

But just below the surface, virtualization had an interesting effect on IT managers: it began to make them comfortable breaking the binding of physical control and physical management of servers, easing them instead toward logical control.

As consolidation initiatives penetrated data centers, additional virtualization management tools followed. And with them, more automated functions. And with each new function came IT’s incremental comfort with automating logical data center configurations.

And Then, Commercial Examples

At just about the same time, Amazon Web Services began to offer virtual machines commercially through EC2, its Elastic Compute Cloud. Capacity could be had with nothing more than a credit card, and was charged for by the hour. IT end-users now had simple – if sometimes only experimental – access to a truly automated, logical infrastructure. And one where all “hands-on” aspects of configuration were literally masked inside a black box.
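
For a sense of just how simple that access was, here’s roughly what 2011-era EC2 usage looked like from Python with the boto library (the AMI ID below is a placeholder, and credentials are assumed to be configured in the environment):

```python
# Rough sketch of hourly, self-service EC2 access via the boto library.
# "ami-xxxxxxxx" is a placeholder image ID; AWS credentials are assumed
# to be available in the environment.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Launch a small instance, billed by the hour -- no procurement,
# no racking, no change-request tickets.
reservation = conn.run_instances("ami-xxxxxxxx", instance_type="m1.small")
instance = reservation.instances[0]

# ... use the machine ...

# Tear-down is equally trivial: terminate it and stop paying.
conn.terminate_instances(instance_ids=[instance.id])
```

A dozen lines and a credit card replaced a procurement cycle – which is exactly why IT often never saw it happening.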

Now the industry had its proof-point: There were times when full-up IT automation, without visibility into hardware implementation, worked and was useful.

Use of EC2 (initially) lay outside the control bounds of IT management and IT’s organizational boundaries. Developers and one-off projects could leverage it without fear of pushback from IT – usually because IT never even knew about its use.

Once IT management acknowledged that EC2 (and similar services) was being used, they finally had reason to look more closely. And the revelations were telling: How was it that the annualized cost basis for a medium-sized server was lower than anything an in-house implementation could hope to achieve? How come configuration and tear-down were so simple? Finally, IT had to look in the mirror and admit that this thing called cloud computing might be here to stay.
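
The arithmetic behind that cost question fit on a napkin. Here’s a hedged, illustrative version – the figures below are round, assumed numbers for the era, not quotes:

```python
# Back-of-the-envelope cost comparison; all figures are illustrative
# assumptions, not actual 2011 price quotes.
HOURS_PER_YEAR = 24 * 365

# Cloud: a small on-demand instance at roughly $0.10/hour, running 24x7.
cloud_annual = 0.10 * HOURS_PER_YEAR  # ~ $876/year, all-in

# In-house: a ~$3,000 server amortized over 3 years, plus assumed
# annual power/cooling ($500) and a share of admin labor ($1,200).
in_house_annual = 3000 / 3 + 500 + 1200  # ~ $2,700/year

print(f"cloud: ${cloud_annual:,.0f}/yr   in-house: ${in_house_annual:,.0f}/yr")
```

And that comparison flatters the in-house case: a cloud instance that runs only during business hours, or only for a project’s lifetime, costs a fraction of the always-on figure.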

Looking Back

While it’s clear that the concept of cloud computing isn’t new, some important industry changes – more psychological and organizational than technological – had to take place before widespread adoption could happen. And even then, it took some simple commercial implementations to prove the point. Too bad these weren’t around a few years earlier, during the “utility computing” era.

Watching this unfold, the lessons *I* learned – or at least the explanations of this effect I’ve arrived at – are these:
  • Psychology/attitude shifted: The more broadly OS virtualization was adopted, the more accepting IT’s attitudes became of automation and of logical control.
  • Technology change was replaced by operational change: The new approach is more a change in operational model than a technology upheaval. The way users interacted with the cloud was appealing and nearly viral.
  • Value was immediate: The “new” cloud economics were (and are) usually so compelling that they forced IT to take a second look. This started with simple consolidation economics, but has expanded well beyond that.
  • Broad availability accelerated adoption: Even a few commercially available cloud providers were enough to provide immediate proof-points that the new model was here to stay. And purchasing this technology was as simple as entering a credit card number.

Going forward, I expect these four “pressure points” (and perhaps others) will continue to accelerate the use and adoption of internal clouds, public clouds, and more. In future blog posts I’ll begin to look at how to further mainstream cloud (and automation) adoption, as it serves to accelerate improvements to the business’s bottom line.

4 comments:

Tim Johnson said...

More questions than comments.

Who was the operational change for? How difficult was it to enact those changes and how were the improvements measured? Who noticed the improvements?

How was the value measured and who noticed? How were they impacted by the changes?

I'm very interested in the business drivers behind the move to cloud and how companies measure them.

Thanks.

Tj

B. Riley said...

Excellent post. I would also submit that advances in the equipment we use to provide these services has a lot to do with it.

Back during Cloud 1.0, in order to provide enough performance for customers who were multi-tenant on hardware and storage, you had to make them non-multi-tenant. Then it became cost prohibitive b/c the "cloud" provider is buying the same thing you'd have to buy for a non-cloud install. And then he has to try and make money.

Now, with VMware's huge advances, and the major performance boost given to us by the latest chips, and storage, we can provide multi-tenancy, while still giving the client excellent levels of performance that aren't impacted as much by the other tenants.

This reason alone is a huge factor in why it happened 5 years after the first attempt.

Ken Oestreich said...

Tim -

All good quantitative questions. I wish I'd documented them. Back at Cassatt we'd done a study of how forms of Automation vastly reduced the manual change management steps in ITIL v3 (I think I have a blog post on that somewhere).

The big takeaway was simplicity and speed. And some of the quantitative ways I'd measure it would be: reduction in runbook steps, reduction in manual ITIL/ITSM intervention steps, reduction in time to accomplish individual change-management steps, and the like.

Is that what you're asking?

Ken Oestreich said...

Brandon -

Agree that more powerful processing resources also helped spur the cloud movement.

But even with modestly-powered processors (and even with non-virtualized processors) you can still create cloud-like simplicity, performance and scale-out.

But to your point, the advent of powerful processors made virtualization & consolidation finally worthwhile. And that, of course, started the tipping-point for the industry's acceptance of logical control and automation.
