Monday, January 24, 2011

Why Utility Computing Failed (But Cloud Computing Didn't)

Since about 2006 I’ve been involved with data center IT automation, starting at Cassatt, one of the first companies trying to automate infrastructure components in the data center. Rob Gingell, the CTO, had a design principle of “service-level automation,” in which the variable being monitored and maintained was the service, not the server. That was a revolutionary thought.

The technology behind this was a combination of orchestrating physical and virtual devices, automatically composing the appropriate infrastructure stacks to keep the service’s SLA within pre-defined bounds. And it absolutely worked!  The best market description we had for this technology was “Utility Computing,” which drew on the analogy of electrical utilities: no matter what the draw (load), supply would always be generated or retired (elasticity) to keep up with it.
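
To make the idea concrete, here’s a minimal sketch of a service-level control loop – my own illustration, with invented metric names and thresholds, not Cassatt’s actual implementation. The point is simply that the loop watches a service metric and grows or shrinks the pool of machines behind it:

```python
import random

SLA_MAX_LATENCY_MS = 200      # illustrative service-level target, not a real SLA
servers = 2                   # nodes currently allocated to the service

def measure_latency(active_servers):
    """Stand-in for real service monitoring: synthetic demand spread over the pool."""
    demand = random.uniform(100, 600)
    return demand / active_servers

# The controller never reasons about individual servers; it only reacts to the
# service metric, adding capacity when the SLA is threatened and retiring it
# when there is ample headroom -- the essence of "service-level automation."
for tick in range(10):
    latency = measure_latency(servers)
    if latency > SLA_MAX_LATENCY_MS:
        servers += 1          # compose more infrastructure under the service
    elif latency < SLA_MAX_LATENCY_MS / 2 and servers > 1:
        servers -= 1          # retire capacity the service no longer needs
    print(f"t={tick}: latency={latency:.0f} ms, servers={servers}")
```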
 
But selling the Utility Computing model, and service-level automation technology, was hard, if not impossible. We’d frequently run successful POCs and demonstrate the product, but the sale inevitably stalled. The reasons were many and varied, frequently tied to the ‘psychographics’ of the buyers. But overall, we could point to a few recurring problems:
  • Automation was scary: The word “Automation” frequently scared off IT administrators. They were accustomed to complete control of their hand-crafted infrastructure, and visibility into every layer. If they couldn’t make and see the change themselves, they didn’t trust that the system actually worked.
  • Lack of market reference points: Peers in the market hadn’t tried this stuff either, and there was no broad acceptance that utility computing was being adopted.
  • Inflexible Process: ITIL and ITSM procedures were designed to govern manual IT control, and had no way to incorporate automated approaches to (for example) configuration management.
  • Organizational fear: There was usually the unstated fear that utility computing automation would obviate the need for certain jobs, if not entire IT organizations. Plus, the systems spanned multiple IT organizations, and it was never clear which existing organization should be put in charge of the new automation.
  • Multiple Buyers: Because Utility Computing touched so many IT organizations, the approval process necessarily included many of them. Getting the thumbs-up from a half-dozen scared organizations was hopeless. Even if the CxO mandated utility computing, implementation was inevitably hog-tied.
Enter Virtualization

Somewhere around 2007, OS virtualization began to go mainstream. And its value proposition was simple: consolidate applications, reduce hardware sprawl. It was a no-brainer.

But just below the surface, virtualization had an interesting effect on IT managers: it began to loosen their insistence on physical control and physical management of servers, making them more at ease with logical control instead.

As consolidation initiatives penetrated data centers, additional virtualization management tools followed. And with them, more automated functions. And with each new function came IT’s incremental comfort with automating logical data center configurations.

And Then, Commercial Examples

At just about the same time, Amazon Web Services had begun to commercially offer virtual machines through EC2, the Elastic Compute Cloud. They could be had with nothing more than a credit card, and were charged for by the hour. IT end-users now had simple – if sometimes only experimental – access to a truly automated, logical infrastructure, one where all “hands-on” aspects of configuration were literally masked inside a black box.
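
For a sense of just how little “hands-on” work was involved, here is roughly what provisioning an instance looked like with the boto library of that era – a sketch only, with a placeholder region, AMI ID, and instance type, and no error handling:

```python
import boto.ec2

# Region, AMI ID, and instance type below are placeholders for illustration.
conn = boto.ec2.connect_to_region('us-east-1')
reservation = conn.run_instances('ami-12345678',        # placeholder machine image
                                 instance_type='m1.small',
                                 min_count=1, max_count=1)
instance = reservation.instances[0]
print("Launched", instance.id)           # a running server, billed by the hour

# ...use it for an experiment, then throw it away...
conn.terminate_instances(instance_ids=[instance.id])
```

No change tickets, no procurement cycle: the infrastructure request and the infrastructure itself collapse into a dozen lines of code.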

Now the industry had its proof-point: There were times when full-up IT automation, without visibility into hardware implementation, worked and was useful.

Use of EC2 (initially) lay outside the control of IT management and outside IT’s organizational boundaries. Developers and one-off projects could leverage it without fear of pushback from IT – usually because IT never even knew about its use.

Once IT management acknowledged that EC2 (and similar services) was being used, they finally had reason to look more closely. And the revelations were telling: How was it that the annualized cost of a medium-sized server was lower than an in-house implementation could possibly hope to achieve? How come configuration and tear-down were so simple? Finally, IT had to look in the mirror and accept that this thing called cloud computing might be here to stay.
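
A back-of-envelope comparison shows why those numbers were so hard to ignore. All of the figures below are my own illustrative assumptions, not AWS pricing or anyone’s actual cost data:

```python
# Illustrative, assumed numbers only -- adjust to your own environment.
hourly_rate   = 0.10                      # assumed on-demand price for a medium instance, $/hr
cloud_annual  = hourly_rate * 24 * 365    # roughly $876/yr if it runs around the clock

server_capex  = 3000.0                    # assumed purchase price, amortized over 3 years
power_cooling = 600.0                     # assumed annual power and cooling
admin_labor   = 1500.0                    # assumed per-server share of admin time per year
inhouse_annual = server_capex / 3 + power_cooling + admin_labor

print(f"Cloud (always on): ${cloud_annual:,.0f}/yr")
print(f"In-house estimate: ${inhouse_annual:,.0f}/yr")
```

And the gap only widens for workloads that don’t need to run around the clock, since the cloud instance can simply be shut off.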

Looking Back

While it’s clear that the concept of cloud computing isn’t new, some important industry changes – more psychological and organizational than technological – had to take place before widespread adoption would happen.  And even then, it took some simple commercial implementations to prove the point. Too bad these weren't around a few years earlier during the "utility computing" era.

Watching this unfold, the lessons *I* learned – or at least the explanations of this effect I’ve examined – have been:
  • Psychology/Attitude shifted: The more broadly OS virtualization was adopted, the more accepting IT’s attitudes became of automation and of logical control.
  • Technology change was replaced by operational change: The new approach was more a shift in operations than a technology upheaval. The way users interacted with the cloud was appealing and spread almost virally.
  • Value was immediate: The “new” cloud’s economic evidence was – and is – usually so compelling that it forced IT to take a second look. This started with simple consolidation economics, but has expanded well beyond that.
  • Broad availability accelerated adoption: Even a few commercially available cloud providers were enough to provide immediate proof-points that the new model was here to stay. And purchasing this technology was as simple as entering a credit card number.
Going forward, I expect these four (perhaps more) “pressure points” will continue to accelerate the use and adoption of internal clouds, public clouds, and more. In future posts I’ll begin to look at how to further mainstream cloud (and automation) adoption, as it serves to accelerate improvements to the business’s bottom line.