Monday, October 6, 2008

Would you buy an Amazon EC2 appliance?

Before you scream "a what?" I'm only posing this as a thought experiment...

But the concept was recently put forth as an illustration by an attendee at last week's SDForum. I thought about it for a few minutes and realized that the concept isn't as crazy as it first sounds. In fact, it implies that major changes for IT are on the way.

First of all, the idea of a SaaS provider or web service provider creating a physical appliance for the enterprise is not new. There's the Google search appliance, but I also expect providers like Salesforce.com to do the same in the near future. (There are some very large enterprises that want to be 100% sure that their critical/sensitive data is resident behind their firewall, and they want to bring the value of their SaaS provider inside.)

So I thought: what would I expect an Amazon EC2/S3 appliance to do? Similar to the way Google's appliance provides internal search, I'd expect an Amazon appliance to create an elastic, resilient set of compute and storage services inside a company, and it could support any and all applications regardless of user demand. It would also offer cost transparency, i.e. I'd know exactly what it costs to operate each CPU (or virtual CPU) on an hourly basis. Same goes for storage.
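To make that a little more concrete, here's a rough sketch (in Python) of the kind of interface I'd imagine such an appliance exposing -- elastic capacity carved out of your own servers, plus per-CPU-hour cost transparency. Every name here is invented for illustration; this is not a real Amazon API, just a thought-experiment in code.

```python
# Purely hypothetical sketch of what an "EC2 appliance" might feel like to use.
# Invented names throughout -- not a real Amazon (or anyone's) API.

class ApplianceCluster:
    """An elastic pool of compute carved out of existing in-house servers."""

    def __init__(self, total_cpus, cost_per_cpu_hour=0.10):
        self.total_cpus = total_cpus            # poolable horsepower behind the firewall
        self.cost_per_cpu_hour = cost_per_cpu_hour
        self.allocated = {}                     # app name -> CPUs currently assigned

    def request_cpus(self, app, count):
        """Grow an application's share of the pool, if capacity allows."""
        free = self.total_cpus - sum(self.allocated.values())
        granted = min(count, free)
        self.allocated[app] = self.allocated.get(app, 0) + granted
        return granted                          # may be less than asked for -- the "limitation"

    def release_cpus(self, app, count):
        """Shrink an application's share when demand falls off."""
        self.allocated[app] = max(0, self.allocated.get(app, 0) - count)

    def hourly_cost(self, app):
        """Cost transparency: what this app costs to run, per hour."""
        return self.allocated.get(app, 0) * self.cost_per_cpu_hour


cluster = ApplianceCluster(total_cpus=400)
cluster.request_cpus("order-processing", 64)
print(cluster.hourly_cost("order-processing"))   # 6.4 dollars/hour, metered like EC2
```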

This approach would have several advantages (plus one small limitation) compared to how IT is operated today. The limitation: its "elasticity" would be bounded by the poolable compute horsepower within the enterprise. But the advantages would be huge -- who wouldn't like a cost basis of ~$0.10/CPU-hour from their existing resources? Who wouldn't like to shrug off traditional capacity planning? Etc., etc. AND they'd be able to maintain all of their existing compliance and security architectures, since they'd still be using their own behind-the-firewall facilities.

Does it still sound crazy so far?

NOW what if Amazon were to take one little extra step. Remember that limitation above -- the what-if-I-run-out-of-compute-resources issue? What if Amazon allowed the appliance user to permit reaching-out to Amazon's public EC2/S3? Say you hit peak compute demand. Say you had a large power outage or a series of hardware failures. Say you were rolling-out a new app and you couldn't accurately forecast demand. This feature would be valuable to you because you'd have practically infinite "overflow" -- and it would be valuable to Amazon since it would drive incremental business to their public infrastructure.
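Here's a minimal sketch of that "overflow" decision, building on the hypothetical ApplianceCluster above. The burst_to_public_ec2 function is a stand-in for whatever real provisioning mechanism Amazon might expose -- the real mechanics (credentials, image sync, data locality) would obviously be far more involved.

```python
# Hypothetical "overflow" logic: use the internal appliance first,
# and only burst out to Amazon's public EC2/S3 when the local pool runs dry.
# burst_to_public_ec2 is invented purely for illustration.

def burst_to_public_ec2(app, count):
    print(f"Provisioning {count} public EC2 instances for {app}")
    return count

def handle_demand_spike(cluster, app, cpus_needed, overflow_allowed=True):
    """Satisfy a demand spike locally if possible, otherwise overflow."""
    granted = cluster.request_cpus(app, cpus_needed)
    shortfall = cpus_needed - granted
    if shortfall > 0 and overflow_allowed:
        # The appliance owner has opted in to reaching out to the public cloud.
        granted += burst_to_public_ec2(app, shortfall)
    return granted

# Peak demand, a power outage, or an unforecastable new-app rollout:
handle_demand_spike(cluster, "new-web-app", cpus_needed=500)
```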

To be honest, I have no idea what Amazon is planning. But I DO know that the concept of commercially-available software/hardware to create internal "clouds" is happening today. And not just in the "special case" of VMware's "VDC-OS", but in a more generalized approach.

Companies like Cassatt can -- today -- take an existing compute environment, and transform its operation so that it acts like an EC2 (an "internal cloud"). It responds to demand changes, it works around failures, and it optimizes how resources are pooled. You don't have to virtualize applications if you don't want to; and if you do, you can use whatever VM technology you prefer. It's all managed as an "elastic" pool for you. And metered, too.
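For the curious, here's a cartoon (in Python) of the kind of control loop such an internal-cloud manager runs: watch demand, route work around failed nodes, and meter everything. To be clear, this is not Cassatt's actual implementation or anyone else's product -- just the general idea reduced to a few lines.

```python
# A cartoon of an "internal cloud" control loop -- purely illustrative,
# not any vendor's real implementation: respond to demand, skip failed
# nodes, and keep a metering record of who used what.

def reconcile(pool, demand, failed_nodes, usage_log):
    """One pass of the control loop over a pool of physical or virtual nodes."""
    healthy = [n for n in pool if n["name"] not in failed_nodes]
    for app, cpus_wanted in demand.items():
        assigned = 0
        for node in healthy:
            if assigned >= cpus_wanted:
                break
            take = min(node["free_cpus"], cpus_wanted - assigned)
            node["free_cpus"] -= take
            assigned += take
        usage_log.append((app, assigned))      # metering: who got how much
    return usage_log

pool = [{"name": "rack1-srv1", "free_cpus": 16},
        {"name": "rack1-srv2", "free_cpus": 16},
        {"name": "rack2-srv1", "free_cpus": 16}]
log = reconcile(pool, {"crm": 20, "batch-reports": 8}, failed_nodes=[], usage_log=[])
print(log)   # [('crm', 20), ('batch-reports', 8)]
```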

To be sure, others are developing similar approaches to transforming how *internal* IT is managed. But if you are one of those who believes in the value of a "cloud" but wouldn't use one, maybe you should think again.

Sound crazy?
