
Another morning keynote (which I unfortunately missed most of) was delivered by Lew Tucker, Sun Microsystems' new CTO of Cloud Computing (and also a friend and former colleague). He's quite a visionary, and went so far as to suggest that computing resources of tomorrow will be brokered/arbitraged based on specializations, costs, etc.
One particularly lively panel was hosted by Chris Preimesberger of eWEEK, with panelists from Salesforce.com, Intacct, SAP, RingCentral and Google. There was some light discussion about cloud differentiation, interaction, and standard approaches to describing cloud SLAs. Most agreed that third-party businesses would, at some point, broker between providers. The other enlightening discussion focused on capacity planning for the cloud -- what if a user scaled from ten to ten thousand servers in a few days or weeks? Could services like Amazon handle this? Consistently -- and impressively -- the panelists agreed that such scale issues were "a drop in the bucket" compared to the vastness of what these large services provide on a daily basis.
The question that drew the most spontaneous applause came from an audience member and was posed to the panel (though probably directed at Rajen Sheth of Google): how could we *not* assume service lock-in, when Force.com has one platform model and Google App Engine has another? (A good point elucidated by James Urquhart some time ago.) The Google response focused on "providing the best possible service for customers" but was clearly a dodge. (This author, by the way, suggests that SaaS and PaaS models will follow the same proprietary, fragmented path that Unix and Linux did.)
In an afternoon panel led by David Brown of AMR Research, the main question was whether (or to what degree) cloud computing is disruptive. The panel consisted of hardware, software and services vendors from Elastra, Egenera, Joyent and Nirvanix. The panelists agreed that there are different types of disruption, depending on where you sit. From an infrastructure-management perspective, internal cloud architectures can be disruptive to IT Ops, since they change how resources are applied and shared, as well as the fundamentals of capacity planning. Cloud architectures can also be disruptive to traditional forms of hosting and outsourcing, due to their pay-as-you-go approach.
I will say that Jason Hoffman, Founder of Joyent, clearly stood out on the panel as a visionary in this field. Keep an eye on this guy. His take on disruption was that if "cloud" simply means Infrastructure-as-a-Service, then it's really just another form of hosting, and not very disruptive. But if clouds are applied to support business needs through policy (i.e., policies that dynamically express SLAs, geographic compute locations, costs, replication, failover, etc.), then they become very disruptive: IT administration would shift from scripting and fire-fighting to policy development and policy modification.
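To make that shift concrete, here is a minimal sketch of what policy-driven administration might look like: instead of scripting individual servers, an administrator declares constraints (SLA, geography, cost, replication) and an engine selects matching providers. All of the names, fields, and numbers below are illustrative assumptions of mine, not any vendor's actual API.

```python
# Hypothetical policy: the admin states *what* is required, not *how* to get it.
policy = {
    "min_uptime_sla": 0.999,        # minimum acceptable provider SLA
    "allowed_regions": {"us-east", "eu-west"},
    "max_cost_per_hour": 0.50,      # dollars per instance-hour
    "replicas": 2,                  # replicate across this many providers
}

# Hypothetical catalog of available providers/regions.
providers = [
    {"name": "cloud-a", "region": "us-east", "uptime_sla": 0.9995, "cost": 0.40},
    {"name": "cloud-b", "region": "ap-south", "uptime_sla": 0.9999, "cost": 0.30},
    {"name": "cloud-c", "region": "eu-west", "uptime_sla": 0.9990, "cost": 0.45},
]

def plan_placement(policy, providers):
    """Return the cheapest set of providers that satisfies every policy constraint."""
    matches = [
        p for p in providers
        if p["region"] in policy["allowed_regions"]
        and p["uptime_sla"] >= policy["min_uptime_sla"]
        and p["cost"] <= policy["max_cost_per_hour"]
    ]
    if len(matches) < policy["replicas"]:
        raise RuntimeError("policy cannot be satisfied by available providers")
    # Among qualifying providers, prefer the cheapest.
    return sorted(matches, key=lambda p: p["cost"])[: policy["replicas"]]

print([p["name"] for p in plan_placement(policy, providers)])
# → ['cloud-a', 'cloud-c']  (cloud-b is excluded: wrong region)
```

The point of the sketch is the inversion Hoffman describes: when requirements change, the administrator edits the policy dictionary rather than rewriting deployment scripts, and the engine re-plans placement accordingly.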
Finally, I will point out that many more attendees would use clouds and/or broker cloud services than would actually *make* the clouds (IaaS) in the first place -- again attesting to the point I made earlier this week that building them is a lot harder to do, and that only really sophisticated vendors will take it on.