1. Software virtualization.
No surprise here. Abstracting software from the underlying CPU yields mobility, consolidation, and degrees of scalability. It also simplifies automated management and portability of workloads, such as with virtual appliances / AMIs. Aside from a few managerial kinks still being worked out, this technology is already the de facto standard, especially since we're seeing it become a commodity play and sediment into other products.
2. Infrastructure orchestration / unified computing.
This is the one we're all beginning to hear about. As I recently outlined, this technology is a perfect complement to software virtualization -- it essentially gives "mobility" to infrastructure, rather than to software. It allows IT operations to define I/O, storage connectivity and networking entirely in software, resulting in stateless and re-configurable CPUs. Egenera was the pioneer in this area, but the market is now getting a shot in the arm from Cisco's UCS announcement.
Unified computing / Infrastructure orchestration is valuable because it enables a highly-reliable, scalable and re-configurable infrastructure -- a perfect platform for physical *and* virtual software. It permits IT to "wire-once" and then create CPU configurations (virtual NICs, HBAs, networks/VLANs, storage connections) using a unified/consolidated networking fabric. Plus, it is a simple, elegant, clearly more efficient approach. Think of this as provisioning hardware using software. This approach has numerous positive properties -- not the least of which is that you can clone hardware configurations when you need to (a) scale, (b) migrate, (c) fail over, and (d) recover from entire system failures [disasters]. Again, regardless of physical or virtual payloads.
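To make "provisioning hardware using software" concrete, here's a minimal sketch of the idea in Python. This is not Egenera's or Cisco's actual API -- the `ServerProfile` class, its fields, and all the identifiers in it are invented for illustration. The point is just that a server's I/O identity (MACs, WWNs, VLANs, boot LUN) becomes a data structure you can copy and re-apply to any stateless CPU:

```python
from dataclasses import dataclass, field

@dataclass
class ServerProfile:
    """A software-defined hardware identity, applicable to any stateless blade."""
    name: str
    virtual_nics: list = field(default_factory=list)   # (MAC, VLAN) pairs
    virtual_hbas: list = field(default_factory=list)   # WWNs for SAN access
    boot_lun: str = ""                                 # where the blade boots from

def clone_profile(profile: ServerProfile, new_name: str) -> ServerProfile:
    """Cloning a profile is the software analogue of re-wiring a server:
    the same I/O identity can be re-applied to a fresh CPU for
    fail-over or disaster recovery."""
    return ServerProfile(
        name=new_name,
        virtual_nics=list(profile.virtual_nics),
        virtual_hbas=list(profile.virtual_hbas),
        boot_lun=profile.boot_lun,
    )

# All identifiers below are made-up examples.
web01 = ServerProfile("web01",
                      virtual_nics=[("00:25:b5:00:00:01", "vlan100")],
                      virtual_hbas=["20:00:00:25:b5:00:00:01"],
                      boot_lun="san-lun-0042")
standby = clone_profile(web01, "web01-standby")
```

Because the standby profile carries the same storage connection and network identity, bringing up a replacement CPU means applying the clone -- no physical re-cabling, regardless of whether the payload is physical or virtual.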
We'll see this technology begin to accelerate in the market. Promise.
3. Intelligent software provisioning
"Huh?" I hear you say? Yes! While I'm not sure what this market segment may eventually be called, it represents the third critical data center managemnet component. #1 gives software mobility; #2 yields infrastructure flexibility. And #3 is how the actual software (physical, virtual, appliances, AMIs, etc.) is constructed and "doled-out" as a workload on #1 and #2.
My eyes were opened after learning more about a company called FastScale. Picture an intelligent software provisioning system that knows the minimum required set of software libraries needed to run an OS or application. As it turns out, this is usually only around 10%-15% of the multi-gig bag-of-bits you try to boot every time you bring up a server. And that even includes the VM, where virtual systems are involved.
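The core idea is a dependency-closure computation: start from the application, walk its dependency graph, and ship only what's reachable. Here's a toy sketch, assuming a hand-built dependency map -- this is not FastScale's actual mechanism, and the package names and graph are invented:

```python
# Toy dependency graph: the "full image" contains six packages, but the
# webapp only transitively requires four of them. (All names invented.)
deps = {
    "webapp": ["libc", "libssl"],
    "libssl": ["libc", "libz"],
    "libc":   [],
    "libz":   [],
    "libX11": ["libc"],    # present in the full image, never used by webapp
    "gnome":  ["libX11"],  # ditto
}

def closure(root, graph):
    """Return the transitive set of packages required to run `root`."""
    needed, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node not in needed:
            needed.add(node)
            stack.extend(graph[node])
    return needed

minimal = closure("webapp", deps)
# Only 4 of the 6 packages are actually needed -- the rest never
# have to cross the network at boot time.
```

In the real system the graph would come from inspecting binaries and runtime behavior rather than a hand-written dict, but the payoff is the same: boot images shrink to the reachable subset.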
The result? Four really nice properties:
a) Speed. Applications get up and running an order of magnitude faster. Not having to move as many bits over the network to boot a given server is a real time and money saver.
b) More efficient consolidation. With smaller software footprints, more VMs, appliances, etc. can fit on a given memory footprint. That means that denser consolidation is frequently possible -- not to mention $ savings on those gigs of memory you have to buy when you consolidate.
c) Inherent configuration management. With a database of all libraries and bits, plus knowledge of where you put them, you can always monitor configurations, verify compliance, etc. Plus, you can track what patches went where (and frequently, you may find you don't even need a patch at all if it doesn't touch the reduced set of libraries you're actually using!)
d) Ability to provision into any form of container: In other words, this system can provision onto a bare-metal CPU, into a VM, or for that matter, into an appliance like an AMI if you're using a compute cloud. Wow, very neat.
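Property (c) falls out almost for free: because the provisioning system recorded exactly which bits it placed where, drift detection reduces to comparing what's on a host against the deployment database. A minimal sketch, with invented file names and contents:

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# What the provisioner recorded when it built the host (invented examples).
deployed_db = {
    "/lib/libssl.so": sha256(b"libssl-1.0"),
    "/bin/webapp":    sha256(b"webapp-2.3"),
}

def audit(current_files: dict) -> list:
    """Return paths whose contents no longer match the deployment record."""
    return [path for path, digest in deployed_db.items()
            if current_files.get(path) != digest]

# A host where someone hand-patched libssl out-of-band:
host = {
    "/lib/libssl.so": sha256(b"libssl-1.0-patched"),
    "/bin/webapp":    sha256(b"webapp-2.3"),
}
drifted = audit(host)   # → ['/lib/libssl.so']
```

Real compliance products track far more (permissions, packages, registry state), but the principle is the same: the deployment record *is* the compliance baseline.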
This intelligent provisioning approach is also highly complementary to existing compliance and configuration management products like Opsware (HP) or BladeLogic (BMC).
Summary
So what if you have all three of these technologies? You'd have a data center where:
- Workloads were portable, and relatively platform-independent
- Infrastructure was instantly re-configurable and adapted to business conditions, failures, etc.
- Software could be distributed and brought-up on the order of seconds, allowing near-instantaneous adaptation to scale, business demand or failures.
2 comments:
Ken:
Agreed that intelligent provisioning is a key technology not only for unified computing but also for cloud computing models. The stumbling block always seems to be how well existing software ports into this model. Folks always test against the usual suspects (MS, SAP, etc.), but I often find it's the home-grown or custom apps -- the business-critical ones -- that cause problems. I hear this lament from customers as they simply try to move to virtual servers. I see a number of vendors working on this "stack" concept, so hopefully someone will be able to crack the code.
As I posted on my blog a few weeks ago, I think the real magic starts to happen when we start to see apps and systems specifically written for this type of infrastructure environment.
Omar Sultan
Cisco
Thanks Omar. Agreed that this technology is still nascent. But the *real* trick is getting nearly *any* app to run efficiently in a hosted/elastic environment.