Monday, January 12, 2009

IT's Big Blind Spots for 2009 (Volume 2)

Last week I wrote about some "Big Blind Spots" I've noticed IT Operations -- and vendors -- suffering from. My opinion is that these blind spots are largely due to marketing hype around the glitzier products and technologies, to the detriment of data center operations, which as a result still may not recognize where the biggest unsolved problems lie.

Without being too provocative, I'll try to highlight some observations I've made during discussions with analysts, customers and end-users. Over the past few months, it's become clearer where the industry is still suffering from BBSs (Big Blind Spots), or at least from chronic myopia. Knowing where the blind spots are makes for better decision-making and, hopefully, better products.

1. The industry assumes “agility” = “virtualization”
This is plain misleading. True, virtualization of software & OSs (via hypervisors or containers or what have you) yields significant mobility benefits. But this agility is at the software level only, and it's limited by the physical infrastructure underneath it.

Here's the Big Blind Spot: Virtualization vendors fail to mention the manual administration needed for physical infrastructure. Take, for example, a consolidated server that has a dozen VMs on it. It's probably been outfitted with 4 or more NICs, each of which could sit on a different VLAN. So, if you want a failover or DR strategy for this server, or you want to migrate VMs off of it, you're screwed unless you have another identical physical server pre-configured as a host... *including* the 4 or more identical NICs already inserted and ready to go. So the "agility" claim for virtualization comes with a caveat -- that your physical hardware, I/O and networking are agile too. Hmmm.
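
To put that caveat in concrete terms, here's a minimal sketch (the data model and names are mine, not any vendor's API) of the pre-flight check that physical reality imposes on VM mobility:

```python
# Minimal sketch (hypothetical data model, not any vendor's API): before
# failing over or migrating VMs off a host, check whether the target box
# physically matches the source's NIC count and VLAN trunking. The
# software-level "agility" only exists if this check passes.

from dataclasses import dataclass, field

@dataclass
class HostNetworkProfile:
    name: str
    nic_count: int
    vlans: set = field(default_factory=set)   # VLANs trunked to this host's ports

def can_host_failover(source: HostNetworkProfile, target: HostNetworkProfile) -> bool:
    """True only if the target has at least as many NICs and every VLAN
    the source's VMs depend on is already trunked to it."""
    return (target.nic_count >= source.nic_count
            and source.vlans.issubset(target.vlans))

# Example: a consolidated host with 4 NICs across 4 VLANs, and a spare
# that wasn't pre-configured identically.
primary = HostNetworkProfile("esx-prod-01", nic_count=4, vlans={10, 20, 30, 40})
spare   = HostNetworkProfile("esx-spare-07", nic_count=2, vlans={10, 20})
print(can_host_failover(primary, spare))   # False -- no failover target without manual rework
```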

2. The industry assumes “virtualization” = “simplification”
We've heard all of this before. There is certainly simplification in the ability to *create* new virtual servers. From a development & test perspective, this is a huge breakthrough for developers who need to build up and tear down resources.

But here are the Big Blind Spots, which many have begun to point out:
(a) Virtualization creates more objects to lifecycle-manage, more objects to layer security onto, and more objects to simply account for. Sure, there are management tools out there, and automated ones on the way, but nothing changes the growing "VM sprawl." (A toy accounting sketch follows this list.)
(b) Consolidated servers require more I/O per physical server. As I pointed out above, NIC counts, HBA counts, and cabling density will probably increase -- and so will your networking headaches.
(c) Virtualization puts more VMs at risk if/when hardware fails. Yep, this can be solved for (see below), but it doesn't exactly bolster the idea that virtualization simplifies.
(d) Virtualizing part of your data center means you now have at least two management silos... one for your virtual infrastructure, and one for your physical servers. That doesn't bolster the simplification argument, either.
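
Here is that toy example of points (a) and (b) -- invented inventory data, nothing more -- showing how the accounting burden and the per-host I/O density grow together:

```python
# Toy illustration with invented inventory data: counting VMs per physical
# host shows both the sprawl you now have to account for and why each
# consolidated host tends to need more NICs/HBAs, not fewer.

from collections import Counter

# Hypothetical inventory of (vm_name, physical_host) pairs
inventory = [
    ("web-01", "host-a"), ("web-02", "host-a"), ("db-01", "host-a"),
    ("app-01", "host-a"), ("app-02", "host-a"), ("test-01", "host-b"),
    ("test-02", "host-b"),
]

vms_per_host = Counter(host for _, host in inventory)
for host, vm_count in sorted(vms_per_host.items()):
    # Assumed rule of thumb (not a vendor figure): roughly one NIC per
    # 3 VMs, plus a pair for management/migration traffic.
    nics_needed = 2 + -(-vm_count // 3)   # ceiling division
    print(f"{host}: {vm_count} VMs to track, ~{nics_needed} NICs to cable and configure")
```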

3. The industry associates “provisioning” with “software”
This one really annoys me. The high profiles of Opsware (now HP) and BladeLogic (now BMC) have associated "automated" provisioning with software. True, the past few years have seen huge steps forward in configuration control and in provisioning images to servers.

But consider this Big Blind Spot: you still have to provision the Iron. In a virtualized environment, every change beyond a relatively minor one means the physical infrastructure underneath has to change with it. When will physical provisioning of NICs, HBAs, out-of-band management, network, and storage get the high profile that software provisioning gets? (Stay tuned regarding I/O virtualization.)
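
To make the contrast explicit, here's a rough sketch (hypothetical functions, not Opsware's or BladeLogic's APIs) of the two halves of provisioning; only the second half has gotten the automation glory:

```python
# Rough sketch with hypothetical functions -- not Opsware, BladeLogic, or
# any real tool's API. The point: the software image can't land until the
# iron underneath it has been provisioned, and that half is still largely manual.

def provision_physical(server: dict, nics: int, hbas: int, vlans: list, san_zones: list) -> dict:
    """The low-profile half: NICs, HBAs, VLAN trunking, SAN zoning."""
    server.update(nics=nics, hbas=hbas, vlans=vlans, san_zones=san_zones)
    return server

def provision_software(server: dict, image: str) -> dict:
    """The high-profile half: push an OS/hypervisor image to the box."""
    server["image"] = image
    return server

box = {"name": "blade-12"}
box = provision_physical(box, nics=4, hbas=2, vlans=[10, 20, 30], san_zones=["zone-a"])
box = provision_software(box, image="rhel5-hypervisor-host")   # only possible after the physical step
print(box)
```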

4. The industry assumes HA & DR are solved by VMs
True, the #1 or #2 reason virtualization is being adopted is its ability to do HA... that is, if the software fails, another virtual host can be tapped for the job. The same goes for DR, with products like VMware SRM and other virtualization providers aspiring to break into that space as well.

But take note of IT's Big Blind Spot: This assumption presumes that (a) 100% of what you want to provide HA/DR for is 100% virtualized, (b) all apps are virtualized on identical VM vendor technology, and (c) recovery equipment is pre-provisioned with VM software and pre-configured nearly identically to the primary servers. As an example, consider an SAP (or other composite application) implementation. You've got a bunch of servers/services, possibly including DBs. If they are certain Oracle database servers, you *can't* virtualize them (thanks to licensing restrictions). So, to cover this SAP app with HA, you're either screwed, or you need two or more HA products to cover it. Net-net: for certain environments, VM-based availability is certainly a help, but don't "drink the Kool-Aid" that it's a panacea.
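
A rough illustration of that coverage gap (an invented service list, not an SAP reference architecture): any piece of the composite app that stays physical falls outside VM-based HA and needs a second mechanism.

```python
# Rough illustration only -- invented services, not an SAP reference
# architecture. VM-based HA covers the virtualized tiers; anything left on
# physical iron (e.g. a database you can't or won't virtualize) needs a
# second HA product, so you end up operating at least two.

composite_app = {
    "sap-central-instance": {"virtualized": True},
    "sap-app-server-01":    {"virtualized": True},
    "sap-app-server-02":    {"virtualized": True},
    "oracle-db-01":         {"virtualized": False},   # stays on physical hardware
}

covered_by_vm_ha = [name for name, attrs in composite_app.items() if attrs["virtualized"]]
needs_second_ha  = [name for name, attrs in composite_app.items() if not attrs["virtualized"]]

print("Covered by VM-based HA:", covered_by_vm_ha)
print("Needs a second HA product:", needs_second_ha)
```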

-----

Now, all is not lost. Looking across these "blind spots," it becomes pretty clear that the limiting factor is the physical infrastructure's ability to adapt to the changing software workloads placed on it.

Mobility and agility have been addressed "above" the hardware by VMs and containers. Now mobility and agility have to be addressed "below" the hardware -- by virtualizing and/or orchestrating I/O, NICs, HBAs and the network. The market is beginning to produce point-products to solve these issues, and vendors like Egenera have been integrating them into orchestration products for some time now.
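
To make "agility below the hardware" a bit less abstract, here's a sketch of the basic idea (a hypothetical model, not Egenera's or any vendor's actual implementation): the server's I/O identity -- MACs, WWNs, VLANs -- becomes a profile that can be re-bound to different iron, instead of being re-cabled and re-configured by hand.

```python
# Sketch of the I/O-virtualization idea (hypothetical model, not any
# vendor's actual implementation). The network/storage identity of a
# workload lives in a profile; "moving the server" means re-binding the
# profile to different physical hardware rather than touching NICs,
# cables, or switch ports.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class IOProfile:
    macs: list  = field(default_factory=list)   # virtual MACs presented to the LAN
    wwns: list  = field(default_factory=list)   # virtual WWNs presented to the SAN
    vlans: list = field(default_factory=list)   # VLANs the workload expects to see

@dataclass
class PhysicalServer:
    name: str
    profile: Optional[IOProfile] = None

def rebind(profile: IOProfile, target: PhysicalServer) -> PhysicalServer:
    """Fail over or migrate by moving the I/O identity, not by reconfiguring hardware."""
    target.profile = profile
    return target

prod_identity = IOProfile(macs=["02:00:00:00:aa:01"], wwns=["50:06:0b:00:00:aa:01"], vlans=[10, 20])
spare = PhysicalServer("blade-07")
rebind(prod_identity, spare)   # the spare now looks like the production server to the LAN/SAN
print(spare)
```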

Stay tuned for a review of the technology that will perfectly complement virtualization in 2009 and beyond: infrastructure virtualization and orchestration.

1 comment:

Anonymous said...

Ken,

Great article. I've started to get fed up with these 'virtualize up the clouds' and 'everything works from startup to enterprise' hype articles. I hope there will be a third volume of your 'IT blind spots'.

Roland