Tuesday, October 6, 2009

Differing Target Uses for IT Automation Types

One of the most oft-repeated themes at this year's VMworld was "automation." Everybody claimed they had it, but on closer inspection the term carried any number of poorly-defined meanings.

A specific angle I want to address here is infrastructure automation; that is, the dynamic manipulation of physical resources (virtualized or not) such as I/O, networking, load balancing, and storage connections -- sometimes referred to as "Infrastructure 2.0". Why is this important? Although automation of software (such as provisioning and manipulation of VMs/applications) usually captures the attention, remember that there is a whole set of physical datacenter infrastructure layers that IT Ops has to deal with as well. When a new server (physical or virtual) is created, much of this infrastructure also has to be provisioned to support it.

There are two fundamental approaches to automation I'll compare and contrast; let's loosely call them "In-Place" Infrastructure Automation and Virtualized Infrastructure Automation.

Confession: I am a champion of IT automation. The industry has evolved into a morass of technologies and resulting complexity; the way applications (and datacenters) are constructed today is not the way a greenfield thinker would do it. Datacenters are stove-piped, hand-crafted, tightly-controlled and reasonably delicate. Automating how IT operates is the only way out -- hence the excitement over cloud computing, utility infrastructure, and the "everything-as-a-Service" movement. These technology initiatives are clear indications that IT operations desires a way to "escape" having to manage its mess.

At a high-level, automation has major top-level advantages: Lower steady-state OpEx, greater capital efficiency, and greater energy efficiency. And, automation also presents challenges typical of paradigm changes: distrust, organizational upheaval, financial and business changes. The art/science of introducing automation into an existing organization is to reap the benefits, and mitigate the challenges.

As infrastructure automation moves forward, it appears to be bifurcating along two different philosophies. Each is valid, but each is appropriate for different uses:
  • "In-place" infrastructure automation: (distinct from run-book automation) Seeks to automate existing physical assets, deriving its value from masking operational and physical complexity by orchestrating in-place resources. That is, it takes the physical topology (servers, I/O, ports, addressing, cabling, switches, VMs, etc.) and orchestrates it to optimize a variable such as an SLA, energy consumption, etc.
  • Virtualized infrastructure automation: Seeks to first virtualize the infrastructure (the same assets as above) and then automate their creation, configuration and retirement. That is, I/O is virtualized, networking is frequently converged (i.e. a fabric), and network switches, load balancers, etc. are virtualized as well.
Each of these two approaches has pros and cons with which I'm familiar -- having worked for companies in each space. I'll try to elucidate a few of the high points of each:

"In-Place" Infrastructure Automation:
Examples: Cassatt (now part of CA), Scalent
  • Automates existing assets: Usually, there is no need to acquire new network or server hardware (although not all hardware will be compatible with the automation software). Thus "in-place" assets are generally re-purposed more efficiently than they would be in a manually-controlled scenario. Clearly this is one of the largest value propositions for this approach - automate what you already own.
  • Masking underlying complexity: This is a double-edged sword: while "in-place" automation simplifies operation and streamlines efficiency, the datacenter's underlying complexity is still there - e.g. the same redundant (and sometimes sub-optimal) assets to maintain, same cabling, same multi-layer switching, same physical limitations, etc.
  • Alters security hierarchy: Since assets such as switches will now be controlled by machine (i.e. the automation SW automatically manipulates addresses and ports) this architecture will necessarily modify the security hierarchy, single-point-of-failure risks, etc. All assets fall under the command of the automation software controller.
  • Broad, but not complete, flexibility: Because this approach manipulates existing physical assets, certain physical limitations remain in the datacenter. For example, physical server NICs and HBAs are what they are and can't be altered; certain network topologies might not be perfectly replicable if the physical topology doesn't closely match; and if physical load balancers aren't available, servers/ports won't have access to them. Nonetheless, if properly architected, some of these limitations can be mitigated.
  • Use with OS virtualization: This approach usually takes control of the VMM layer as well - either by driving the VM management software or by controlling the VMs directly. So, for example, you'd allow the automation manager, rather than vSphere, to manipulate VMs.
  • Installation: Usually more complex to set up and maintain, because all assets, versions, and the physical topology necessarily need to be discovered and cataloged. But once running, the system will essentially maintain its own CMDB.
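To make the in-place model concrete, here is a minimal sketch of the discover-then-orchestrate pattern described above. All class and method names are illustrative assumptions, not the API of Cassatt, Scalent, or any real product: the controller catalogs existing assets (a rudimentary CMDB) and repurposes a server by re-programming its switch ports, leaving the physical hardware untouched.

```python
# Hypothetical sketch of an "in-place" automation controller: it discovers
# existing physical assets into a catalog (a minimal CMDB), then repurposes
# them by reconfiguring switch ports rather than replacing hardware.
from dataclasses import dataclass

@dataclass
class SwitchPort:
    switch: str
    port: int
    vlan: int

@dataclass
class Server:
    name: str
    nics: list          # physical NICs are fixed; only their ports can be re-mapped
    role: str = "unassigned"

class InPlaceController:
    def __init__(self):
        self.inventory = {}  # name -> Server; doubles as the CMDB

    def discover(self, server: Server):
        """Catalog an existing asset; nothing new is purchased or created."""
        self.inventory[server.name] = server

    def repurpose(self, name: str, role: str, vlan: int):
        """Re-assign a server to a new role by re-programming its switch
        ports -- the physical topology (NICs, cabling) stays as-is."""
        server = self.inventory[name]
        server.role = role
        for port in server.nics:
            port.vlan = vlan  # in practice, pushed to the switch via SNMP/CLI
        return server

controller = InPlaceController()
controller.discover(Server("web01", nics=[SwitchPort("sw-a", 12, vlan=10)]))
web01 = controller.repurpose("web01", role="batch", vlan=20)
print(web01.role, web01.nics[0].vlan)  # batch 20
```

Note how the sketch mirrors the trade-off above: the controller gains full command of the assets (the security-hierarchy point), but the physical NICs and cabling it inherited are still there to be managed.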

Virtualized Infrastructure Automation:
Examples: Cisco UCS, Egenera, Xsigo
  • Reduction/elimination of IT components: The good news here is that through virtualizing infrastructure, redundant components can be completely eliminated. For example, only a single I/O card with a single cable is needed per server, because they can be virtualized/presented to the CPU as any number of virtual connections and networks. And, a single virtualized switching node can present itself as any number of switches and load balancers for both storage and network data.
  • Complete flexibility in configuration: Because infrastructure assets are abstracted, they can be built, retired, and repurposed on demand - networking, load balancing, etc. can be created at will with essentially arbitrary topologies.
  • Consistent/complementary to OS Virtualization models: If you think about it, virtualized infrastructure control is pretty complementary to OS virtualization. While OS virtualization logically defines servers (which can be consolidated, moved, duplicated, etc.), infrastructure virtualization similarly defines the "plumbing" and allows I/O and network consolidation, as well as movement/duplication of physical server properties to other locations.
  • New networking model: One thing to keep in mind is that with a completely virtualized/converged network, the way the network (and its security) is operationally managed changes. Organizations may have to re-think how (and by whom) network assets are created and repurposed. (Somewhat similar to coping with "VM sprawl" in the software virtualization domain.)
  • Use with OS virtualization: This approach is usually 'agnostic' to the software payload of the physical server, and is therefore neutral/indifferent to the VMM in place. Frequently the two can be coordinated, however.
  • Installation: Usually relatively simple. Few components per server, few cables, especially in a 'green field' deployment. Installation of software/BIOS on physical servers is probably not what you're used to, though.
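The virtualized model above can be sketched the same way. In this hedged illustration (the names are mine, not the APIs of UCS, Egenera, or Xsigo), a server's I/O identity lives in a logical profile of virtual NICs; binding the profile to a stateless physical node instantiates those connections on its single physical I/O card, and moving the profile carries the identity to another node:

```python
# Hypothetical sketch of the virtualized-infrastructure model: a server's
# I/O identity (virtual NICs with their MACs and networks) lives in a
# logical "profile" that can be bound to any stateless physical node.
from dataclasses import dataclass

@dataclass
class VirtualNIC:
    mac: str        # identity travels with the profile, not the hardware
    network: str

@dataclass
class Profile:
    name: str
    vnics: list

class Fabric:
    def __init__(self):
        self.bindings = {}  # physical node -> Profile

    def bind(self, node: str, profile: Profile):
        """Instantiate the profile's virtual I/O on a physical node; the
        single physical card presents whatever connections are named here."""
        self.bindings[node] = profile

    def move(self, src: str, dst: str):
        """Migration/failover: the I/O identity follows the profile, so its
        MACs and network topology reappear on the new node."""
        self.bindings[dst] = self.bindings.pop(src)

fabric = Fabric()
web = Profile("web", vnics=[VirtualNIC("02:00:00:00:00:01", "prod-vlan")])
fabric.bind("blade-1", web)
fabric.move("blade-1", "blade-2")
print("blade-2" in fabric.bindings, "blade-1" in fabric.bindings)  # True False
```

This is the "movement/duplication of physical server properties" point from the list: because nothing about the identity is burned into the hardware, repurposing a node is purely a fabric operation.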
Ideal use of these two approaches differs too. "In-Place" Infrastructure Automation is probably best-suited to an existing set of complex datacenter assets - especially in a dev/test environment. As you'd expect, a number of existing lab automation products target this market. On the other hand, Virtualized Infrastructure Automation can certainly be deployed on existing assets, but its real value is in new installations where minimal hardware/cabling/networking can be designed in from the ground up. Most of these products are aimed at production datacenters, as well as cloud/utility infrastructures.

My overall sense of the market is that adoption of "in-place" automation will be driven primarily by progressive IT staffs that want a taste of automation and service-level management. Virtualized Infrastructure Automation adoption, on the other hand, will tend to ride the technology wave driven both by networking vendors and OS virtualization vendors.

Stay tuned for additional product analyses in this space...


SAM said...

"In place" vs. Virtualized seems like a false distinction.
In the Gartner RBA 2.0 document and in a February Gartner piece on virtualization, they make the points that RBA 2.0 products have the intelligence to be a 'service governor' - making decisions as well as taking actions - and that integrating the management of physical and virtual systems is a best practice.

This unified approach is the way many MSPs are driving costs out of their operations. If IT departments don't want to be overrun by MSPs and outsourcers, wouldn't they do well to adopt their strategies and tactics?

Ken Oestreich said...

Thanks Sam - I agree that unified management of physical + virtual is a "best practice". What I am trying to draw a distinction between is how service governors actually effect change on the infrastructure. There are two approaches: (1) manipulate what's there vs. (2) virtualize what's there.

And yes (!) you'd think that IT departments would want to adopt the approaches that MSPs already recognize as effective/efficient.

ronny said...

nice reading article, I enjoyed reading it. Lots of useful info. Thanks