This means that if you run a development lab, idle machines can be automatically powered down after a period of time (say, after a test run) to save power. It means a server farm can automatically provision additional servers when service levels drop below a pre-determined threshold, saving time. It means that as different application demands ebb and flow, the data center will adapt by re-purposing (or retiring) bare-metal hardware, making the best use of capital. And all of this happens whether or not virtualization is present, regardless of the underlying platform, and without adding software layers.
You'll find this concept filed under Gartner's Real-Time Infrastructure (RTI) category, under Forrester's Organic IT concept, or sometimes under Utility Computing. You've even seen big vendors pitch it as a vision. But I'm happy to point out that we're doing some of it already today. Think of this as another step toward a greener data center, because it optimizes both operational and capital efficiency...
The simplest application of this Demand-Based Policy Management is Active Power Management of data center servers to curb rampant power waste. (Check out Who's Recommending Power Management.) The concept has long been used on desktops (check out 1E or Verdiem; there are over half a million desktops under power management today). In IT environments such as dev/test, we've seen opportunities to cut gross power consumption by 30% or more within a few months, simply by monitoring server activity and then gracefully shutting down idle hardware (where "idle" can be defined any way data center managers prefer). Check out the endorsements of power management from the EPA's Andrew Fanara as well as from Jon Koomey in the Cassatt press release. You can also watch a webcast about Active Power Management.
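To make the idea concrete, here's a minimal sketch of an idle-shutdown policy. The names, thresholds, and sampling window are all hypothetical illustrations, not Cassatt's actual implementation; the point is just that "idle" is a policy you define over observed activity.

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    name: str
    cpu_samples: list = field(default_factory=list)  # recent CPU utilization, in percent

def is_idle(server: Server, threshold_pct: float = 5.0, window: int = 12) -> bool:
    """A server counts as idle when every one of its most recent samples
    sits below the threshold -- both knobs are the operator's to set."""
    recent = server.cpu_samples[-window:]
    return len(recent) == window and all(s < threshold_pct for s in recent)

def servers_to_power_down(servers):
    """Return the names of servers a policy engine would gracefully shut down."""
    return [s.name for s in servers if is_idle(s)]

# A dev/test lab after a test run: one box has gone quiet, one is still busy.
lab = [Server("test-01", [2.0] * 12), Server("build-02", [40.0] * 12)]
print(servers_to_power_down(lab))  # ['test-01']
```

In a real deployment the shutdown would be a graceful OS halt followed by a remote power-off, but the policy decision itself is this simple.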
A more sophisticated use of demand-based policy is to automatically maintain application service levels. Take the server farm example... or, for that matter, a SOA service. In either case, demand on the service may be cyclical or unpredictable. Instead of massively over-provisioning hardware, you could use a service-level metric (again, of your own choosing) to provide control. If the service level drops (say, due to increased demand, or perhaps because of an equipment failure), the Cassatt system will simply power up or re-purpose another piece of hardware, creating a new server to increase compute capacity. And you don't need a virtualization platform to do this. (Or you could use one; previously we announced compatibility with VMware ESX as well as with Xen.) Check out the 10-minute webcast on demand-based policy management.
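The control loop behind this can be sketched in a few lines. This is a hypothetical illustration driven by a latency SLA (the metric, thresholds, and function names are my own, not the product's): breach the SLA and a spare node is powered up; run with lots of slack and a node is retired back to the spare pool.

```python
def rebalance(active, spares, latency_ms,
              sla_ms=200.0, slack=0.5, min_active=1):
    """One pass of a demand-based control loop.

    active  -- servers currently serving traffic
    spares  -- powered-down or repurposable bare-metal nodes
    latency_ms -- the measured service-level metric this pass
    Returns the new (active, spares) lists.
    """
    active, spares = list(active), list(spares)
    if latency_ms > sla_ms and spares:
        active.append(spares.pop(0))        # SLA breach: power up / repurpose a node
    elif latency_ms < sla_ms * slack and len(active) > min_active:
        spares.insert(0, active.pop())      # ample slack: retire a node, save power
    return active, spares

# Demand spikes: latency blows past the 200 ms SLA, so capacity is added.
print(rebalance(["web-1"], ["spare-1", "spare-2"], latency_ms=350.0))
# Demand ebbs: latency is well under the SLA, so a node is powered down.
print(rebalance(["web-1", "web-2"], ["spare-2"], latency_ms=80.0))
```

The real system layers provisioning, booting, and graceful drain on top of this, but the policy core is just "compare the metric you chose against the threshold you chose, then act."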
The benefits here are massive: besides the power saved when a piece of equipment isn't in use, you're maximizing the use of capital because hardware is dynamically repurposed. Similar control underpins cloud computing infrastructure like Amazon Web Services (AWS), which has achieved a compute price point unheard of in the industry: $0.10 per CPU per hour. Try that with your existing infrastructure.
My, my. I almost overlooked the remainder of the Cassatt announcement:
- a new interface for interacting with external systems, whether management systems or equipment like load balancers. Take the example of an F5 load balancer in an environment with dynamic repurposing: as new servers are provisioned, Active Response can communicate with the balancer in real time, providing the new VIPs within seconds of the servers coming online.
- a new set of platform compatibilities, including power distribution units used to remotely power-manage servers. This is in addition to an already massive list of supported hardware, operating systems, applications and VM technologies. This solution will work with what you have today :)
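The load-balancer hand-off in the first bullet above amounts to an event callback. The actual Active Response/F5 exchange isn't documented here, so this sketch just models the shape of it with hypothetical names: when the provisioning system reports a server online, its address is pushed into the balancer's pool so it starts receiving traffic right away.

```python
class BalancerPool:
    """Stand-in for a load balancer's member pool (think F5 VIP members)."""

    def __init__(self):
        self.members = []

    def add_member(self, address):
        # Idempotent: re-announcing a server must not create a duplicate entry.
        if address not in self.members:
            self.members.append(address)

def on_server_online(pool, address):
    """Callback fired by the provisioning system as each server boots."""
    pool.add_member(address)
    return pool.members

# Two newly repurposed servers come online and join the pool in turn.
pool = BalancerPool()
print(on_server_online(pool, "10.0.0.5"))   # ['10.0.0.5']
print(on_server_online(pool, "10.0.0.6"))   # ['10.0.0.5', '10.0.0.6']
```

Against a real balancer, `add_member` would be a call to the device's management API rather than a list append, but the integration pattern is the same.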