Saturday, May 10, 2008

Cassatt scoops PARC?

Kudos to Data Center Knowledge for their report that Xerox PARC is developing predictive data center management software that increases efficiency by 30% -- and that it's based on management software for high-speed printers. (While I get the concept, it's a little like saying NASA has breakthrough space shuttle software based on a toaster...) As reported in GreenTechMedia:
PARC has developed software that can reduce servers’ energy usage by 30 percent (or, more likely, allow data centers to provide 30 percent more service using the same energy)... The software basically predicts demand, allowing data centers to prioritize and manage jobs more efficiently... Similar control software could be used to monitor and control electricity demand on the grid or in buildings, he said, but the first application is likely to be increasing data-center efficiency.

Further reported in the C|Net coverage,
In [the] ongoing project, PARC is trying to take the adaptive control systems that effectively manage the inside operations of printers and apply it to controlling data centers. Instead of slowing down the paper feed, for example, the adaptive system might shut down a bank of servers to cool off part of a data center, according to Nitin Parekh, director of business development in the hardware systems group.
Now, I know of a company that is already doing this in practice, and it happens to be where I work -- Cassatt. Cassatt's Active Response software, while not predictive, applies a continuous optimization algorithm to a huge swath of resources -- multivendor servers, OSes, applications, networking hardware. I like to say that it achieves savings via Efficient Operation of equipment, not through efficient equipment itself. And the savings number PARC quotes -- 30% -- is conservative by our own estimates.

Our explanation: Our system's goal is to maintain the service level of any and all software applications in a large data center. If those levels should change -- because of a shift in demand, or due to an equipment failure -- the software takes action to correct for it. That action could be to provision a new piece of hardware/software and re-route the network. Or it could mean shutting down a server if an application is over-provisioned.
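
To make that concrete, here's a minimal sketch of what such a control loop might look like. To be clear: this is illustrative pseudocode of the general idea, not Cassatt's actual implementation -- every name, threshold, and the toy monitoring function are my own inventions for the example.

```python
# Hypothetical sketch of a continuous-optimization control loop.
# Illustrative only -- not Cassatt's actual code or API.
import random

TARGET_SLA = 0.95   # required service level per application
HEADROOM = 0.05     # tolerance above target before we reclaim servers

def measured_service_level(app):
    # Stand-in for real monitoring; here, more servers -> better level.
    return min(1.0, 0.80 + 0.10 * len(app["servers"])
               + random.uniform(-0.02, 0.02))

def control_pass(apps, free_pool):
    """One pass of continuous optimization over every application."""
    for app in apps:
        level = measured_service_level(app)
        if level < TARGET_SLA and free_pool:
            # Under-provisioned: provision an idle server and add it.
            app["servers"].append(free_pool.pop())
        elif level > TARGET_SLA + HEADROOM and len(app["servers"]) > 1:
            # Over-provisioned: drain a server and power it off --
            # this is where the energy savings come from.
            free_pool.append(app["servers"].pop())

apps = [{"name": "billing", "servers": ["s1"]},
        {"name": "web", "servers": ["s2", "s3"]}]
free_pool = ["s4", "s5"]
for _ in range(10):
    control_pass(apps, free_pool)
print({a["name"]: len(a["servers"]) for a in apps}, "idle:", len(free_pool))
```

The key design point is the second branch: most automation only scales up. Scaling *down* -- actually powering off the excess -- is what turns SLA maintenance into an energy story.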

But wait: there's more. To take the PARC example above: one could also assign additional variables (we call them custom attributes) to each server or application. These attributes might have to do with temperature, power, or location. So, if in the course of maintaining an SLA for an application, our software finds that a bank of active servers is in a hot part of the data center, it might instead migrate the application to a server bank in a cooler area.
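
Again, a hypothetical sketch of how attribute-aware placement might work -- the attribute names, thresholds, and data layout here are made up for illustration, not our product's actual interface:

```python
# Hypothetical sketch: use custom attributes (inlet temperature, zone)
# to pair hot active servers with the coolest idle ones for migration.

servers = [
    {"id": "rack12-s1", "active": True,  "temp_c": 34, "zone": "hot-aisle-A"},
    {"id": "rack12-s2", "active": True,  "temp_c": 35, "zone": "hot-aisle-A"},
    {"id": "rack40-s1", "active": False, "temp_c": 22, "zone": "cool-zone-C"},
    {"id": "rack40-s2", "active": False, "temp_c": 23, "zone": "cool-zone-C"},
]

HOT_THRESHOLD_C = 30

def migration_candidates(servers):
    """Pair each hot active server with the coolest idle server."""
    hot = [s for s in servers if s["active"] and s["temp_c"] > HOT_THRESHOLD_C]
    idle = sorted((s for s in servers if not s["active"]),
                  key=lambda s: s["temp_c"])
    return list(zip(hot, idle))  # (migrate-from, migrate-to) pairs

for src, dst in migration_candidates(servers):
    print(f"migrate workload: {src['id']} ({src['temp_c']}C) -> "
          f"{dst['id']} ({dst['temp_c']}C)")
```

Because the attributes are arbitrary key/value pairs, the same mechanism works for power circuits or physical location, not just temperature.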

It's a lot cooler than anything inside a high-speed printer. I like to think of it as an operating system -- but for an entire data center.

2 comments:

Anonymous said...

I always knew there was something to this automation thing.

Anonymous said...

The NASA/toaster comment is spot on. These articles are pretty vapid and I couldn't find anything about this technology on the PARC web site. Until I see more about this I won't get too excited.
From conversations with colleagues back when I was at HP, I know that controlling high-speed printers is very complex (paper does stretch at high speeds, btw, and ink is, well, messy). I have no doubt that, like HP, Xerox has some very cool technology to address these situations. But I am very dubious of a direct application to data center optimization.
Again, let's see a technical paper instead of these PR-driven stunts.