I was giving a presentation to an analyst today, describing how an optimized IT infrastructure is inherently energy efficient.
And then it occurred to me: The entire IT monitoring and reporting sector (those guys who write software that pages you when something goes wrong in your data center) is perpetuating waste.
The software assumes there's a problem only when measured service levels fall below the service level agreement (SLA) -- but never when they're far above it. That means alert storms get triggered when you're under-provisioned. But being over-provisioned is bad too: too much capital is being wasted delivering a service level that's better than anyone needs. This scenario is probably replayed during every off-peak hour a data center operates.
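To make the asymmetry concrete, here is a minimal sketch of the two alerting policies. All names and thresholds are hypothetical, not from any real monitoring product: the conventional check fires only on an SLA breach, while a two-sided check also flags a service level so good it signals wasted capacity.

```python
from typing import Optional

SLA_CEILING_MS = 200.0   # agreed maximum response time (hypothetical)
TARGET_MS = 150.0        # desired operating point (hypothetical)
SLACK = 0.5              # how far below target counts as over-provisioned

def conventional_check(response_time_ms: float) -> Optional[str]:
    """Typical monitoring: alerts only on under-provisioning."""
    if response_time_ms > SLA_CEILING_MS:
        return "ALERT: SLA breach -- under-provisioned"
    return None  # over-provisioning goes completely unreported

def two_sided_check(response_time_ms: float) -> Optional[str]:
    """Also flags service levels far better than the target."""
    if response_time_ms > SLA_CEILING_MS:
        return "ALERT: SLA breach -- under-provisioned"
    if response_time_ms < TARGET_MS * SLACK:
        return "ALERT: system critically over-provisioned -- wasting power"
    return None
```

The only difference is the second branch; everything needed for it (a target and a tolerance) is data most monitoring systems already have.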
What you don't measure, you can't manage. And therein lies the waste being perpetuated by IT: it's been implicitly assumed that too much infrastructure is OK.
Actually, what we need is a monitoring and control system that maintains an optimal service level -- not too high, not too low. And when demand changes, it automatically adjusts resources to re-optimize against the SLAs. That adjustment might include re-allocating or de-allocating hardware, or re-provisioning servers on the fly.
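A control loop like that could be sketched as follows. This is illustrative only, under stated assumptions: the latency model and thresholds are made up, and a real system would call a provisioning API rather than mutate a counter. The point is the symmetry: the loop allocates servers when the SLA is breached and de-allocates them when the service level is much better than needed.

```python
from dataclasses import dataclass

SLA_MS = 200.0     # hard ceiling from the SLA (hypothetical)
TARGET_MS = 150.0  # don't shrink if it would push latency past this

@dataclass
class Pool:
    servers: int

def measured_latency_ms(pool: Pool, demand: float) -> float:
    # Stand-in for a real measurement: latency falls as servers are added.
    return demand / pool.servers * 100.0

def reoptimize(pool: Pool, demand: float) -> Pool:
    """Grow the pool on SLA breach; shrink it when over-provisioned."""
    # Under-provisioned: add servers until the SLA is met.
    while measured_latency_ms(pool, demand) > SLA_MS:
        pool.servers += 1
    # Over-provisioned: release servers while the smaller pool
    # would still comfortably meet the target.
    while (pool.servers > 1 and
           measured_latency_ms(Pool(pool.servers - 1), demand) <= TARGET_MS):
        pool.servers -= 1  # de-allocate hardware, save power
    return pool
```

Run periodically (or on a demand-change event), the second loop is what turns "too much infrastructure is OK" into an actionable signal.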
Just once, I'd like one of my IT friends to get an alarm delivered to his pager that reads "system critically over-provisioned: wasting power."