I've had a number of conversations with a really insightful architect at a major healthcare provider. He runs over 1,000 servers (each, I might add, consuming 200+ watts) to deliver applications to hospital desktops/kiosks. On top of what he's using, he's got a built-in 15% buffer for peaks & failures. Oh -- and did I mention a remote failover site where this configuration is essentially duplicated?
And here's the kicker: He knows that during nights/weekends, user demand drops off precipitously.
Conventional wisdom has it that *all* of these servers should be on and ready 100% of the time. Hmmm. Across both sites, that's roughly 400,000W (think 400 blow dryers) and probably another 400,000W or so to keep them cool.
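To make that arithmetic explicit, here's a back-of-the-envelope sketch in Python. The server count, per-server wattage, and duplicated failover site come straight from the story above; the 1:1 cooling overhead and the ~1 kW-per-blow-dryer figure are my assumptions.

```python
# Back-of-the-envelope power math for the "always on" scenario.
SERVERS_PER_SITE = 1_000   # "over 1,000 servers" (from the post)
SITES = 2                  # primary site plus the duplicated failover site (from the post)
WATTS_PER_SERVER = 200     # "200+ watts" each (from the post)
COOLING_OVERHEAD = 1.0     # assumption: roughly 1 W of cooling per 1 W of IT load

it_load_w = SERVERS_PER_SITE * SITES * WATTS_PER_SERVER   # 400,000 W
total_w = int(it_load_w * (1 + COOLING_OVERHEAD))         # ~800,000 W with cooling

print(f"IT load:      {it_load_w:,} W (~{it_load_w // 1000} blow dryers at ~1 kW each)")
print(f"With cooling: {total_w:,} W")
```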
Fortunately, our friend realizes that even during peak hours, not all of these servers need to be on... and off-peak, far fewer. What he *does* need is to ensure availability -- that at any time he has headroom for a momentary peak, and that he can bring more servers online at a rate that keeps up with demand.
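Here's a minimal sketch of what such a policy might look like, assuming a control loop that periodically compares demand (expressed in servers' worth of load) against what's currently powered on. The 1,000-server count and 15% buffer come from the post; the ramp-rate cap, function names, and example numbers are purely illustrative, not the architect's actual implementation.

```python
import math

TOTAL_SERVERS = 1_000   # servers in the primary site (from the post)
HEADROOM = 0.15         # 15% buffer for peaks & failures (from the post)
MAX_STEP = 25           # assumed limit on servers powered on/off per control cycle

def target_powered_on(demand_servers: float) -> int:
    """Servers that should be on: current demand plus the headroom buffer."""
    return min(TOTAL_SERVERS, math.ceil(demand_servers * (1 + HEADROOM)))

def next_powered_on(currently_on: int, demand_servers: float) -> int:
    """Step toward the target gradually, so availability is never put at risk."""
    target = target_powered_on(demand_servers)
    if target > currently_on:                        # demand rising: power servers on
        return min(target, currently_on + MAX_STEP)
    return max(target, currently_on - MAX_STEP)      # demand falling: power servers off

# Example: overnight, demand drops to ~300 servers' worth of load.
on = 1_000
for _ in range(5):
    on = next_powered_on(on, demand_servers=300)
    print(on)   # steps down 25 per cycle toward 345 (300 * 1.15)
```

The headroom buffer is what covers a momentary spike while additional servers are still being brought online; the ramp cap just keeps the transitions orderly.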
This is not a problem that virtualization or more efficient power supplies alone can solve; but by breaking with conventional wisdom and power-controlling the servers based on usage/demand, we hope that his organization will save $$ and even regain precious power/cooling headroom.