Here's what I mean: there are times when server use is low - like weekends or evenings. There are also "events" (like the power emergencies we tend to get here in California) when you'd like to minimize power use because your electrical utility tells you to.
So I'm thinking - why shouldn't data centers respond to electrical cost/availability/demand the same way they respond to compute availability/demand? When "events" happen, we turn off the office lights, right?
It turns out that power companies (like PG&E here in Sunny CA) have "traditional" programs to encourage energy efficiency (like rebates for efficient light bulbs, and even for efficient servers). But they also have special demand-response programs and incentives for firms that react to electrical demand during "events" by making additional short-term reductions in power use (like turning off lights & AC).
Couple that with server automation software, and you've got a combination that's pretty neat: Data Centers that can do things like turn off low-priority servers, or perhaps move critical applications to other data centers during power events. Cassatt's identified a few interesting scenarios (there's a rough sketch of the first one after this list):
- Low-priority servers automatically powered-off during power "emergencies"
- Standby servers that remain powered-off ("cold") until needed
- "Follow-the-moon" policies where compute loads are moved to geographies with the least-expensive power
- Policies that direct workloads to the most power-efficient servers first
- "Dynamic" consolidation, where virtual machines are constantly moved to achieve a "best-fit" to maintain utilization levels (minimizing powered-up servers)
If building operators can automatically turn off non-critical lights and HVAC systems during electrical emergencies, then why can't data centers do the same with their servers?