Monday, January 22, 2007
Just saw this article on intelligent appliances communicating with the local power grid. Basically, there's now a clothes dryer that checks in to see if power is "expensive" (i.e., is it a peak period, or is there a power emergency?) and only turns on during off-peak times to save money. Intelligent power consumption is a reality.
This says to me that the concept of intelligently balancing supply and demand is getting ever more sophisticated - it's not just about one meeting the other: it's about optimizing the economics of the deal.
So what if IT economics had a similar governing decision-making process? It could exist today - where the how, where and when of compute resources are governed by a number of variables (i.e., cost of hardware, availability of resources, importance of the SW service), including the cost of power/cooling/facilities. You might have a "follow-the-moon" system, whereby data centers have jobs routed to them when they're off-peak and resources are cheap. Like at 3:00am, when your clothes dryer is running.
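To make the idea concrete, here's a minimal sketch of that kind of "follow-the-moon" router: send the job to whichever data center is currently off-peak and has the cheapest power. The site names, rates and peak window are made-up illustrative values, not anything from the article:

```python
# Toy "follow-the-moon" router: send work to the site where power is
# cheapest right now. All sites, rates and peak hours are invented.
from datetime import datetime, timedelta, timezone

SITES = {
    "us-west":    {"utc_offset": -8, "off_peak_rate": 0.06, "peak_rate": 0.14},
    "eu-central": {"utc_offset": +1, "off_peak_rate": 0.08, "peak_rate": 0.18},
    "ap-east":    {"utc_offset": +9, "off_peak_rate": 0.05, "peak_rate": 0.12},
}
PEAK_HOURS = range(8, 22)   # assume 8am-10pm local time is "peak"

def current_rate(site, now_utc):
    local_hour = (now_utc + timedelta(hours=site["utc_offset"])).hour
    return site["peak_rate"] if local_hour in PEAK_HOURS else site["off_peak_rate"]

def route_job(now_utc=None):
    """Pick the data center with the cheapest power at this moment."""
    now_utc = now_utc or datetime.now(timezone.utc)
    return min(SITES, key=lambda name: current_rate(SITES[name], now_utc))

print(route_job())   # e.g. "ap-east", where it's 3:00am and the dryers are running
```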
Friday, January 19, 2007
Why Don't Servers Self-Consolidate?
With the mad rush to virtualize and consolidate servers, there's quite a market for "consolidation planning" tools (i.e., PlateSpin's PowerRecon, Provment's Capacity Planner, and others). But what I don't get is this: once you've figured out how to optimize your consolidation -- like a "best fit" for puzzle pieces -- things will change. So, a few months down the line, most data centers have to re-consolidate.
With all of the automation tools becoming available, you'd think that some of them (I have one in mind) ought to be able to continuously monitor the active services, priorities, resource needs and available resources... and then continuously shuffle the virtualized applications around to maintain a "best-fit" consolidation. That way, you never run more machines than a given level of service demand actually requires. And, if you're really smart, these automation tools ought to physically power down the unused servers to save on power & cooling.
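Strip away the vendor specifics and continuous re-consolidation is basically a bin-packing loop: sort the VMs by current demand, pack them onto as few hosts as possible, power down whatever ends up empty, and repeat as the numbers drift. Here's a minimal sketch of that core idea (VM names, demands and host capacity are made up, and a real controller would also weigh priorities, affinity and migration cost):

```python
# First-fit-decreasing packing of VM demand onto identical hosts.
HOST_CAPACITY = 100      # e.g. percent of one host's CPU
vm_demand = {"web1": 35, "web2": 30, "db1": 55, "batch1": 20, "idle-app": 5}

def pack(vms, capacity):
    hosts = []                                    # each host is a list of VM names
    free = []                                     # remaining capacity per host
    for name, need in sorted(vms.items(), key=lambda kv: -kv[1]):
        for i, room in enumerate(free):
            if need <= room:                      # first host it still fits on
                hosts[i].append(name)
                free[i] -= need
                break
        else:                                     # nothing had room: open a new host
            hosts.append([name])
            free.append(capacity - need)
    return hosts

placement = pack(vm_demand, HOST_CAPACITY)
print(placement)   # [['db1', 'web1', 'idle-app'], ['web2', 'batch1']]
print("hosts needed:", len(placement), "-- power the rest down")
```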
Friday, January 12, 2007
The world only needs 5 computers
Back in November 2006, Greg Papadopoulos, Sun's CTO, offered up a blog post positing that the compute world will consolidate down to 5 big compute facilities (i.e., Google, Yahoo!, Amazon, eBay, Salesforce.com). I might humbly also add a carrier like Verizon... Not that people won't still have laptops, and not that enterprises won't still have data centers. It's just that the really large, generalizable, economy-of-scale computing will become outsourced and centralized.
And hey, it's happening already - and any IT professional (and even small business owner?) has to start thinking about it:
- Applications are being delivered as a "service" both on salesforce.com and on its sister site, AppExchange. In fact, AppExchange is essentially a community where 3rd parties can contribute compatible applications that salesforce.com then hosts as a service.
- Google (and others) are dipping their toe in the water (and more!) by providing applications like calendars, spreadsheets & productivity tools in a hosted environment - and they store all the data, not just documents. For example, I use their Browser Sync to store all my bookmarks, personal preferences, etc. -- so, no matter what computer I sit down at, I have my entire browser setup available. Doesn't this really begin to blur the line between local computing and what happens in the "cloud"? Hey, and maybe stay tuned for the much-rumored "Gdrive"...
- Amazon.com - here's the big entrant - offering up their "Elastic Compute Cloud" (EC2), their "Simple Storage Service" (S3), and even a queueing service. Users can leverage Amazon's huge IT infrastructure by creating virtual machines of any flavor and deploying applications of their choice on them, using the EC2 storage, etc. It's getting to the point where I won't need a backup drive, and IT managers won't need a single in-house server. (A rough sketch of how little code S3 takes follows this list.)
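Just to show how low the barrier already is, here's a minimal sketch of pushing a file to S3 and pulling it back with the open-source boto library. The credentials, bucket and file names are placeholders, and the calls reflect boto's classic API as I understand it - treat it as illustrative, not gospel:

```python
# Illustrative only: store a local file on Amazon S3 and retrieve it again.
import boto

conn = boto.connect_s3("YOUR_ACCESS_KEY_ID", "YOUR_SECRET_ACCESS_KEY")
bucket = conn.create_bucket("my-offsite-backup")       # bucket names are global

key = bucket.new_key("laptop/documents/plan.doc")
key.set_contents_from_filename("plan.doc")             # upload

key.get_contents_to_filename("plan.restored.doc")      # download a copy back
```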
How are we going to get there? 2 ways - "Build" and "transition".
- Build - well, that's exactly what the above companies are doing - it requires lots of capital, and a huge user base. And the race is off...
- Transition - by this I mean lowering the barriers to adoption of these resources. For example, on the consumer side, look at something like JungleDisk, a new entrant that makes it simple & accessible for anyone with a PC to use Amazon S3 - and for only $0.15/GB/month. I might not need a hard drive soon. And on the enterprise side, consider utility automation controllers like Collage, where compute requirements will be assigned to the most economically-advantageous resources -- either in-house or, perhaps, to Amazon's EC2! (A toy sketch of that kind of placement decision follows.)
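That placement decision is just cost arithmetic once an automation controller can see the inputs. Here's a toy sketch - the rates are placeholders (EC2 launched at roughly $0.10 per instance-hour; the in-house figures are invented), and this is my illustration, not how Collage actually decides:

```python
# Toy placement policy: run a job wherever the marginal cost is lowest,
# as long as capacity is actually available. All rates are illustrative.
EC2_RATE = 0.10             # $/instance-hour (launch-era list price)
INHOUSE_POWER_COST = 0.03   # $/server-hour of power + cooling (made up)
INHOUSE_FREE_SERVERS = 4    # spare in-house capacity right now (made up)

def place_job(servers_needed, hours):
    inhouse_cost = servers_needed * hours * INHOUSE_POWER_COST
    ec2_cost = servers_needed * hours * EC2_RATE
    if servers_needed <= INHOUSE_FREE_SERVERS and inhouse_cost <= ec2_cost:
        return "in-house", inhouse_cost
    return "EC2", ec2_cost

print(place_job(2, 8))    # ('in-house', 0.48) -- spare capacity is nearly free
print(place_job(20, 8))   # ('EC2', 16.0)      -- burst beyond what we own
```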
Thursday, January 4, 2007
Virtualization is dead: Long-live virtualization!
Although it's a red-hot topic now, I believe virtualization is just a stepping-stone to bigger disruptions and changes in how IT infrastructure is managed. I'm betting virtual machines become a ho-hum topic in a few short years, and disappear into the background as free, generic components.
First off, most folks are viewing virtualization simply as using a hypervisor to de-couple the OS from the hardware... while allowing multiple OSes to share the same hardware. This begins to reduce the importance of the underlying hardware (making it more of a generic resource) and also allows for a more "fluid" approach to locating software applications. So, at a basic level, people are swarming around virtualization to "consolidate" software, making better overall use of existing hardware.
But don't forget that networks can be virtualized too -- VLAN switches, routers and even naming/address spaces can be changed on-the-fly to make better use of resources -- and so can storage -- LUNs, file systems, file names, etc. can be abstracted away as well. Check out Xsigo, which is virtualizing NICs and the network fabric, or 3Par, which is virtualizing storage.
Anyhow, pretty soon, virtual machines will probably disappear and become free utilities - part of the OS or, more probably, part of applications themselves. (Check out companies like rPath that are creating "software appliances".)
Virtualization's _real_ value is as the enabler for automation engines to create and control shared computing, network and storage pools. In effect, virtualization enables the automatic "impedance matching" that IT operations folks have been craving for years. The result? The ability to provide utility-style computing: rules-based optimization and re-allocation of resources in a data center - including usage metering and more.
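To put a little flesh on "rules-based optimization and usage metering", here's my own hand-wavy sketch of the control loop such an automation engine runs - measure the pool, apply a couple of thresholds, grow or shrink it, and record who consumed what. The thresholds, telemetry and service names are all invented for illustration:

```python
# Sketch of a utility-computing control loop: rules + metering.
pool = {"active_servers": 6, "utilization": 0.45}    # pretend telemetry
meter = {}                                           # service -> server-hours used

GROW_ABOVE, SHRINK_BELOW = 0.80, 0.30                # the "rules"

def apply_rules(pool):
    if pool["utilization"] > GROW_ABOVE:
        pool["active_servers"] += 1                  # power a spare node back on
    elif pool["utilization"] < SHRINK_BELOW and pool["active_servers"] > 1:
        pool["active_servers"] -= 1                  # evacuate a node, power it down

def meter_usage(allocations, hours):
    for service, servers in allocations.items():
        meter[service] = meter.get(service, 0) + servers * hours

# one pass of the loop; a real engine would run this continuously
apply_rules(pool)
meter_usage({"payroll": 2, "web-store": 4}, hours=1)
print(pool, meter)
```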
My prediction? Virtualization will be red-hot for another 2 years - and then fade into the background like so many other technologies, to be replaced by all of the white-hot automation products to follow.