Saturday, April 28, 2007
Surfing has its benefits. I tripped over SmugMug CEO Don MacAskill's blog today. SmugMug is a rough competitor to Flickr and other (lower-grade) photo archiving sites; they archive about 130,000,000 photos right now. And they use Amazon's S3.
This is a perfect commercial example of the server-less IT I spoke about last week, and it's proof that the economics of utility computing are compelling. MacAskill estimates that he's saving about $500,000 annually by not buying and managing his own storage (he works out the number in his blog), and he expects that number to grow. Amazon has taken the traditional approach to managing storage (S3) and computing (EC2) and applied a utility automation paradigm, enabling a completely new cost model. How else could they offer such pricing to users?
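To make that economics point concrete, here's a back-of-the-envelope sketch of the comparison in Python. Every number in it is an assumption I made up for illustration (average photo size, storage rates, hardware costs), not SmugMug's or Amazon's actual figures, so don't expect it to reproduce MacAskill's $500,000:

```python
# Back-of-the-envelope: utility storage vs. buy-and-manage-it-yourself.
# All rates and sizes below are illustrative assumptions, not actual
# SmugMug or Amazon figures.

PHOTOS = 130_000_000              # rough photo count from the post
AVG_MB_PER_PHOTO = 2.0            # assumed average size per photo
gb_stored = PHOTOS * AVG_MB_PER_PHOTO / 1024

# Utility model: pay only for what is stored each month.
UTILITY_RATE_PER_GB_MONTH = 0.15  # assumed $/GB-month
utility_annual = gb_stored * UTILITY_RATE_PER_GB_MONTH * 12

# DIY model: buy redundant arrays up front, then pay to run them.
DIY_CAPEX_PER_GB = 8.00           # assumed $/GB for enterprise storage
DIY_YEARS_OF_LIFE = 3             # assumed depreciation period
DIY_OPEX_PER_GB_YEAR = 0.50       # assumed power, space, admin
diy_annual = gb_stored * (DIY_CAPEX_PER_GB / DIY_YEARS_OF_LIFE
                          + DIY_OPEX_PER_GB_YEAR)

print(f"Stored           : {gb_stored:,.0f} GB")
print(f"Utility model    : ${utility_annual:,.0f}/yr")
print(f"DIY model        : ${diy_annual:,.0f}/yr")
print(f"Annual difference: ${diy_annual - utility_annual:,.0f}")
```

The exact outputs don't matter; the point is that one model is pure pay-as-you-go, while the other carries capital and operating costs whether the capacity is used or not.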
What's this going to enable in the future? Read on in MacAskill's other blog post, "Amazon+2 Guys = The Next YouTube" (a.k.a. the server-less web service!).
I gotta keep wondering: when is corporate IT going to catch on to this utility computing approach and make "compute clouds" out of its own stuff?
Sunday, April 22, 2007
Prediction: Server-Less IT Services
Folks like Greg Papadopoulos at Sun say that a small number of companies will invest in creating a huge infrastructure of computing power (see my blog of 12 Jan. 2007). And folks like Amazon are already doing so with their Elastic Compute Cloud (EC2), while others like Google, eBay, and Yahoo are likely to follow.
To wit: carriers like Verizon have announced intentions to do so, and Salesforce.com recently announced that it can already host more than just CRM applications. But what will really signal the shift toward "compute cloud" use is the third-party vendors that make use of these resources.
So here's my prediction: as the infrastructure vendors build out their compute and storage farms, a new class of computing "brokers" will emerge. These players will adapt the needs of users and IT departments to make seamless use of these compute and storage "clouds": everything from backing up your laptop for pennies a GB to hosting and failover services that don't own a single server.
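To show how little glue such a broker really needs, here's a minimal sketch of the laptop-backup case using Amazon's Python SDK (boto3). The bucket name and folder are hypothetical placeholders, and it assumes AWS credentials are already configured on the machine:

```python
# Minimal "backup broker" sketch: push a local folder into S3 and pay
# only per GB stored. Bucket and folder are hypothetical placeholders.
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "my-laptop-backup"         # hypothetical; must already exist
SOURCE = Path.home() / "Documents"  # folder to protect

for path in SOURCE.rglob("*"):
    if path.is_file():
        key = str(path.relative_to(SOURCE))
        s3.upload_file(str(path), BUCKET, key)  # one PUT per file
        print(f"backed up {key}")
```

A real broker would layer on encryption, de-duplication, and scheduling, but the storage "cloud" underneath stays someone else's problem.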
And here's proof it's happening, with "mashups" of the following just around the corner:
- JungleDisk: offering a simple Windows interface that lets individuals create a "web drive" on Amazon's S3 storage
- Weoceo: offering a product that allows existing servers to "overflow" peak computing needs onto Amazon's EC2 cloud
- Enomalism: providing services to provision and migrate virtual "elastic" servers, even onto and off-of the Amazon EC2 cloud
- ElasticLive: essentially providing virtual hosting services, as predicted (and working with Enomalism, above). Plus, they charge by the "instance-hour", not by server type!
- Geoelastic: a beta group of "global hosting providers" who will be creating a "global elastic computing cloud" and presumably balancing loads between physical centers.
- Distributed Potential: beginning to deliver pay-per-use grid computing capacity (powered by ElasticLive and Enomalism technologies, above)
- Distributed Exchange: Also powered by (and presumably founded by) ElasticLive and Enomalism; claiming to "broker" excess compute capacity between providers
- Dozens of third parties creating even more applications on S3
Lastly, from a somewhat self-serving perspective: Cassatt essentially creates a "cloud" out of existing resources within corporate IT. At that point, shifting loads between "clouds" (internal or external) becomes a simple, policy-based procedure.
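If "policy-based" sounds hand-wavy, here's a toy illustration of what one such placement decision might look like. The clouds, costs, and workload attributes are hypothetical stand-ins, not Cassatt's actual product logic:

```python
# Toy policy-based placement: decide whether a workload runs on the
# internal cloud or an external one. All values are hypothetical.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    free_cores: int
    cost_per_core_hour: float

@dataclass
class Workload:
    name: str
    cores_needed: int
    sensitive_data: bool  # policy: sensitive workloads stay internal

def place(w: Workload, internal: Cloud, external: Cloud) -> Cloud:
    """Apply a simple placement policy and return the chosen cloud."""
    if w.sensitive_data:
        return internal                      # compliance trumps cost
    candidates = [c for c in (internal, external)
                  if c.free_cores >= w.cores_needed]
    # Otherwise, the cheapest cloud with room wins.
    return min(candidates, key=lambda c: c.cost_per_core_hour)

internal = Cloud("corporate-pool", free_cores=16, cost_per_core_hour=0.12)
external = Cloud("public-cloud", free_cores=10_000, cost_per_core_hour=0.10)
job = Workload("nightly-batch", cores_needed=8, sensitive_data=False)
print(f"{job.name} -> {place(job, internal, external).name}")
```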
Thursday, April 19, 2007
Moving Virtualization Up a Notch
I had the opportunity to speak the other day with Dan Kusnetzky, who interviewed Cassatt for his ZDNet blog on virtualization trends. And boy, he really gets the trend.
Right off, he observed that "virtualization" isn't just one thing (consider: hypervisors, zones, containers, LPARs, network VLANs, and virtualized storage). We also quickly observed that virtualization probably isn't an end in itself for IT. Rather, it represents the most critical enabler that will ignite transformation in the IT industry.
That transformation represents a new way to look at managing IT. Today we have specialized hardware, software, HA/failover software, monitoring and performance-analysis systems, and dozens more. Tomorrow, we'll manage all of these systems holistically, much the way an operating system manages the components within a server. The automation will be technology-agnostic, made possible through virtualization. A number of Dan's earlier interviews point to this inevitability as well.
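To make the operating-system analogy a little more tangible, here's a rough sketch of what a technology-agnostic control loop might look like. The interface and names are invented for illustration and aren't any vendor's actual API:

```python
# Sketch of technology-agnostic automation: one control loop that treats
# hypervisors, containers, LPARs, etc. through a common interface.
# All names here are invented for illustration.
from abc import ABC, abstractmethod

class ResourcePool(ABC):
    """Anything that can report load and grow or shrink capacity,
    regardless of the virtualization technology underneath."""

    @abstractmethod
    def utilization(self) -> float: ...

    @abstractmethod
    def add_capacity(self) -> None: ...

    @abstractmethod
    def remove_capacity(self) -> None: ...

def rebalance(pools: list[ResourcePool], target: float = 0.70) -> None:
    """Holistic control loop: react to any pool drifting from its
    service-level target, whatever technology sits beneath it."""
    for pool in pools:
        if pool.utilization() > target:
            pool.add_capacity()
        elif pool.utilization() < target / 2:
            pool.remove_capacity()
```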
He had a bunch of great observations, but the one I liked best came last: "It's important to take the broadest possible view and avoid point solutions. From this vantage point, a failure of some resource must be handled in the same way as any other condition that causes the configuration to no longer meet service level objectives."
For me, the takeaway from the conversation was something I've said before: take the "long view" on implementing virtualization. It may yield quick hardware savings today, but if it's automated in an "IT-as-utility" context, its future savings will dwarf what the industry is seeing now.
Friday, April 6, 2007
D'oh. Turning off idle servers
A question just dawned on me. Data centers automate service levels according to policies, like "always make sure application X has priority" or "always keep server utilization at or above Y." So why not treat power consumption the same way?
Here's what I mean: there are times when server use is low, like on weekends or overnight. There are also "events" (like the power emergencies we tend to get here in California) when you'd like to minimize power use because your electrical utility tells you to.
So I'm thinking: why shouldn't data centers respond to electrical cost, availability, and demand the same way they respond to compute availability and demand? When "events" happen, we turn off the office lights, right?
It turns out that power companies (like PG&E here in sunny California) have "traditional" programs to encourage energy efficiency (like rebates for efficient light bulbs, and even for efficient servers). But they also have special demand-response programs that offer incentives to firms that react to "events" with additional short-term reductions in power use (like turning off lights and AC).
Couple that with server automation software and you've got a pretty neat combination: data centers that can turn off low-priority servers during power events, or even move critical applications to other data centers. Cassatt has identified a couple of interesting scenarios:
- Low-priority servers automatically powered-off during power "emergencies"
- Standby servers that remain powered-off ("cold") until needed
- "Follow-the-moon" policies where compute loads are moved to geographies with the least-expensive power
- Policies/direction to use the most power-efficient servers first
- "Dynamic" consolidation, where virtual machines are constantly moved to achieve a "best-fit" to maintain utilization levels (minimizing powered-up servers)
If building operators can automatically turn off non-critical lights and HVAC systems during electrical emergencies, then why don't data centers?
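For the curious, here's a rough sketch of what treating a power "event" as just another policy input could look like. The server inventory, priorities, and power-control calls are hypothetical stand-ins for whatever a real automation layer would provide:

```python
# Sketch: respond to a demand-response "event" by powering down
# low-priority servers, then restoring them when the event clears.
# Inventory and priorities are hypothetical.
from dataclasses import dataclass

@dataclass
class Server:
    name: str
    priority: int          # 1 = business critical ... 5 = scratch/dev
    powered_on: bool = True

def handle_power_event(servers: list[Server], emergency: bool) -> None:
    for s in servers:
        if emergency and s.priority >= 4 and s.powered_on:
            s.powered_on = False   # a real system would issue a remote power-off
            print(f"powering off {s.name} (priority {s.priority})")
        elif not emergency and not s.powered_on:
            s.powered_on = True    # bring cold standby back online
            print(f"restoring {s.name}")

fleet = [Server("web-01", 1), Server("batch-07", 4), Server("dev-test-3", 5)]
handle_power_event(fleet, emergency=True)   # utility declares an event
handle_power_event(fleet, emergency=False)  # event clears
```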
Labels: Green IT, Power Management, Utility Computing