I certainly have to highly recommend this upcoming CIO magazine-sponsored webinar: Utility Computing: How to Leverage Industry Advances to Optimize Data Center Economics. It's scheduled for September 19 at 2:00 PM EDT.
Along with BearingPoint, I'll be discussing (yes, live and in person!) what we mean by utility computing, and how the technologies that make "utility computing" possible are available today.
It's more than virtualization. It's about intelligently pooling all of the hardware resources you own today to radically cut operational and capital costs -- and to attain a level of agility that current hardware/software models inherently block.
As I've said before, CIOs are already doing this -- they just don't know it. Look at Amazon's EC2. Look at Google's infrastructure. Look at Sun's Grid system. It's possible to do with the IT infrastructure you have sitting in your data center today.
What could you do if your total operational cost basis (fully-loaded, everything) was $0.10 per instance-hour for all of your compute services?
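To put that $0.10 figure in perspective, here's a rough back-of-envelope comparison of an in-house fully-loaded cost basis against a utility rate. Every input number below is hypothetical and purely for illustration:

```python
# Back-of-envelope comparison: hypothetical in-house fully-loaded cost vs. a
# $0.10/instance-hour utility model. All input figures are made up for illustration only.

HOURS_PER_YEAR = 24 * 365

# Hypothetical in-house figures
servers = 500
server_cost = 4000              # purchase price per server, amortized below
amortization_years = 3
power_cooling_per_server = 800  # per server, per year
ops_overhead = 900_000          # staff, facilities share, software licenses per year

in_house_annual = (servers * server_cost / amortization_years
                   + servers * power_cooling_per_server
                   + ops_overhead)
in_house_rate = in_house_annual / (servers * HOURS_PER_YEAR)
utility_rate = 0.10

print(f"In-house: ${in_house_rate:.2f} per instance-hour")  # roughly $0.45 with these inputs
print(f"Utility:  ${utility_rate:.2f} per instance-hour")
```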
Friday, August 17, 2007
Pursuing Data Center Efficiency: TelaData's Convergence Conference
Heads-up: TelaData - essentially the premier consultants for designing data centers - is holding a Technology Convergence conference on October 16. I'll be giving a talk on Active Power Management, and Bill Coleman, our CEO, will be keynoting. There's also an amazing lineup of speakers on infrastructure, power, and convergence topics. More of my thoughts on policy-based power management.
Bob Brown, CEO of TelaData, is the visionary behind this conference. He sees a massive convergence of technologies... technologies within the data center (e.g., the move toward IP-based video and audio) and the convergence of data center design itself (facilities, cabling, power management, etc.).
The two have to be considered together when designing new facilities. If they aren't, you risk mis-estimating compute, power, cabling, and other layout requirements -- and the $100+ million building you construct is obsolete before it's complete.
And these guys are the pros. While it's confidential (I think), they're advising some of the biggest data center users and Web 2.0 companies in the business on data center construction.
Monday, August 6, 2007
The CMDB - An anemic answer for a deeper crisis
My first dabble in an occasional series of "A contrarian in the Data Center"....
I know that this is quite a provocative subject, but take a moment to consider where I'm going:
My thesis: CMDBs are doomed either to (a) a short-lived existence as they sediment into other data center products, or to (b) disappearing altogether as the industry finally realizes that utility computing (using generic hardware and standard stacks) obviates the need for an a la carte solution that tracks which-asset-is-where-and-doing-what-for-whom.
My evidence: Do you think that Amazon Web Services' EC2 compute "cloud" went out and purchased a commercial CMDB to manage its infrastructure and billing? Do you think Google maintains a central CMDB to track which department owns which machine? Isn't it odd that an umpteen-volume ITIL process ultimately relies on the existence of a conceptual CMDB? (In fact, doesn't it ring strange that such a "panacea" technology needs so many volumes of paper just to make it work?)
My logic: CMDBs are essentially a band-aid for a larger (and growing) problem - complexity. They inherently do nothing to reduce the underlying complexity, configuration variances, or hand-crafted maintenance of the underlying infrastructure. In short, they are just another point-solution product that data center managers think will help them drive toward a simpler lifestyle -- and they're dead wrong. Instead, they'll be buying another layer of complexity - and this time, one that requires them to re-work their processes as well.
"But wait!" you say; CMDBs are needed because how else do you get your head around infrastructure variances? On what do you base configuration management? What do compliance systems use as a basis? Incident management processes have to "check in" somewhere, don't they?
Well, yes and no. By saying yes to most of the questions above, you're unconsciously complying with the status-quo mindset of how data centers are architected and run: with layers of special-purpose tools, each supposedly simplifying the task at hand. But collectively, those tools create complexity, redundancy, and the need for more tools like themselves. Every one of them maintains the assumption of continued complexity, configuration variances, and hand-crafted maintenance of the underlying infrastructure.
So? BREAK THE MODEL!
My conclusion: What if the data center had an "operating system"? It would automatically pool, re-purpose and re-provision all types of physical servers, virtual machines, networking and storage infrastructure. It would optimize how these resources were applied and combined (even down to selecting the most power- and compute-efficient hardware). It would respond to failures by simply managing around them and re-provisioning alternate resources. It would react to disasters by selecting entirely different physical locations for compute loads. And all completely platform-agnostic.
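As a rough illustration of the pooling and re-provisioning behavior described above, here's a minimal sketch. It's an illustration of the idea only, not any particular product; the node names and workload are made up:

```python
# Minimal sketch of a resource pool that provisions workloads onto whatever capacity
# is available and transparently re-provisions around failures.
# Node names and workloads are hypothetical.

class ResourcePool:
    def __init__(self, nodes):
        self.nodes = {n: "idle" for n in nodes}   # node -> current assignment

    def provision(self, workload):
        """Place the workload on any idle node; the caller never names a specific machine."""
        for node, state in self.nodes.items():
            if state == "idle":
                self.nodes[node] = workload
                return node
        raise RuntimeError("no capacity available")

    def handle_failure(self, failed_node):
        """Drop the failed node from the pool and re-provision its workload elsewhere."""
        workload = self.nodes.pop(failed_node)
        return self.provision(workload)

pool = ResourcePool(["rack1-a", "rack1-b", "rack2-a"])
node = pool.provision("web-tier")
replacement = pool.handle_failure(node)   # the workload moves; no ticket, no human in the loop
print(f"web-tier moved from {node} to {replacement}")
```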
Now - if this system existed (and, of course, it does), then why would you need a CMDB?
Say "Yes" to treating the data center at the system-level scale, not at the atomic scale.
I know that this is quite a provocative subject, but take a moment to consider where I'm going:
My thesis: CMDBs will be doomed either to (a) a short-lived existence as they sediment into other data center products, or (b) disappearing altogether as the industry finally realizes that utility computing (using generic hardware and standard stacks) obviates the need for an a la carte solution which tracks which-asset-is-where-and-doing-what-for-whom.
My evidence: Do you think that Amazon Web Services' EC2 compute "cloud" went out and purchased a commercial CMDB to manage their infrastructure and billing? Do you think Google maintains a central CMDB to track what department owns what machine? Isn't it odd that an umteen-volume ITIL process ultimately relies on the existence of a conceptual CMDB? (In fact, doesn't it ring strange that such a "panacea" technology needs a so many volumes of paper just to make it work?)
My logic: CMDBs are essentially a "band aid" for a larger (and growing) problem - complexity. They inherently do nothing to reduce the underlying complexity, configuration variances, or hand-crafted maintenance of the underlying infrastructure. In short, they are just another point-solution product that center managers think will help them drive to a simpler lifestyle -- and they're dead wrong. Instead, they'll be buying another complexity layer - but this time, one that requires them to re-work process as well.
"But wait!" you say; CMDBs are needed because how else do you get your head around infrastructure variances? On what do you base configuration management? What do compliance systems use as a basis? Incident management processes have to "check in" somewhere, don't they?
Well, yes and no. By saying yes to most of the questions above, you're unconsciously complying with the status quo mindset of how data centers are architected and run. With layers of special-purpose tools, each supposedly simplifying the tasks-at-hand. But collectively, they themselves create complexity, redundancy, and the need for more tools like themselves. Every one of these tools maintain the assumption of continued complexity, configuration variances, and hand-crafted maintenance of underlying infrastructure
So? BREAK THE MODEL!
My conclusion: What if the data center had an "operating system" ? This would automatically pool, re-purpose and re-provision all types of physical servers, virtual machines, networking and storage infrastructure. It would optimize how these resources were applied and combined (even down to selecting the most power- and compute-efficient hardware). It would respond to failures by simply managing around them and re-provisioning alternate resources. It would react to disasters by selecting entirely different physical locations for compute loads. And all completely platform-agnostic.
Now - if this system existed (and, of course, it does), then why would you need a CMDB?
- The "Data base" and the "configuration-of-record" would have to already be known by the system, therefore present from the start, and constantly updated in real-time
- Any infrastructure variances would be known in real-time - or eliminated in real-time, as the system re-configured and optimized
- Configuration management, as we understand it today, would be obviated altogether. The system would be given a set of policies from which it would be allowed to choose only approved configurations (all standard, or not). The approved configurations would be constantly monitored and corrected if needed. There would be no "configuration drift" because there would be no human interactions directly with machines - only policies which consistently delivered upgrades, patches and/or roll-backs.
- Compliance (per above) would essentially be governed by policy as well. The system's internal database (and historic record) could be polled by any external system which wanted to ensure that compliance was enforced over time.
- Traditional incident management processes would essentially be a thing of the past, since most would be dealt with automatically. In essence, trouble tickets would be opened, diagnosed, corrected and closed automatically, and in a matter of seconds or minutes. Why then a massive ITIL encyclopedia to govern a non-existent human process?
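To make the policy-driven configuration and compliance bullets above concrete, here's a minimal sketch of the kind of reconciliation loop I have in mind. The roles, approved configurations, and node names are all hypothetical:

```python
# Minimal sketch of policy-driven configuration enforcement.
# Approved configurations, roles, and node names are hypothetical.

APPROVED = {
    "web": {"os": "linux-2.6", "app": "apache-2.2"},
    "db":  {"os": "linux-2.6", "app": "mysql-5.0"},
}

fleet = {
    "node-01": {"role": "web", "os": "linux-2.6", "app": "apache-2.2"},
    "node-02": {"role": "web", "os": "linux-2.4", "app": "apache-2.2"},  # drifted
}

def reconcile(fleet, approved):
    """Compare every node against its approved configuration and correct any drift.
    In a real system the 'correction' would be an automated re-provision, not a dict update."""
    for name, node in fleet.items():
        target = approved[node["role"]]
        drift = {k: v for k, v in target.items() if node.get(k) != v}
        if drift:
            print(f"{name}: drift detected {drift} -- re-provisioning to approved configuration")
            node.update(target)

reconcile(fleet, APPROVED)
# Run continuously, this loop is what makes a separate, manually-fed CMDB redundant:
# the policy is the record, and the actual state is forced to match it.
```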
Say "Yes" to treating the data center at the system-level scale, not at the atomic scale.