Tuesday, March 31, 2009
With all of the Intel/Nehalem (Xeon 5500 series) processor announcements this week, one stands out.
As Dell announced its new 11th-generation (11G) server product line, Egenera also announced PAN Manager support for Dell hardware, including the Dell M610 11G blades. This means that Dell blades are exceptionally well-suited for high-performance, highly-consolidated, virtualized workloads -- all with five-9s of reliability. It also means that Nehalem-based hardware, networking, and infrastructure can now be managed using infrastructure orchestration (what others are calling unified computing) today.
That's a huge shift from only a year ago: Dell finally has an enterprise-grade, mission-critical level of reliability to offer for virtual, physical, or mixed applications -- Citrix, Microsoft, VMware, Linux, or Unix.
Very cool, guys.
Tuesday, March 24, 2009
Videos: What is Infrastructure Orchestration
My first endeavors into online video. No pictures of me, but at least you get the audio.
Here is a brief, high-level overview of what's meant by Infrastructure Orchestration, including some messy annotations of mine while I speak:
And here's another video of it in action: The Dell PAN system that you can buy today:
Monday, March 23, 2009
Join me at NY Cloud Computing Expo
This week I'll be putting the finishing touches on my Day #1 presentation about "Building a Compute Cloud Infrastructure" and where to begin (3:25pm next Monday, to be exact). It's really about my experiences of how enterprises and SPs alike have been approaching building a foundational "Infrastructure-as-a-Service" -- regardless of whether they're planning a virtual, physical, or mixed environment.
Syscon has put together quite an assembly of keynotes, led off by Werner Vogels of Amazon and Kristof Kloeckner of IBM.
My angle is simple: I find that whenever and wherever people talk about clouds, they usually draw a little stack diagram... and at the bottom of the stack is usually a box that says "commodity infrastructure," on which all of the virtualization, PaaS, and SaaS stuff sits. Well, what *IS* that foundation made from? How do you assure a scalable, five-9s infrastructure for your cloud (or for anything else, for that matter)? So clearly I'll also be alluding to Infrastructure Orchestration (aka unified computing) as well.
I plan to first discuss the blind assumptions many of us in IT have been following when architecting systems, and how to get around these 'old world' assumptions. I'll then transition into how enterprises have been constructing these more adaptive, resilient IaaS foundations, touching on how they work and what they look like in real life. That's it.
Hope to see you Monday in NY at the Roosevelt Hotel.
Unified computing is so easy -- 6 simple steps
With all of the chatter now about "Unified Computing" -- as well as all of the skeptics thinking it's a blue-sky future -- I wanted to outline how incredibly simple it really can be. Even I can do it :)
At Egenera, we've been in this business since 2001, terming this technology "Infrastructure Orchestration."
When I asked my SEs how to describe using PAN Manager with the Dell PAN system to abstract data center infrastructure in software, they gave me back a surprisingly simple set of instructions. Not technical acronyms or jargon. I personally watched (and participated in) getting a compute environment, complete with high-availability fail-over and DR, up-and-running in under 15 minutes. And the cool part is that it included both native OSs, as well as VMs.
Using our GUI, the Administrator then (a rough sketch of these steps in code follows this list):
- Defines resources – Identify the available individual building-block resources, which include pools of blades, internal switches, external switches, disks/LUNs, and OS images.
- Organizes resources – Define logical groupings & access privileges for different pools and/or allocations as needed by the business. Each group and its resources are distinct and secure from the others.
- Builds profiles and servers – Assign physical blades; assign network connectivity; assign disks (each LUN is presented as a SCSI device); assign an OS (which could be a native OS or a VM host OS like VMware ESX); finally, boot the server profile.
- Assigns HA policies – Specify dedicated failover blades or shared pools, before or after building/booting the server.
- Defines DR policies – Entire server environment configurations (or subsets) can be defined and instantiated on demand, on a schedule, or for any other reason.
- (Optional) Reassigns servers – As simple as point/click/reboot. More than one server profile can be assigned to each blade. Changes can be triggered on a schedule or by other commands.
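For readers who prefer code to screenshots, here's a rough, self-contained sketch of those same six steps using plain Python data structures. To be clear, this is my own illustration of the workflow -- the names and structures below are invented, not the actual PAN Manager API -- but it mirrors what the Administrator does in the GUI.

```python
# A rough, self-contained sketch of the six steps above, using plain Python
# data structures. This is my own illustration of the workflow -- it is NOT
# the PAN Manager API -- but it mirrors what the Administrator does in the GUI.

# 1. Define resources: the raw building blocks available to the PAN.
resources = {
    "blades": ["blade-1", "blade-2", "blade-3", "blade-4"],
    "vlans":  ["prod-data", "prod-backup", "mgmt"],
    "luns":   ["lun-0001", "lun-0002"],
    "images": ["rhel5-oracle", "vmware-esx-3.5"],   # native OS or VM host
}

# 2. Organize resources: a distinct, secure pool for one business group.
finance_pool = {
    "blades": resources["blades"][:3],
    "spares": resources["blades"][3:],
    "luns":   resources["luns"],
}

# 3. Build a profile and boot it: blade + networking + storage + OS image.
profile = {
    "name":   "erp-db-01",
    "vlans":  ["prod-data", "prod-backup"],
    "luns":   ["lun-0001"],            # presented to the OS as a SCSI device
    "image":  "rhel5-oracle",          # could just as easily be the ESX image
    "blade":  finance_pool["blades"][0],
    "booted": True,
}

# 4. Assign an HA policy: fail over to the pool's shared spare blades.
profile["ha_policy"] = {"failover_to": finance_pool["spares"]}

# 5. Define a DR policy: re-instantiate the environment elsewhere on demand.
finance_pool["dr_policy"] = {"target_site": "dr-site-east", "trigger": "on-demand"}

# 6. (Optional) Reassign the server: point, click, reboot.
profile["blade"] = finance_pool["blades"][1]
print(f"{profile['name']} now runs on {profile['blade']}")
```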
Any engineer would appreciate how elegant a solution -- and set of user instructions -- this is.
Wednesday, March 18, 2009
A data center trifecta? (Prediction)
Once in a while, I sit back and think about what the real transformational forces are that will change how IT operates. And I've come to the conclusion that there are three fundamental technological movements that will do this... and the third will surprise you.
1. Software virtualization.
No surprise here. Abstracting software from the underlying CPU yields mobility, consolidation, and degrees of scalability. It also simplifies automated management and portability of workloads, such as with virtual appliances / AMIs. Aside from a few managerial kinks being worked out, this technology is already accepted de facto, especially because we're already seeing it become a commodity play and sediment into other products.
2. Infrastructure orchestration / unified computing.
This is the one we're all beginning to hear about. As I recently outlined, this technology is a perfect complement to software virtualization -- it essentially gives "mobility" to infrastructure, rather than to software. It allows IT operations to define I/O, storage connectivity, and networking entirely in software, resulting in stateless and re-configurable CPUs. Egenera was the pioneer in this area, but the market is now getting an adrenaline shot in the arm from Cisco's UCS announcement.
Unified computing / Infrastructure orchestration is valuable because it enables a highly-reliable, scalable, and re-configurable infrastructure -- a perfect platform for physical *and* virtual software. It permits IT to "wire once" and then create CPU configurations (virtual NICs, HBAs, networks/VLANs, storage connections) using a unified/consolidated networking fabric. Plus, it is a simple, elegant, clearly more efficient approach. Think of it as provisioning hardware using software. This approach has numerous positive properties -- not the least of which is that you can clone hardware configurations when you need to (a) scale, (b) migrate, (c) fail over, or (d) recover from entire system failures [disasters]. Again, regardless of physical or virtual payloads.
We'll see this technology begin to accelerate in the market. Promise.
3. Intelligent software provisioning
"Huh?" I hear you say? Yes! While I'm not sure what this market segment may eventually be called, it represents the third critical data center managemnet component. #1 gives software mobility; #2 yields infrastructure flexibility. And #3 is how the actual software (physical, virtual, appliances, AMIs, etc.) is constructed and "doled-out" as a workload on #1 and #2.
My eyes were opened after learning more about a company called FastScale. Picture an intelligent software provisioning system that knows the minimum set of software libraries required to run an OS or application. As it turns out, this is usually only around 10%-15% of the multi-gig bag-of-bits you try to boot every time you bring up a server. And that even includes the VM, where virtual systems are involved.
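To make the concept concrete, here's a toy sketch -- emphatically not FastScale's actual mechanism -- of the core idea: compute the transitive closure of components an application actually needs, and provision only those, instead of shipping the whole multi-gig image. The dependency map below is invented for illustration.

```python
# Toy illustration of the "minimal footprint" idea. This is NOT how FastScale
# actually works -- just a sketch of resolving the transitive set of
# components an application truly needs, rather than shipping a full image.
from collections import deque

# Hypothetical dependency map: component -> libraries it requires.
DEPENDS_ON = {
    "httpd":        ["libc.so.6", "libssl.so", "libapr.so"],
    "libssl.so":    ["libc.so.6", "libcrypto.so"],
    "libapr.so":    ["libc.so.6"],
    "libcrypto.so": ["libc.so.6"],
    "libc.so.6":    [],
}

def minimal_footprint(app):
    """Return the transitive closure of components needed to run `app`."""
    needed, queue = set(), deque([app])
    while queue:
        component = queue.popleft()
        if component in needed:
            continue
        needed.add(component)
        queue.extend(DEPENDS_ON.get(component, []))
    return needed

footprint = minimal_footprint("httpd")
print(f"Provision only {len(footprint)} components: {sorted(footprint)}")
```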
The result? Four really nice properties:
a) Speed. Getting applications up-and-running an order-of-magnitude faster. Not having to move as many bits over the network to boot a given server is a real time and money saver.
b) More efficient consolidation. With smaller software footprints, more VMs, appliances, etc. can fit on a given memory footprint. That means that denser consolidation is frequently possible -- not to mention $ savings on those gigs of memory you have to buy when you consolidate.
c) Inherent configuration management. With a database of all libraries and bits, plus knowing where you put them, you can always monitor configurations, verify compliance, etc. Plus, you can track what patches went where (and frequently, you may find you don't even need a patch if it doesn't touch any of the reduced set of libraries you're using!)
d) Ability to provision into any form of container: In other words, this system can provision onto a bare-metal CPU, into a VM, or for that matter, into an appliance like an AMI if you're using a compute cloud. Wow, very neat.
This intelligent provisioning approach is also highly-complementary to existing compliance and config management products like OpsWare (HP) or BladeLogic (BMC).
Summary
So what if you have all 3 of these technologies? You'd have a data center where
- Workloads were portable, and relatively platform-independent
- Infrastructure was instantly re-configurable and adapted to business conditions, failures, etc.
- Software could be distributed and brought-up on the order of seconds, allowing near-instantaneous adaptation to scale, business demand or failures.
Monday, March 16, 2009
What is Infrastructure Orchestration / Unified Computing?
Infrastructure Orchestration and Unified Computing are both terms for a technology whereby server CPU, I/O, storage connectivity, and networking can all be defined and configured in software. This approach allows IT operators to rapidly repurpose CPUs without the constraint of having to physically reconfigure each of the I/O components by hand – and without requiring a hypervisor. It massively reduces the quantity and expense of the physical I/O and networking components (because much of the I/O is consolidated), as well as the time required to configure them. In return, it offers an elegant, simple-to-manage approach to data center infrastructure administration.
From an architectural perspective, this approach is also referred to as a “compute fabric” or a “Processing Area Network” since the physical CPUs are made stateless because their physical addressing (of I/O, Network and Storage naming) is completely abstracted away. And, by abstracting the I/O, both data and storage connections can be converged, further simplifying the network infrastructure. What is left is a collection of raw, pooled CPUs that can be assigned on-demand, and whose logical configurations and network connections can be instantly defined.
Infrastructure Orchestration is very different from – but highly complementary to – hypervisor-based virtualization. Think of hypervisors as operating “above” the CPU, abstracting software (applications and O/S) from the CPU and thereby giving the software portability. Think of Infrastructure Orchestration as operating “below” the CPU, abstracting network and storage connections and thereby giving the CPU itself portability. Note that a major difference is that Infrastructure Orchestration does not operate via a software “layer” the way a hypervisor does.
The complementarity between Infrastructure Orchestration and virtualization is significant. Take an example such as a VM host failure, where the entire physical machine, network, and storage configuration needs to be replicated on a new physical server. This can be accomplished with a spare “bare metal” server, where a new host can be created on the fly, all the way down to the same NIC configuration as the original server.
Now, expand this example to the scenario of an entire environment failure. Infrastructure orchestration can re-create the physical machine hosts as well as their networking, on an otherwise “cold” bare-metal and non-dedicated infrastructure at a different location.
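Here's a deliberately simplified model of the idea (my own illustration, not Egenera's actual data structures): a server's entire I/O identity lives in a software-defined profile, so recovering from a host or site failure amounts to re-binding that profile to a different bare-metal blade.

```python
# Deliberately simplified model -- my own illustration, not Egenera's actual
# data structures. The blade is stateless; the server's identity (MACs, WWNs,
# VLANs, boot LUN, OS image) lives in a software-defined profile.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServerProfile:
    name: str
    mac_addresses: List[str]     # virtual NICs
    wwns: List[str]              # virtual HBAs for SAN access
    vlans: List[str]
    boot_lun: str
    os_image: str                # a native OS, or a VM host such as ESX

@dataclass
class Blade:
    slot: str
    site: str
    profile: Optional[ServerProfile] = None   # stateless until a profile is bound

def fail_over(profile, spare_blades):
    """Re-create the failed server on any free blade -- same NICs, WWNs,
    networks, and storage -- whether the payload was physical or virtual."""
    for blade in spare_blades:
        if blade.profile is None:
            blade.profile = profile      # bind the identity; the blade then boots
            return blade
    raise RuntimeError("no spare capacity available")

# Example: a VM host dies; its profile is re-bound to a cold spare at the DR
# site, right down to the original NIC configuration.
esx_host = ServerProfile("esx-host-07", ["00:0c:29:aa:bb:01"],
                         ["50:06:0b:00:00:c2:62:00"], ["prod-vmotion"],
                         "lun-0042", "vmware-esx-3.5")
spares = [Blade(slot="frame2/slot5", site="dr-site")]
recovered = fail_over(esx_host, spares)
print(f"{esx_host.name} re-created on {recovered.slot} at {recovered.site}")
```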
Best of all, the properties of Infrastructure orchestration, such as the ability to provision a new server quickly, apply to both physical servers as well as to virtual servers. So this is an ideal technology to use when managing mixed physical and virtual environments, including “cloud computing” infrastructure.
Finally, Infrastructure Orchestration (unified computing) is a central technology for creating a highly reliable, dynamic data center. The technology is also core to a “Real-Time Infrastructure” (as defined by Gartner Research) as well as to “Organic IT” (as defined by Forrester Research), both computing architectures that respond rapidly to changes in demand, to failures, and to unpredictable business requirements.
Thursday, March 12, 2009
First fruits from the Dell/Egenera deal
It appears the Egenera/Dell deal is moving forward with velocity. A first joint customer was announced today, the US Department of Veterans Affairs (VA) Corporate Data Center Operations (CDCO).
This is the first of a pipeline of customers buying-into Infrastructure Orchestration (also referred to as Fabric Computing or Unified Computing) - first offered in 2001 by Egenera with their high-end BladeFrame + PAN Manager software, and now being mainstreamed as the Dell PAN System by combining PAN Manager with Dell hardware.
The VA's CDCO is really a hosting facility - much the way an xSP hosts applications for third parties. In this case, they're hosting a mission-critical application for an influenza early-warning system. I've met with their CTO, who's a pretty forward-thinking guy. He recognizes that his "customers" frequently change requirements, and that computing demands frequently change, too. So, for an environment comprising physical databases and virtualized instances, the Egenera/Dell system provided significant agility (the ability to re-provision quickly) while maintaining a mission-critical level of availability.
The Dell/Egenera deal has been getting some profile lately - as it takes on similar technologies such as IBM's Open Fabric Manager and HP's Insight Orchestration products. Stay tuned for some more juicy news.
Wednesday, March 11, 2009
Energy efficiency and dynamic infrastructures
I just read an interesting blog by Rob Aldrich on the advantages of "unified fabrics" when pursuing energy efficiency.
This story is more than just about gaining energy efficiency by reducing the amount of network hardware. Rather, it's all about using (and re-purposing) compute hardware more effectively.
A "unified fabric" is part of what we term "infrastructure orchestration" and "unified computing" - it is the abstraction of compute, I/O, storage and network infrastructure into a dynamically-configurable "fabric". In that way, servers and their associated infrastructure, can be logically-created and/or re-configured. It's a fantastic complement to virtualization. One way to think about it is that VMs provide a logical way to create or re-configure new software server stacks. In turn, infrastructure orchestration (the "fabric") is the way to provide a logical way to create or re-configure I/O, network and storage. So when you're moving VMs around (on purpose, or in response to an unplanned event) you can create compatible infrastructure on-the-fly.
What's the efficiency story here? The ability to re-purpose entire compute, network, and storage systems on-the-fly, in response to compute demand. By using these resources more efficiently, data centers ultimately need fewer physical assets -- and those assets consume less power.
Going a step further, these assets can be re-purposed in response to energy efficiency triggers. Workloads can be moved a few feet in response to "hot spots", or to entirely different geographic locations based on power cost/availability.
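As a thought experiment (not a description of any shipping feature), a policy engine sitting on top of such a fabric might decide where to re-instantiate a workload's profile based on thermal and power-cost signals. All of the locations, costs, and thresholds below are invented:

```python
# Thought-experiment sketch: a toy policy that picks where a workload's
# profile should be re-instantiated, based on power cost and inlet
# temperature. All locations, costs, and thresholds are invented.

SITES = [
    {"name": "boston-row-3", "power_cost_kwh": 0.17, "inlet_temp_c": 31},  # hot spot
    {"name": "boston-row-9", "power_cost_kwh": 0.17, "inlet_temp_c": 22},
    {"name": "quincy-dc",    "power_cost_kwh": 0.04, "inlet_temp_c": 21},
]

MAX_INLET_TEMP_C = 27   # above this, treat the location as a hot spot

def best_site(sites):
    """Prefer the cheapest power among locations that are not running hot."""
    cool = [s for s in sites if s["inlet_temp_c"] <= MAX_INLET_TEMP_C]
    return min(cool or sites, key=lambda s: s["power_cost_kwh"])

target = best_site(SITES)
print(f"Re-instantiate the workload's profile at: {target['name']}")
```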
Thursday, March 5, 2009
What if software virtualization is the tip of the iceberg?
Yesterday I attended the IDC Directions conference in San Jose. Chock-full of great presentations, networking and predictions, topped-off with a closing keynote from Nick Carr on The Big Switch.
Over lunch, I was able to sit-in on a discussion with two IDC analysts discussing how virtualization is expanding the role of the network, and the steps IT operations were taking to simplify the data center. It was all about unifying and consolidating the data center network, they said.
"But aren't we unifying and consolidating servers too?" I thought. And then it occurred to me that perhaps there is a logical progression of simplification that IT is doing, without knowing it's doing it. Maybe it's the "consolidation maturity model" or just stages of IT's "simplification progression." Either way, it seems to dovetail with how the industry is progressing.
Seems to me these 4 stages happen roughly in order:
1. Hardware consolidation. We're in the thick of this today. When most IT professionals think "virtualization" they are mostly wanting to consolidate/reduce hardware. Software is abstracted away from the CPU, making management of applications more efficient. Players: VMware, Citrix, MSFT, Parallels, etc.
2. I/O consolidation: As I've pointed out in the past, I/O has unfortunately been tightly-bound to the CPU, creating complexity in configuration, addressing, wiring, etc. Instead, I/O can be abstracted in software, and then logically instantiated/assigned. The industry is now beginning to realize that I/O consolidation -- especially in the world of many VMs per host -- is helping to simplify IT management. Players: 3Leaf, Xsigo, etc.
3. Network consolidation: We're just beginning to scratch the surface of network consolidation, with a few competing technologies in the space. But essentially it's about converging data, storage, and even out-of-band management information along a single (high-bandwidth, low-latency) wire. This concept is an ideal complement to I/O consolidation. Players: all major networking vendors, plus peripheral vendors like QLogic and Emulex.
4. Compute consolidation: Perhaps the final, but highest-value, step... and it's very different from hardware consolidation. This is about being able to create stateless CPUs, with the ability to logically assign them on-demand to different workloads. It is about creating a pool of CPUs (similar to a pool of network resources or a pool of I/O, above) that can be used most efficiently, because they are ultimately re-assignable. Players: HP, IBM, Egenera
Now, this progression doesn't have to be followed strictly. Most of the industry is certainly doing #1, with their favorite VM vendor. Point-product vendors are pushing #2, while networking vendors are pushing #3. And the forward-thinking folks are already pursuing all four to create truly dynamic IT foundations.
But it seems to me that what we see as virtualization (hardware consolidation) is just one of a few upcoming "waves" of change about to simplify how IT is operated and managed.
Tuesday, March 3, 2009
Egenera PAN now available on Dell blades
Since about 2001, Egenera has been selling a high-end hardware/software combination that unifies the re-purposing of physical and virtual servers. It combines super-high-performance blades with its PAN Manager software. PAN stands for Processing Area Network, akin to a SAN in its ability to logically re-allocate CPUs and to provide ultra-high reliability. Hundreds of customers at thousands of locations know this to be the creme-de-la-creme of solutions.
In his blog today, Egenera's CTO talks about PAN Manager now available on Dell Blades.
That means a whole new level of affordability and performance. It also puts Dell's blade strategy squarely in the data center space - able to take on HP and IBM. Out of the box, Dell blades can support mission-critical levels of availability, regardless of P or V payloads.
It also endorses Egenera's historic approach to Infrastructure Orchestration and the network fabric (what others are also calling unified computing).
IT's Blind Spots in the data center
A few weeks ago I wrote about where IT Ops might be missing the boat regarding implicit assumptions we're making as an industry. I called these the "big blind spots". More from Egenera was published today on the Wall Street Journal's MarketWatch website.
I arrive at these observations when I speak with data center managers who admit that they don't have solutions to these issues. When I tell them about Egenera, their eyes widen and they go, "You can do that?"
The net-net of these "blind spots" is:
1. Virtualization does not equal agility.
Sure, VMs get you part of the way, but what about agile infrastructure? This issue seems to have been overlooked. We still have to run out to the floor to change NICs, switches, etc.
2. Virtualization does not equal simplification.
In many ways, virtualization simplifies software. But we also fail to notice the complexity it creates for managing I/O, storage connectivity, and other physical-world management issues.
3. Not all High Availability (HA) and Disaster Recovery (DR) is solved by VM technology.
The market implicitly assumes that VMs are the panacea. But the reality is that not everything gets virtualized... therefore there are "siloed" needs for HA & DR.
4. Provisioning is assumed only for software.
Everyone seems fixated on Provisioning = Software. But hardware, I/O, storage, and network have to be provisioned too.
5. Not everything is, or will be, virtual.
C'mon, folks. Depending on who you speak with, only 20%-25% of new servers are virtualized today. And many large data center operators admit that some servers will *never* be virtualized. So, what are we going to do about unifying P+V management?