Tuesday, December 21, 2010

Hosting/Cloud Index outperforms NASDAQ Nearly 4x

Exactly a year ago, in 2009, I started looking into whether the market for Hosting and Cloud Computing could be measured in the stock market, with my first blog post on the topic. I revisited the topic in January 2010, and again in February 2010.

I created a number of model portfolios based on leading public hosting companies. The list included: Digital Realty Trust; DuPont Fabros; Equinix; Internap; Iomart; Macquarie Telecom; Navisite; Rackspace; Savvis; Switch & Data; Telecity; Terremark. During 2010, this index gained 45%, versus the Nasdaq's 17%.

I also created a sub-group of those companies with explicit cloud computing offerings. That included Equinix; Navisite; Rackspace; Savvis; Terremark. During 2010, this sub-index gained 57%, versus the Nasdaq's 17%.

I'm not surprised by either outcome... Assuming the market doesn't lie (prices already reflect valuations and expectations), I draw a few personal conclusions:
  • More enterprises are turning to outsourcing their IT. Whether or not it involves cloud computing, I suspect enterprises find it advantageous to hand over IT (management, or at least co-location) to businesses where this technology is core. It may also reflect enterprises' skyrocketing consumption of computing power.
  • It would appear that hosting firms with Cloud Computing offerings are being valued higher than their counterparts. I haven't looked at cashflows or balance sheets (yet) to determine whether this is actual value, or speculative value.
What's next for 2011? I'm going to guess more of the same, if not an acceleration, as more companies move to outsource non-core IT operations. I'll also be watching consolidation of data center operators, as recently evidenced by Rackspace acquiring Cloudkick, and Cologix acquiring Navisite. 'Guess I'll have to update my portfolio companies...

Monday, December 13, 2010

IO Virtualization: The “Hypervisor” for Your Infrastructure

An Explosive Technology, But Don't Treat as a Standalone Product
More than ever in 2010, IO Virtualization (IOV) has been showing up in products and being written and spoken about. Because I've had a few years' experience with this technology, I want to give a very brief explanation of the concept, and focus more on why it will be increasingly important.

In particular, I want to draw an analogy: you should view IOV as a critical enabling feature of future IT management… but not as a stand-alone product. Why? It's similar in concept to how the hypervisor is an enabler of data center management services, but usually not used as a stand-alone product.

This blog is related to my 2009 installment on Fabric as an IT Enabler.

What is IOV?

[Figure: Today's Physical Infrastructure]
IO Virtualization is an approach whereby physical IO components such as Network Interface Cards (NICs), Host Bus Adaptors (HBAs) and Keyboard/Video/Mouse (KVM) ports are reproduced logically rather than physically. In other words, a physical IO port (Ethernet, Infiniband, PCI, etc.) can represent itself to the OS in any number of logical configurations.

Clearly this is convenient because it eliminates multiple costly IO devices that also consume power and installation time. But it's also convenient because IO – and its associated addressing such as IPs, MACs, Worldwide Names, etc. – can be instantly configured with a mouse.

The other consequence of IOV is that a single physical port means a single physical cable. In essence, a server's logical IO is consolidated down to a single (physical) converged network which carries data, storage and KVM traffic. No matter how many logical IO devices you configure for a server, there is still only a single cable out the back. IOV thus yields the ideal “wire-once” server environment that's still infinitely re-configurable.
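To make the "many logical devices, one wire" idea concrete, here's a minimal, purely illustrative sketch. The class and field names are my own invention for this post, not any vendor's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class LogicalNIC:
    name: str
    mac: str        # logically assigned in software, not burned into hardware
    vlan: int

@dataclass
class LogicalHBA:
    name: str
    wwn: str        # Worldwide Name, also assigned in software

@dataclass
class ServerIOProfile:
    """All of a server's IO, carried over one converged physical link."""
    physical_port: str                      # the single cable out the back
    nics: list = field(default_factory=list)
    hbas: list = field(default_factory=list)

    def add_nic(self, name, mac, vlan):
        self.nics.append(LogicalNIC(name, mac, vlan))

    def add_hba(self, name, wwn):
        self.hbas.append(LogicalHBA(name, wwn))

profile = ServerIOProfile(physical_port="eth-fabric0")
profile.add_nic("data0", "02:00:00:00:00:01", vlan=10)
profile.add_nic("mgmt0", "02:00:00:00:00:02", vlan=20)
profile.add_hba("fc0", "50:00:00:00:00:00:00:01")
# Any number of logical devices; still one physical wire.
print(len(profile.nics) + len(profile.hbas), "logical devices on", profile.physical_port)
```

Adding a NIC or HBA here is just appending a record; that's the essence of "instantly configured with a mouse" versus installing a card and pulling a cable.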

The overall value of IOV becomes clear fast:  Fewer physical IO devices to buy, fewer cables to install, zero re-cabling, fewer physical ports to buy, and instantly re-configurable IO. 

Differing Implementation Approaches

[Figure: Infrastructure with IO Virtualization]

In brief, there are a few differing approaches to IO virtualization:
  • Existing on-board Ethernet with new IO drivers: (e.g. Egenera)
  • Converged Networking Adapters (e.g. Qlogic, Emulex)
  • Appliances + high-throughput IO devices (e.g. Xsigo)
  • Existing physical IO but with address hardware-based mapping/virtualization (e.g. HP VirtualConnect)
Putting IOV in Perspective

You should think of IOV using the following analogy: in the way that the hypervisor abstracts software in the application domain, IOV abstracts IO and networking in the infrastructure domain. (To be clear, though, IOV is not a software layer the way the hypervisor is.)

This analogy leads to a few more observations:

  1. Where the hypervisor added software portability in the software domain, IOV will do the same for the infrastructure domain. Higher-order services like HA and consolidation were made possible by the hypervisor. Similarly, HA, DR and migration can be accomplished with IOV. And what's more, a hypervisor is not required for IOV, so you can use IOV with native applications too.
  2. The hypervisor used to be the focus, but now it’s merely an enabling feature embedded within higher-level IT management products. Those products leverage the hypervisor to perform tasks such as migration, fail-over and consolidation. You should view IOV similarly: it is an enabling feature that will allow for analogous IO consolidation, migration and fail-over.
  3. Where hypervisor implementations and performance used to be hotly debated, nobody really cares anymore. Today the real *value* is not in the hypervisor, but in the management tools surrounding it. Similarly, IOV should be judged less on how it is implemented, and more on the management tools and automation which manage it.
Forrester analyst Galen Schreck made a similar observation recently:
….Aside from benefits like reducing cabling and switch ports, I think the most interesting aspect of virtualized IO is the ability of a physical server's personality to be moved to any other server in the data center. In addition to the underlying network technology, the thing that makes this possible is integrated management of the server and data center fabric. In most cases, this won't be a stand-alone product that you acquire (though you can build your own solution from InfiniBand and PCI Express products on the market). This capability will most likely be an integrated part of whatever server and network environments you select, but now is the time to begin planning how you'll tie it in with the rest of your system management environment.
IO Virtualization in the IT Management Landscape

How might IO virtualization be used as part of the IT ecosystem in an integrated manner?

In much the same way that the hypervisor has since been embedded in tools like VMware’s vCenter, IOV can (and has been) embedded with higher-level management tools.

Taking an example I'm rather familiar with, Egenera's PAN Manager software surrounds IOV technology with facilities such as converged fabric networking, server boot control and storage connectivity. When used alongside these and other services, IOV enables:

  • Server High Availability – In the case of hardware failure, a server's infrastructure state (IO addressing, storage naming, network topology and workload) can be re-instantiated on another bare-metal server. This provides a 'universal' style of failover that doesn't require clustering software. And what's more, the failed-over workload could be a native OS or a VM host. IOV is agnostic to the workload!
  • Disaster Recovery – Expanding on the example above, if an entire domain of servers fails, the entire group of server IO states, networking states, etc. can be recovered onto another domain (assuming shared/replicated storage). This approach to DR is elegant because it fails over not just workloads but the entire logical server/environment configuration as well.
  • Scaling Out – A series of server profiles can be replicated into an instant cluster. Workloads, NICs, HBAs, network addressing and storage connections (complete with fabric-based load balancing) can all be cloned… starting with the IO and networking profiles, made possible through IOV.
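The failover case above can be sketched in a few lines. This is my own simplified pseudo-implementation in Python, not PAN Manager's actual API; the point is that because a server's identity is just data, "failover" reduces to re-assigning that data to a spare node:

```python
# Hypothetical sketch: a server's identity (MACs, WWNs, boot LUN) lives in
# a software profile, so failover is a re-assignment of that profile to a
# free bare-metal node -- no clustering software involved.

servers = {
    "node1": {"state": "failed",
              "profile": {"macs": ["02:00:00:00:00:01"],
                          "wwns": ["50:00:00:00:00:00:00:01"],
                          "boot_lun": "lun-7"}},
    "spare1": {"state": "free", "profile": None},
}

def fail_over(pool, failed_name):
    """Move the failed node's logical profile onto a free bare-metal node."""
    profile = pool[failed_name]["profile"]
    for name, node in pool.items():
        if node["state"] == "free":
            node["profile"] = profile        # re-instantiate IO/storage identity
            node["state"] = "active"
            pool[failed_name]["profile"] = None
            return name
    raise RuntimeError("no free node in pool")

target = fail_over(servers, "node1")
print("workload re-instantiated on", target)
```

Note that nothing in the sketch cares whether the profile's workload is a native OS or a VM host, which is the "agnostic to the workload" point above.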
In future blogs I’ll dive more deeply into how software-based IOV operates as part of the IT management ecosystem, and why it is a popular approach because of its cross-platform compatibility in a heterogeneous data center.

Thursday, August 12, 2010

Converged Infrastructure, Part 3

Converged Infrastructure: What it Is, and What it Isn't

In my two earlier posts, I first took a stab at an overview of converged infrastructure and how it will change IT management, and in the second installment, I looked a bit closer at converged infrastructure's cost advantages. But one thing I sense I neglected was to define what's meant by converged infrastructure (BTW, Cisco terms it Unified Computing). Even more important, I also feel the need to highlight what converged infrastructure is not. Plus, there are vendor instances where The Emperor Has No Clothes -- i.e. where some marketers claim they suddenly have converged infrastructure when in fact they are vending the same old products.

Why split hairs over definitions? Because true converged infrastructure / unified computing has architectural, operational, and capital cost advantages over traditional IT approaches. (AKA: Don't buy the used car just because the paint is nice.)

Defining terms - in the public domain
Obviously, it can't hurt to see how the vendors self-describe the offerings... here goes:
Cisco's Definition (via webopedia)
"...simplifies traditional architectures and dramatically reduce the number of devices that must be purchased, cabled, configured, powered, cooled, and secured in the data center.  The Cisco Unified Computing System is a next-generation data center platform that unites compute, network, storage access, and virtualization into a cohesive system..."

Egenera's Definition
"A technology where CPU allocation, data I/O, storage I/O, network configurations, and storage connections are all logically defined and configured in software. This approach allows IT operators to rapidly re-purpose CPUs without having to physically reconfigure each of the I/O components and associated network by hand—and without needing a hypervisor."
HP's Definition
"HP Converged Infrastructure is built on a next-generation IT architecture – based on standards – that combines virtualized compute, storage and networks with facilities into a single shared-services environment optimized for any workload."
Defining terms - by using attributes
Empirically, converged infrastructure needs to have two main attributes (to live up to its name): It should reduce the quantity and complexity of physical IT infrastructure, and it should reduce the quantity and complexity of IT operations management tools. So let's be specific:

Ability to reduce quantity and complexity of physical infrastructure:
  • virtualize I/O, reducing physical I/O components (e.g. eliminate NICs and HBAs)
  • leverage converged networking, reducing physical cabling and eliminating re-cabling
  • reduce overall quantity of servers, (e.g. ability to use free pools of servers to re-purpose for scaling, failure, disaster recovery, etc.)
Ability to reduce quantity and complexity of operations/management tools:
  • be agnostic with respect to the software payload (e.g. O/S independent)
  • fewer point-products, less paging between tool windows (BTW, this is possible because so much of the infrastructure becomes virtual and therefore more easily logically manipulated)
  • reduce/eliminate the silos of visualizing & managing physical vs virtual servers, physical networks vs virtual networks
  • simplified higher-level services, such as providing fail-over, scaling-out, replication, disaster recovery, etc.
To sum up so far, if you're shopping for this stuff, you need to:
a) Look for the ability to virtualize infrastructure as well as software
b) Look for fewer point products and less windowing
c) Look for more services (e.g. HA, DR) baked into the product.

Beware.... when the Emperor Has No Clothes...
In closing, I'll also share my pet peeve: When vendors whitewash their products to fit the latest trend. I'll not name-names, but beware of the following stuff labeled "converged infrastructure":
  • If the vendor says "Heterogeneous Automation" - that's different. For example, it could easily be scripted run-book automation.  This doesn't reduce physical complexity in the least.
  • If the vendor says "Product Bundle, single SKU" - Same as above. "Shrink wrapped" does not equal "converged"
  • If the vendor says "Pre-Integrated" - This may simplify installation, but does not guarantee physical simplicity nor operational simplicity
Thanks for reading the series so far. I'm pondering a fourth-and-final installment on where this whole virtualization and converged infrastructure thing is taking us - a look at possible future directions.

Friday, June 25, 2010

Postcards from the IT Financial Management Association

This week marks the third time I have been invited to speak at the ITFMA World of IT Financial Management conference.  This is a really amazing/unique conference, created nearly single-handedly by Terry Quinlan, their Executive Director. Quick overview:
The IT Financial Management Association (ITFMA) was established in 1988 and founded the IT Financial Management profession at that time. ITFMA is the only association dedicated to this profession and provides a comprehensive education program on the principles and practices used to financially manage Information Technology (IT) organizations. ITFMA is the national leader in the education of IT financial management professionals and the only recognized provider of certification in the various financial disciplines of IT financial management.
The attendees are largely non-technical: financial managers, controllers, project managers and purchasing managers, all in the IT field and mainly at F1000 companies.

And what sets this conference apart for me is that 90% of the topics of conversation are non-technical. It's not about the speeds-and-feeds, but rather about the project management, cost accounting, charge-back, managerial and regulatory issues facing IT. It gave me pause that, while technologists focus on keeping the electrons moving, there are also folks who keep the paper and the money moving.

One particularly illustrative conversation comes to mind -- with an IT financial manager from the State of Oregon, who oversees the state's shared/hosted IT infrastructure. They were promised by a large national consulting company that, through consolidation of equipment and data centers, the state would save tons of $$ and reduce managerial headcount as well. As it was described to me, the technical consolidation was largely a success, but the consultant failed to accurately account for the business and managerial staffs associated with the IT. Over time, while the square feet of data center shrank, the overall IT staffing continued to grow. A lesson, lest we commit the sin of assuming that all of IT is technologists.

Overall, the ITFMA is a "must-attend" -- especially now that IT is going through such large changes as data center consolidation, virtualization, automation and cloud computing. All of these have non-linear impacts on IT finances, and all can cause disruptive effects on topics like capital forecasting, project management, expense vs investment projections, etc. Not to mention the newer issues caused by cloud computing such as data ownership, security, operations control, etc.

The event is a relative bargain to attend, and Terry always finds classic, historic venues for the conferences.

Monday, June 7, 2010

Converged Infrastructure Part 2.

Part 2. Converged Infrastructure’s Cost Advantages

In my first installment about converged Infrastructure, I  gave an outline of what it is, and how it will change the way in which IT infrastructure is managed.

In this installment, I’ll go a bit deeper and explain the source of capital and operational improvements converged Infrastructure offers – and why it’s such a compelling opportunity to pursue.

But first, the most important distinction to make between converged infrastructure and “the old way of doing business” is that management – as well as the technology – is also converged.  Consider how many point-products you currently use for infrastructure management (i.e. other than managing your software stack). 

The diagram at right has resonated with customers and analysts alike. It highlights, albeit in a stylized fashion, just how many point-products an average-sized IT department is using. This results in clear impacts on:
  • Operational complexity – coordinating tool use, procedures, interdependencies and fault-tracking
  • Operational cost – the raw expense it costs to acquire and then annually maintain them
  • Capital cost – if you count all of the separate hardware components they’re trying to manage
That last bullet, the thing about hardware components, is also something to drill down into.  Because every physical infrastructure component in the “old” way of doing things has a cost.  And I mean I/O components like NICs and HBAs, not to mention switches, load balancers and cables.

What might be possible if you could virtualize all of the physical infrastructure components, and then have a single tool to manipulate them logically?

Well, then you'd be able to throw out roughly 80% of the physical components (and associated costs) and reduce the operational complexity by roughly the same amount.

In the same way that the software domain has been virtualized by the hypervisor, the infrastructure world can be virtualized with I/O virtualization and converged networking. And once the I/O and network are virtualized, they can be composed and recomposed on demand. This eliminates a large number of components needed for infrastructure provisioning, scaling, and even failover/clustering (more on this later). And, if you can now logically re-define server and infrastructure profiles, you can also create simplified disaster recovery tools too.

In all, we can go from roughly a dozen point-products down to just 2-3 (see diagram above).  Now: What’s the impact on costs?

On the capital cost side, since I/O is consolidated, it literally means fewer NICs and the elimination of most HBAs, since they can be virtualized too. Consolidating I/O also implies a converged transport, meaning fewer cables (typically only 1 per server, 2 if teamed/redundant). A converged transport also allows for fewer switches on the network. Also remember that with fewer moving (physical) parts, you also have to purchase fewer software tools and licenses. See diagram below.
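A back-of-the-envelope illustration of the component math. The per-server counts below are my own assumptions for the sake of arithmetic, not figures from any vendor:

```python
# Illustrative per-server component counts (assumed, not measured):
# a traditional server with separate data/storage/management networks
# versus a converged one with a teamed/redundant pair of fabric links.
traditional = {"nics": 4, "hbas": 2, "cables": 6, "switch_ports": 6}
converged   = {"nics": 1, "hbas": 0, "cables": 2, "switch_ports": 2}

servers = 100
before = {k: v * servers for k, v in traditional.items()}
after  = {k: v * servers for k, v in converged.items()}

for k in traditional:
    pct = 100 * (before[k] - after[k]) / before[k]
    print(f"{k}: {before[k]} -> {after[k]}  ({pct:.0f}% fewer)")
```

Under these assumed counts, a 100-server room sheds hundreds of cards, cables and switch ports, which is where the capital-cost claim comes from.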

On the operational cost side, there are the benefits of simpler management, less on-the-floor maintenance, and even less power consumption. With fewer physical components and a more virtual infrastructure, entire server configurations can be created more simply, often with only a single management tool. That means creating and assigning NICs, HBAs, ports, addresses and world-wide names. It means creating segregated VLAN networks, creating and assigning data and storage switches. And it means automatically creating and assigning boot LUNs. The server configuration is just what you’re used to – except it’s defined in software. And all from a single unified management console.   The result: Buying, integrating and maintaining less software.

Referencing the diagram at right, here's what this looks like on a physical level: fewer components. Costly NIC and HBA cards are virtualized, their physical transport now consolidated over Ethernet ports, and switches/cables replaced by a logically-configured switch.

Ever wonder why converged infrastructure is developing such a following? It’s because physical simplicity breeds operational efficiency. And that means much less sustained cost and effort. And an easier time at your job.

Next installment: What Converged Infrastructure is not.

Thursday, May 6, 2010

Converged Infrastructure. Part 1

Since joining Egenera, I've been championing what's now being termed Converged Infrastructure (aka unified computing). It's an exciting and important part of IT management, demonstrated by the fact that all major vendors are offering some form of the technology. But it sometimes takes a while for folks (my analyst friends included) to get their heads around understanding it.  So I'm going to take a stab at a multi-part Primer on the topic.
Part 1: What is Converged Infrastructure, and how it will change data center management

Converged Infrastructure and Unified Computing are both terms referring to technology where the complete server profile, including I/O (NICs, HBAs, KVM), networking (VLANs, IP load balancing, etc.), and storage connectivity (LUN mapping, switch control) are all abstracted and defined/configured in software. The result is a pooling of physical servers, network resources and storage resources that can be assigned on-demand.

This approach lets IT operators rapidly repurpose servers – or entire environments – without having to physically reconfigure I/O components by hand—and without the requirement of hypervisors.  It massively reduces the quantity and expense of the physical I/O and networking components as well as the time required to configure them. A converged infrastructure approach offers an elegant, simple-to-manage approach to data center infrastructure administration. 

From an architectural perspective, this approach may also be referred to as a compute fabric or Processing Area Network. Because the physical CPU state (i.e. the naming and configuration of I/O, networking and storage) is completely abstracted away, the CPUs become stateless and can therefore be reassigned extremely easily, creating a “fabric” of components, analogous to how SANs assign logical storage LUNs. And, through I/O virtualization, both data and storage transports can be converged, further simplifying the physical network infrastructure down to a single wire.

The result is a “wire-once” set of pooled bare-metal CPUs and network resources that can be assigned on demand, with their logical configurations and network connections defined instantly.
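As a thought experiment, the "stateless CPU plus software-defined profile" idea can be sketched like this (all names are hypothetical, invented for illustration):

```python
# A pool of stateless bare-metal CPUs. Identity lives in the profile,
# not in the hardware, so any profile can be applied to any free CPU --
# analogous to how a SAN assigns a logical LUN to whichever host needs it.
pool = ["cpu-01", "cpu-02", "cpu-03"]
assignments = {}          # cpu -> profile name

def assign(profile_name, pool, assignments):
    """Bind a logical server profile to the first free CPU in the pool."""
    for cpu in pool:
        if cpu not in assignments:
            assignments[cpu] = profile_name   # "wire-once": no recabling needed
            return cpu
    raise RuntimeError("pool exhausted")

web = assign("web-tier", pool, assignments)
db  = assign("db-tier", pool, assignments)
print(assignments)
```

Reassignment on failure or scale-out is then just deleting and re-adding an entry in `assignments`; no one touches a cable.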

BTW, there is another nice resource -- a white paper commissioned by HP (!) and written by Michelle Bailey at IDC. In it she defines what a converged system is:
"The term converged system refers to a new set of enterprise products that package server, storage, and networking architectures together as a single unit and utilize built-in service-oriented management tools for the purpose of driving efficiencies in time to deployment and simplifying ongoing operations. Within a converged system, each of the compute, storage, and network devices are aware of each other and are tuned for higher performance than if constructed in a purely modular architecture. While a converged system may be constructed of modular components that can be swapped in and out as scaling requires, ultimately the entire system is integrated at either the hardware layer or the software layer."
Converged Infrastructure and Software Virtualization

A Converged Infrastructure is different from—but analogous to—hypervisor-based server virtualization.  Think of hypervisors as operating “above” the CPU, abstracting software (applications and O/S) from the CPU; think of a Converged Infrastructure as operating “below” the CPU, abstracting network and storage connections. However, note that converged Infrastructure doesn't operate via a software layer the way that a hypervisor does. And converged Infrastructure is possible whether or not server virtualization is present.

Converged Infrastructure and server virtualization can complement each other producing significant cost and operational benefits. For example, consider a physical host failure where the entire machine, network and storage configuration needs to be replicated on a new physical server. Using Converged Infrastructure, IT Ops can quickly replace the physical server using a spare “bare metal” server.  A new host can be created on the fly, all the way down to the same NIC, HBA and networking configurations of the original server.

A Converged Infrastructure can re-create a physical server (or virtual host) as well as its networking and storage configuration on any “cold” bare-metal server.  And in addition, it can re-create an entire environment of servers using bare-metal infrastructure at a different location as well. Thus it is particularly well-suited to provide both high-availability (HA) as well as Disaster Recovery (DR) in mixed physical/virtual environments – eliminating the need for complex clustering solutions. And in doing so, a single Converged Infrastructure system can replace numerous point-products for physical/virtual server management, network management, I/O management, configuration management, HA and DR.

Converged Infrastructure - Simplifying Management for “The other half” of the Data Center

In the manner that server virtualization has grown to become the dominant data center management approach for software, converged infrastructure is poised to become the dominant management approach for “the other 50%” of the data center – its infrastructure.

However adoption will take place gradually, for a few reasons:
  • IT can only absorb so much at once. Most often, converged infrastructure is consumed after IT has come up the maturity curve, having cut its teeth on OS virtualization. Once that initiative is under way, IT then begins looking for other sources of cost take-out... and the data center infrastructure is the logical next step.
  • Converged infrastructure is still relatively new. While the market considers OS virtualization to be relatively mature, converged infrastructure is less well understood.
But there is one universal approach that can overcome these hesitations -- money.  So, in my next installment, I'll do a deeper dive into the really fantastic economics and cost take-out opportunities of converging infrastructure...

Tuesday, March 16, 2010

IT Industry Analysts - Falling Into the Bond Rating Agency Trap?

One of the leading causes of our recent economic melt-down was that "independent" credit rating agencies had a conflict of interest with the firms they were supposed to be watching. The very firms tasked with objectively gauging risk were also being paid by the firms they were evaluating... And in the end, the big losers weren't either of them... it was the public.

Well, beware that some of the same could be happening in the IT space.

I'll change the names to protect the innocent -- but let's say that I recently attended a day-long IT analyst event, one where all of the senior analysts trot-out their recent research. And to be honest, most of it was of very high quality.

But in one session which focused on an up-and-coming trend in IT, the analyst only cited the major IT vendors (think: HP, CSCO, IBM, Dell etc.) as the leading innovators and players in the space. It was complete bunk. Of the four "leading" vendors mentioned, only one had any significant innovation in the space. Two others so coated their offerings with "marketecture" that real innovation was tough to discern. And the final crime was that the 2-3 smaller vendors I know who actually pioneered the space weren't mentioned at all. And they're the ones providing *real* products with real value today.

Yes, the analyst had a responsibility to his customers (IT end-users) to watch the big players in the industry. And to be sure, the big vendors dominate most market spaces. But the analyst also has a responsibility to truly master his market space and to report back on the true leaders, innovators, and visionaries. Instead, I believe he unwittingly fell prey to the big vendors that pay much of his firm's bills in order to stay in the analyst's limelight. The failing here is industry-wide, and the IT consumers of the analyst's information are the real losers. Innovation isn't recognized, and therefore value isn't really transferred. And nearly all large industry analysts are guilty of this at some level.

In contrast, another friend of mine is a technology industry analyst with a major international financial institution. When he interviews me on my industry, company and product, he's clear that his reports are not commissioned by vendors, nor even by his bank's clients. There cannot be so much as a hint of conflict-of-interest in his work. Think about it.


Major IT Industry analysts have been my friends for years. I've worked for IT vendors small and large, and IT analysts have been (and mostly still are) great sounding boards for new ideas, helped identify market opportunities, and have added lots of marketing value if/when they approve of your product. And IT analysts add value on the IT consumer side too -  by identifying trends, pointing-out leading vendors, and recommending best-practices.

But sometimes these folks fundamentally fail at what they're "paid" to do. My advice: Always get a second opinion.

Monday, February 1, 2010

Hosting & Cloud Computing market index: Update

This month's update to my original index indicates that the hosting market - and particularly those companies in the cloud hosting market - is doing quite well at holding its own against the falling NASDAQ index. Although the NASDAQ component was down ~$7, my broad hosting component was down less than $1, while my cloud computing component was actually up ~$7.

One other point of note: Apparently Merriman Curhan Ford also believed that coverage of this space was now warranted:
"We believe Cloud Computing represents a fundamental shift with regards to how IT organizations manage and source data center computing resources.  Companies such as Terremark, Rackspace, SAVVIS and NaviSite are at the forefront of this development," said Alex Kurtz, senior vice president and technology equity research analyst of Merriman Curhan Ford.  "Our core differentiator in covering this space is leveraging our expertise within our existing coverage of IT systems vendors, who are competing for the same IT budget dollar and impacted by the same macro trends as a Terremark or a Rackspace."

Tuesday, January 26, 2010

If you think Converged Infrastructure & Fabrics are niche, guess again

A few weeks ago, I Tweeted about an analyst conversation where it was looking like the market for Fabric Computing / Unified Computing would be growing rapidly in the foreseeable future.

Another analyst friend of mine quickly commented back – sarcastically – that the market was sure to be in the billions of dollars.

I was feeling a little unsure about this market until a few weeks later, when I was shown a technology report from Thomas Weisel Partners. Although the market definition for converged infrastructure (also known as Unified Computing) was still forming, TWP felt that sales of Converged Infrastructure solutions could rise as high as $15 billion by the end of 2014. Billion with a “b”? Right on…

Then there is a report by Gartner Research on fabric-based computing… which estimated that by the end of 2012, roughly 30% of the world’s top 2000 companies would have some form of fabric-based computing architecture. (Under the heading of “fabric” falls Unified Computing as well as Converged Infrastructure).

So, why is the market (for fabric computing, converged infrastructure, unified computing) still considered so new, yet forecast to be booming in 2-4 years?

First of all, what we’re talking about here are systems like Cisco UCS, Egenera PAN Manager, HP VirtualConnect, IBM Open Fabric Manager, and a few others. At the heart of each system is technology (sometimes HW, sometimes SW, sometimes mixed) that virtualizes I/O and leverages converged networking.

And why are vendors all chasing this approach? For a number of reasons --
  1. It’s incredibly complementary to virtualization: in the same way that the hypervisor changed how SW is abstracted, provisioned, managed and migrated, Converged Infrastructure changes how IO/networking/connectivity is assembled and managed. This gives vendors a valuable set of new offerings, and can tie management of infrastructure to management of VMs – yielding end-to-end abstraction of the entire data center. Roughly as much $ is spent managing infrastructure as on managing software… so the TAM is huge here.
  2. It changes how availability is delivered: By manipulating IO addressing, networking and connectivity, Converged Infrastructure Management can re-provision failed hardware – either in the form of physical servers, or indeed, entire environments. Thus, Converged Infrastructure has the potential to displace a big chunk of traditional clustering software… (nearly a $ billion, if you follow IDC’s estimates)
  3. It changes how networks are physically wired and managed: Converged Infrastructure uses fewer IO components (either a single LOM or a single CNA), converged network protocols, fewer cables, and generally fewer switches. This yields a lower CapEx investment, and a commensurate lower OpEx to manage. The opportunity to sell alternative approaches to each of these technologies is immense.
  4. Converged Infrastructure is highly complementary to shared storage: the pervasiveness of SAN storage is a major enabler of a more virtual/flexible data center. As physical/virtual servers move, migrate and scale, storage simply follows.  An increasing ratio of servers – especially blades – are being shipped with HBAs, indicating that SAN use is on the upswing.
As evidence that this market is shaping up, we need only look to the magnitude of investment that Cisco, Egenera, HP, IBM – and even Emulex and Qlogic – are pouring into it. Methinks we’ll see the hockey-stick shortly.

Monday, January 4, 2010

Hosting & Cloud Computing market index: Update

Last month I proposed that another way to measure adoption of cloud computing (or, at least, expectations of adoption) was to look at the stock market performance of a bundle of publicly-traded service provider companies.

Since then, I've expanded the list, and carried the range back 24 months.

The total list (the "broad hosting index") consists of: Digital Realty Trust, DuPont Fabros, Equinix, Internap, Iomart, Macquarie Telecom, Navisite, Rackspace, Savvis, Switch & Data, Telecity, and Terremark.  I also baselined my "virtual" fund against the NASDAQ index. I created another virtual fund (a subset list of the above) consisting of Equinix, Navisite, Rackspace, Savvis and Terremark - representing service providers with substantial businesses in the Cloud hosting space as well.

Here are some interesting observations of the value change of US$100 invested equally across each index:

Since Jan 2008:
  • Nasdaq:  + ~14%
  • Broad Hosting index: + ~90%
  • Cloud Subset: + ~50%
But, I also looked at the change since the market bottomed-out in March of 2009. Since then, the picture is a tad different:
  • Nasdaq: + ~55%
  • Broad Hosting index: + ~135%
  • Cloud Subset: + ~115%
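For the curious, the "virtual fund" math above is just an equal-weighted index: split the US$100 evenly across the tickers, grow each slice by that company's price return, and sum. A minimal sketch follows – note that the ticker symbols and return figures in it are purely hypothetical placeholders, not the actual data behind the numbers above:

```python
def index_value(returns, invested=100.0):
    """Value of `invested` dollars split equally across tickers,
    given each ticker's fractional price return (e.g. 0.45 = +45%)."""
    per_ticker = invested / len(returns)  # equal weighting
    return sum(per_ticker * (1.0 + r) for r in returns.values())

# Hypothetical returns for a five-company "cloud subset" fund
cloud_subset = {"EQIX": 0.60, "NAVI": 0.30, "RAX": 0.80,
                "SVVS": 0.50, "TMRK": 0.55}
print(f"${index_value(cloud_subset):.2f}")  # prints "$155.00"
```

With equal weighting, the index return is simply the arithmetic mean of the individual returns, which is why a couple of high performers can lift a broad index noticeably.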
These numbers tell me that the performance (or at least, expectations of performance) in the hosting space far exceeds the broader NASDAQ technology sector.  Interestingly, the "cloud index" under-performs the broader index. No explanation here other than that we're dealing with a statistically small number of companies, and a few "high performers" in the broader index seem to be lifting it above the cloud index.

I'll plan on updating this periodically. Comments, additions, etc. welcome!