Wednesday, December 16, 2009

Hosting & Cloud Computing: Numbers Don't Lie

There's lots of chatter in the market today about the value of using outside data centers, hosting services and cloud computing. But listening to pundits and analysts try to objectively predict true value left me feeling hollow.

While I'm not an investment professional, I do know that the stock market doesn't lie... so instead, I thought I'd look at a bundle of stocks from publicly-traded companies in the data center space and compare them against a market benchmark.

I chose companies publicly traded on markets in both the US and Europe. My criteria were somewhat subjective, but basically each company's primary business had to be operating data centers. I also excluded telcos because it is difficult to separate their carrier revenues from their hosting revenues. So, my initial "virtual fund" consists of 12 companies: Digital Realty Trust; DuPont Fabros; Equinix; Internap; Iomart; Macquarie Telecom; Navisite; Rackspace; Savvis; Switch & Data; Telecity; Terremark.

I also took a 5-company subset of these companies that have significant offerings in the cloud computing space (Equinix; Navisite; Rackspace; Savvis; Terremark), and labeled this second "virtual fund" a cloud-only index.

The chart at right is my best attempt to (a) tabulate the historic end-of-month closing price of each stock; (b) calculate month-to-month percentage gains for each; and (c) create "virtual funds" in which $100 is invested equally across each vehicle (initially $8.33 in each of the 12 hosting stocks, and initially $20 in each of the 5 cloud-related stocks). The benchmark I used is the Nasdaq index, also assuming an initial $100 investment.
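For anyone who wants to reproduce the arithmetic, here is a minimal sketch of the index construction in Python. The tickers and closing prices below are made-up placeholders, not actual market data; the real tabulation uses the twelve (or five) stocks listed above.

```python
# Minimal sketch of the "virtual fund" math: equal initial stakes, each
# compounded by its stock's month-to-month returns. Prices are hypothetical.

closes = {
    "HOST_A": [10.00, 11.00, 10.50, 12.00],  # end-of-month closing prices
    "HOST_B": [40.00, 38.00, 41.00, 44.00],
    "HOST_C": [25.00, 26.50, 27.00, 26.00],
}

START = 100.0
sleeve = START / len(closes)           # equal split, e.g. $8.33 across 12 stocks

# (b) month-to-month percentage gain for each stock
returns = {t: [p[i] / p[i - 1] - 1 for i in range(1, len(p))]
           for t, p in closes.items()}

# (c) compound each sleeve by its own returns; the fund is the sum of sleeves
months = len(next(iter(returns.values())))
stakes = {t: sleeve for t in closes}
fund = [START]
for m in range(months):
    stakes = {t: stakes[t] * (1 + returns[t][m]) for t in stakes}
    fund.append(sum(stakes.values()))

print([round(v, 2) for v in fund])     # fund value at month 0, 1, 2, ...
```

The same loop, pointed at the Nasdaq's monthly closes as a single $100 sleeve, produces the benchmark line.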

Not surprisingly (for me, anyway), both "indexes" are outperforming the Nasdaq -- perhaps supporting the thesis that data center operation and application outsourcing is indeed a growth market (or at least a speculative growth market?) compared to the general technology market. What would be equally useful (but not an analysis I've done) is to chart gross revenues for the index companies. That would be a telling barometer of actual business.

I'll continue to update this index at the end of each month. Comments, additions and suggestions welcome!

Tuesday, December 8, 2009

Emergence of Fabric as an IT Management Enabler

Last week I attended Gartner's annual Data Center Conference in Las Vegas. Four days packed with presentations and networking (of the social kind). Lots of talk about cloud computing, IT operations, virtualization and more.

Surprisingly, a number of sessions directly referenced compute fabrics -- including "The Future of Server Platforms" (Andy Butler), "Blade Servers and Fabrics - Evolution or Revolution" (Jeff Hewitt), and "Integrated Infrastructure Strengths and Challenges" (Paquet, Dawson, Haight, Zaffros). All were substantive analyses of what fabrics _are_... but there was very little discussion of why they're _important_. In fact, compute fabrics might just be the next big thing after OS virtualization.

Think of it this way: fabric computing is the componentization and abstraction of infrastructure (CPU, memory, network and storage). These components can then be logically reconfigured as needed. This is very much analogous to how OS virtualization componentizes and abstracts OS and application software stacks.

However, most fabric-related vendors have so far focused on only the most fundamental level of fabric computing: virtualizing I/O and using a converged network. This mirrors the industry's early belief that OS virtualization was only about the hypervisor. Rather, we need to take a longer view of fabric computing and think about the higher-level value we create by manipulating infrastructure much as we manipulate VMs. A number of heady thinkers behind the concept of Infrastructure 2.0 are already beginning to crack some of these revolutionary issues.

Enter: Fabric as an Enabler

If we think of "fabric computing" as the abstraction and orchestration of IT components, then there is a logical progression of what gets abstracted and, in turn, what services can be constructed by logically manipulating the pieces:

1. Virtualizing I/O and converging the transport
This is just the first step, not the destination. Virtualizing I/O means no more stateful NICs and HBAs on the server; instead, the I/O presents itself to the OS as any number of configurable devices/ports, and network and storage traffic flow over a single physical wire. The transport can be Ethernet, FCoE, InfiniBand, or others. In this manner, the network connectivity state of the physical server is simplified and can be changed nearly instantaneously.
2. Virtual networking
The next step is to define in software the converged network, its switching, and even network devices such as load balancers. The result is a "wire-once" physical network topology, but with an infinitely reconfigurable logical topology. This permits physically flatter networks. Provisioning of the network, VLANs, IP load balancing, etc. can all be simplified and accomplished via software as well.
3. Unified (or Converged) Computing
Now things get interesting: since we can manipulate the server's I/O state and its network connections, we can couple that with software-based profiles of complete server configurations -- literally defining the server, its I/O, networking, storage connections, and even what software boots on it (the software being either a virtualization host or a traditional native OS). Having defined the entire server profile in software, we can even define the entire environment's profile. (A rough code sketch of this idea follows the list below.)
Defining servers and environments in software gives us (1) high availability: on a hardware failure, we can simply re-provision a server configuration onto another server in seconds, whether that server was running a VM host or a native OS; and (2) disaster recovery: we can re-constitute an entire environment of server profiles, including all of their networking, ports, addresses, etc., even if that environment hosts a mix of VMs and native OSes.
4. Unified Management
To achieve the ultimate agile IT environment, there's one remaining step: orchestrating the management of infrastructure together with the management of workloads. I think of this as an ideal Infrastructure-as-a-Service -- physical infrastructure that adapts to the needs of workloads, scaling up/out as conditions warrant, and providing workload-agnostic HA and DR. From an IT agility perspective, we would now be able to abstract nearly all components of a modern data center and logically combine them on-the-fly as business demands require.
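To make the server-profile idea in step 3 concrete, here is a toy sketch in Python. Everything in it -- the field names, the profile contents, and the failover routine -- is my own illustration of the concept, not any vendor's actual API:

```python
# Illustrative only: a toy model of fabric-style server profiles. The fields
# and the failover logic are hypothetical, invented for this sketch.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ServerProfile:
    name: str
    macs: list[str]        # virtual MAC addresses presented to the OS
    wwns: list[str]        # virtual WWNs for storage connectivity
    vlans: list[int]       # logical network membership
    boot_image: str        # what boots: a hypervisor or a native OS image

@dataclass
class PhysicalServer:
    serial: str
    healthy: bool = True
    profile: Optional[ServerProfile] = None  # stateless until a profile lands

def reprovision(failed: PhysicalServer,
                spares: list[PhysicalServer]) -> PhysicalServer:
    """On hardware failure, move the profile -- identity, I/O, networking and
    boot image -- to a healthy spare; the workload sees the same 'server'."""
    for spare in spares:
        if spare.healthy and spare.profile is None:
            spare.profile, failed.profile = failed.profile, None
            return spare
    raise RuntimeError("no healthy spare available")

# Example: the logical server survives the blade it happened to run on.
web = ServerProfile("web-01", ["00:50:56:aa:00:01"], ["50:06:01:60:00:01"],
                    [10, 20], "hypervisor-image")
blade1 = PhysicalServer("SN1001", profile=web)
blade2 = PhysicalServer("SN1002")

blade1.healthy = False
target = reprovision(blade1, [blade2])
print(f"{target.profile.name} now boots on {target.serial}")
```

The point of the sketch is that the server's identity lives in the profile rather than in the hardware, so the HA and DR scenarios in step 3 become profile moves rather than rebuilds.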
Getting back to the Gartner conference, I now realize one very big missing link: while Gartner has been promoting their Real-Time Infrastructure (RTI) model for some time now, they have yet to link it to the coming revolution that fabric computing will enable. Maybe we'll see some hint of this next year.