I then realized that I’d always related to storage as a “big, fat, dumb disk in the sky”, and assumed that it was merely subservient to the compute stack.
Well, not exactly.
My re-think was that the attributes of Compute, Storage, and, yes, Network all had to be reconsidered in the context of a holistic cloud-based infrastructure.
Most will agree that the following attributes describe the operational profile of a generic cloud (hat tip to IDC):
- Shared, standard service. Built for a market (public), not a single customer
- Solution packaged. A “turnkey” offering, integrates required resources
- Self-service. Admin, provisioning; may require some “onboarding” support
- Elastic scaling. Dynamic and fine grained
- Usage-based pricing. Supported by service metering
- Accessible via the Internet. Ubiquitous (authorized) network access
- Standard UI technologies. Browsers, RIA clients, and underlying technologies
- Published service interface/API. Web services and other common Internet APIs
- Consolidation: ability to make optimal use of lower-level resources
- Automation: ability to self-configure to provide the required service
- Self-healing/failover: ability to correct for failure with little or no service interruption
- Multi-tenancy, Multi-tiered-SLA: ability of resources to securely house individual services & service-levels across a shared infrastructure
- Global availability: ability to provide a shared service across multiple availability zones
The first assumption most make is that these traits apply exclusively to the compute layer (physical servers, VMs and the like). Pause, though: the storage (and network) facilities need to embody most of them, too.
But consider this: In a virtualized world, servers are files, and files are just data.
So, when we talk about cloud-related scaling, service migration, server fail-over, and the like, we must also implicitly speak of managing data dynamics, data replication, and data mobility. When we talk about automation, self-service provisioning, and service elasticity, we're implicitly talking about dynamic data/storage provisioning and expansion. When speaking of multi-tenancy and tiered SLAs, we're also speaking of shared storage facilities performing identical functions in lock-step with the compute facilities.
From a broader perspective, begin to consider the implications of global availability and hyper-scale. The terabytes of data that embody virtual servers and their data might need to be migrated to (or duplicated in) multiple hemispheres, which is not a trivial task from an integrity and latency perspective. We can know (or hope) that the physical servers will be there... but it's the bits that still have to travel.
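To make "the bits still have to travel" concrete, here is a minimal back-of-the-envelope sketch of bulk transfer time. The figures (10 TB dataset, a dedicated 1 Gbps link) are hypothetical examples, not from the original text, and the calculation is a best-case lower bound that ignores protocol overhead, retries, and integrity checking:

```python
def transfer_hours(terabytes: float, gbps: float, efficiency: float = 1.0) -> float:
    """Naive lower bound on wide-area transfer time for a dataset.

    Assumes a dedicated, fully utilized link; real transfers are slower
    due to TCP behavior, shared bandwidth, and verification passes.
    """
    bits = terabytes * 1e12 * 8                 # decimal terabytes -> bits
    seconds = bits / (gbps * 1e9 * efficiency)  # link rate in bits/second
    return seconds / 3600

# e.g., replicating 10 TB of VM images over a 1 Gbps link:
print(round(transfer_hours(10, 1), 1))  # -> 22.2 (hours, best case)
```

Even under these generous assumptions, moving a modest virtual-server estate across regions is a matter of hours to days, which is why replication strategy has to be designed in rather than bolted on.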
The next idea these observations triggered was the need to keep the compute, network, and storage stacks in lock-step when rolling out cloud services. The answer (not surprisingly) is converged infrastructure: an approach where the desired cloud attributes are assigned to the three stacks simultaneously. More about that in a future blog post.
But I'm now encouraging everyone to view the storage of bits in a completely different light: one where the functional and operational attributes of storage must be architected to embrace the core attributes of cloud computing. For without the bits, there can be no servers, no data, and no services. More about that in a future post as well :)