Friday, June 25, 2010

Postcards from the IT Financial Management Association

This week marks the third time I have been invited to speak at the ITFMA World of IT Financial Management conference.  This is a really amazing/unique conference, created nearly single-handedly by Terry Quinlan, their Executive Director. Quick overview:
The IT Financial Management Association (ITFMA) was established in 1988 and founded the IT Financial Management profession at that time. ITFMA is the only association dedicated to this profession and provides a comprehensive education program on the principles and practices used to financially manage Information Technology (IT) organizations. ITFMA is the national leader in the education of IT financial management professionals and the only recognized provider of certification in the various financial disciplines of IT financial management.
The attendees are largely non-technical: financial managers, controllers, project managers and purchasing managers, all in the IT field and mainly with F1000 companies.

And what sets this conference apart for me is that 90% of the topics of conversation are non-technical. It's not about the speeds-and-feeds, but rather about the project management, cost accounting, charge-back, managerial and regulatory issues facing IT. It gave me pause that, while technologists focus on keeping the electrons moving, there are also folks who keep the paper and the money moving.

One particularly illustrative conversation was with an IT financial manager from the State of Oregon, who oversees the state's shared/hosted IT infrastructure. A large national consulting company had promised the state that, through consolidation of equipment and data centers, it would save tons of $$ and reduce managerial headcount as well. As it was described to me, the technical consolidation was largely a success, but the consultant failed to accurately account for the business and managerial staffs associated with the IT. So over time, while the square feet of data center shrank, the overall IT staffing continued to grow. A useful reminder, lest we commit the sin of assuming that all of IT is technologists.

Overall, the ITFMA is a "must-attend" -- especially now that IT is going through such large changes as data center consolidation, virtualization, automation and cloud computing. All of these have non-linear impacts on IT finances, and all can cause disruptive effects on topics like capital forecasting, project management, expense vs. investment projections, etc. Not to mention the newer issues raised by cloud computing, such as data ownership, security, operations control, etc.

The event is a relative bargain to attend, and Terry always finds classic, historic venues for the conferences.

Monday, June 7, 2010

Converged Infrastructure Part 2.

Part 2. Converged Infrastructure’s Cost Advantages

In my first installment about converged infrastructure, I gave an outline of what it is and how it will change the way IT infrastructure is managed.

In this installment, I’ll go a bit deeper and explain the sources of the capital and operational improvements converged infrastructure offers – and why it’s such a compelling opportunity to pursue.

But first, the most important distinction to make between converged infrastructure and “the old way of doing business” is that management – as well as the technology – is also converged.  Consider how many point-products you currently use for infrastructure management (i.e. other than managing your software stack). 


The diagram at right has resonated with customers and analysts alike. It highlights, albeit in a stylized fashion, just how many point-products an average-sized IT department uses. This has a clear impact on:
  • Operational complexity – coordinating tool use, procedures, interdependencies and fault-tracking
  • Operational cost – the raw expense it costs to acquire and then annually maintain them
  • Capital cost – if you count all of the separate hardware components they’re trying to manage
That last bullet, about hardware components, is worth drilling into, because every physical infrastructure component in the “old” way of doing things has a cost. And I mean I/O components like NICs and HBAs, not to mention switches, load balancers and cables.

What might be possible if you could virtualize all of the physical infrastructure components, and then have a single tool to manipulate them logically?

Well, then you’d be able to throw out roughly 80% of the physical components (and their associated costs) and reduce operational complexity by roughly the same amount.

In the same way that the software domain has been virtualized by the hypervisor, the infrastructure world can be virtualized with I/O virtualization and converged networking. And once the I/O and network are virtualized, they can be composed and re-composed on demand. This eliminates a large number of components needed for infrastructure provisioning, scaling, and even failover/clustering (more on this later). And if you can logically re-define server and infrastructure profiles, you can also create simplified disaster recovery tools.

In all, we can go from roughly a dozen point-products down to just 2-3 (see diagram above).  Now: What’s the impact on costs?

On the capital cost side, since I/O is consolidated, it literally means fewer NICs and the elimination of most HBAs, since they can be virtualized too. Consolidating I/O also implies a converged transport, meaning fewer cables (typically only 1 per server, 2 if teamed/redundant). And a converged transport also allows for fewer switches on the network. Also remember that with fewer moving (physical) parts, you have to purchase fewer software tools and licenses. See diagram below.
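To make the capital-cost argument concrete, here's a back-of-the-envelope sketch. The per-server component counts below are purely illustrative assumptions (actual counts vary widely by shop), but they show how consolidating I/O onto a converged transport shrinks the physical bill of materials:

```python
# Hypothetical per-server I/O component counts, before and after
# I/O virtualization with a converged transport. Numbers are illustrative.
traditional = {
    "nic_ports": 6,   # separate NICs for production data, management, migration, etc.
    "hba_ports": 2,   # dedicated Fibre Channel HBAs for storage
    "cables": 8,      # one cable per port
}
converged = {
    "nic_ports": 2,   # a single converged port, teamed for redundancy
    "hba_ports": 0,   # HBAs virtualized over the converged fabric
    "cables": 2,      # matching the teamed ports
}

def total(parts):
    """Total physical I/O components per server."""
    return sum(parts.values())

reduction = 1 - total(converged) / total(traditional)
print(f"Physical I/O components per server: {total(traditional)} -> {total(converged)}")
print(f"Reduction: {reduction:.0%}")
```

Under these assumed counts the reduction works out to roughly three-quarters of the physical I/O parts per server, which is in the same ballpark as the ~80% figure above.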

On the operational cost side, there are the benefits of simpler management, less on-the-floor maintenance, and even less power consumption. With fewer physical components and a more virtual infrastructure, entire server configurations can be created more simply, often with only a single management tool. That means creating and assigning NICs, HBAs, ports, addresses and world-wide names. It means creating segregated VLAN networks, creating and assigning data and storage switches. And it means automatically creating and assigning boot LUNs. The server configuration is just what you’re used to – except it’s defined in software. And all from a single unified management console.   The result: Buying, integrating and maintaining less software.
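The idea of a server configuration "defined in software" can be sketched as a data structure. The sketch below is a hypothetical model, not any vendor's actual API: the class and attribute names are illustrative assumptions, meant only to show that the identity of a server (its NICs, HBAs, addresses, world-wide names, VLANs and boot LUN) can be expressed and reassigned logically:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a logical "server profile" -- the attributes that
# converged-infrastructure tooling lets you define in software rather than
# by installing cards and pulling cables. All names are illustrative.

@dataclass
class VirtualNic:
    name: str
    mac_address: str   # assigned logically, not burned into a physical card
    vlan_id: int       # segregated VLAN membership

@dataclass
class VirtualHba:
    name: str
    wwn: str           # world-wide name, assigned logically
    boot_lun: int      # boot-from-SAN target

@dataclass
class ServerProfile:
    name: str
    nics: list = field(default_factory=list)
    hbas: list = field(default_factory=list)

# Compose a complete server identity entirely in software...
web01 = ServerProfile(
    name="web01",
    nics=[VirtualNic("eth0", "02:00:00:00:00:01", vlan_id=110)],
    hbas=[VirtualHba("fc0", "50:01:43:80:00:00:00:01", boot_lun=0)],
)

# ...and because the identity lives in the profile rather than in the
# hardware, re-assigning it to different physical gear is a logical
# operation, which is what enables fast failover and recovery.
```

The design point here is that the addresses and names travel with the profile, not with the sheet metal, which is what makes re-composition and simplified disaster recovery possible.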

Referencing the diagram at right: at the physical level, this means fewer components. Costly NIC and HBA cards are virtualized, their physical transport is consolidated over Ethernet ports, and standalone switches and cables are replaced by a logically-configured switch.

Ever wonder why converged infrastructure is developing such a following? It’s because physical simplicity breeds operational efficiency. And that means much less sustained cost and effort. And an easier time at your job.

Next installment: What Converged Infrastructure is not.