Monday, May 18, 2009

New AWS Services Enable "Real" Elastic Clouds

Yesterday Amazon announced a new set of services for their EC2 "elastic compute cloud" and these perhaps represent the real "holy grail" for cloud computing. While not new concepts, they illustrate how "real" cloud computing elasticity works, and challenge a few other virtualization & automation providers to do the same.
  • Amazon CloudWatch: A for-fee ($0.015 per monitored AWS instance-hour) service that:
    provides monitoring for AWS cloud resources... It provides customers with visibility into resource utilization, operational performance, and overall demand patterns—including metrics such as CPU utilization, disk reads and writes, and network traffic...
  • Amazon Auto Scaling: a free service that:
    automatically scales your Amazon EC2 capacity up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you’re using scales up seamlessly during demand spikes to maintain performance, and scales down automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch...

  • Amazon Elastic Load Balancing: A for-fee service ($0.025/hour/balancer + $0.008/GB transferred) that
    automatically distributes incoming application traffic across multiple Amazon EC2 instances. It enables you to achieve even greater fault tolerance in your applications, seamlessly providing the amount of load balancing capacity needed in response to incoming application traffic. Elastic Load Balancing detects unhealthy instances within a pool and automatically reroutes traffic to healthy instances until the unhealthy instances have been restored...

To date, users of Amazon EC2 have had to do these sorts of things manually, if at all. Now Amazon is building these services into AWS itself (as well as into its pricing and business model).
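To make this concrete, here's a minimal sketch of the alarm-triggers-scaling wiring these services enable. It uses today's boto3 Python SDK purely for illustration (the 2009 launch exposed this through web-service APIs and command-line tools), and the group and policy names ("web-asg", "scale-out") are hypothetical placeholders:

```python
# Illustrative sketch only (modern boto3 API, not the 2009 tooling).
# Resource names ("web-asg", "scale-out") are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# A simple scaling policy: add one instance to the group when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# A CloudWatch alarm on average CPU that fires the policy above --
# this is the "performance parameter triggers a grow command" idea.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

A mirror-image alarm and policy (scale in when CPU stays low) completes the "grow during spikes, shrink during lulls" loop described above.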

Not an entirely new concept:
Note that what Amazon is doing is not entirely new. For example, if you're considering building these sorts of capabilities into your own "internal cloud" infrastructure, there are a few products that provide similar solutions. e.g. in 2008, Cassatt (RIP?) announced its own capacity-on-demand technology, which created/retired entirely new instances of a service based on user-definable demand/performance criteria. (I should add that Auto Scaling & CloudWatch operate similarly -- you can define a number of performance and SLA parameters to trigger grow/shrink scaling commands).

Similarly, Egenera's PAN Manager approach dynamically load-balances networking traffic between newly created instances of an app. And products from vendors such as 3Tera also enable users to define components (such as load balancers) in software. All of this adds up to truly "adaptive infrastructure" that responds to loads, failures, SLAs, etc. automatically.

The challenge to others:
So, if Amazon can instantiate these services in the "public" cloud, then I would expect others -- notably providers such as VMware, Citrix, MSFT etc. -- to provide similar technologies for folks building their own infrastructure.

For example, in VMware's "vCloud" I would expect to see services (some day) that provide similar monitoring, auto-scaling, and load-balancing features. If virtualization providers are to take "internal cloud computing" seriously, these are automation-related services they'll be required to provide.

Kudos to Amazon:
And finally, Amazon has done two savvy things in one move - (a) they've once again shown the world what a true "cloud" computing infrastructure ought to do, and (b) they've provided another nice value (and revenue stream!) to complement their per-instance EC2 fees. Remember: the easier they make it to scale your EC2 instances, the more $ they make...

Looking forward to how the industry will respond....

Tuesday, May 12, 2009

Profiling questions nobody's asking re: cloud applications

I find it odd that so much is being written about defining cloud terminology, cloud operation, and cloud construction... But so little attention is being paid to identifying & profiling which applications are best-suited to actually run in an external "cloud."

I'd like to see a comprehensive list (or decision-tree?) of what application properties predispose an app to being well-suited to run in a cloud. And, for that matter, what qualities might disqualify apps from running in a cloud. (BTW, a great Blog by John Willis, Top 10 reasons for not using a cloud, was another initiator of my thought process.)

Customers of mine are attracted to the value prop of a cloud (or a "utility" infrastructure)... but need guidance regarding what apps (or components) should be candidates for these environments. And recent conversations with prominent analysts haven't helped me... yet.

I'm also surprised that consulting/service companies aren't all over this issue... offering advice, analysis and infrastructure "profiling" for enterprises considering using clouds. Or am I missing something?

So, without further ado, I've begun to jot down thoughts toward a comprehensive list of application properties/qualities against which we could "rank" an application's appropriateness to be "outsourced" to a cloud. I've annotated each factor with a "Y" if the app is appropriate for an external cloud, an "N" if not, and an "M" if maybe. (A toy scoring sketch follows the list.)

Dynamics/Cyclicality
  • Y Apps with highly dynamic (hourly/daily/etc.) or unpredictable compute demand.
    A cloud's elasticity ensures that only enough capacity is allotted at any given time, and released when not needed. This avoids having to buy excess capital for "peak" periods.
  • M Apps where compute demand is "flat" and/or constant.
    It's not clear to me that it makes sense to outsource an app if its demand is "steady-state" - maybe just keep it in-house and purring along?
  • M Apps where demand is highly counter-cyclical with other applications
    In other words, if an application runs out-of-phase with other apps in-house (say, backup apps that run in the middle of the night when other apps are quiescent), then it might make sense to keep it in-house... it makes better use of existing capital, assuming that capital can be re-purposed.
Size / Temporality
  • Y Apps that are very "big" in terms of compute need, but which "go away" after a period of time
    Such as classic "batch jobs", whether they're daily trade reconciliations, data mining projects, scientific/computational, etc.
  • N Apps that are "small" and constant in nature
    "Edge" apps such as print services, monitoring services, etc.
  • Y Apps that are part of test environments
    Apps - and even entire environments - which are being tested prior to roll-out in production. This approach eliminates costly/redundant staging environments
  • M Apps for dev and test uses
    For example, where environment and regression testing is concerned, and environments are built and torn-down frequently. However, certain environments are inherently bound to hardware and/or tested for performance, and these may need to remain "in-house" on specific hardware (see below)
Inherent application functionality
  • N Apps that are inherently "internal"
    Such as internal back-up software, "edge" applications like printer servers, etc.
  • N Apps that are inherently bound to hardware
    Such as physical instances of software for specific (e.g. high-performance) hardware, or physical instances remaining so for scale-out reasons. Also, physical instances on ultra-high-reliability hardware (e.g. carrier-grade servers).
Responsiveness/Performance
  • N Apps needing high performance and/or with time-bound requirements
    Such as exchange trading algorithms, where response and delay (even down to microseconds) are critical, and need to be tightly monitored, controlled and optimized
Security / Auditability / Regulatory / Legal
NB: Also see an excellent Blog by James Urquhart on regulatory issues in this space.
  • M Apps where data must be maintained within (or outside of) specific country borders
    Data within certain borders may be subject to search/access by the government (e.g. Patriot Act). Data may be required to be maintained within sovereign borders... or it may be preferred that the data explicitly be maintained outside of sovereign borders to avoid such searches.
  • M Apps requiring tight compliance/auditability trails
    Ordinarily, I'd give this a "N", but some tools are coming out that help ensure compliance for apps that exist in the cloud. Apparently HIPAA regulations essentially prohibit use of clouds right now.
    Stay tuned here.
  • N Apps manipulating government data, e.g., where laws require direct data oversight
    Many government databases are required to be maintained within government facilities behind government firewalls.
  • N Apps where software licensing prohibits cloud
    e.g. some software licensing may be tied to specific CPUs; some licensing may not permit virtualization (as is found in most clouds); certain licensing may not permit use outside of specific physical domains.
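To show how a checklist like this might actually be applied, here's the toy scoring sketch promised above. The factor names, the weights, and the "any single N disqualifies" rule are entirely my own assumptions -- not an established methodology:

```python
# A back-of-the-envelope sketch of the "rank an application" idea above.
# Factors and weights are hypothetical -- substitute your own criteria.
SCORES = {"Y": 1, "M": 0, "N": -2}  # a single "N" should weigh heavily

def cloud_suitability(app_factors):
    """app_factors: dict mapping a factor name to 'Y', 'N', or 'M'."""
    total = sum(SCORES[v] for v in app_factors.values())
    if any(v == "N" for v in app_factors.values()):
        return "keep in-house (at least one disqualifier)", total
    verdict = "good cloud candidate" if total > 0 else "maybe -- needs closer analysis"
    return verdict, total

verdict, score = cloud_suitability({
    "dynamic_demand": "Y",      # highly variable compute demand
    "temporary_workload": "Y",  # batch-style, goes away when done
    "data_residency": "M",      # sovereignty requirements unclear
    "hardware_bound": "N",      # tied to specific high-performance gear
})
print(verdict, score)
```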
Curious to hear whether these are some of the properties being taken into account... and what other pointers people have. And most of all, curious to hear whether (and if so, which) service providers & consultancies are currently using guidelines such as these.

Wednesday, May 6, 2009

In their own words: Valley CTOs' Blogs & Tweets

I noticed that I subscribe to feeds from a number of CTO-like folks, so I thought I'd publish a few of my favorites:

Sun's CTO, Greg Papadopoulos - Blog
I love listening to Greg - definitely a visionary, definitely well-connected. He's the guy I referenced back in 2007 when he observed that The World Only Needs 5 Computers. Looking like this concept is really coming true.
Cisco's CTO, Padmasree Warrior - Blog - Twitter
Padmasree's blog started getting major traction when Cisco "leaked" their UCS system. As Cisco's visionary, she has unbelievable insight into where IT is going, with a great sense of "humanity" and realism thrown in.
Amazon's CTO, Werner Vogels - Blog - Twitter
What can I say? As the chief spokesperson for Amazon Web Services, he does a great job championing All Things Cloud. I had the opportunity to see him a few months ago at the Cloud Computing Expo in NY, and his vision is compelling.
Intel CTO, Justin Rattner - Blog
While only an occasional Blogger, he definitely reflects Intel's position on a number of issues and technologies.
BMC Software CTO, Tom Bishop - Blog
Tom has a great way of posing thought questions and industry issues from an Enterprise Management perspective. He doesn't seem to get lots of comments on his Blog, though. Wonder why. :|
Novell's CTO, Jeff Jaffe - Blog
A frequent blogger (and, I might add, so is Novell's CMO). I like this blog b/c I think it gives a real sense of where his (and Novell's) mind is at.
HP's Cloud Services CTO, Russ Daniels - Blog
Russ is HP's visionary in the cloud/SaaS space. Very cool guy. Unfortunately his Blog reads more like Twitter updates. But he's posted a few interesting videos to the web recently.

These are the guys I track... and I'm surprised that more tech companies either don't have official CTOs, or don't tend to condone Blogging/Tweeting.

Which others do you follow?

Tuesday, May 5, 2009

Infrastructure Orchestration in use within SPs & Hosting providers

For the past few months I've held that new technologies are OK... but the litmus test is whether they're actually used and valuable in the real world.

One of those new technologies in the Enterprise Data Center space is what I call Infrastructure Orchestration (others term it fabric computing or unified computing). HP, IBM and now even Cisco have solutions in the space, but I believe Egenera has been doing it the longest, and has the broadest installed base of real-world enterprises using it and expanding their footprint.

With the explosive growth of virtualization, this segment of technology is hotter than ever. Why? In the same way that virtualization abstracts & configures the software world (O/S, applications, etc.), Infrastructure Orchestration abstracts and defines/configures the infrastructure world (I/O, NIC cards, HBA cards, storage connectivity, LANs, switches, etc.). So, not only can you define a virtual server instantly, you can define a *physical* server (maybe a virtual host, or a physical machine) down to its I/O, NICs, storage and network. By doing this, you can reconstruct an entire data center -- giving you a unified approach to HA and/or DR. Cool.
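To illustrate the idea (and only the idea -- this is a hypothetical sketch, not any vendor's actual API), here's roughly what it means for a server's "personality" to be defined in software and assigned to whatever hardware is free:

```python
# A purely hypothetical data-model sketch of the concept -- these classes
# do not correspond to any real PAN Manager / UCS / vCloud API.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ServerProfile:
    """The server's 'personality', held as data rather than welded to a box."""
    name: str
    mac_addresses: List[str]   # NIC identities, assigned in software
    fc_wwpns: List[str]        # HBA identities for SAN connectivity
    vlans: List[int]           # network membership
    boot_lun: str              # where the OS image lives, on shared storage

@dataclass
class PhysicalBlade:
    slot: str
    profile: Optional[ServerProfile] = None

def assign(profile: ServerProfile, blade: PhysicalBlade) -> PhysicalBlade:
    """Bring a logical server up on any available blade -- the basis of
    rapid repurposing, HA failover, and whole-data-center DR rebuilds."""
    blade.profile = profile
    return blade

web01 = ServerProfile("web01", ["00:1a:64:aa:bb:01"], ["50:01:43:80:11:22:33:44"],
                      [10, 20], "lun-web01-boot")
spare = PhysicalBlade(slot="chassis1/blade7")
assign(web01, spare)   # same identity, different hardware
```

Because the identity (MACs, WWNs, boot LUN) travels with the profile rather than with the sheet metal, recovering a failed server -- or rebuilding an entire data center for DR -- reduces to re-assigning profiles.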

I've been pointing out applications for this technology in Healthcare as well as in the Financial sector, and I thought it would also be useful to illustrate value in the Service Provider / Hosting market.

For this segment, the Infrastructure Orchestration approach is essentially used to build Infrastructure-as-a-Service, or IaaS. In the past this has been called "utility computing," but in the era of cloud computing, IaaS seems to be the term in use.

Savvis
In 2004, Savvis set a goal to become the industry's first totally virtualized utility computing data center, integrating virtualized servers, storage, networks, and security into an end-to-end solution. Today, the service provider houses over 1,425 virtual servers running on 70 industry-standard Egenera servers, 370 terabytes of storage and 1,250 virtualized firewalls.

As a complement to its managed hosting and colocation business, the company has built huge, scalable service platforms that can be leveraged by multiple clients with full security. This utility approach enables Savvis to charge customers for resources more closely tailored to their actual needs. Each year, more revenues and profits are generated from utility hosting contracts with business and government customers ranging from start-up entrepreneurs to the largest enterprises in the world, enabling Savvis to compete and win against traditional hosting providers and outsourcers.

Albridge Solutions:
Albridge Solutions migrated from UNIX servers to industry-standard servers running Linux and Egenera-based Infrastructure Orchestration. Initially, they considered building a virtualized environment by combining virtualization and management point-products. They discovered, however, that the resulting complexity would be overwhelming. Servers from the industry's largest vendors were also ruled out, since their legacy architectures made virtualization and resource sharing impossible. Today, using industry-standard servers and Egenera's software, Albridge can run any application on any server at any time based on demand... regardless of whether those applications are virtual or native.

Panasonic Electric Works Information Systems Co., Ltd.
Panasonic chose Egenera products to consolidate servers and reduce floor space. Along with enabling server consolidation, the software is delivering superior high availability (HA) and disaster recovery (DR). Applications running in the data center include an order-processing service for the manufacturing industry, a content delivery system and Electronic Data Interchange (EDI). Based on these results, Panasonic has designated Egenera software as its standard infrastructure virtualization management software for mission-critical processing.

Monday, May 4, 2009

Lessons from Glassblowing for High-Tech marketing

This past weekend I spent 6 hours learning the very basics of glassblowing. It's been on my "bucket list" for quite some time, and when a good friend suggested we try it, I jumped on the opportunity. But what I didn't realize was that there were lessons I got in the studio that are metaphors for my "day job" too.

BTW, the lesson was given at San Francisco's Public Glass studios - a fabulous resource near one of San Francisco's largest artists' communities. Cool galleries but even cooler artisans at work. Glassblowing has been around since ancient Egypt, I think. And in many ways it hasn't changed very much. The tools are still very simple, and the raw materials are still the same. And so begin my observations:

It's way harder than it looks: Nothing beats experience and experimentation, and no amount of watching beats actual doing. You notice this the second you take your first blob of glass from the furnace and simply try to keep it symmetrical and from falling to the floor. You have to develop an intuitive feel for temperature, malleability, and a muscle-memory for working with the material.
All the business books in the world only get you so far. You need to get your hands dirty. And frankly, nothing beats learning from a really good failure. Once you see a product cancelled (or for that matter, a company die) you finally gain a real appreciation for what to do, not just what not to do. Ya' can't get that from a book.
Heat is your friend (but be careful): You find that you only have a minute or two of work time before you need to re-heat a piece. But be wary - you're operating at temperatures above 2,000F (and as high as 2,500F sometimes), which means even standing in front of an oven - even 6 feet away - is something you can only tolerate for a few seconds. Going back and forth doesn't give you lots of time to cool off and pat your forehead.
Hype, profile and momentum are what you strive for. But they can be fleeting. When you're "hot" you've got clout, but it dies down quickly. Drumming up conversations - or even controversy - in the social network realm is great. It keeps you in play. Just don't overdo it or you'll be toast. :)
From basic materials & tools can arise massively different implementations: Yes, there are a few different types of glass (some w/higher melting points, clarity, etc.) and a few different tools (basic steel pincers, scissors, wooden shaping cups, and yes - even wet newspaper to help shape). But that's it. Then the creativity begins. How you manipulate the glass viscosity, temperature gradients & selective cooling, layers of glass, color etc. is infinitely variable. The sky's the limit.

And even in tech, the basic marketing principles (the four P's, segmentation, etc.) haven't changed in a long time. But using them in clever/innovative ways is the trick. Making sure you stand out in the crowd, above the noise level, is still more of an art than a science. Play with the combinations, repeat them, think about re-combining in new ways. Be creative and brainstorm with others.
Keep things moving! Hot glass is essentially fluid, much like really thick molasses. The second you stop spinning the glass, it'll start sagging. Plus, the really good artisans spin smoothly and transition back and forth smoothly too. Never stop.
And, never stop experimenting; never let off the accelerator with PR, AR, or marketing programs; keep the "buzz" going, keep the plates spinning. If you're complacent or don't have an agenda for next month or quarter, start now.
Cool slowly: Too much thermal change is bad. Big pieces experience internal thermal stresses, and will shatter if cooled too fast. Most pieces have to be cooled in a controlled manner over 24 hours.
And too much change is bad for any organization. Plan to make your go-to-market changes slowly, over many quarters. I've seen organizations that want to radically change marketing themes and messages every quarter (or month!). Give the market at least 2-3 quarters to absorb new positioning/messaging. Unless you're in consumer goods, customer buying cycles can be long... changing too often will confuse customers.
Work as a team: Big pieces need at least two -- and sometimes as many as four -- people to help. Different pieces need to be prepped, warmed, blown, held, etc. It's choreographed in advance. Everyone knows their job. Running into someone holding a piece of glass at 2,000F can really spoil your day.
Ditto. In business, as in art, working as a team is critical -- it's always good to have frequent status meetings and to over-communicate your actions/intentions. Just because a project looks like you can do it alone doesn't always mean you should. Socialize your efforts even as you're doing the project -- and ask for input even if you may not need it -- getting early ownership from others means buy-in from them too.