I have spent the last few months working with the Cabinet Office on phase 2a of the UK’s G-Cloud and App Store programme, serving as industry co-lead for the technical architecture work strand. The other lead is a public sector employee from NHS Connecting for Health, which, despite the flak it gets, has done great work in marshalling and managing massive numbers of servers and PCs and the networks in between. Other work strands included Information Assurance, Commercial, Quick Wins, Service Management and Business Transition Planning.

Working on the project has given me a very clear insight into what the benefits of Cloud Computing to government and business really are, and also what a government Cloud would need to look like. That is essentially what we described (in broad terms) in our technical architecture strategy paper, which will be published soon. Therefore, I’m updating my definition of Cloud Computing in line with that work, and also incorporating the NIST definition, which has recently become something of a de facto standard (although I don’t entirely agree with it).

Cloud != Utility

First off I wish to be clear: Cloud Computing is not the same thing as Utility Computing (aka. Infrastructure as a Service). Nor is Cloud the same thing as Grid Computing. Both terms are well-defined and there is no need to invent a new name for these decades-old concepts (my Dad was providing Utility Computing services from his computer bureau service before I was born!):

  • Grid Computing: The combination of computer resources from multiple administrative domains applied to a common task.
  • Utility Computing: The packaging of computing resources (computation, storage etc.) as a metered service similar to a traditional public utility.

Cloud is often confused with Platform or Software as a Service too, but they are just extensions of the Utility Computing concept, and again are nothing terribly new. I see Cloud Computing as the combination of those old concepts of utility and grid:

Cloud Computing = Grid Computing + (Utility Computing * N)

I shall explain. The real power of the Cloud Computing concept comes about when one views it as the mass market for Utility Computing resources, and that is essentially what the G-Cloud programme asked the technical work stream to come up with: an architecture that would allow a number of different, but standardised, Infrastructure as a Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) offerings to be made available in one central, competitive marketplace (the App Store).

The clear desire was also for those services to be interoperable, especially at the infrastructure level. Additionally, and this is where the “Cloudiness” comes in, the desire was that one could request computing resources to a specified service level agreement (SLA) and at a specific security impact level, and be presented with a pre-certified range of options from which to choose on price, or any other factor.
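That request-and-match process can be sketched in a few lines. This is purely an illustrative assumption on my part, not the actual G-Cloud architecture: the `Offering` record, the `find_options` function and the example catalogue are all invented names, but they show the shape of the idea (filter the pre-certified catalogue by service model, impact level and SLA, then rank by price):

```python
from dataclasses import dataclass

@dataclass
class Offering:
    provider: str
    service_model: str       # "IaaS", "PaaS" or "SaaS"
    impact_level: int        # security impact level the offering is certified to
    sla_availability: float  # e.g. 99.9 (percent)
    price_per_hour: float

def find_options(catalogue, service_model, min_impact_level, min_availability):
    """Return pre-certified offerings that meet the request, cheapest first."""
    matches = [
        o for o in catalogue
        if o.service_model == service_model
        and o.impact_level >= min_impact_level
        and o.sla_availability >= min_availability
    ]
    return sorted(matches, key=lambda o: o.price_per_hour)

# A toy catalogue of three competing, pre-certified suppliers.
catalogue = [
    Offering("ProviderA", "IaaS", 3, 99.90, 0.12),
    Offering("ProviderB", "IaaS", 2, 99.90, 0.08),
    Offering("ProviderC", "IaaS", 3, 99.95, 0.15),
]

# Request IaaS at impact level 3 with at least 99.9% availability;
# ProviderB drops out on impact level, leaving a price-ranked shortlist.
options = find_options(catalogue, "IaaS", 3, 99.90)
```

The point is that the buyer never negotiates with individual suppliers; certification happens up front, and the marketplace reduces the choice to a like-for-like price comparison.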

That fits with what I believe most people mean when they say “host it in the Cloud”: an amorphous, distributed collection of compute resource, used in such a way that you don’t really care where your application resides, so long as your requirements are met.

Therefore, I maintain that when we refer to “Cloud Computing” we should be talking about an open market for computing resources, created when you combine multiple interoperable compute utilities into one massive grid, hence Grid + (Utility * N).

NIST definition

I really like the new Cloud Computing definition from the US’s National Institute of Standards and Technology (NIST) for the most part. They define three service models, five essential characteristics, and four deployment models. I have represented their model on a cube, as below:

A well-managed data centre is not “a Cloud”!

The only part I take issue with is their “private Cloud” concept, something unfortunately being conveyed with gay abandon by technology analysts the world over. In most usage, “private Cloud” just refers to a partitioned-off chunk of infrastructure within one utility computing provider, or, worse still, just a well-managed data centre with a bit of virtualisation if you ask some people!

The UK government, for example, wants a private Cloud for some higher-security requirements, but that would be a pool of resources from a number of utility computing facilities (probably partitioned-off, super-secure areas of providers’ data centres); an open market again, albeit one with specific requirements. As it stands, the “essential characteristic” of resource pooling is at odds with the analyst-speak concept of a private Cloud: if it is private and dedicated to one organisation, you will only be pooling the resources of that one organisation.

There are very few organisations with a sufficiently diverse usage profile to gain additional benefit from such an approach; however, there are several with similar requirements that could club together as one private community, like UK government. Also, only NIST’s “Hybrid Cloud” encapsulates the full vision of what I believe Cloud Computing is about (interoperability etc.). Therefore I would change the NIST deployment models as follows:

  • Private Compute Utility: An infrastructure physically dedicated to one organisation.
  • Private Community Cloud: An infrastructure spanning multiple administrative domains that is physically dedicated to a specific community with shared concerns.
  • Public Cloud: An infrastructure spanning multiple administrative domains that is made available to the general public / businesses, without physical partitioning of resource allocations. (There is arguably only one public Cloud – hence the phrase “host it in The Cloud”.)
  • Hybrid Cloud: A combination of public and private compute utilities, allowing “cloud bursting” for some requirements, or allowing a private compute utility owner to sell their spare capacity into The Cloud.

It’s all Amazon’s fault; misnaming their Plastic Compute Utility

The origin of the term “Cloud” comes from the diagrams we used to draw of the Internet back in the ’90s; typically the automatically-routed internetwork was depicted as a big fluffy cloud in the middle of a network map, and it was just accepted that it would route things sensibly between the data centre and client (or other end points). The term then gained further traction with people using phrases like, “I’ll just host it in the Cloud”, now referring to the generally available computing / hosting resources connected to the ’net.

Then, along came Amazon with their “Elastic Compute Cloud” (EC2), applying the term to something that (when considered on its own) is really just a massive plastic compute utility. ‘Plastic’ since you have to request more or fewer instances (resources do not elastically shrink) – the elasticity is a function of how you write your application to interface with their API. A ‘Compute Utility’ because it is really just one very large compute grid being sold as a utility service; why apply a new term when we have a perfectly good one?
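To illustrate the ‘plastic’ point: nothing grows or shrinks until your own code asks the provider for it. The sketch below is a minimal reconciliation loop of my own invention – `StubClient` and its `start_instance` / `terminate_instance` calls are stand-ins, not the real EC2 API – showing that the elasticity lives entirely in application logic like this, not in the utility itself:

```python
import math

class StubClient:
    """Stand-in for a provider's provisioning API (names are invented)."""
    def __init__(self):
        self.next_id = 0

    def start_instance(self):
        self.next_id += 1
        return f"i-{self.next_id}"

    def terminate_instance(self, instance_id):
        pass  # a real client would call the provider here

def reconcile(client, instances, capacity_per_instance, current_load):
    """Grow or shrink the fleet so capacity tracks load.

    The provider does nothing on its own; every change is an explicit
    request made by the application.
    """
    desired = max(1, math.ceil(current_load / capacity_per_instance))
    while len(instances) < desired:
        instances.append(client.start_instance())
    while len(instances) > desired:
        client.terminate_instance(instances.pop())
    return instances

client = StubClient()
fleet = reconcile(client, [], capacity_per_instance=100, current_load=250)  # scale up
fleet = reconcile(client, fleet, capacity_per_instance=100, current_load=50)  # scale down
```

If you never run a loop like this, an EC2-style service behaves exactly like any fixed allocation from a compute utility – which is the substance of the naming complaint.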

I see Cloud Computing as the result of having multiple utility computing providers at your behest, with standardised APIs to allow provisioning from competing suppliers. That is pretty much here now, although the grid middleware to allow smooth interoperability is not quite industrial-strength.
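The “standardised APIs” idea can be made concrete with a small sketch. Everything here is hypothetical – the `ComputeUtility` interface and both supplier classes are invented for illustration – but it captures the mechanism: once every supplier speaks the same interface, grid middleware can provision from whichever is cheapest at request time, which is what turns N separate utilities into one market:

```python
from abc import ABC, abstractmethod

class ComputeUtility(ABC):
    """A common provisioning interface all suppliers must implement."""

    @abstractmethod
    def price_per_hour(self, cpus: int) -> float: ...

    @abstractmethod
    def provision(self, cpus: int) -> str: ...

class UtilityA(ComputeUtility):
    def price_per_hour(self, cpus):
        return 0.05 * cpus

    def provision(self, cpus):
        return f"A:{cpus}cpu"

class UtilityB(ComputeUtility):
    def price_per_hour(self, cpus):
        return 0.04 * cpus

    def provision(self, cpus):
        return f"B:{cpus}cpu"

def provision_cheapest(suppliers, cpus):
    """Middleware role: pick the cheapest interchangeable supplier."""
    best = min(suppliers, key=lambda s: s.price_per_hour(cpus))
    return best.provision(cpus)
```

The design point is that the competition happens per request: because the suppliers are interchangeable behind one interface, switching costs drop to nothing and price pressure does the rest.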

IaaS vs. PaaS vs. SaaS

One of the nicely encapsulated outputs from the G-Cloud project to date has been an agreement on what we actually mean by infrastructure, platform and software, and how they differ a little from the old terms hardware, middleware and application – but that can wait for my next posting!