As one of the hottest concepts in IT today, ‘cloud computing’ proposes to transform the way IT is consumed and managed, with promises of improved cost efficiencies, accelerated innovation, faster time-to-market, and the ability to scale applications on demand. While the market is awash with hype and confusion, the underlying potential is real, and it is beginning to be realised.

In particular, SaaS applications and public cloud platforms have already gained traction with small and start-up businesses. These offerings enable companies to gain fast, easy, low-cost access to systems that would otherwise cost them millions of Euros to build. At the same time, cloud computing has drawn the cautious but serious interest of larger enterprises in search of its efficiency and flexibility benefits.

However, as companies begin to implement cloud solutions, the reality behind the cloud comes into focus. Most cloud computing services are accessed over the Internet, and thus fundamentally rely on an inherently unpredictable and insecure medium. For companies to realise the potential of cloud computing, they will need to overcome the performance, reliability, and scalability challenges the Internet presents.

Simply defined, cloud computing refers to computational resources (‘computing’) made accessible as scalable, on-demand services over a network (the ‘cloud’). And yet, cloud computing is far from simple. It embraces a confluence of concepts—virtualisation, service-orientation, elasticity, multi-tenancy, and pay-as-you-go—manifesting as a broad range of cloud services, technologies, and approaches in today’s marketplace.

But, when wrapped up in the hype of cloud computing, it is easy to forget the reality—that cloud computing’s reliance on the Internet is a double-edged sword. On one hand, the Internet’s broad reach helps enable the cost-effective, global, on-demand accessibility that makes cloud computing so attractive. On the other hand, the naked Internet is an intrinsically flimsy platform—fraught with inefficiencies that adversely impact the performance, reliability, and scalability of applications and services running on top of it.

The middle-mile conundrum

While we often refer to the Internet as a single entity, it is actually composed of 13,000 different networks, joined in fragile cooperation, each providing access to some small subset of end users. This means the performance of any centrally-hosted Web application—including cloud computing applications—is inextricably tied to the performance of the Internet as a whole—including its thousands of disparate networks and the tens of thousands of connection points between them.

Four key causes of the Internet’s middle-mile performance problems are:

  1. Peering point problems: Peering points are often overburdened, causing packet loss and service degradation, and, in turn, slow and uneven performance for cloud-based applications.
  2. Routing vulnerabilities: BGP (Border Gateway Protocol) is the protocol that determines how data packets travel from one network to another within the cloud. While BGP is simple and scalable, it was designed neither for performance nor efficiency, and thus has a number of well-documented limitations (a simplified path-selection sketch follows this list).
  3. Inefficient communications protocols: Architected for reliability rather than efficiency, TCP (Transmission Control Protocol) is another source of drag on the Internet. TCP requires multiple round-trips between the two communicating parties to set up and tear down connections, uses a conservative initial rate of data exchange, and recovers slowly from packet loss (see the transfer-time sketch after this list).
  4. Network outages: Internet failures can happen on several different levels—from a single router malfunction to a data centre blackout to an entire network going offline.
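
To make the routing point concrete, the sketch below is a deliberately simplified, hypothetical illustration rather than real BGP code or any provider’s implementation: it reduces BGP’s best-path decision to ‘prefer the shortest AS path’ and shows how that choice can ignore a faster route. The peer names and latency figures are invented for the example.

    # Illustrative sketch only: BGP's best-path decision reduced to a single
    # rule (prefer the shortest AS path). Real BGP applies a longer decision
    # process (local preference, MED, and so on), but none of its steps
    # measure latency.

    routes = [
        # Hypothetical candidate routes: peer, AS-path length, measured latency
        {"via": "peer-A", "as_path_len": 3, "latency_ms": 180},
        {"via": "peer-B", "as_path_len": 4, "latency_ms": 45},
    ]

    # BGP-style choice: shortest AS path wins; latency is never consulted.
    bgp_choice = min(routes, key=lambda r: r["as_path_len"])

    # What a performance-aware choice would have picked.
    fastest = min(routes, key=lambda r: r["latency_ms"])

    print(f"BGP-style pick: {bgp_choice['via']} at {bgp_choice['latency_ms']} ms")
    print(f"Fastest route:  {fastest['via']} at {fastest['latency_ms']} ms")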
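
The TCP point can also be put in rough numbers. The sketch below is a back-of-the-envelope model, not a faithful TCP implementation: it assumes a three-segment initial congestion window, a 1,460-byte segment size, and idealised slow start with no packet loss, and it ignores DNS and TLS. Its only purpose is to show how total delivery time scales with round-trip time.

    import math

    # Rough model of a small HTTP response over a fresh TCP connection:
    # one round trip for the handshake, then the congestion window doubles
    # each round trip until the object has been sent.

    def transfer_time_ms(object_kb, rtt_ms, init_cwnd_segments=3, mss_bytes=1460):
        segments = math.ceil(object_kb * 1024 / mss_bytes)
        rounds = 1                      # TCP three-way handshake
        sent, cwnd = 0, init_cwnd_segments
        while sent < segments:
            sent += cwnd
            cwnd *= 2                   # exponential growth during slow start
            rounds += 1
        return rounds * rtt_ms

    # Hypothetical round-trip times: a nearby server versus a distant one
    # reached across the middle mile.
    for rtt in (20, 200):
        print(f"RTT {rtt:>3} ms -> roughly {transfer_time_ms(100, rtt):.0f} ms "
              f"to deliver a 100 KB response")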

The middle mile bottlenecks described above create an environment that is difficult to rely on for business-critical transactions. However, if the middle mile is avoided as much as possible, applications and services can be delivered over the Internet with far greater security, speed, and reliability.

Despite the broad variety in cloud computing offerings, their underlying deployment infrastructures can be categorised into two basic architectures: centralised and highly-distributed. These two network architectures existed long before the cloud computing phenomenon; they are in fact the same architectures that underlie all Web-based infrastructures, but only one of them can address the issues outlined above.

New opportunity, old approach

As with traditionally-architected Web sites, SaaS, PaaS, and IaaS providers typically host their applications and services in a single location or a small number of datacentres.

However, for applications with distributed users or highly variable demand, the centralised datacentre approach is insufficient, because it leaves the end user experience at the mercy of the Internet’s middle mile. Network outages, peering point congestion, routing inefficiencies, and other middle mile bottlenecks will frequently cause application performance and reliability to fall short of expectations.

Getting close to end users

By locating cloud computing infrastructure in a highly-distributed manner, it is possible to overcome the challenges posed by the Internet’s middle mile. A highly-distributed architecture—where servers are located at the edge of the Internet, close to end users—avoids the middle mile bottlenecks I’ve mentioned, enabling the delivery of LAN-like responsiveness for applications running over the global Internet.
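
As a rough illustration of that idea, the sketch below maps a user to the lowest-latency edge location among several candidates and compares it with a single, distant origin. The location names and latency figures are hypothetical placeholders, not a description of any particular provider’s network or mapping system.

    # Illustrative sketch of the mapping idea behind a highly-distributed
    # architecture: serve each user from the nearest of many edge locations
    # instead of one central origin. All names and figures are hypothetical.

    ORIGIN_RTT_MS = 190          # assumed round trip to a single central datacentre

    edge_rtt_ms = {              # assumed round trips to candidate edge locations
        "edge-sydney": 8,
        "edge-singapore": 95,
        "edge-tokyo": 120,
    }

    def pick_edge(latencies):
        """Return the edge location with the lowest measured round-trip time."""
        return min(latencies, key=latencies.get)

    best = pick_edge(edge_rtt_ms)
    print(f"Central origin:        ~{ORIGIN_RTT_MS} ms per round trip")
    print(f"Nearest edge ({best}): ~{edge_rtt_ms[best]} ms per round trip")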

While a number of large cloud providers, including content delivery networks, do run multi-datacentre operations, these are fundamentally different from a truly highly-distributed network. With a centralised or multi-datacentre infrastructure, the cloud provider’s servers are still far away from most users and must still deliver content from the wrong side of the middle mile bottlenecks.

Even with direct connectivity to all of the biggest backbones, a cloud application’s traffic must still traverse the morass of the middle mile to reach most of the Internet’s 1.4 billion users. Only a highly-distributed architecture can overcome the middle mile challenge.