There is an elephant in the room when it comes to virtualisation – it simply isn’t fast enough to process big data. This leaves any company wanting to process more and more data with a difficult choice.
Owning and managing the supporting infrastructure is becoming increasingly undesirable, not to mention impractical. Buying in all the additional processing capacity can be hugely expensive, a significant burden to maintain and support, and just doesn’t offer organisations the flexible scalability they need – unless companies pay over the odds for capacity they may not need 90% of the time.
Yet cloud services aren’t always what they’re cracked up to be. For unlimited infrastructure scalability and better cost-efficiency, companies have looked to source the additional processing capacity they need from external infrastructure service providers. Amazon Web Services is a common choice, offering companies of all shapes and sizes access to on-demand IT resources via the Internet. No upfront investment is needed; businesses simply pay for the capacity they use.
For serious, heavy-duty applications however, such services leave a lot to be desired. This is because of the way they are provisioned.
Time To Veto Virtualisation?
The issue is with virtualisation and how this is achieved. Most cloud-based IT infrastructures are built by dynamically connecting lots of different servers whose capacity can be brought online or reallocated at a moment’s notice, depending on what the controlling software tells them to do.
Virtualisation is all about driving up the density and cost-efficiency of a data centre. It is about putting large numbers of virtual machines on as few servers as possible – yet with the scope to add to the set-up at a moment’s notice, giving the experience of unlimited capacity which can be drawn on as needed.
The downside is that monitoring and managing this virtualised environment consumes considerable resources in its own right, with the result that customers don’t actually get the performance they pay for.
The culprit is the hypervisor technology responsible for overseeing all of the communications between the virtual machines and the underlying hardware. This not only takes up space on a server, it also consumes a significant share of its processing power.
The Redis project, which maintains the open-source NoSQL data store of the same name, does not recommend virtualisation for high-volume database tasks for this reason, noting in its documentation that “system induced latencies are significantly higher on a virtualised environment than on a physical machine.”
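These “system induced” latencies can be measured directly: spin on the clock and record the longest gap between successive reads, since any long pause is time stolen from the process by the scheduler or the hypervisor (the Redis `redis-cli` tool offers an `--intrinsic-latency` mode built on this idea). Below is a minimal Python sketch of the same technique; the function name is illustrative, not part of any library.

```python
import time

def intrinsic_latency(duration_s=1.0):
    """Spin on the monotonic clock for duration_s seconds and return the
    largest gap (in seconds) observed between two successive reads.
    On an idle physical machine this is typically well under a
    millisecond; under a busy hypervisor it can be far higher."""
    end = time.monotonic() + duration_s
    max_gap = 0.0
    prev = time.monotonic()
    while prev < end:
        now = time.monotonic()
        gap = now - prev
        if gap > max_gap:
            max_gap = gap
        prev = now
    return max_gap

if __name__ == "__main__":
    worst = intrinsic_latency(0.5)
    print(f"worst observed pause: {worst * 1e6:.0f} microseconds")
```

Run on comparable virtualised and bare-metal hosts, the difference in the reported worst-case pause gives a rough, first-hand sense of the hypervisor overhead being described here.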
Another problem with cloud-based infrastructure services built on virtualisation is that you don’t necessarily get what you pay for – 20% or more of the capacity is likely to be consumed by the hypervisor, yet this overhead is invisible to customers, who have no way of measuring it.
Predicting spending can be difficult as a result – a situation that is exacerbated if performance isn’t consistent, so that on some occasions additional servers need to be powered up to handle the same workload. In the case of Amazon, storage is priced by input/output performance, so guessing the final bill is a bit like pinning the tail on the donkey. To arrive at more predictable costs, there needs to be more control over server usage.
Something else that affects cost-efficiency is the age and quality of the hardware. In an article on ZDNet.com earlier this year, a consulting engineer and data centre energy expert noted that, although data centre hardware should really be refreshed at least every three years, in many facilities this is not happening. Indeed, many cloud providers are running systems that are five to six years old. The result is a significant degradation in performance and energy efficiency.
CIOs have expressed concern about this too. Research conducted for Compuware in 2013 found that almost 80% of CIOs globally are concerned about the hidden costs associated with cloud computing. Failure to properly manage the performance of cloud-based applications is seen to drive up costs and prevent companies from realising the full benefits of cloud computing. IT chiefs also report concern about loss of revenue due to poor performance, and about the impact on user productivity and external brand perception.
Stripping Services Back
So what is the answer? An emerging alternative to virtualisation is to harness a flexible hosted infrastructure that uses bare metal. In a bare-metal cloud environment, customers get rapid, automated provisioning of dedicated managed hosting environments – unimpeded by a hypervisor virtualisation layer in the operating system.
In Eastern Europe, where the market for cloud services is relatively immature, this new approach is taking hold fast. It is a model where dedicated, physical servers can be deployed in just 20 minutes, powered up within seconds, and allocated in any combination at any time of day or night. Crucially there is no degradation in performance, so commercial users of the service will get up to 100% more capacity from the equivalent infrastructure compared to a virtualised environment.
Looking Left Of Field
If one thing is certain, it is that utility-based computing is the future. What is still being defined is how best to achieve it. Virtualisation, accepted until now as the best route to resource elasticity, is now being called into question.
Where cloud suppliers are still pushing this, it is worth questioning whether this is because they still truly believe in it – or whether they are merely getting their money’s worth out of a sizeable legacy investment. If this is the case, bear in mind that with each passing year that same technology is becoming older and less reliable.
As the use of cloud services grows, it seems a new approach to the mechanics of delivery may now be needed: an approach that provides higher, more consistent performance and greater transparency – so that companies really can bet their business on it.