According to leading analysts, virtualisation (moving to cloud-based systems) is among the top ten growth areas in information technology, at everything from enterprise to SME level.

This shift brings a wide range of benefits, such as reduced capital and operating expenses through hardware consolidation, more efficient use of space, power savings, and support for a more mobile workforce. Indeed, companies that have implemented virtualisation often report that the total cost of ownership of their IT infrastructure falls by up to 50 percent.

However, this move raises a number of critical issues around security and access. One of the key ones is that companies typically deploy virtualisation seeking that increased performance, not security.

As such, information security concerns are often afterthoughts. While virtual environments can often be made more secure than their physical counterparts, thanks to the layers of protection and back-up employed by a good cloud service provider, there are specific challenges that organisations need to understand to ensure both the security of restricted data and their own continued access to that data.

The reality is that as businesses move from a physical to a virtual environment, the old security solutions aren't always up to the job, and it's not enough just to have a solid firewall protecting your business from outside intrusion; you need to protect the inside as well. In physical environments, organisations will usually process business-sensitive data on dedicated hardware, with access restricted to specific users.

But virtual servers work in a very different way. In the cloud, companies could have information stored across any number of different virtual machines, because the cloud maintains its efficiency by seamlessly balancing usage across a large number of interconnected servers. Businesses moving entirely to a virtual environment need to understand how virtual servers work and what the impact could be for their business.

The knock-on effect is that information that was previously stored on a separate server, when a company was running its own set-up, may now be sharing server space with less sensitive or less critical business information.

Potentially this puts companies in a position where people with lower security access privileges can reach critical data simply because it is stored on a server they already have access to. Managing this can be a serious issue and involves implementing an additional layer of security.
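To make the idea of that additional layer concrete, here is a minimal sketch of application-level access control, assuming hypothetical clearance levels and per-document sensitivity labels (none of these names come from any particular product). The point is that when storage-level isolation can no longer be relied upon, each read must be checked against the user's privileges before it is served.

```python
from dataclasses import dataclass

# Hypothetical clearance levels: a higher number means more privileged.
CLEARANCE = {"staff": 1, "manager": 2, "finance": 3}

@dataclass
class Document:
    name: str
    sensitivity: int  # minimum clearance required to read this file

def can_read(user_role: str, doc: Document) -> bool:
    """Allow access only if the user's clearance meets the document's label,
    regardless of which physical or virtual server the file sits on."""
    return CLEARANCE.get(user_role, 0) >= doc.sensitivity

payroll = Document("payroll.xlsx", sensitivity=3)
print(can_read("staff", payroll))    # a general staff user is denied
print(can_read("finance", payroll))  # a finance user is allowed
```

The design point is that the check lives in the application layer, so it travels with the data wherever the cloud provider happens to place it.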

The second major concern is what happens when the connection goes down. The last thing any organisation wants is for its business to grind to a standstill: any benefits gained from the reduced costs associated with virtualisation can very quickly be wiped out by lost productivity.

Even in this situation, companies need to be able to access and work on important files. For larger companies a backup connection may be a viable solution, but for smaller organisations it is very often not attainable.

Those using basic cloud-based services such as Dropbox, SugarSync, Carbonite or Google Storage, which let users store and share files on remote servers, are often the most vulnerable in this situation. What is needed here is, in effect, a localised caching server holding business-critical files, so that a company can continue to work when the internet is down, with the cache resyncing with the cloud service once the connection is re-established.
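The cache-and-resync pattern described above can be sketched as follows. This is an illustrative model only: the cloud service is stood in for by a plain dictionary, whereas a real deployment would talk to a provider's own API, and the class and method names here are invented for the example.

```python
class CachingStore:
    """Toy model of a local cache that keeps working offline and
    resyncs with the cloud when the connection returns."""

    def __init__(self):
        self.cloud = {}    # stand-in for the remote cloud service
        self.cache = {}    # local copies of business-critical files
        self.pending = []  # names of files written while offline
        self.online = True

    def write(self, name, data):
        self.cache[name] = data        # always write locally first
        if self.online:
            self.cloud[name] = data    # mirror to the cloud immediately
        else:
            self.pending.append(name)  # queue for the next resync

    def read(self, name):
        # Reads are served from the local cache, so work continues offline.
        return self.cache[name]

    def reconnect(self):
        # Connection restored: push everything written while offline.
        self.online = True
        for name in self.pending:
            self.cloud[name] = self.cache[name]
        self.pending.clear()

store = CachingStore()
store.write("forecast.xlsx", b"q1 numbers")
store.online = False                  # the internet connection drops
store.write("forecast.xlsx", b"q1 revised")
print(store.read("forecast.xlsx"))    # staff keep working from the cache
store.reconnect()                     # resync once back online
print(store.cloud["forecast.xlsx"])   # the cloud copy is now up to date
```

The key design choice is write-local-first: the cache is always authoritative for in-progress work, and the cloud is treated as a mirror that catches up whenever connectivity allows.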

While the cloud undoubtedly has huge benefits for businesses, and will only continue to be a growing force in the IT sector, these are two key issues that are too often overlooked, certainly by smaller organisations, in the name of efficiency.