Organisations are experiencing massive growth in data storage demands and have to ensure that these increasing volumes are easily managed – whilst still being highly available and affordable. In this article, I highlight this year's key storage technologies and how they will enable organisations to address their storage requirements and rapidly deliver efficiency improvements – as well as providing an attractive return on investment.
Solid State storage
Although solid state drive (SSD) storage technology has a bright future, two major issues – cost and long-term reliability – need to be resolved before we see wider adoption.
The issue of cost is linked to the volume economics of manufacture and distribution, and no doubt in three to five years the cost per gigabyte will have dropped. Significant investment is being made to improve the long-term reliability of SSDs, focusing mainly on the write "wear-out" issue.
Putting these issues aside, the performance of SSDs is staggering compared to traditional rotating storage, and there are many applications that benefit from the very high performance of this technology. A highly effective solution could consist of a small pool of SSD storage, a moderate pool of fast disk storage and a large pool of slow disk storage – coupled with an intelligent data mover to exploit these pools.
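The data mover described above is, at heart, a placement policy: watch how often each data set is accessed and migrate it to the cheapest tier that can still serve it. A minimal sketch in Python, where the tier names, thresholds and data sets are all hypothetical illustrations rather than any vendor's actual product:

```python
from dataclasses import dataclass

# Hypothetical promotion thresholds, in accesses per day.
HOT_THRESHOLD = 100   # busier than this -> SSD pool
WARM_THRESHOLD = 10   # busier than this -> fast disk pool

@dataclass
class DataSet:
    name: str
    accesses_per_day: int
    tier: str = "slow_disk"

def choose_tier(accesses_per_day: int) -> str:
    """Map a recent access rate to the cheapest tier that suits it."""
    if accesses_per_day >= HOT_THRESHOLD:
        return "ssd"
    if accesses_per_day >= WARM_THRESHOLD:
        return "fast_disk"
    return "slow_disk"

def rebalance(datasets: list) -> list:
    """One pass of the 'intelligent data mover': migrate each data set
    to the tier its access rate justifies, returning the moves made."""
    moves = []
    for ds in datasets:
        target = choose_tier(ds.accesses_per_day)
        if target != ds.tier:
            moves.append((ds.name, ds.tier, target))
            ds.tier = target
    return moves
```

Real products make this decision on much finer-grained statistics (per-block heat maps rather than whole data sets), but the principle is the same: hot data earns its place in the small, expensive SSD pool, and cold data sinks to cheap capacity.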
Email consolidation
Organisations that are migrating from old, unsupported versions of Exchange are realising that a SAN-based "mailbox" can cost four times more than a "mailbox" provisioned on direct-attached storage. This is forcing organisations to examine how they can drive cost out of the company mail infrastructure. Some are considering a virtualised Exchange 2010 estate, which can reduce the environmental footprint by 60% compared to a traditional Exchange 2003 estate.
Organisation-wide file consolidation
The rapid growth in unstructured data, otherwise known as digital landfill, has led many organisations to use large numbers of discrete network-attached storage appliances. However, this presents inherent challenges of data movement and management. These issues are best addressed by utilising a single namespace/file system across multiple nodes.
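One way a single namespace can span many nodes is to make file placement a pure function of the file's path, so every client resolves the same path to the same node without consulting a central lookup table. A minimal sketch, assuming a hypothetical pool of four appliances (real scale-out NAS products use more sophisticated schemes, such as consistent hashing, to limit data movement when nodes are added or removed):

```python
import hashlib

# Hypothetical storage nodes sitting behind one namespace.
NODES = ["nas01", "nas02", "nas03", "nas04"]

def node_for_path(path: str, nodes: list = NODES) -> str:
    """Place a file on a node by hashing its full path.

    Because the hash is deterministic, any client can compute the
    owning node for a path independently and get the same answer.
    """
    digest = hashlib.sha256(path.encode("utf-8")).hexdigest()
    return nodes[int(digest, 16) % len(nodes)]
```

The point of the sketch is the property, not the formula: the namespace stays unified because placement is computed, not administered, which is what frees administrators from manually shuffling data between discrete appliances.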
Client virtualisation
Much work has been done addressing virtualisation of servers and storage, and the natural progression of this is client virtualisation. Using less hardware, with a longer refresh cycle, can reduce deployment costs by 90%, maintenance costs by 60% and help desk calls by 40% – leading to very significant TCO reductions. However, care must be taken to architect an end-to-end solution that can handle the peaks in I/O that have naturally moved from the desktop back into the corporate infrastructure.
Data de-duplication
Over 60% of our customers claim that backup is one of their greatest pain points. A further challenge is that data protection strategies rely on keeping multiple copies of data. Because backup and recovery are typically sequential operations, they are a "good fit" for de-duplication: by reducing the duplication in the backup stream, companies can achieve significant efficiency gains and use fewer disk and tape drives.
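The core of de-duplication is simple: split the backup stream into chunks, identify each chunk by a strong hash, and store any given chunk only once, keeping a "recipe" of hashes from which the stream can be reassembled. A minimal sketch using fixed-size chunks (the function names are illustrative; production appliances typically use variable-size, content-defined chunking to survive insertions that shift data):

```python
import hashlib

CHUNK_SIZE = 4096  # fixed-size chunking, for simplicity

def dedupe(stream: bytes, store: dict) -> list:
    """Split a backup stream into chunks, keep each unique chunk once
    (keyed by its SHA-256 hash), and return the recipe of hashes."""
    recipe = []
    for i in range(0, len(stream), CHUNK_SIZE):
        chunk = stream[i:i + CHUNK_SIZE]
        key = hashlib.sha256(chunk).hexdigest()
        if key not in store:
            store[key] = chunk   # store the chunk only on first sight
        recipe.append(key)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Reassemble the original stream from its recipe of chunk hashes."""
    return b"".join(store[key] for key in recipe)
```

Since successive nightly backups of the same systems repeat most of their chunks, each additional backup adds little new data to the store – which is exactly where the saving in disk and tape comes from.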
Cloud storage
There is a lot of hype around the Cloud as a delivery mechanism, and there are many interpretations of what "Cloud" is. I believe there will be limited take-up of general public Cloud storage in the short to medium term. Some applications or services, such as remote backup and recovery, do lend themselves to re-engineering on a Cloud infrastructure.
However, mission-critical applications typically hold sensitive commercial or personal data and are often subject to regulation. These security, regulatory and performance concerns will delay the move to general public Cloud-based delivery. In the interim, I believe some forward-thinking organisations will invest in internal, private "Cloud-like" storage and associated infrastructure.