System downtime has long been the cause of major headaches for IT departments across the globe. With companies of all sizes now relying more heavily on their systems than ever before, any service outage can have a catastrophic effect on a company’s bottom line.

According to Forrester analyst Rachel Dines, just five minutes of Google downtime in August 2013 reduced global web traffic by 40 percent and cost the company a mammoth $500,000. It is a striking example of the damage unplanned downtime can do to an organisation’s finances, as well as its reputation. The cost of downtime is not only relevant to giants like Google, however: organisations of all sizes can be affected by an outage at a service provider.
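To put that figure in context, a quick back-of-the-envelope calculation (in Python, purely for illustration, using the numbers cited above) shows what each minute of that outage cost:

```python
# Rough cost per minute of the August 2013 Google outage,
# based on the figures cited above (illustrative only).
outage_cost = 500_000   # total reported cost in USD
outage_minutes = 5      # reported length of the outage

cost_per_minute = outage_cost / outage_minutes
print(f"${cost_per_minute:,.0f} per minute")  # $100,000 per minute
```

At that rate, even outages measured in minutes translate into six-figure losses for a business of Google’s scale.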

Whether it is a huge global enterprise like Amazon or a small business, every organisation needs a strategy to identify the potential causes of downtime. Once those causes have been identified, companies must put plans in place to defend against them. Only then does the ultimate goal of “zero downtime” become achievable.

To find out more about how companies are coping with the issue, SUSE commissioned a study looking at the effect of downtime on organisations and the plans in place to deal with outages. The report uncovered some interesting findings, most notably the gulf between the need for ‘zero downtime’ and the amount of downtime that enterprises are currently experiencing. While businesses recognise the need to reduce downtime, it’s clear that there is still much to be done to make it achievable.

Nearly three-quarters of the IT professionals surveyed said their organisation considers achieving zero downtime for its enterprise computing systems an important goal, yet a full 89 percent currently expect to experience downtime on their most important workload. Unplanned downtime was experienced by 80 percent of respondents; those who suffered it encountered the problem more than twice a year, on average, on their most important workload. Technology failure was far and away the most prevalent source of unplanned downtime.

The good news is that more than half (54 percent) of respondents indicated they are executing a strategy to significantly reduce system downtime in the coming year, and another 17 percent have a strategy but haven’t yet begun to implement it.

So what’s the solution?

IT professionals are evidently worried by the unplanned downtime they have experienced in the past year and are under pressure to ensure it doesn’t happen again. Reducing downtime is clearly a priority for IT departments across the globe; however, organisations do not yet see ‘zero downtime’ as a realistic possibility, accepting for now that at least a small amount of downtime is unavoidable. That acceptance should not be the end of the story: there are steps that can be taken to address the issue without a wholesale rip and replace. Key steps should include:

  • Building on firm foundations – Choosing the right hardware and operating platform in the first place is critical to preventing downtime, providing extra stability and availability.
  • Minimising human mistakes – Humans inevitably make mistakes, and IT infrastructure management is no different. The best way to reduce these errors is to make tools as easy as possible for employees to use.
  • Developing clusters – Clustering technologies improve system availability through redundancy. By combining several redundant servers into a single cluster, far higher availability can be achieved than with any single server.
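The availability gain from clustering can be sketched with simple probability: if server failures are independent, n redundant servers that each have individual availability a give a combined availability of 1 − (1 − a)^n. A minimal Python illustration follows; the 99 percent single-server figure is an assumption chosen for the example, not a number from the SUSE study:

```python
def cluster_availability(single: float, n: int) -> float:
    """Combined availability of n redundant servers,
    assuming failures are independent."""
    return 1 - (1 - single) ** n

MINUTES_PER_YEAR = 365 * 24 * 60

# Assumed 99% availability per server, purely illustrative.
for n in (1, 2, 3):
    a = cluster_availability(0.99, n)
    downtime = (1 - a) * MINUTES_PER_YEAR
    print(f"{n} server(s): availability {a:.6f}, "
          f"~{downtime:.0f} min downtime/year")
```

Under these assumptions, a single server at 99 percent availability implies roughly 5,256 minutes of downtime a year, while a two-node cluster cuts that to around 53 minutes — a hundred-fold improvement from one extra server. Real clusters share failure modes (power, network, software bugs), so independence is an optimistic simplification, but the direction of the effect holds.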

Achieving zero downtime is a crucial goal for modern business, yet for many it still seems out of reach. There are practical steps that can contribute towards this goal, but ultimately the right tools are crucial. Only through the correct selection and effective implementation of those tools will zero downtime become the widespread norm.