This is the end of a tough year for many organizations across all sectors. We found ourselves snowed in last winter, were stuck abroad due to a volcano eruption in spring, suffered from the announcement of a tightened budget in summer, and had to start making drastic cost-saving plans following the Comprehensive Spending Review in autumn. Data security breaches and problems with unreliable service providers have also filled the press.

Somehow, the majority of us have managed to survive all that; some better than others. As another winter is upon us, it’s time to ask ourselves: What helped us through the hard times and what can we do better to prevent IT disruptions, data breaches, and money loss in the future?

Here are some things to learn from 2010 that may help us avoid repeating errors, so that we can have a more efficient, productive and fruitful 2011.

1: VDI to work from home or the Maldives

Plenty of things prevented us from getting to work in 2010: natural disasters, severe weather, and industrial disputes were the biggest culprits. Remote access solutions have been around for a long time, but desktop virtualization has taken things a stage further. With a virtual desktop, you access your own complete and customized workspace when out of the office, with performance similar to working on site. Provided there's a strong and reliable connection, VDI minimizes the technical need to be physically close to your IT.

2: Business continuity and resilience with server virtualization

Server virtualization is now mainstream, but plenty of organizations large and small have yet to virtualize their server platform. When disaster strikes, those who have virtualized are at a real advantage: building an all-encompassing recovery solution is far easier once your servers are virtualized than dealing with individual pieces of physical kit and the applications running on them. If you haven't fully embraced virtualization, it's time to reassess that decision as you prepare for 2011.

3: Good service management to beat economic restrictions

With the recent economic crisis and the unstable business climate, the general message is that people should be doing more with less. It's easy to delay capital expenditure (unless there's a pressing need to replace something that's broken or out of warranty), but how else can you save money? Surprisingly, effective service management can deliver significant cost efficiencies through better management of processes, tools, and staff. Techniques include rearranging roles within the IT service desk so that more incidents are fixed earlier in the support process, and adopting automated tools to deal with the most common repeat incidents. Putting proper, effective measures on the service, down to the individuals delivering it, also helps to set the bar of expectation, monitor performance, and improve processes.

4: Flexible support for variable business

An unstable economic climate means that staffing may need to be reduced or increased for certain periods, then rescaled again shortly afterward. At the same time, epidemics, natural disasters, and severe weather may require extra staff to cover for absences, often at the last minute. Not every organization can afford to keep a floating team available in case of need or can source contractors easily and rapidly. An IT support provider that offers flexibility and scalability can help minimize these kinds of disruptions. Some providers maintain a team of broadly skilled, multi-site engineers who can be sent to any site needing extra support and kept only for as long as required, without major contractual restrictions.

5: Beyond the PC

Apple's iPad captured the imagination this year. It's seen as a cool device, but its success stems as much from the wide range of applications available for it as from its innate functionality. That success is prompting organizations to look beyond the PC in delivering IT to their user base. Perhaps a more surprising story was the rise of the Amazon Kindle, which resurrected the idea of a single-function device. The Kindle succeeds because it's relatively cheap, delivers well on its specific function, is easy to use, and has long battery life. As a single-function device, it's also extremely easy to manage. Given the choice, I'd rather face the challenge of managing and securing a fleet of Kindles than of iPads, which for all their sexiness add another set of security management challenges.

6: Protecting data from people

Even a secured police environment can become the setting for a data protection breach, as Gwent Police taught us. A mistake caused by the recipient auto-complete function led an officer to send some 10,000 unencrypted criminal records to a journalist. With a data classification system in place, where every document created is routinely classified by sensitivity level and restricted to authorized viewers, a breach like this would most likely have been prevented. We can all learn from this incident: human error will occur and cannot be avoided completely, so countermeasures have to be put in place up front to prevent breaches.

7: ISO 27001 compliance to avoid tougher ICO fines

The Data Protection Act was enforced with stricter rules and higher fines last year, with the ICO now able to impose a penalty of up to £500,000 for a data breach. The result was some of the highest fines ever seen. Zurich Insurance, for instance, had to pay more than £2m after losing 46,000 records containing customers' personal information, and the fine would have been higher had the company not agreed to settle at an early stage of the FSA investigation. ISO 27001 has gained advocates in the last year because it tackles the broad spectrum of good information security practice, not just the obvious points of exposure. A gap analysis and alignment with the ISO 27001 standard is a great first step toward staying on the safe side. However, any improved security measure should be accompanied by extensive training, so that all staff who may deal with the systems gain a strong awareness of regulations, breaches, and their consequences.

8: IT becoming the business’ business

In an atmosphere where organizations are watching every penny, CFOs have acquired a stronger presence in IT, although neither they nor the IT heads were particularly prepared for this shift. As a result, CIOs now have to justify costs concretely, proposing projects in financial language and explaining their likely ROI. The role change affects the CFO as well, who needs a better grasp of IT to discuss strategy and investment with the IT department.

9: Choosing the right outsourcing strategy and partner

In 2010, we heard about companies dropping their outsourcing partner and moving their service desk back in-house or to a safer managed service solution. We heard about Virgin Blue losing reputation over a faulty booking system managed by a provider, and about Singapore bank DBS, which suffered a critical IT failure that caused major inconvenience to customers. In 2011, outsourcing should not be avoided, but the strategy should favor solutions that retain more control over assets, IP, and data, and that cause less upheaval should the choice of outsourcing partner prove to be the wrong one.

10: Education, awareness, training

As the events of 2010 amply demonstrated, there's no use in having the latest technologies, best-practice processes, and security policies in place if staff aren't trained to put them to use. Data protection awareness is vital to avoiding information security breaches; training in the latest applications will drastically reduce the volume of incident calls; and education in best practices will smooth operations and allow organizations to achieve the cost efficiencies they seek.