There are a number of security issues that frequently fall through the gaps when it comes to who is responsible for them. One such issue is code: whose job is it to keep it up to date? The problem arises when this question is left unanswered, with businesses believing that once software is placed on the server, their hosting provider will automatically maintain it.

It’s the responsibility of any hosting provider to provide their customers with a server and IP address, network connections and – in our opinion – at least a shared firewall as a minimum. However, when it comes to updates, there are issues of functionality to contend with.

While we patch the operating system for Windows and put patches in place for Linux if requested, we can’t do every update automatically. There are simply too many content management systems and open source web application frameworks out there. You can’t just auto-update everything because it can break website functionality and stop the server from displaying content.

The growing frequency of code exploits may be linked to the increasing use of open source software. A number of content management systems, such as WordPress and Drupal, are free, which attracts businesses to the idea of managing their own code.

However, the problem with this software arises because a lot of the groundwork has been done for users. Without having built it from the ground up, there are bound to be gaps in understanding. People are then faced with a dilemma: do they keep updating the software and risk running into problems they don’t know how to fix, or leave it alone? If it becomes out of date, vulnerabilities appear and hackers can get in and do what they want.

So, if the hosting provider takes care of the operating system layer – the server and the network – and a business owner takes care of – well – their business, then what happens to that pesky code? There’s a missing link in this chain, and it’s a server admin and/or web developer.

They need to be able to understand how to use a server, how to update their code and if there are problems, bug fix the issues. Ideally, they’d test any updates first in a development or sandbox environment to ensure compatibility with functionality.

To have everything working long term, we need the hosting provider doing the infrastructure side of things and keeping the network ticking over, the client focusing on their primary business activities – providing products or services to their customers – and finally the web developer making sure there are regular code reviews. Some of our account managers call this combination the holy trinity!

But are these measures really 100% necessary if there’s a firewall in place? Shouldn’t that take care of any cybernasties? If you look a little deeper into the technology of a firewall, it becomes apparent that there are ways a hacker can potentially breach it.

A firewall decides whether to let traffic in or out depending on the port. If a request comes in on port 80 looking like normal web traffic, the firewall doesn’t see it as risky and lets it through. Once behind that firewall, you could have something in your code that allows content to be uploaded without validating whether it really is, say, a picture. If there’s no validation method, all a hacker has to do is upload a bit of code and they’re into that box. From there, they can run root commands and take control of the server.
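As a minimal sketch of the missing validation step described above, the code below checks that an uploaded payload really is an image by inspecting its leading bytes rather than trusting the filename. The function names and the set of signatures are illustrative assumptions, not taken from any particular CMS.

```python
# Magic-byte signatures for a few common image formats (illustrative subset).
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "png",
    b"\xff\xd8\xff": "jpeg",
    b"GIF87a": "gif",
    b"GIF89a": "gif",
}


def detect_image_type(data: bytes):
    """Return the image format if the payload starts with a known
    signature, otherwise None."""
    for signature, fmt in IMAGE_SIGNATURES.items():
        if data.startswith(signature):
            return fmt
    return None


def accept_upload(filename: str, data: bytes) -> bool:
    """Reject anything that is not recognisably an image.

    Note the filename is deliberately ignored: a PHP script renamed
    to `avatar.jpg` still fails the magic-byte check."""
    return detect_image_type(data) is not None


# A script payload with an image filename is rejected; a real PNG
# header is accepted.
print(accept_upload("avatar.jpg", b"<?php system($_GET['cmd']); ?>"))  # False
print(accept_upload("avatar.png", b"\x89PNG\r\n\x1a\n" + b"\x00" * 16))  # True
```

Real-world validation would go further (size limits, re-encoding the image, storing uploads outside the web root), but even this simple check closes the specific door described above.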

Once a vulnerability has been exploited, the only realistic way to solve it is to start again – something that takes a huge amount of time and, with all the stress, probably costs affected business owners a good deal of sleep. Here’s the process that would follow the discovery of an exploit.

If an affected server is impacting other users on the network, we have to act. So if there’s too much throughput coming into or going out of a server, or if there’s a risk of cross-contamination, we have to black hole it. The next step is to arrange for a KVM at the data centre and help the client clean it up.

The recovery time for them to go through and clean up or migrate is astronomical, especially if they have lots of CMS sites built by various third-party developers, many of whom are no longer contactable, or are overseas and hard to reach. And if the backups are full of exploits, it gets even more difficult. All the while, they’ll have their clients screaming at them down the phone because their site or email service is down.

It’s a nightmare scenario and one that can bring a business to its knees. So, what are the options available to companies? The cheapest solution seems to be keeping everything up to date yourself, which is fine if you have the knowledge.

There is also the option to use a web application firewall, which will actually look for bizarre behaviour on the box. It can perform file integrity monitoring, which focuses on changes to code and alerts the owner to anything unexpected. So, if there are any injections of code, or similar changes, the owner is at least aware and able to react swiftly.

However, the most effective way to protect against exploits in code is to connect the dots with that missing link and ensure a web developer is looking after that side of the business on an ongoing basis, whether that’s a part-time or a full-time role. The fight against cybercrime is a constant battle, and having a gap in your front line of defence is simply not an option.