By now, every savvy business leader is well aware of the power of cloud computing and the benefits it can offer organisations. Perhaps less familiar is the idea of ‘edge computing’, yet this approach can deliver equally impressive results.

What is edge computing?

To understand what is meant by the edge, we first need to take a broader look at cloud computing as we know it. Cloud computing services are largely centralised offerings, such as Gmail and other subscription software delivered as a service (SaaS). Smart devices in the home, such as the Amazon Echo and Apple TV, are powered by the public cloud, drawing on its information and processing capabilities in order to function. The public cloud is, in the main, operated through the infrastructure of a small number of companies – big household names such as Microsoft, Google, Amazon and IBM.

These ‘big four’ provide most of the capabilities that public cloud deployments make possible, such as large-scale machine learning and data processing. Indeed, Amazon alone holds almost half of the public cloud market. With the centralised cloud concentrated in so few hands, there are fewer opportunities left in centralised cloud technologies – and that has driven a move towards the edge.

In fact, the name ‘edge’ can be taken literally. The edge in edge computing means that the computing work is decentralised, taking place physically closer to where the data is generated. Rather than being processed in the remote data centres of the public cloud, data is dealt with in the user’s vicinity.

Advantages of edge computing

One of the foremost advantages of edge computing is reduced latency. Latency is simply the delay between a user making a request and that request being fulfilled. You may have noticed it when using a smart speaker: the pause before your question or command is answered is the time the device takes to connect to the cloud and retrieve the information it needs. With edge computing, the data doesn’t need to travel as far, speeding up those response times.
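To see why distance matters, here is a toy Python sketch. The distances, processing time and signal speed are illustrative assumptions, not measurements – the point is simply that a round trip to a nearby edge node is far shorter than one to a remote data centre:

```python
# Toy model: round-trip latency dominated by how far the request travels.
# All numbers below are illustrative assumptions, not real measurements.

SIGNAL_SPEED_KM_PER_MS = 200  # roughly two-thirds the speed of light, in fibre

def round_trip_ms(distance_km: float, processing_ms: float = 5.0) -> float:
    """Round trip = there and back at signal speed, plus server processing."""
    return 2 * distance_km / SIGNAL_SPEED_KM_PER_MS + processing_ms

cloud_latency = round_trip_ms(3000)  # a remote public-cloud data centre
edge_latency = round_trip_ms(10)     # a nearby edge node

print(f"cloud: {cloud_latency:.1f} ms, edge: {edge_latency:.1f} ms")
```

In this simple model the edge request completes in a fraction of the time, because the propagation delay shrinks with distance while the processing time stays the same.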

Latency matters because we are moving towards an increasingly digitised society. The development of self-driving cars, for example, is dependent on the vehicles operating with minimal latency so that they can react safely to the road.

Edge computing also offers advantages when it comes to bandwidth. By processing data locally and uploading only the results, smart technology using edge computing can use far less bandwidth than traditional methods that upload every piece of raw data to the cloud.
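A hypothetical example makes the saving concrete. Suppose a temperature sensor takes a reading every second; this Python sketch (with made-up readings) compares uploading all of them against uploading a single summary computed at the edge:

```python
# Hypothetical example: one minute of once-per-second temperature readings.
# The readings themselves are fabricated for illustration.
readings = [20.0 + (i % 10) * 0.1 for i in range(60)]

# Traditional approach: send all 60 raw readings to the cloud.
raw_records = len(readings)

# Edge approach: summarise locally, then send one record.
summary = {
    "min": min(readings),
    "max": max(readings),
    "avg": round(sum(readings) / len(readings), 2),
}
edge_records = 1

print(f"records uploaded: raw={raw_records}, edge={edge_records}")
```

Here the edge device uploads one record instead of sixty, a 60-fold reduction in this toy scenario, while still giving the cloud the figures it needs.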

Edge computing and our daily lives

If you own a smartphone, you are already experiencing edge computing at work. An iPhone’s biometric security, for example, is edge computing: the device stores your biometric information and processes it for decryption right there on the device. This is a trend which looks set to continue, with Apple and Google working to run AI capabilities on devices themselves rather than in the cloud.

Of course, it’s the IoT that is most commonly mentioned in conjunction with edge computing. Indeed, edge computing was originally created in order to facilitate IoT technology. IoT devices can produce a vast amount of data, and relying on the centralised public cloud to process it all can soon become a problem. For businesses running multiple IoT devices such as sensors, cameras or stock-monitoring technology, that data load increases both latency and bandwidth use. With edge computing, edge gateway devices can complete most of the processing themselves, referring data to a centralised cloud only when necessary.
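The gateway’s filtering role can be sketched in a few lines of Python. The sensor names, readings and temperature thresholds below are all hypothetical – the point is that the gateway handles normal readings itself and forwards only the exceptions:

```python
# Illustrative sketch of an edge gateway: inspect readings locally and
# forward only anomalies to the central cloud. Thresholds, sensor names
# and values are hypothetical.

NORMAL_RANGE = (18.0, 25.0)  # assumed acceptable temperature band, in Celsius

def filter_for_cloud(readings):
    """Return only the readings that fall outside the normal range."""
    low, high = NORMAL_RANGE
    return [r for r in readings if not (low <= r["temp"] <= high)]

readings = [
    {"sensor": "warehouse-1", "temp": 21.5},
    {"sensor": "warehouse-2", "temp": 31.0},  # anomaly: worth forwarding
    {"sensor": "warehouse-3", "temp": 22.3},
]

to_cloud = filter_for_cloud(readings)
print(f"{len(to_cloud)} of {len(readings)} readings forwarded to the cloud")
```

In this toy run, only one of three readings ever leaves the gateway, which is exactly the reduction in cloud traffic the paragraph above describes.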

It’s evident that edge computing can offer exciting benefits, and looks set to become more prevalent in our daily lives. Understanding edge computing can help you to choose wisely when you next make an investment in technology.