We are in the early stages of the Internet of Things, the much anticipated era when all manner of devices can talk to each other and to intermediary services. But for this era to achieve its full potential, operators must fundamentally change the way they build and run clouds. Why? Machine-to-machine (M2M) interactions are far less failure tolerant than machine-to-human interactions.

Yes, it sucks when your Netflix stream goes dark in a big cloud outage, and it’s bad when your cloud provider loses user data. But it’s far worse when a fleet of trucks can no longer report its whereabouts to the central control system that regulates how long drivers can stay on the road without resting, or when all the lights in your building go out and the HVAC system dies on a hot day because of a cloud outage.

The current cloud infrastructure could crumble under the weight of the data

In the very near future, everything from banks of elevators to cell phones to city buses will either be subject to IP-connected control systems or use IP networks to report back critical information. IP addressability will become nearly ubiquitous. The sheer volume of data flowing through IP networks will mushroom.

In a dedicated or co-located hardware world, that increase would result in prohibitively expensive hardware requirements. Thus, the cloud becomes the only viable option to affordably connect, track and manage the new Internet of Things.

In this new role, the cloud will have to step up its game to accommodate more exacting demands. The current storage infrastructure and file systems that back up and form the backbone of the cloud are archaic, dating back 20 years. These systems may be familiar and comfortable for infrastructure providers.

But block-storage architectures that cannot provide instant snapshots of machine images (copy-on-write) will remain prone to all sorts of failures. Those failures will grow more pronounced in the M2M world, where a five-second outage could mean the loss of many millions of dollars’ worth of time-specific information.
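To see why copy-on-write snapshots can be "instant", here is a minimal sketch of the idea in Python. Everything here is illustrative, not any real storage system's API: a snapshot copies only the block map, not the data, so it costs almost nothing at snapshot time, and later writes diverge without disturbing what the snapshot sees.

```python
class CowVolume:
    """A toy block volume whose snapshots share unmodified blocks."""

    def __init__(self, blocks):
        self.blocks = dict(enumerate(blocks))  # block index -> block data
        self.snapshots = []

    def snapshot(self):
        # An "instant" snapshot: copy the block *map* by reference.
        # No block data is duplicated at snapshot time.
        snap = dict(self.blocks)
        self.snapshots.append(snap)
        return snap

    def write(self, index, data):
        # A write replaces the mapping for one block only; existing
        # snapshots keep pointing at the old data (copy-on-write).
        self.blocks[index] = data


vol = CowVolume(["boot", "app", "data"])
snap = vol.snapshot()      # instantaneous: nothing is copied but the map
vol.write(2, "data-v2")    # the live volume diverges from the snapshot
```

After the write, the live volume holds "data-v2" at block 2 while the snapshot still reads "data" — which is exactly the property that lets a machine image be captured consistently in the middle of a failure.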

API keys will need to be more flexible

The current API key infrastructure of the cloud cannot easily handle the sorts of critical and highly secure information flows required for true M2M communications. This architecture of public keys, for the most part, relies on third-party authorisation schemes that make it very easy for bad actors to perpetrate a “man-in-the-middle” attack.

These secure APIs not only need better hooks for user-specified authentication schemes (from SSH to LDAP to less secure mechanisms like OAuth), but they also need to be far more flexible and fast in order to support a much higher volume of transactions. That speed is critical, in turn, to mitigating the latency risks that will grow as IP-enabled devices proliferate wildly on mobile networks in the era of the Internet of Things.
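One common way to harden an API key against the man-in-the-middle attack described above is to sign each request with a shared secret rather than sending the key itself. The sketch below uses Python's standard hmac module; the secret, endpoint, and header names are all hypothetical, and a real deployment would layer this on top of TLS rather than replace it.

```python
import hmac
import hashlib
import time

SECRET = b"device-provisioned-secret"  # hypothetical per-device key


def sign_request(method, path, body, secret=SECRET):
    # Bind the signature to the method, path, body, and a timestamp,
    # so an intercepted request cannot be altered or replayed later.
    ts = str(int(time.time()))
    message = "\n".join([method, path, ts, body]).encode()
    sig = hmac.new(secret, message, hashlib.sha256).hexdigest()
    return {"X-Timestamp": ts, "X-Signature": sig}


def verify_request(method, path, body, headers, secret=SECRET, max_skew=300):
    # Recompute the signature server-side and compare in constant time;
    # reject requests whose timestamp is too old (replay protection).
    message = "\n".join([method, path, headers["X-Timestamp"], body]).encode()
    expected = hmac.new(secret, message, hashlib.sha256).hexdigest()
    fresh = abs(time.time() - int(headers["X-Timestamp"])) <= max_skew
    return fresh and hmac.compare_digest(expected, headers["X-Signature"])


headers = sign_request("POST", "/telemetry", '{"temp": 21.5}')
ok = verify_request("POST", "/telemetry", '{"temp": 21.5}', headers)
```

Because signing is a single hash over the request, it is cheap enough to run on every transaction — the kind of fast, flexible authentication the M2M volume described above demands.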

Let’s be clear: right now, no one is putting truly mission-critical or “bet your life” applications in the cloud. But in the coming era of the Internet of Things, that is a near-guaranteed eventuality, whether through intentional or unintentional actions.

As we build out the Internet of Things and slowly ease it first onto private clouds and later onto public clouds, we have no choice but to improve the core of the cloud or risk catastrophic consequences from failures. Because on the Internet of Things, no one can blame user error and simply ask a hotel air conditioner, an airplane, or a bank of traffic lights to restart its virtual server on the fly and reset its machine image.