Why do we encourage our children to meet people from other cultures? People pay lip-service to the idea that travelling, or at least communicating, internationally can broaden the mind, and be an important part of any education. But in any large multicultural mix there is also a tendency for “birds of a feather to flock together”, as people gravitate to a group that “speaks their own language” – both in the literal sense and in the sense of sharing similar cultural constructs and beliefs.

People who resist information technology – for example, those who still prefer putting pen to paper – often object to the ridiculous abuse of language that computer communication forces on the user. For example, the portmanteau word “username” appears on-screen without any question mark, and we are expected to understand that the device is not only asking a question, but also expecting the answer in a specific format.

The technophobe sees this as a warning sign of “the way things are going” – although it is really quite the opposite: the only reason any user needs to understand computer language is that computers have been too stupid to speak human language. And this, of course, is already changing: today’s smart devices increasingly try to listen and speak to us through everyday language.

So, if we are witnessing the growth of an Internet of Things (IoT) – where machines increasingly communicate directly with each other without a human intermediary – will the machines prefer their own company? And where would that leave us? Of course the very idea of machines “preferring” their own company is absurd in terms of today’s artificial intelligence (AI), but look at it this way: machine-to-machine (M2M) communications can take place at machine speeds, so – before any human has had time to “get a word in edgeways” – a population of machines could in theory complete a conversation of sufficient importance and complexity to initiate global economic meltdown.

Last year Stephen Hawking and a group of leading scientists said: “Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all. All of us should ask ourselves what we can do now to improve the chances of reaping the benefits and avoiding the risks.” The sort of risk that caught the public imagination following this statement was of an emergent super-intelligence that would arise to dominate and reduce humankind to slave status.

If that sounds far-fetched, consider those early AI projects that showed how surprisingly intelligent behaviour can emerge from populations of very simple elements obeying simple rules of interaction. While one or two ants will tend to wander around aimlessly, once the number increases beyond a certain threshold the ant colony as a whole begins to behave in an adaptive way that suggests remarkable survival “intelligence”.
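The point that coherent, purposeful-looking behaviour can emerge from trivial local rules is easy to demonstrate. The toy sketch below uses Conway’s Game of Life (not one of the ant-colony projects the text alludes to, but the same principle): each cell obeys two simple neighbourhood rules, yet the five-cell “glider” pattern behaves like a creature that crawls diagonally across the grid.

```python
# Emergence from simple rules: Conway's Game of Life.
# Each cell follows two trivial local rules, yet a coherent
# "glider" pattern moves across the grid as if with a purpose.

from itertools import product

def step(live):
    """One generation: a dead cell with exactly 3 live neighbours
    is born; a live cell with 2 or 3 live neighbours survives."""
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                counts[(x + dx, y + dy)] = counts.get((x + dx, y + dy), 0) + 1
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in live)}

# The classic five-cell glider.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = step(cells)

# After 4 generations the whole pattern has moved one cell diagonally.
print(cells == {(x + 1, y + 1) for (x, y) in glider})  # True
```

Nothing in the rules mentions movement, yet movement emerges – which is exactly why the collective behaviour of billions of simple connected devices is so hard to predict from the rules each one follows.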

The small possibility that some malevolent intelligence could emerge from the growing network of smart household gadgets makes great science fiction, but it obscures a far more immediate danger: that unexpected, unpredictable and “irrational” consequences might emerge from adding billions of relatively simple devices to our already complex Internet. The financial markets gave us a glimpse of what might happen in 2010, when high-frequency trading (HFT) systems contributed to the “Flash Crash”. Each HFT system was following its own set of rules, and they were inter-communicating via the markets at M2M speeds to form what was – in terms of future IoT scenarios – a relatively tiny “Intranet of Things”. But the results still came as a shock to the financial markets.

Already there are many more devices than humans connected to the Internet. According to IDC estimates, the number of devices capable of being connected is approaching 200 billion, and around 20 billion of them are already connected. So the danger is not so much the impact of any particular connection as the possibility of unpredicted responses or vulnerabilities emerging out of sheer complexity.

There is another, even more immediate danger. The very idea of intelligence suggests some ability to learn new behaviours: so what if the wrong people provide the teaching? The 2013 holiday season saw a smart, Internet-connected fridge sending out spam as part of a junk mail campaign that had hijacked more than 100,000 connected devices. The funny side was the idea that a smart fridge might turn criminal; the nasty truth was that a device created to perform a simple, useful task could be recruited into a criminal gang.

Whereas those HFT systems were highly sophisticated, the intelligence in the smart fridge is very limited – call such devices “naïve” and one can understand how easily they can be lured into a life of crime. Each new computer added to the Internet comes with some degree of malware protection already built into its operating system; things like smoke detectors, security alarms and utility meters come from a different culture. Traditionally all such devices were either autonomous units or else, if connected, they were on a closed, dedicated network.

Fire alarms were installed by one company, control and instrumentation networks came from a different vendor, the electricity meter was installed by the power supplier – and none of these networks overlapped. While computers and IT systems have for many years been fighting off outside attacks, none of these simple devices currently lining up to join the IoT have built-in defences – and it would be absurd to expect sophisticated malware proofing in a ten-dollar smart chip.

So the growing IoT includes a majority population that is inherently naïve and wide open to the lure of criminal involvement. The risk is not only that one specific function might be compromised – an attack on a vehicle-tracking system, say, could lead a secure van into an ambush – but also that the IoT might provide a weak link or point of entry in an otherwise strong security chain. Nobody noticed anything wrong with that smart fridge while it continued sending out spam, because it “kept its day job”.

This means that simply adding an Internet-connected control device to an existing IT network might open a door into an otherwise secure system. A couple of years ago there was an attack on a system designed to integrate a US electricity company’s IT network with its grid control. There was nothing inherently wrong with the system – it was highly sophisticated and had been in use since the late 90s – but it was never designed to connect to the Internet. That made it vulnerable and, sure enough, it was attacked.

What is especially disturbing about the IoT is not just its vulnerability but also that so many of its components have a direct, physical function. It is very inconvenient when a computer virus causes your PC to crash and lose your latest documents, but at least no-one is physically hurt. But if an attack on the IoT were to prevent a fire alarm from being triggered, cause a life-sustaining medical system to fail, disrupt air traffic control, or make the brakes fail on a connected vehicle – then lives and property would be endangered as a direct result of the attack.

This escalates the possibilities for serious criminal activity and opens new doors to terrorists and cyber war between nations. This was the sort of attack seen in 2010 when the Stuxnet worm closed down Iran’s Natanz nuclear facility: not by simply closing down a thousand centrifuges but by physically damaging them in a manner that would take weeks to repair.

This means that the IoT threatens us with sheer diversity as well as large numbers. At one extreme it will be connecting highly critical systems: industrial and utility grid control systems that could cause widespread damage or economic harm if breached; critical healthcare and remote medical devices containing sensitive personal data or responsible for life support; navigation and control systems for connected cars, air traffic control and so on. At the other extreme it includes a huge naïve population of low-cost devices: monitors, meters, wearable devices, simple switches for remote control of household lighting and other domestic gadgets.

Once these cumulative risks have been recognised, the challenge is to understand them. There are limits to the amount of complexity that even the cleverest human can predict – hence the surprises that can emerge in complex systems. So, rather than try to predict what might happen, the solution is to model the complex system accurately and observe how the model behaves, in order to gain understanding.

This is how today’s complex networks are already being tested – both under all sorts of everyday conditions and under extreme conditions or cyber-attack. The network is modelled, and realistic traffic and possible attack conditions are imposed on the model in order to see what happens. This can have several consequences: it can show that the system withstands the tested attacks, or it can reveal a weakness that can be traced and repaired, or it can simply reveal the system’s limits – so the operators can be forewarned of possible danger and design an appropriate damage limitation strategy.
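In miniature, “find the system’s limits by exercising a model” can look like the following sketch. It is a deliberately toy model (the gateway, its capacity and the traffic sweep are all invented for illustration, not drawn from any real test tool): a gateway drains a fixed number of messages per time step, and we ramp up the offered load – normal traffic plus a simulated flood – until the backlog stops clearing, which reveals the breaking point empirically rather than by prediction.

```python
# Toy model of stress-testing a network element: a gateway that can
# process at most `capacity` messages per tick. We sweep the offered
# load upward and observe where the backlog starts growing unboundedly.

def backlog_after(load_per_tick, capacity=100, ticks=50):
    """Queue length remaining after simulating `ticks` time steps."""
    queue = 0
    for _ in range(ticks):
        queue += load_per_tick         # traffic (or attack flood) arrives
        queue -= min(queue, capacity)  # gateway drains up to its capacity
    return queue

# Find the first load at which the backlog never clears.
limit = next(rate for rate in range(1, 200) if backlog_after(rate) > 0)
print(limit)  # 101: one message per tick beyond the gateway's capacity
```

The model is trivial, but the method scales: real network testers do the same thing with faithful traffic models and replayed attack patterns, locating the limit experimentally instead of trusting anyone’s intuition about a complex system.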

The same principles will apply to the growing IoT. The difference is one of scale and diversity, but the building blocks for testing an IoT are already there, and there are already specialist network testers with long experience of what could happen, and what sort of tests will be most needed.

The Internet of Things does indeed present a new challenge. But the networking industry has, for more than three decades, been gearing up to address exactly this sort of challenge.