Despite the technology becoming more advanced with every passing day, the debate over the dangers and ethics of robotics continues to rage. Tesla CEO Elon Musk regularly issues stark warnings about the destructive potential of AI. Conversely, for tech entrepreneur Tej Kohli, charity, philanthropy and positive impact all stand to benefit from a humanitarian vision of robotics.

His investment vehicle, Tej Kohli Ventures, seeks out projects in fields ranging from medical research to end-of-life care. If robots are to become an increasingly common part of our everyday lives, what can be done to ensure they behave ethically and safely?

The Question Of Robot Ethics

In the 1940s, when even the very earliest forms of robotics were still in their infancy, the concept of a code of robotic ethics was popularised by the science-fiction writer Isaac Asimov. Asimov’s Three Laws of Robotics proposed the following rules:

  • A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  • A robot must obey the orders given to it by a human, except where such orders would conflict with the first law.
  • A robot must protect its own existence as long as such protection does not conflict with the first or second law.

Despite being works of fiction, Asimov’s concepts have proven to be extremely influential in the world of science and technology – in fact, the term ‘robotics’ comes from his stories. But as the field of robotics has become more advanced, it has become necessary to treat these laws less as a thought experiment and more as a tangible, pressing concern: just how do we keep robots ethical and safe?

Is Science Fiction About To Become Science Fact?

At the University of Hertfordshire, a team of researchers have been working to solve the issue of robotic ethics and bring Asimov’s ideas into twenty-first century reality. Their goal is to develop an ethical framework that can be used to integrate robots into our society safely.

According to the team’s Professor Daniel Polani, Asimov’s laws form a solid foundation from which to start, but lack the necessary level of complexity and nuance. In particular, Professor Polani suggests that the laws could easily be misinterpreted by robots, as they are based on human language – which can be more ambiguous and slippery than we often realise. Words integral to Asimov’s laws, such as ‘harm’, are dependent upon their context. Who decides what counts as harm or protection? These ambiguities have major implications for the way the laws function.

Recently, this language barrier to robotic and AI development was illustrated when Facebook shut down an experiment after two AI programmes began creating their own syntax. Although the story was reported, somewhat hysterically, as evidence of secret robot plotting, the real implication was more mundane – the project’s aim was an AI that humans could communicate with, not one that could only communicate with fellow AI.

To counter this problem, the University of Hertfordshire team are working on Empowerment – a more complex, nuanced framework for programming ethics into a robot than Asimov’s laws. To formulate this framework, the team have turned their attention away from ethics and semiotics, and towards hard mathematics. Instead of trying to make robots understand the concept of ethics, or interpret the nuances of human language, the researchers aim to optimise the number of options available to a machine. For very basic robots, this could include only a handful of potential functions; for other, more advanced creations, it could involve a much greater number of human-like options.
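To give a flavour of the idea, Empowerment is formally an information-theoretic quantity – roughly, how much influence an agent's actions have over the states it can reach. The sketch below is a deliberate simplification, not the Hertfordshire team's actual model: in a deterministic toy world, an agent's empowerment reduces to counting the distinct positions it can reach within a fixed number of moves. The 4×4 grid and two-step horizon are illustrative assumptions.

```python
from itertools import product
from math import log2

ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0), (0, 0)]  # right, left, down, up, stay
SIZE = 4  # a 4x4 grid world (illustrative assumption)

def step(state, action):
    """Apply one move, clamping at the grid edges."""
    x, y = state
    dx, dy = action
    return (min(max(x + dx, 0), SIZE - 1), min(max(y + dy, 0), SIZE - 1))

def empowerment(state, horizon=2):
    """log2 of the number of distinct states reachable within `horizon` moves."""
    reachable = set()
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = step(s, a)
        reachable.add(s)
    return log2(len(reachable))

# A corner offers fewer options than the middle of the grid, so an agent
# told to "maximise empowerment" is naturally steered away from dead ends.
print(empowerment((0, 0)))  # corner
print(empowerment((1, 1)))  # centre
```

The appeal of this formulation is that nothing here requires the robot to understand the word ‘harm’: keeping options open (for itself, and in fuller versions of the idea, for nearby humans too) is expressed purely as mathematics.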

With a robotically assisted future seeming increasingly likely, the work being done by the team behind Empowerment offers a vision of how it can be implemented safely and securely. So despite the scare stories and the prophecies of doom, a robot that would rather save your life than exterminate us all could be closer than you think.