Why it’s time to flag up every time you use machine learning


The popularity of machine learning amongst businesses is growing dramatically, and for very good reason. Thanks to its ability to learn from the huge torrents of data now being produced by virtually every industry, machine learning is expected to add a substantial 10.3% to the UK economy by 2020.

Its potential benefits range from saving lives through faster cancer diagnosis and piloting driverless vehicles to predicting shoppers’ buying habits and speeding up mortgage and insurance decisions.

But despite all these benefits, the wholesale adoption of machine learning faces a significant stumbling block: the public has to trust it. And here sits the virtual elephant in the room.

Computers in the driving seat

While hospital patients, bank customers and vehicle passengers are more or less comfortable trusting their safety to human ‘experts’, our latest research shows that big sections of the public would think twice if a computer were in the driving seat.

Our research shows that 40% of the public find machine learning ‘confusing’, while a further 40% think it is ‘scary’. In fact, when our sample of 2,000 UK adults was given a definition of machine learning, the percentage of people ‘scared’ actually increased to over 50%. More dramatically, 12% even went as far as to say that machine learning could trigger the ‘demise of the human race’.

It’s clear that trust is a big issue when selling machine learning to a nervous public, but our research showed confidence was highest around long-established and well-performing tech. This included travel direction services, such as Google Maps and satnav (trusted by 38% of people), and music recommendation services, such as Spotify (trusted by 33%).

Life-influencing decisions

However, high-value, life-influencing interactions, such as bank transfers (11%) and legal advice (9%), attract far less trust.

This lack of trust threatens to prevent organisations across all sectors of the economy from reaping the full benefit of what promises to be a revolutionary technology. ‘Scared’ consumers are liable to steer clear of services driven by machine learning, creating a real barrier to adoption.

Faced with significant sections of the public who lack trust in, and are ‘scared’ by, machine learning, it would be understandable if organisations decided to keep quiet about their use of it.

Three-step approach to reducing anxiety

Instead, our research suggests that organisations should take a proactive approach by adopting three steps to reduce machine learning anxiety.

1. Flag up whenever machine learning is used to make important decisions that impact lives. Our study showed that increased exposure and hands-on experience make machine learning more familiar and ‘trustworthy’ over time.

2. Empower consumers. Give them input into how an algorithm reaches a decision and seek feedback once a decision is made. Our findings indicate that this collaboration builds trust.

3. Share information and give options. Explain how and why machine learning is being used in easy-to-understand language at a logical point in the consumer’s journey. Our research shows that, depending on the context, consumers would like a cookie-style machine learning pop-up or an opt-out function.

While educating and empowering consumers, organisations must also be transparent about the current limitations of machine learning and the importance of human supervision over this technology. Machine learning isn’t foolproof, just as the humans who design it aren’t foolproof. Machine learning and the outputs it provides are only as good as the available data.

One thing we can be sure of, though: humans simply cannot make sense of the huge datasets machine learning can analyse, and the benefits of well-executed machine learning significantly outweigh the risks. These facts lie at the heart of winning over a sceptical public.