Machine learning algorithms are complicated, but they should not be ‘mysterious’

26 Sep 2018

I'm a fan of classic fictional detectives (as illustrated by my previous blog post). One of my favourites, Father Brown, said:

"The modern mind always mixes up two different ideas: mystery in the sense of what is marvellous, and mystery in the sense of what is complicated." - The Wrong Shape by G.K. Chesterton

Unsurprisingly, he was talking about a murder mystery, but when I read this the other day, it struck me that he could easily have been talking about one of the mysteries of our age - artificial intelligence and machine learning algorithms.

The ubiquitous images of androids and terminators seem designed to make us afraid, confused or indignant about anything badged as an 'algorithm' or 'AI'. These terms also tend to give such tools an air of infallibility, perhaps beneficial for those whose business it is to sell them! Neither framing is desirable.

Machine learning algorithms are mysterious, not because there is anything miraculous or magical about them, but because developing and using them can be complex. Building them involves maths, statistics, computer science, law, ethics, social science, operational knowledge, and an understanding of the often multifaceted working environment into which they will be 'dropped'. Their 'outputs' - recommendations or predictions about something or someone - are probabilities, not certainties, and often say little about causation. Despite this, outputs can be expressed in too simplistic or confident a manner, leaving the human operator unable or unwilling to contradict or challenge them.
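To make the 'probabilities, not certainties' point concrete, here is a minimal sketch in Python using scikit-learn. The data and model are invented purely for illustration and have nothing to do with any system discussed in the report; it simply shows how a model's underlying probability can be flattened into a confident-sounding yes/no label:

```python
# A toy illustration (not any real policing system): the same model
# produces both a probability and a hard label, and the label alone
# hides the uncertainty.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Invented training data: one feature, binary outcome.
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([0, 0, 0, 1, 1, 1])

model = LogisticRegression().fit(X, y)

case = np.array([[3.4]])
prob = model.predict_proba(case)[0, 1]   # probability of outcome 1
label = model.predict(case)[0]           # hard yes/no label

print(f"Probability of outcome: {prob:.2f}")  # well short of certain
print(f"Label shown to operator: {label}")    # sounds definitive
```

The point is not the code itself: the single label on the last line is what an operator is often shown, while the far-from-certain probability behind it can disappear from view.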

Sometimes the consequences of machine learning algorithms will have minimal impact - predictions of people's behaviour used to help supermarkets stock up on strawberries, for instance! There are other circumstances when the impact could be more serious, such as where machine learning is used by the police to help decide whether or not to pursue a particular course of action.

It is this latter case that is the subject of a new report authored jointly by the Royal United Services Institute and the University's Centre for Information Rights. The report tries to shed light on what machine learning actually is, how it might be used in police decision-making, and how it can be properly evaluated. As the report advises, machine learning algorithms will require constant ‘attention and vigilance’; the machine can only go so far, and it is up to humans to interpret the results.

There could be considerable benefits for policing from the use of these new technologies. But no good will come of a rush to adopt. While complexity will remain, we should all be striving to remove the ‘mystery’ from machine learning algorithms.

Marion Oswald is a Senior Fellow in Law and the Director of the Centre for Information Rights at the University of Winchester. Find out more about her and Senior Lecturer in Law Christine Rinik's findings on the legal, ethical and regulatory challenges of using machine learning algorithms in police decision-making in their recent collaborative report with the Royal United Services Institute. Follow her on Twitter @Marion_InfoLaw.
