“Just what do you think you’re doing, Dave?”

When the technology we rely on makes mistakes, what should we do? It feels like a simple question with a simple answer: if the technology is wrong, it needs fixing, and then it will be right again. But this black-and-white view of technological decision-making – right or wrong – has passed its sell-by date. One paradox of our age is that the more computers do for us, the less we understand them.

It wasn’t supposed to be like this. The motif is that computers are either right or wrong, and that over time they will be wrong less often. Through literature and film, we have told and retold the story of the journey towards techno-perfection. In Iain M. Banks’s Culture novels, the running of the universe is handed over to hyperintelligent machines called Minds. The Terminator’s Skynet was given control over US military hardware precisely because it was felt a computer wouldn’t make mistakes.

“Let me put it this way”, explains the villain of 2001, HAL 9000, an artificial intelligence, “the 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.”

We have come to imagine the decisions made by computers today in the same way.

This might have made sense in the past. It was pretty obvious when a machine wasn’t doing what it was supposed to. A car should start, and if it doesn’t it’s broken. An alarm clock should wake you up, and if it doesn’t go off, it’s broken. Chuck some numbers into an equation or a model and if the numbers that come out the other end aren’t right, the equation is wrong.

We don’t see the shades of grey in the decisions machines make; we assume they are binary – either working or broken, right or wrong, one or zero. Someone cleverer than I will tell me that was never a good way of thinking, but it certainly isn’t a good way of thinking now. The decisions made by the algorithms governing so much of what we do aren’t binary. They are never simply ‘right’ or ‘wrong’, but sit somewhere on a probabilistic rainbow of good and better guesses.

If a machine-learning algorithm, for instance, pushes one news article higher than another in my Facebook feed or my Google results, it is impossible for me personally to tell whether it’s made the ‘right’ decision. If my sat nav recommends one route over another, I hope it’s ‘right’, but I am similarly blind to its decision-making process. Perhaps it took a new route to avoid traffic. Perhaps there are roadworks I didn’t know about. I have no idea. The decision-making process is utterly opaque, and therefore beyond serious evaluation. But I’ll follow it anyway.
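To see why the ‘right’ decision is unknowable from the outside, here is a toy sketch of a feed-ranking function. Everything in it – the signal names, the weights, even the idea that it’s a simple weighted sum – is my own invention for illustration, not how any real feed works, but it captures the problem: the decision collapses into one opaque number, and all the reader ever sees is which article came out on top.

```python
# A purely hypothetical feed-ranking score. None of these signals or
# weights are visible to the person reading the feed.
def rank_score(article):
    weights = {
        "predicted_click_prob": 0.4,   # a model's guess that I'll click
        "recency_hours": -0.02,        # older articles score lower
        "source_affinity": 0.3,        # how often I engage with this source
    }
    return sum(w * article[name] for name, w in weights.items())

a = {"predicted_click_prob": 0.8, "recency_hours": 3, "source_affinity": 0.5}
b = {"predicted_click_prob": 0.6, "recency_hours": 1, "source_affinity": 0.9}

# All I ever see is that one article appears above the other.
print(rank_score(a), rank_score(b))  # ~0.41 vs ~0.49: b wins, but not obviously 'rightly'
```

Neither ranking is ‘wrong’ here; swap the weights and the order flips. From the outside, there is simply nothing to check the decision against.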

Some of this is down to deliberate obfuscation. Computer programs are intellectual property, kept under lock and key in bombproof bunkers. But it is also a result of the technology itself. Machine learning, artificial intelligence and algorithmic decision-making will soon be so complex as to be beyond human understanding. Understanding exactly why A results in B will be impossible. It is commonplace to speak to technologists developing algorithms who themselves aren’t sure exactly how they work, let alone the sales teams pitching their ‘insights’ in PowerPoint presentations. As Jason Tanz writes in this great Wired piece, AIs are now so complex that “our machines speak a different language now, one that even the best coders can’t fully understand.”

That language is one of probabilities. At their core, AIs are making as good a guess as they can, based on the information they’ve been given. This is going to require a step change in how we think about tech-driven decision-making. It won’t be right or wrong, working or busted, but just probably (hopefully) good.
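A minimal sketch makes the point. The weights below are hand-picked rather than trained, and the spam_score model is entirely made up, but the shape is the shape of most modern machine learning: the raw output is a probability, and any crisp yes-or-no answer is something we bolt on afterwards.

```python
import math

def spam_score(word_counts):
    # Illustrative only: hand-picked weights standing in for a trained model.
    weights = {"free": 1.2, "winner": 2.0, "meeting": -1.5}
    bias = -0.5
    z = bias + sum(weights.get(w, 0.0) * n for w, n in word_counts.items())
    # Logistic squash: the output is a probability, not a verdict.
    return 1 / (1 + math.exp(-z))

# The model never says "spam" or "not spam"; it says "probably".
print(spam_score({"free": 2, "winner": 1}))   # ~0.98: a confident guess
print(spam_score({"meeting": 1, "free": 1}))  # ~0.31: a hedged guess

# Any binary answer comes from us, bolting a threshold onto the guess.
is_spam = spam_score({"free": 2, "winner": 1}) > 0.5
```

The threshold, not the model, is where the binary ‘right or wrong’ enters the picture – and that threshold is a human choice.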

This maps poorly to sci-fi’s ‘infallible machines’, and our trust in these systems ought to reflect that. Blindly following a sat nav that’s taking you into a traffic jam is not the end of the world, but when decisions on policing, disaster relief and healthcare are being made by an AI, it’s vital we remember that, at the end of the day, those decisions are still just good guesses based on whatever data we fed the machine.

There will be frustration, no doubt. Governments and citizens are increasingly desperate for explanations of why technology does the things it does. Unfortunately, these questions look set to become harder to answer, not easier. We may have to submit to not knowing, or look for new ways of evaluating and accrediting technology that aren’t just based on a transparent code base. With this will come a change in expectations, too: we will have to accept that the HAL 9000’s pretences to perfection will remain in the pages of science fiction.