Hello, Robot
Last year, Beauty.AI held an online beauty contest, but with a twist: it was “judged by robots”. Beauty.AI was going to be scientific. Rather than relying on fallible human judges, the beauty of the participants would be established by a computer. Unfortunately, when the results came out, its creators were horrified: the computer overwhelmingly favoured white contestants.
This decision is one of many now being made by machine learning – though a number are rather more successful – and there are plenty more: the computer I’m writing this on has already cleaned up some of my sloppiest spelling.
Here at the Centre for the Analysis of Social Media (CASM) – a collaboration between the think tank Demos and the University of Sussex informatics lab – we have worked with machine learning technology for the best part of six years. We have built it, experimented with it and turned it towards answering questions we might never have been able to answer before.
Our technology has identified digital cries for help and support during times of crisis, spotted eyewitness reports of corruption and vote-rigging from halfway around the world, and followed political campaigns and debates with an attention to detail a dozen analysts couldn’t replicate. It has learned to read German, French, Arabic and Yoruba.
Much of our work is called ‘semi-supervised’ machine learning: we teach our machines. We hold the hand of the computer as we lead it through reams of Tweets, comments and posts, trying to get it to understand the difference between one idea and another.
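To make that concrete, here is a minimal sketch of the general technique – not CASM’s actual pipeline – assuming scikit-learn is available. A handful of examples carry human labels, the rest are marked with -1, and the algorithm spreads the human labels to their unlabelled neighbours. The tweets and labels are invented for illustration.

```python
# A minimal semi-supervised sketch: a few human labels, many unlabelled
# examples, and an algorithm that propagates the labels outwards.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.semi_supervised import LabelSpreading

texts = [
    "You should be ashamed of yourself",  # labelled abusive (1)
    "Great point, thanks for sharing",    # labelled not abusive (0)
    "Nobody asked for your opinion",      # unlabelled
    "Completely agree with this",         # unlabelled
]
labels = [1, 0, -1, -1]  # -1 marks examples the machine must infer

# Turn raw text into numeric features the model can work with.
features = TfidfVectorizer().fit_transform(texts).toarray()

# Spread the few human labels across the nearest unlabelled neighbours.
model = LabelSpreading(kernel="knn", n_neighbors=2)
model.fit(features, labels)

print(model.transduction_)  # inferred labels for every example
```

With four examples the output is of course meaningless; the point is the shape of the process, in which every inferred label traces back to a judgement a human made.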
There is something oddly human about this kind of machine learning. It’s frequently frustrating. The machine takes time to learn enough to be useful, it makes mistakes, and it’s never perfect. It challenges us, too, telling us our own explanations are incoherent. But perhaps most importantly, a researcher training an algorithm like this influences it.
As with parents, or teachers, we impart a bit of ourselves to the technology we build. Our own beliefs and prejudices, the way we see, understand and categorise the world – all of it is passed on to our digital charge.
When a group of humans comes together to teach an algorithm, we disagree about what it should learn: should this Tweet be classed as ‘abusive’? We don’t all agree on what constitutes abuse. In one example, we presented five tweets to a group of 100 people at a conference. For some people, all five were abusive. For others, none were. If we can’t agree among ourselves, how on earth can we train a computer to decide?
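There is a standard way to put a number on that disagreement. The sketch below computes Fleiss’ kappa, which measures how far a panel of raters agrees beyond what chance alone would produce. The vote counts are invented to mirror the five-tweets example, not taken from our conference data.

```python
# Fleiss' kappa for a panel of raters: 1.0 is perfect agreement,
# 0.0 is no better than chance. Vote counts below are illustrative.
import numpy as np

# Rows: tweets. Columns: raters voting [not abusive, abusive].
# Each row sums to 100 raters.
votes = np.array([
    [12, 88],   # near-consensus: abusive
    [85, 15],   # near-consensus: not abusive
    [48, 52],   # essentially a coin flip
    [55, 45],
    [40, 60],
])

n = votes.sum(axis=1)[0]                               # raters per tweet
p_i = ((votes ** 2).sum(axis=1) - n) / (n * (n - 1))   # per-tweet agreement
p_bar = p_i.mean()                                     # observed agreement
p_e = ((votes.sum(axis=0) / votes.sum()) ** 2).sum()   # chance agreement
kappa = (p_bar - p_e) / (1 - p_e)

print(f"Fleiss' kappa: {kappa:.2f}")
```

A kappa near 1 would mean near-unanimity; the value here, roughly 0.21, is exactly the kind of weak agreement that makes ‘abusive or not’ such a slippery thing to teach a machine.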
The importance of this cannot be overstated. We tend to see the whims and prejudices of humans as secondary to the ones and zeros that govern the processes lurking under the shells of our computers. The opposite is frequently true: the decision a computer has made can often be traced directly back to the person who created it.
Worse still – like the butterfly effect – prejudices imparted into systems that operate at the scale artificial intelligences do can have their effects amplified many times over. Cathy O’Neil has written brilliantly about the prejudices written into recidivism algorithms. Scientists at Princeton and Bath have warned of AI replicating our unconscious biases. Andreas Ekström has spoken about the way developers (and even users) can warp the results of a simple internet search. After all, when was the last time you checked the 11th result on a Google search?
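The idea behind the Princeton and Bath finding can be illustrated in a few lines. Word embeddings learn associations from human text, and those associations can be probed with simple cosine similarity – the intuition behind the researchers’ association tests. The three-dimensional vectors below are invented stand-ins for real embeddings, which would come from a model such as word2vec or GloVe trained on billions of words.

```python
# Probing invented word vectors for a gendered association, as a toy
# stand-in for bias tests run on real embeddings.
import numpy as np

def cosine(a, b):
    """Cosine similarity: 1.0 means the vectors point the same way."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy embeddings; real ones are learned from vast amounts of human text.
vectors = {
    "engineer": np.array([0.9, 0.1, 0.3]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
    "he":       np.array([0.8, 0.2, 0.3]),
    "she":      np.array([0.3, 0.9, 0.4]),
}

for job in ("engineer", "nurse"):
    gap = cosine(vectors[job], vectors["he"]) - cosine(vectors[job], vectors["she"])
    print(f"{job}: association gap, he vs she = {gap:+.2f}")
```

In real embeddings, gaps like these simply reflect the text the model was trained on – which is to say, they reflect us.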
One word comes up time and again in CASM’s work on algorithms: transparency. And these are tough times for transparency: as algorithms and machine learning systems become more and more powerful, more and more enormous, more and more complicated, we see even less of their inner workings. Computer engineers are signing reams of NDAs. Technology companies operate out of military-grade bunkers. In some areas, computers are starting to write code themselves – yet another step of removal, an even blacker black box. Politicians are getting tetchy: earlier this year, the EU put up a ten million euro tender for somebody to help it monitor Google’s search results following a €2.4 billion antitrust fine.
Machine learning, artificial intelligence, algorithms – they have utterly reshaped our world, and we have seen just a tiny fraction of what’s to come. We will see transformations in the way we work, the way we fight and diagnose disease, the way we learn: there are brilliant people doing brilliant things with artificial intelligence. But the sceptics are right to be cautious and the doomsayers are right to be alarmed.
The machines are already powerful forces in shaping our lives, and their power is increasing. Transparency – the only way we might know why a machine makes the decisions it does – is going to be essential, but as algorithms get more complicated and more inscrutable, it might not be sufficient. Regardless, we must stay vigilant.