Our emotional responses to AI and the danger of a backlash
Rodney Brooks, one of the world’s leading Artificial Intelligence (AI) researchers of the last 20 years, has recently written an article, ‘The Seven Deadly Sins of AI Predictions’, pointing out that popular expectations for the rate of progress in AI research are running far ahead of reality. Hype has overtaken any rational assessment of the state of the field. Brooks identifies a number of reasons why people tend to overestimate the rate at which AI research will progress. These range from a failure to appreciate how factors other than the technology itself slow the adoption of anything new, through to overestimates of how well machines perform today and how quickly they might improve. Many people seem to regard films such as ‘Her’, ‘Ex Machina’ or even ‘Blade Runner’ as realistic fiction rather than the science fantasies that they are.
Brooks’ efforts to provide a rational explanation for the hype that pervades this subject today are exemplary, but they ignore the emotional undercurrent that is an important feature of our collective response to the new and unknown, an emotional ‘gut’ reaction described by Daniel Kahneman in his book ‘Thinking Fast and Slow’ and also perhaps analogous to John Maynard Keynes’s ‘animal spirits’.
“The difficulty lies, not in the new ideas, but in escaping from the old ones, which ramify … into every corner of our minds. … Most of our decisions … can only be taken as the result of animal spirits – a spontaneous urge to action rather than inaction, and not as the outcome of a weighted average of quantitative benefits multiplied by quantitative probabilities.” John Maynard Keynes, 1936
When faced with the prospect of Artificial Intelligence, we tend to respond with strong emotion. AI captures the collective imagination – it feels new, unknown, exciting and scary. Our reactions tap into a shared set of cultural beliefs and expectations dating back to the first Industrial Revolution, when we were confronted with a profoundly alienating new environment. The shock of the Industrial Revolution spawned two closely related movements in the late eighteenth and early nineteenth centuries: the Romantic movement and the Gothic Horror tradition. Our emotional reactions to AI are similarly strongly oriented either towards an excited imagining of possibilities (a Romantic response) or towards a strangely pleasurable sense of dread (Gothic).
“With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.” Elon Musk, 2017, in full Gothic mode
These culturally driven reactions are independent of the facts of the matter. Our emotional responses are driven not by the facts but by an excited imagining of the possibilities, so they can be formed readily by decision makers and the general population alike, without any need to acquire knowledge of the field. They can therefore be widely spread and shared without recourse to the (lately much-derided) expert.
Both forms of reaction, the Romantic and the Gothic, raise expectations of imminent realisation and resolution. Romantics looked to renewal through a revolution, while the suspense of a Gothic horror story was always resolved by the realisation of our fears. Dr Frankenstein threw the switch and created his monster. He didn’t apply for a new research grant to explore the results and new research questions raised by his last experiment.
Those of us whose responses to AI are driven by our emotions, whether excited and positive (Romantic) or fearful and negative (Gothic), share an expectation of major change in the near future – a resolution of our hopes and fears. And this is where the problem arises, because AI research, like most other fields of endeavour, moves forward gradually, albeit with occasional spurts. The process is, in short, a marathon not a sprint. There is no resolution – no Ava from ‘Ex Machina’ or K from ‘Blade Runner 2049’ – just a slightly more useful Google search engine each year and a somewhat more reliable face recognition machine at airport customs.
The resulting ‘failure’ of practitioners to provide an immediate resolution of the hopes and fears of decision makers and the public will lead to a backlash – one that some believe has already started – and it will take two forms: anticipation and reaction.
An anticipation backlash looks like this. Observers (inclined either to the Romantic or the Gothic view) foresee major impacts on society from imagined AI developments and move to prevent those changes before they take place, pressing for and putting in place regulatory, legislative and/or financial restrictions. Some such measures might be a good idea. Many recent innovations, most of them having little to do with AI, are having significant impacts on society, and there is a rational case for responding to those impacts. There is, however, a likelihood that restrictions of this kind will slow down research and limit the adoption of useful new technologies without due cause. There is even a danger that AI research becomes the scapegoat for every negative impact of recent technological innovation on society.
“I’m increasingly inclined to think that there should be some regulatory oversight, maybe at the national and international level, just to make sure that we don’t do something very foolish.” Elon Musk, 2017, again.
We know what a reaction backlash will look like because it has already happened twice before in the field of AI research. In the early 1970s, and then again in the late 1980s, periods of hype and irrational excitement about the prospects for Artificial Intelligence were ended by dramatic cuts in research funding and corporate investment after expectations of major advances were not met. A failure to meet irrational expectations led to irrational condemnation. The reaction backlashes of 1973 and 1989 were so severe that a new term was coined to describe the devastation to research that followed: AI Winter.
“The AI Winter … caused the extinction of AI companies, partly because of the hype over expert systems and the disillusionment caused when business discovered their limitations.” AI Expert Newsletter, 2005.
When emotions drive our judgement, people can be damned when they succeed as much as when they fail. When new technologies actually succeed, the resulting improvements are seen as “taking away jobs” rather than as improving productivity, despite widespread recognition that poor productivity growth lies at the heart of Western economic problems.
“Flying drones putting workers out of a job – Flying drones and robots now patrol distribution warehouses … It is driving down costs but it is also putting people out of work: what price progress?” BBC News website, October 2017
AI research is making exciting progress, and many of the resulting technologies are enriching our lives and increasing productivity. Machines can now recognise our faces and respond to simple verbal commands. Like all new technologies, some of these advances raise issues that need to be addressed. But the state of research falls far short of the excited imaginings of Hollywood. It is time to think with our brains, not our emotions.