Trusting the Data: How do we reach a public settlement on the future of tech?


What role do we want technology to play in our lives? In popular debate, this question divides into two polar extremes: those who see us living better, healthier, technology-assisted lives, and those who fear a Black Mirror-style digital dystopia. Yet after the rapid digital acceleration of the last year – from retail, leisure, health and community services having to shift online, to the development of contact-tracing apps to tackle Covid-19 – questions about what responsible technology looks like are more important than ever.

Working with BT, Demos has asked people across the UK what they think: how new technologies can be designed to protect and promote individual rights as well as the public good, and what oversight and governance are needed to guard against unintended consequences or abuses. We found that attitudes are not as clear-cut as the polarised debate suggests.

We asked a nationally representative sample of over 1,000 adults to take part in a survey conducted on Polis, an open source deliberation tool. This enabled people not only to vote on statements we submitted to the conversation, but also to submit their own statements for others to vote on, covering topics ranging from surveillance to healthcare to targeted advertising. Polis also groups respondents together based on the similarity of their responses, allowing us to identify areas of consensus and division among people with different attitudes. We found that the public divides into three groups who balance the benefits and risks of technology very differently.
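
To give a flavour of how this kind of grouping works, here is a minimal sketch in Python. This is illustrative only, with made-up votes – it is not Polis's actual pipeline, which applies dimensionality reduction before clustering. Each participant is a row of votes (+1 agree, -1 disagree, 0 pass), and rows are clustered with basic k-means so that people who vote alike end up in the same group:

```python
def cluster_votes(rows, k, iters=20):
    """Group participants by the similarity of their votes (toy k-means)."""
    # Deterministic initialisation: the first k distinct rows become centroids.
    centroids = []
    for r in rows:
        if list(r) not in centroids:
            centroids.append(list(r))
        if len(centroids) == k:
            break

    def nearest(r):
        # Index of the centroid closest to this participant's votes.
        return min(range(k), key=lambda i: sum((a - b) ** 2 for a, b in zip(r, centroids[i])))

    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for r in rows:
            clusters[nearest(r)].append(r)
        for i, members in enumerate(clusters):
            if members:  # recompute centroid as the mean vote per statement
                centroids[i] = [sum(col) / len(members) for col in zip(*members)]
    return [nearest(r) for r in rows]

# Hypothetical votes on three statements (+1 agree, -1 disagree):
votes = [
    [ 1,  1, -1],  # two participants who vote alike...
    [ 1,  1, -1],
    [-1, -1,  1],  # ...and two who vote the opposite way
    [-1, -1,  1],
]
groups = cluster_votes(votes, k=2)  # like-minded participants share a label
```

With real Polis data the vote matrix is far larger and sparser, but the underlying idea is the same: attitude groups fall out of voting patterns rather than being defined in advance.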

The Conflicted

This group feels the most strongly about the use of personal data – but in two conflicting directions at once. They are very supportive of using data to improve public and private services, but are extremely worried about the risks, potential for abuse and unintended consequences of data-sharing and new technologies.

The Concerned

The largest group in the study, the Concerned view new tech and uses of data as bringing only risks, with very little upside. 

The Nonchalant

By contrast, the Nonchalant group are not especially worried about how their data is used, and think there could be upsides to new technologies. 

Find out more about the different groups here.

So how do we move forward, with three disparate groups who disagree not only about many of the problems but also about the appropriate solutions? Using Polis, we can highlight areas of consensus between these positions.

There are some areas of clear agreement – where there was majority support not only in our sample overall, but within each attitude group. The need for better education for children about data uses, the need for technology to be made accessible to all, and support for government oversight of how companies use people's data all attracted agreement across groups, and provide a framework of existing consensus on which further debate about uses of technology can be built.
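
That consensus rule – majority support overall and within every attitude group – is simple enough to express directly. A minimal, illustrative sketch (hypothetical votes, not Demos's analysis code):

```python
def majority_agrees(votes):
    """True if more than half of the votes are agreement (+1)."""
    # Votes: +1 agree, -1 disagree, 0 pass.
    return sum(1 for v in votes if v == 1) > len(votes) / 2

def is_consensus(votes_by_group):
    """A statement is cross-group consensus only if the whole sample
    AND every attitude group show majority agreement."""
    all_votes = [v for group in votes_by_group for v in group]
    return majority_agrees(all_votes) and all(majority_agrees(g) for g in votes_by_group)

# Hypothetical votes on one statement from three attitude groups:
supported = [[1, 1, -1], [1, 1, 1], [1, -1, 1]]   # every group has a majority in favour
contested = [[1, 1, 1], [1, 1, 1], [-1, -1, -1]]  # overall majority, but one group objects
```

The second example shows why the within-group condition matters: a statement can command an overall majority while one attitude group is unanimously against it, which is precisely the kind of division this approach is designed to surface.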

None of these groups is wholeheartedly supportive of technological progress without caveat. Those who most strongly believe that new uses of data and tech can serve the public good don't trust the private companies that provide that technology to use it properly, while those who aren't actively worried about their data still aren't happy about some of the ways it is used.

What we can do

Taken together, these findings speak to a crisis in trust: people feel disempowered, uncertain or anxious about how their data is used and how it will affect them – and on the flip side, people are not trusted to make their own decisions about their data. Building public understanding of data uses and new technologies, and building trust, are both essential, and the two can't be separated: building trust relies on increasing understanding, so that people can judge for themselves what is or is not worthy of that trust.

How can this understanding and trust be built? Past experience shows it cannot be through half-baked add-ons to systems that are already fundamentally mistrusted. Take the introduction of 'cookie consent' on websites, which nominally empowers users to control what personal information they allow to be shared. In practice, consent banners are designed in obfuscatory ways: they are a confusing annoyance to users that still permits widespread collection of personal data, rather than a genuinely empowering paradigm shift.

For genuine change, we need approaches to technology and data uses that are not just top-down, imposed on people to whom the uses then need to be explained and ‘agreed’. Thin consultation procedures need to be replaced by meaningful co-design, so that new technologies are shaped by a wide range of people with different needs and experiences, including those at risk of discrimination and the digitally excluded. And where the development of new technologies poses a significant risk to human rights, that development itself should be subject to strict regulation and democratic oversight, which takes into account how people will actually use a system and the impact that has on their rights. 

Neither the utopian nor the dystopian vision of the future is guaranteed or inevitable. How far citizens can be involved in a settlement on responsible tech will determine both the efficacy and legitimacy of new technologies.

See the full slide deck here.