Can the UK open tech’s black box?
A Demos response to the CDEI report on the future of online targeting.

As the government’s new AI adviser calls for overhaul of social media regulation, is greater transparency the solution to online harms?

Governments around the world are ramping up their efforts to reshape the Internet. Fears around the development of the web are mounting with every data protection scandal, every story of online abuse and exploitation, and every murky-sounding company’s day in the headlines. Despite this rush to regulate, we must work out how digital regulation can protect and promote human rights in practice and not just in principle.

Data, targeted advertising and other algorithmic decision-making systems – the processes by which platforms allow us to navigate what seems like an incomprehensible amount of information – are now squarely in the spotlight, accused of manipulating, misleading or even radicalising their users.

Policymakers are looking to respond, but there remains a major hurdle: how much do we really know about what’s going on online? Unpicking the impact of these systems on society requires access. We need to understand that these spaces are more than just the sum of the content posted to them.

Specifically, we need computational access: regulators, civil society and the research community need the ability to observe and understand what takes place in these vast new public spaces. Balancing this with processes to ensure that such oversight is ethical and democratically accountable is one of the major challenges facing any future regulator.

Some of the answers might be found in a new report by the Centre for Data Ethics and Innovation (CDEI), the UK government’s independent advisory body on the impact of data-driven technology and artificial intelligence. This week it published its first set of formal recommendations to the government, which sets out the foundations of a solution to this problem.

The recommendations include:

  • Compulsory advertising transparency archives, 
  • Giving a new Online Harms regulator information-gathering powers, and 
  • A code of practice for online targeting. 

Like the government’s Online Harms White Paper, they emphasise a focus on systems over content, and responsibilities over general restrictions. They highlight the importance of data access and transparency. Like many researchers studying the web, we at CASM have repeatedly faced a lack of access to comprehensive data from multiple platforms. And when investigating the so-called ‘online harms’ – from harassment to extremism to misinformation – gaps in evidence can lead to bad policy. We need to be able to take a systematic approach, which means we need systematic access to data.

After the Cambridge Analytica scandal, many social media companies pulled up the drawbridge on data. This trend has slowly been changing, with the creation of political advertising transparency archives. But these current advertising transparency attempts, where they exist at all, are incomplete, inconsistent and often incoherent.

In addition, they could be shut down at any time with no recourse. A glaring example of the failures of the current approach was the disappearance of 74,000 adverts from Facebook’s advertising transparency archive just two days before polls opened in the 2019 UK General Election. This left researchers and journalists in the dark and severely hampered their ability to hold politicians to account during a crucial part of the campaign.

Access to data has always been haphazard. But insight into how the underlying systems work has been almost non-existent. Here, self-regulation has truly failed. Take Facebook’s Social Science One, which, nearly two years in, is still not delivering the access to data it promised.

So CDEI’s insistence on going further than advertising transparency alone is an important step forward. It recommends that a democratically accountable regulator should be able to examine platform data, and that the regulator, not just the platforms, should be able to grant independent researchers computational access to data on targeting systems. This is crucial to bringing independent oversight to these opaque systems and diagnosing the true causes of systemic harms.

Will regulation of online targeting ever come to pass? The government has set out ambitious plans in its Online Harms White Paper, and has recently shown willingness to go toe-to-toe with the US on tech, for example on the digital services tax.

With concerns growing about abuses of online targeting systems, it seems inevitable that governments will act. Many would say this is overdue. But in the rush to regulate, it is vital to ensure that regulation retains and protects the freedom of those who use the web. Britain has an opportunity to establish standards of transparency and accountability founded on human rights and democratic principles. In doing so, we have a chance to set global standards on digital regulation and play a leading role within the international community by championing evidence-based policy for an ethical Internet.