Have online platforms passed the self-regulation test in the Covid-19 crisis?

“I have flagged a small number of fake news items (around 5G towers) on social media platforms and no action has been taken.” Female, 30-39, South West England

Covid-19 misinformation has been widely recognised as a serious challenge to the public health response to the crisis. As people’s internet usage has increased during lockdown, fears about harmful content online have also escalated, with new forms of online fraud, online abuse and misinformation adapting to the Covid-19 context. These can pose serious threats to individuals’ health, as well as undermining the broader public health response by encouraging people to believe false claims about the origins, transmission, prevention or cure of Covid-19.

In response to this crisis, we have seen a more concerted and collaborative effort from online platforms to tackle Covid-19 misinformation than on other issues: making authoritative information more visible, labelling or removing Covid misinformation, even when posted by world leaders, and pushing accurate health information to those who have been exposed to misinformation.

However, the problems with these self-regulated systems remain. Platforms are still not enforcing their terms consistently, and often act only reactively, in response to being called out. Transparency is also lacking: senior platform representatives from Facebook, Twitter and Google were summoned to a Select Committee hearing after an initial round of questioning was deemed to have provided inadequate answers about how they are tackling Covid-19 health misinformation. Respondents to our Renew Normal survey expressed many concerns about misinformation being spread by government, the media and on social media, and shared feelings of not knowing which news sources to trust and which not to.

And the Covid crisis has highlighted the flaws in a system of moderation that can easily be overwhelmed. The furloughing of content moderators has meant platforms have had to rely more heavily on automated systems, and there are widespread concerns that this will lead to more moderation mistakes and excessive takedowns of content. Other forms of online abuse are also being removed more slowly as moderators struggle with their workload. And although there has been more action on harmful misinformation, platforms have shown little willingness to engage with the problem of harmful content more broadly or in the long term, as demonstrated by Facebook’s continued failure to tackle hate speech online, which has led to a mass advertiser boycott.

This crisis may be seen as an experiment in platform self-regulation: a test of whether, at a time of global need, online platforms can adequately keep the online spaces they control safe and healthy for users. If so, it is already clear that the experiment has failed.

As we move on to the next stages of Renew Normal, we’ll be shining a spotlight on many of the issues you raised through our survey. Keep up with the latest news by subscribing here.