The Eternal Optimist: why we shouldn’t just dismiss Ofcom’s new role in regulating social media

Last week the Government announced that it plans to appoint Ofcom as the new online harms regulator. The regulator will have powers to act against tech companies if they fail to comply with duties to remove illegal content from their platforms, or fail to uphold a duty of care to protect users from legal but harmful content.

Negative reactions across the media and social media included: that attempting to regulate social media is unacceptably paternalistic; that this is the start of a social media Big Brother police; that the government will shut down free speech; that the tech companies will shut down free speech; and that Ofcom will be at worst biased and rights-infringing, and at best simply ineffectual, unable to cope with its gargantuan task. The picture being painted is twofold: on the one hand, a team of Ofcom moderators desperately combing through YouTube videos, trying to find which ones could be classed in some vague, indefinable way as maybe harmful or offensive to some people, and getting them taken down immediately; on the other, a toothless Ofcom bureaucrat emailing Facebook to say it needs to be doing more, receiving a ‘Thank you for your inquiry’ response, and giving up in despair as online hate spirals out of control.

I exaggerate – but that’s the problem with the debate around this issue. Critical analysis of government initiatives is vital – so often, what was promised as a panacea turns out to cause more problems than it solves, and what claims to promote rights ends up undermining them. But the fact that there are details to be sorted out, problems to be overcome, and issues that this new regulator will not solve doesn’t mean we should dismiss it out of hand as doomed to fail.

It is a truth universally acknowledged that, to put it mildly, there is a lot of really bad stuff online – from content which is already illegal (such as terrorist content), to abuse and harassment, to content promoting self-harm. The question is what we do about that.

We already do not – and literally could not – operate in these spaces unconstrained by the choices the platforms have made about how they work. The length of what we say; who we can say it to; who can read or access it; what we can or cannot say – all of this is already constrained by platforms, simply by virtue of how they are set up. And for good reason – without any constraints at all, we end up with platforms like 4chan, notorious as a breeding ground for extremism, radicalisation, the promotion of violence and hate speech. An Online Harms regulator will not be introducing constraints where there were none. Rather, it is intended, at the very least, to ensure that the constraints which are imposed are transparent, and in the interests of users and the promotion of their rights, rather than in the interests of platforms and the promotion of clicks and retweets.

And indeed, tech companies already take down content; they already have terms of service, allow users to report breaches of those terms, and have design features to help reduce the incidence of harmful content online – think of Instagram’s ‘nudges’ suggesting people rethink potentially bullying messages; Twitter’s option for users to control who can reply to their tweets; or Pinterest’s ‘breaking’ of its search engine so people can’t find anti-vaxx misinformation on its platform. But currently, it’s up to the platforms what they do to deal with harmful content, and how far they really go to enforce their own rules. There is no democratic oversight of the steps being taken to deal with these problems – problems which are inflicting significant harm on people and preventing their participation in the very spaces hailed as ‘democratising discourse’.

Ofcom is not taking over the role of content moderator from Twitter, sifting through and banning thousands of rude tweets to prevent one which crosses the line into abuse. Rather, its new powers should mean it can demand evidence on the systems and processes platforms use to tackle online harms; penalise failures to take meaningful action to deal with the problem; and give users more power to understand how they can act when they see, or are the subjects of, harm.

This is not to say that, as it stands, the government proposal is perfect. Far from it. With its lack of defined scope, it raises more questions than it answers, and concerns about over-reach or under-regulation are by no means unwarranted. But this is recognised – and we should see it as an opportunity, not a failing. There is a clear path for civil society, academia, industry, and the public to pool their collective knowledge and experience and push the Government towards setting up a regulator that truly is effective in promoting users’ rights; one that could set an international standard for how democratic societies tackle these issues. Just pointing out the flaws in the plan isn’t sufficient – we need to demonstrate what kind of Internet we want to see; elaborate the values it should be founded on; build the evidence that will allow us to realise those principles; and, in doing so, resist an Internet that more malign forces would wish to see.