Bursting the democratic ‘bubble’

After two years, the UK Government has finally put forward its draft Online Safety Bill, which would empower a regulator, Ofcom, to compel tech companies and social media platforms to fulfil a ‘duty of care’ to their users and to put in place processes to reduce and remove harmful content online. Oliver Dowden declared in the Telegraph on Tuesday: ‘If it’s illegal, platforms like Facebook and Twitter will have to flag and remove online abuse quickly and effectively, or face the consequences. The same goes if it breaches their terms and conditions. No more excuses.’

But he went on to say: ‘The last thing we want is for users or journalists to be silenced on the whims of a tech CEO or woke campaigners…We’ve also placed a protective bubble around journalistic and “democratically important” content…The largest platforms will also have to protect posts on areas of political controversy, and companies won’t be able to discriminate against particular political viewpoints.’

In the language of the legislation, this is one of the key parts of the Bill, and one of the vaguest.

The powers to compel action from platforms, both when they fail to act on abusive speech and when they over-censor people’s speech, are needed. They reflect an explicit recognition that the right to freedom of expression categorically does not entail the right to abuse others. Protections for speech, though, are also a crucial safeguard: against, for instance, a platform over-censoring speech critical of the government for fear of facing stricter legislation.

But as currently framed in the Bill and in the Minister’s statements, this sets off multiple alarm bells. How this ‘protective bubble’ around content deemed ‘democratically important’, or, as Dowden suggests, ‘politically controversial’, can operate in practice while still protecting users from abuse is opaque. It suggests that if someone is being abused over something political, whether a policy, a campaign or a ‘live political issue’, that abuse will be expected to be permitted under a platform’s terms of service: hardly the zero-tolerance approach to abuse the Bill seems to be aiming for.

Cordoning off certain topics or viewpoints in a ‘bubble’, protected no matter how harmfully they are expressed, is not a democracy-enhancing strategy. Democracy more urgently needs online spaces where people can speak out without fear of violence against them, and where disinformation campaigns cannot run rampant to stoke division and fear. And violence and disinformation are often inherently linked to political debate: from gendered disinformation campaigns which attack women for participating in public life, to misleading or false stories aimed at stirring up anti-immigration sentiment, the overlap between ‘harmful abuse online’ and ‘speech about a politically controversial viewpoint’ can be significant.

The worry that claims of ‘democratic importance’ will be used to shield harmful content from action is not unfounded. We’ve already seen proposed Government legislation aimed at defending harmful speech in universities from critique or censure on grounds of ‘free speech’. We’ve seen a similar move in the US, where countless baseless accusations of censorship of political viewpoints were used by a government to attack social media platforms when they took action against extremism and abuse on their services. At the same time, we’ve seen the lack of effective action against such extremism online play a contributing role in the storming of the Capitol. And we know that extremists often claim to be ‘citizen journalists’ to bolster their credibility and shield themselves from critique.

There will inevitably be judgements to make on speech which is political but harmful: we’ve already seen platforms having to make decisions about state leaders who post dangerous content. But how will a regulator make these decisions? Dowden says that ‘woke campaigners’ will not ‘silence’ people online: does that mean those who campaign for greater protection from abuse online will be barred from testifying about what content harms the communities they represent? Or will these decisions be made with the interests of those in power in mind?

A liberal democratic vision of the internet must do both: protect diversity of opinion, empower people to speak up and speak out, and enable freedom of thought, expression and association, while robustly tackling speech when it becomes abusive, violent, hateful or dangerous. This is not something that can be achieved simply through clearer terms of service or a more accurate algorithm. It is an area where cross-party consensus is crucial; where involving and co-designing with the people most affected by the harms in question must be the starting point, not an afterthought; and where data and evidence on the outcomes of different platform actions against harmful speech online are vital. This will be the challenge faced by MPs in the pre-legislative scrutiny of this Bill over the coming months, and by Ofcom over the coming years of enforcement. We cannot go in assuming that these tensions will sort themselves out: we need a sincere, genuine effort to engage with when we may, and when we must, limit speech online.