Government regulation and political speech online are uneasy companions: discussion of how to regulate the online world frequently runs up against two inevitable yet clashing truths. One: that doing nothing about the systemic harms in online spaces, from rampant abuse to biased moderation systems, means that people – especially members of marginalised groups – are driven out of engaging freely in online spaces and political discourse. Two: that a government getting involved in deciding what counts as legitimate and illegitimate political speech online opens the door to greater censorship and potential interference with political processes.
The Bill’s solution for how to square this circle is to have protections in place for ‘democratically important content’. While platforms will have to have systems in place to tackle illegal content, and the largest ones must also enforce their terms and conditions on harmful content, they will also need to balance those systems against the importance of the free expression of content of democratic importance. The Government has said that this is not an absolute protection, but a balancing test that must be undertaken, in which ‘common sense’ would help dictate the correct outcome.
The issue with these protections is that they are rooted in the view that political speech online currently occurs freely. On this view, any intervention which might change how political discourse occurs must interfere with an existing free and 'robust' democratic debate, and so must be guarded against, or mitigated.
What these clauses risk doing is simply entrenching the status quo: calling on platforms essentially to 'leave political speech alone', to leave it as it currently is. As it is, however, political speech online all too often spirals into disinformation, targeted hate, conspiracism and discrimination. And this doesn't simply happen organically, or freely: it is encouraged, incentivised, facilitated and amplified by systems designed by corporations without public consent or involvement. Meanwhile, what counts as political speech is essentially content about government policy or political parties. This effectively cements the role of government and political parties in determining what speech should be left alone.
And this poses a technical challenge as well as a political one. The way these protections and exemptions would work in practice presupposes an enforcement regime in which platforms hold each piece of content up to the light and examine it carefully to determine all of its complex nuances. That is simply not how platform moderation works.
Our report shows that this approach – 'find the really bad content, take it out, and leave everything else as it is' – is not a workable way of tackling harms in online discourse. Instead of clear examples of Good Speech and Bad Speech, we see patterns and narratives being amplified, distributed, facilitated and scaled: legitimate and necessary scrutiny of politicians goes hand-in-hand with conspiracism; one news story about a woman politician sets the stage for widespread online misogynistic abuse; technical features of platforms are used to piggyback onto mainstream political conversations in search of a wider hearing for extremism and disinformation.
The environment that content exists in matters as much for the nature and health of political discourse as the content itself. Thinking of online spaces as free marketplaces of ideas, where too much harm-reduction risks squashing our democratic freedoms, places far too much faith in how far those freedoms meaningfully exist in these commercial environments in the first place. Consider a library which advertised itself as a general public community library and stocked a whole range of books, but put the general fiction and non-fiction away in the basement and instead displayed, for the easiest access, a host of extremist texts. A bookshop which, instead of organising books alphabetically or by genre, organised them by shock factor. A co-working space where, every time one person offered another some advice or support, ten people walked over and screamed abuse at them. None of these are spaces conducive to genuine free and open democratic discussion.
But the way these spaces are built is not an inevitability. How to build them better is a question to which we have many partial answers, but we need more. We should be asking: who currently controls the shape of democratic speech online? How do we redistribute that power? Instead of asking 'how do we protect democratic speech online?', we should be asking ourselves 'how do we build democratic spaces online?'