The future of Legal but Harmful remains uncertain

After a long and anxious summer for those of us working on tech policy, the new DCMS Secretary, Michelle Donelan, announced that the Online Safety Bill would be returning. The future of the Bill became uncertain when a combination of Parliamentary timetabling and point-scoring in the Conservative leadership race derailed its previously smooth journey through the Commons. The headline from all this is that the controversial ‘legal but harmful’ clauses in the Bill will be altered to better protect freedom of speech – but how they will be changed remains unclear. Read our Head of CASM’s take on what this could – and should – look like.

‘Legal but harmful’, or ‘lawful but awful’ as it is often known outside the UK, has become one of the most contested concepts in digital policy. The basic idea is fairly straightforward: there are many harms – disinformation, abuse and bullying, dangerous health misinformation – that people face in online spaces which are not illegal. The more contested view is that this harm is substantial enough that ‘something’ should be done about it in legislation.

The Online Safety Bill, currently going through Parliament, includes ‘safety duties’, which are designed to tackle content that is ‘harmful to adults’.

These safety duties mean that platforms have to do risk assessments against a list of particular harms (such as assessing how much of a risk online abuse on their service is for users) and set out in their terms of service what, if anything, they will do to content that poses such a risk (this could include content moderation or curation). Then they must apply those terms consistently. 

Critics say that this is tantamount to the Government censoring legal speech online, and that it will set a dangerous precedent because it means platforms will act to take down legal content that the Government has determined is unacceptable. The Government’s response is that the Bill does not specify what platforms must do: all they are required to do is say what they do, and then do it consistently. The safety duties are also accompanied by a variety of exemptions and exclusions, seemingly intended to mitigate the risk of over-moderation (though, we have argued, likely to backfire).

What might the changes be?

This is all speculation at this stage: we will have to wait and see. But here are a few options for what the new DCMS Secretary could do.

1. Get rid of all safety duties applying to content that is harmful to adults (i.e. ‘scrap legal but harmful’)

This would be the option most likely to appease critics of the Bill who are worried about freedom of expression – it would be a shortcut out of all the debates about scope and creeping censorship, and would leave the structure of the broader Bill intact.

This could be politically difficult: the Government, which has promised that the UK will be the safest place in the world to go online, would effectively be washing its hands of doing anything about vast swathes of online abuse. (This political difficulty might be mitigated, though, by the fact that Donelan has said the Bill will retain the separate proposed duties concerning children.)

And to do so would be a huge missed opportunity. 

Much of the strength of this regulation is that it brings transparency and oversight not just to what platforms are doing, but to what they say they are doing, and to what users can expect from these spaces. If oversight of how platforms may be amplifying or encouraging legal online harms is removed from the regulation, the effect will be to give platforms free rein. It would amount to an admission that these sorts of harms are just too difficult for regulation to worry about – and platforms would feel even less compulsion to act.

Much of this debate has been captured by the idea that ‘legal but harmful’ will be used by people to demand the removal of any and all individual posts that they find personally offensive. If something is serious enough to require intervention, critics of ‘legal but harmful’ argue, then make it illegal.

In some cases, this could be an effective approach, but in others, making new forms of speech illegal just so that they can be tackled online would be a disaster for free speech. Lowering the threshold at which someone can be prosecuted for what they have said is a much greater threat than incentivising platforms to take something down. If we as a society want to tackle things like disinformation, which we know causes real-world harm, making it all illegal is not the most rights-preserving way to go about it.

2. Increase the strength or number of the exemptions and exclusions

The Bill currently seeks to mitigate over-zealous moderation through requirements on platforms to take into account the importance of the free expression of democratically important and journalistic content when making content moderation decisions. These aren’t hard and fast rules – they don’t require that all such content stay up – but they do have to be factored into the balancing equation of platforms’ systems and processes. This is in addition to the general media exemption, and the temporary ‘must-carry’ requirement for news publisher content added to the Bill earlier in the summer, which prevents platforms from taking down such content before an appeal.

This would be a worsening of the status quo, not an improvement. The Bill places fairly minimal requirements on platforms to enforce their terms and conditions consistently; incentivising them to make exceptions for certain people, or for discussion of certain types of content, risks requiring platforms to do even less than they currently do, and incentivising them not to take down content which is in clear contravention of their terms of service.

3. Clarify the scope of the safety duties 

There are valid concerns about how the Bill may incentivise over-moderation of legal content. Much of this has been amplified by confusion over what the Bill actually requires and what it permits: the relationship between priority harms, various kinds of treatment of content, exceptions, exemptions and so forth. This could be resolved by redrafting these clauses to set out much more simply that platforms must have, and enforce, clear and accessible terms and conditions setting out what is and isn’t acceptable speech on their services.

This is not a threat to free speech. If we want our social media platforms to be functional, it is not tenable to insist that no legal speech can ever be taken down or demoted. The only reason we are able to navigate the morass of content online is moderation and curation. Even setting aside any notion of harm, different online communities want to talk about different topics, with different norms, in different ways – and this freedom of association should be protected. So given that some degree of moderation will need to occur, being clear and consistent about it is a boon for free speech, not a threat.

Indeed, one of the major complaints from all sides is that platforms do not consistently enforce their current terms: it is entirely unclear why some people are blocked for speech which clearly doesn’t violate any policies, or why others report clear instances of harassment and abuse and see no action taken.

4. Focus on systems not content

The best – and sadly least likely – outcome would be that the legal but harmful duties are reformulated away from types of content and content moderation, towards the ways in which platforms themselves shape online harms. The effects of ‘legal but harmful’ content become a significant worry at scale – and that scale is the result of platforms amplifying and facilitating more harmful discourse over less harmful discourse.

There are elements of this in the Bill that could be brought out much more strongly. For instance, risk assessment duties could be strengthened to require the publication of more information about how platforms’ systems and processes affect users, and about what mitigations are in place for the harms those systems exacerbate. Requiring terms and conditions to be enforced would remain important, but only as a downstream part of de-risking systems and processes. This places responsibility for tackling the exacerbation of online harms on the platforms, rather than on individual users.

But even if the changes to the ‘legal but harmful’ clauses in the Bill are done well, there is a more fundamental problem: the assumption that changing these clauses will solve the freedom of expression concerns about the Bill. It is a misperception that amending or removing these safety duties would remove the greatest threat to free speech. Other elements of the Bill – the encouragement and requirement of widespread identity and age verification; the potential to compromise the security of end-to-end encryption; wide-ranging and strict illegal content duties which incentivise much greater over-moderation – are all far greater sources of concern about the future of free expression online.

We mustn’t get so caught up in a battle against ‘legal but harmful’ that we overlook the need to tackle the more fundamental problems.