‘If it’s illegal offline, it should be illegal online.’ This is the rallying cry around which the debate on digital regulation in the UK has coalesced. It and its converse (‘if it’s legal offline, it should be legal online’) are foundational principles championed both by those who prioritise greater action to protect users from harm, and those who prioritise the defence of freedom of expression.
There are already laws that stop you saying things, and other laws that protect your right to say other things. There are laws that ensure children are afforded greater protection than adults. There are laws preventing discrimination on the grounds of race or gender.
The Joint Committee, tasked with scrutinising the draft Online Safety legislation, published their report this week. The report addresses many of these concerns by seeking to integrate the Bill far more into other parts of legislation: from a reformed Communications Act to human rights legislation to the Equality Act and more. We have laws against a wide range of harms: the question now is how we ensure those laws are felt online.
This is a step forward. The original bill was centred on new definitions of harm, which led to worries of state restriction by proxy of legal speech on the one hand, and failure to tackle significant dangers that were left undefined on the other. Vague safeguards threatened significant privacy violations. Others raised fears that the UK’s regulatory model is likely to be replicated internationally and used as political cover to justify human rights violations.
So connecting the Bill more strongly to other legislation makes sense: it gives more clarity as to what will be expected of platforms, it strengthens the justifications for action by building it into a wider democratic framework, and it increases the safeguards and oversight of how the Bill operates.
But let’s temper this optimism a little: if online safety is placed into a web of legislation, that legislation needs to be fit for purpose. And there is no guarantee of that. The question of safeguards isn’t solved just by relying on other legislation. There are inherent ambiguities in law itself, meaning cases about whether one piece of content online is in fact illegal can go on for years. And the enforcement of law all too often involves discrimination by law enforcement, whose current powers put marginalised groups at risk.
The most pressing example is the safeguarding of human rights. This has been set out in the Bill and the Committee’s report as a crucial element of the regime – that platform actions and Ofcom’s powers must respect and uphold human rights to freedom of expression and privacy.
But the same day the Joint Committee published their report, highlighting that Ofcom will have to act in accordance with human rights legislation, the Government announced its proposals to reform that very human rights legislation, in what has been deemed by campaigners an ‘unashamed power grab’. The proposals set out staying within the European Convention on Human Rights, but giving UK courts ‘more ability to interpret human rights law in a UK context’, in a bid to counter (among other things) ‘wokery’. How human rights online are interpreted and overseen is hence moving away from established human rights frameworks, not towards them.
The Committee also recommends that the requirement for platforms to reduce the risks of disinformation threatening democratic processes be grounded in new offences set out in the Elections Bill. But the Elections Bill hasn’t been passed yet, and the chair of a Commons committee scrutinising it has said the proposals ‘lack a sufficient evidence base, timely consultation, and transparency.’
We need to recognise that any fundamental flaw in the legislation on which the Online Safety Bill depends is a fundamental flaw in the Online Safety Bill itself.
Yet the Committee aren’t naively assuming everything will work perfectly: notable is their recommendation to constrain the powers of the Secretary of State to direct Ofcom in its delivery of the online safety regime, since these could too easily be abused. It’s also beyond the Committee’s mandate to examine the entire body of legislation on which the Bill depends.
But as DCMS considers its response, we need our institutions to recognise that they are building a Jenga tower that is inherently risky: the success of the regime depends not only on what’s actually in the Bill itself, but on whether all of the dependencies surrounding it hold up. Pull or push one brick, incautiously or deliberately, and the carefully constructed structure can collapse.
It’s difficult for a government to endorse the idea that the laws they make could be seriously misused while they are in the process of making new laws, for fear of fundamentally undermining their own authority – but if they do not, they fundamentally undermine their own legitimacy. We want a government that designs laws with substantial safeguards against government overreach, state violence and discrimination in mind: a government that acknowledges these possibilities is far more trustworthy than one which claims it could not do anything wrong.
The Online Safety Bill demands risk assessments by companies about how they might be perpetuating harm through their safety policies. The same commitment from the Government would not go amiss.