Last week the government published its long-awaited White Paper on Online Harms, proposing that an independent regulator be established to ensure that digital technology companies uphold a duty of care to their users and tackle harmful content on their platforms.
The paper is a welcome acknowledgement that with the rise of digital technologies, we are seeing both new forms and magnifications of harm – from dangerous content targeting vulnerable groups, to violent threats against individuals, to the dissemination of radicalising extremist or conspiracy-fuelling content.
But the fact that something needs to be done does not mean that simply anything will do. Questions remain about the proposed regulations before we can ascertain whether they will be able to effectively remedy those harms, and to do so without encroaching on other fundamental rights and freedoms.
Firstly, the remit of the proposed new regulator is extremely broad – not only in terms of the harms it proposes to tackle, but in terms of the number and size of platforms over which it will have oversight. As our CASM director wrote last week, determining what steps are needed to tackle each of the different harms the regulator will be tasked with overseeing, and securing compliance when dealing with content at this scale, is a vast technical and human challenge. Implementing these regulatory changes will demand levels of transparency, resources and evidence that may take many years to achieve.
A widespread concern, acknowledged in the paper itself, is that the concepts of harm invoked are in places difficult to define, and are frequently left to the discretion of the future regulator. One category of harms that the new regulator will hold responsibility for is “harms with a less clear definition”. While these harms, which include disinformation, trolling, and violent content, are indubitably serious problems on digital platforms, regulating them effectively and fairly will require at a minimum clear definitions and understandings of these phenomena. And although the paper states that the proposed regulations will not apply to private communications, it also remains to be determined what definition of ‘private communications’ the government plans to use.
Concerns which have been raised about government overreach interfering, inadvertently or otherwise, with freedom of speech and expression, and with personal privacy, are likely exacerbated by the uncertainty surrounding how and through what process these terms will be defined.
Wariness about the danger of governments tackling online harms is not without reason. For instance, as a collation of attempts to tackle misinformation across the globe (by Daniel Funke of Poynter) shows, laws ostensibly brought in to combat misinformation have allowed governments to imprison journalists, politicians and individuals who challenge them politically. Many of these laws rely on vague and broad definitions of misinformation, such as publishing ‘any news without being able to prove either its truth or [with] good reason to believe it to be true.’ The White Paper concedes that ‘the code of practice that addresses disinformation will ensure the focus is on protecting users from harm, not judging what is true or not’ and admits that ‘there will be difficult judgement calls associated with this’ – but this too lacks essential clarity on the role and processes of a regulator in making those calls.
But the need to carefully scrutinise proposals to act against online harm should not translate into a failure to take any remedial action at all. It means instead that any new regulations must be specific, realistic, evidence-based, designed transparently, and subject to continued democratic oversight.
Our hope for this White Paper – shared by others – is that the emphasis on companies’ duty of care towards their users will lead to a cultural shift in how we conceive of digital platforms, and encourage technology companies to make structural changes in how these platforms are designed and maintained. In the long run, such changes will be needed to adequately protect free, open and safe engagement online. If the UK can set a positive precedent for how online regulation can be done effectively and in accordance with human rights, this could be a powerful tool for raising standards globally.
This is very much still only the beginning of this conversation. The extent of the consultation, and the commitments throughout the paper for the government and future regulator to consult with industry, civil society and other stakeholders in refining and delivering these proposals, are encouraging. This is an important opportunity for both those who see this White Paper as a promising initiative, and those who see it as a worrying intervention, to work together to shape how the UK can lead the way in promoting more inclusive, more open, and safer digital technologies.