“Rest assured, this Bill will end anonymous abuse, because it will end abuse, full stop.” We will ‘turn the tide on racist, misogynistic and anti-semitic abuse.’ The promises of a succession of Secretaries of State about the Online Safety Bill set expectations high. And they speak to a significant debate surrounding the Bill: what it will do about anonymous abuse online.
Currently, the Bill doesn’t require platforms to take any specific action on anonymous accounts. This makes sense: the Bill sets general duties and principles, with specific expectations to be introduced further down the road, based on evidence gathered by the new regulator and platforms’ own risk assessments.
However, there have recently been growing calls for the Bill to do ‘more’ about anonymous accounts online.
The argument for this goes: anonymity online is an important protection for people, especially vulnerable people. However, much abuse online is carried out by anonymous accounts. This means we don’t know who the abusers are, so they can’t be traced and held accountable for their actions. So: we should protect people’s ability to use pseudonyms publicly online (i.e. as their username or handle), but get social media platforms to verify their identity so that the information can be passed to law enforcement if necessary. Identification could be required or incentivised: either making people submit identity details to use a service, or making certain functions of a platform available only to those who have verified their identities.
The problem with these arguments is that they conflate different kinds of anonymity, and different solutions to abuse.
1. Anonymity from users is not the only kind of anonymity that is needed to protect people online
Ensuring people can use public pseudonyms is necessary but not sufficient to protect anonymity online. Putting the onus on users to identify themselves to an authority, be it a social media platform or otherwise, before they use a space is a burden we know would significantly and disproportionately exclude people from marginalised groups. This kind of pre-emptive identification, imposed in case people go on to commit crimes, is not something we regularly accept in the offline world: we do not generally sign in with our IDs before we enter a shop, or a park, or a cafe. You cannot pre-emptively restrict anonymity online (whether by requiring or incentivising identification) in a way that will only affect potential offenders. As the DCMS has said in their recent response to a Parliamentary Committee, ‘We must [address abuse] in a way that ensures that those that rely on anonymity to speak truth to power or for their personal safety can do so, without being excluded from mainstream online debate.’
2. The barrier to addressing online abuse is not as simple as ‘improve enforcement’
The barrier to effective law enforcement responses to illegal online abuse is not simply ‘we can’t find out who these people are.’ Law enforcement, as well as civil society organisations, can and do identify people even if they appear to be anonymous to other users or have not provided accurate personal information to a platform. More pressing is the need for greater resourcing as well as training and awareness to enable police to pursue investigations.
Moreover, police having sufficient powers to compel information in the context of an investigation does not require instigating even greater mass collection of personal data by private companies. Proposals in other countries to require platforms to share the data of users suspected of sharing illegal content with police proactively (rather than in response to a court order), or the use of facial recognition identification systems in public spaces, have come under fire from privacy campaigners for enabling mass and likely discriminatory surveillance, rather than leading to improved law enforcement.
And this focus on legal accountability can only ever be a means to tackle illegal online abuse. Ensuring we have people’s identities so they can be traced if they offend will do nothing about the myriad legal forms of online abuse – and abuse campaigns are good at evolving to stay just on the right side of the law while still causing significant harm.
3. Greater identification will only ever affect how we respond to abuse – it does not help to reduce risks
Presenting online abuse as a problem that can be solved by greater policing fails to engage with how we can prevent abuse occurring in the first place. A strong deterrent effect is unlikely: we already see people widely posting online abuse under their real names, or with profiles easily linked to their identity.
Identification is posited as a solution to problems that increase the risk of abuse, such as people being able to create multiple throwaway accounts, or experiencing a ‘disinhibition effect’ when posting online. But there are more direct ways to tackle these issues which do not require identification. For instance, platforms could encourage and incentivise people to use stable identities – accounts which gain powers and reach by building up their profile, network and reputation over time through continued prosocial behaviour, rather than through the provision of identity details. There is evidence that users alter their behaviour based on the established social norms of a space online – regardless of whether they are identifiable or not.
Anonymity online is neither a threat nor a risk: anonymous abuse is, and tackling it as a problem of who users are rather than how they behave is likely to have serious negative consequences. The Bill should look to tackle the systems which affect how people behave – which encourage, incentivise and amplify online abuse – or else people will continue to be widely targeted by online violence.