How do we fight back against online abuse when counterspeech can’t keep up?

Over the last few years, solutions to the problem of online abuse have been proposed and attempted – from pleading with platforms to do the bare minimum of enforcing their own terms and conditions, to the creation of entirely new regulatory frameworks. But the problem persists. This week, in partnership with BBC Panorama, we revealed research showing that online discussions of reality show contestants still disproportionately target women with misogynistic, vitriolic and sexualised abuse. The effects are significant: women fearing for their safety, driven off online platforms, exhausted by the psychological toll of dealing with hatred daily.

What do we do? A common reply is that ‘the solution to bad speech is more speech’ – if people are going to say bad things, let them say them in the open, where they can be scrutinised and challenged, rather than hidden away to fester. And encouragingly, we found that people in online communities are doing this work of ‘more speech’. People are engaging in counterspeech: identifying abuse, reporting it, calling it out, speaking out against it. But the problem is that counterspeech can’t keep up.

The pattern that persists is not one of abusers being cowed by the challenge. Quite the opposite: in return for challenging abuse, people received more abuse. They faced hostility and harassment; delegitimisation; being talked over and shouted down. They were told that they were really the problem; that they were making up or exaggerating the problem; that online violence didn’t count as ‘real’ violence. Running through all of this was a recurring idea: that women shouldn’t complain about abuse because it is predictable – that women stepping into an online space should simply expect it and adjust their behaviour accordingly.

It is deeply disturbing that the predictability of online abuse is taken to mean we should tolerate it. We have to shake the assumption that the status quo – how speech is currently created, shared and consumed online – is the default, and that to change anything now is to ‘infringe on free speech’ in a way that is somehow worse: a greater and more worrying restriction on freedom of expression than the situation we already have.

The tech giants have captured our idea of what communication online can be like. No-one says it’s an affront to free speech that we can only say things 280 characters at a time on Twitter; that men can’t message women first on Bumble; that Facebook promotes content from your friends in your News Feed above that of random strangers; or that you can only post peer-reviewed research in the subreddit r/science. Why not? Because that’s just how things are: that’s how the platforms are designed, and how communication is permitted, encouraged or incentivised.

But they could be something completely different. Holding platforms to account for these choices – where those choices mean abuse is accepted, tolerated, amplified and circulated in online spaces – doesn’t threaten an existing model of open and free speech. That model is a myth.

The challenge – one Parliament will be tussling with very soon – lies in how to hold platforms meaningfully to account. Part of the solution lies in the most honest answer: we don’t know. We have some ideas, for sure. To start with, we need better enforcement, more support for those targeted, and improved digital literacy; we need transparency, audit and oversight of platforms’ design choices and algorithmic systems; and we need more decentralisation and alternative business models for tech platforms.

But building specific policies – what should Facebook do tomorrow to fix this? – requires information that we simply do not currently have. In this research, we looked at around 90,000 posts on Twitter and Reddit. Maybe that sounds like a lot, but hundreds of millions of tweets are sent every day – not to mention billions of posts on sites like Facebook, inaccessible and so invisible to many researchers. The portion of the online ecosystem we can examine at once is tiny, and our ability to observe the effects of changes to platforms is limited to extrapolation rather than testing things out for ourselves. Those with the power and the information to do that are the platforms themselves, which, without oversight, are motivated by profit and engagement rather than by protecting fundamental rights.

Users must have more power in the online spaces they help maintain. That doesn’t mean an extra reporting button, the ability to turn off notifications, or handing over our identity details to the platform in case we transgress. We need to think beyond what already exists and reimagine what we believe is possible in social media: finding those sparks of community and solidarity, and what brings them about. We need to support people to build new tech rooted in shared values, and help users demand that, instead of us shrinking ourselves until we can exist in online spaces, those spaces grow and evolve to accommodate us.

Explore our research in full.