‘Frankly, we did win this election’: on social media, who will watch the watchmen?

Election Day in the US saw a predictable surge of false, misleading or harmful narratives, from robocalls telling voters to stay home, to YouTube livestreams of fake election results, to online amplification of hate and unrest. To their credit, Twitter, Facebook, Google and others responded, albeit to varying degrees: platforms promoted authoritative information, labelled, restricted or removed misleading or harmful content, and tried to squash new rumours as they arose.

But the disinformation that made the headlines was much more straightforward. It was also the kind that all the labelling in the world couldn't suppress entirely: the President's early-morning statement.

He had won the election. He hadn't, of course – at the time of writing, no-one has. He was determined to take steps to stop election fraud, though there is no evidence of fraud. He called for ballots to stop being counted, an egregious violation of the electoral process.

His diatribe fed straight into the online conspiracy mill. The story that Democrats were trying to 'steal' the election, and that Trump's supporters needed to stop them, gained more and more traction.

In dealing with this, simple enforcement of moderation policies can help, but it won't solve the problem. As long as the most prominent online platforms are designed around money-making clicks and shares, user empowerment and democratic freedoms will always end up as an afterthought.

The challenge of dealing with disinformation that comes straight from the state, deployed to entrench its power over its own population, is by no means new – not only in the US, but across the world. Last night was the latest stark reminder that 'online harms' are not just things that happen 'online'. They happen everywhere, amplified and scaled by the online world, but with impact reaching far beyond it. A statement from a leader becomes a touchpoint for ever more conspiracy theorising, doubt and anger. QAnon is running rampant: a fringe conspiracy theory that began on 4chan is seeing its backers enter the corridors of power. Meanwhile, the narrative that social media is 'censoring' or 'biased against' conservative voices continues to come from the White House.

The UK is about to set out how it will seek to tackle online harms, including disinformation and extremism. It wants to be a world leader in defining what a free, open and secure internet should look like. It would do well to treat the last 24 hours as a lesson in a danger that any country seeking to set the global standard on regulation must take into account: that regulation, in the wrong hands, takes the power to control information online away from the platforms and hands it to state leaders.

Not regulating platform action on disinformation at all risks allowing these abuses of platforms by states and state leaders to continue. Over-regulating it risks allowing those in power to define what 'successful action' on disinformation looks like in their own interest. And regulating to stamp out bad spaces online without supporting the development of better alternatives will only ever address half the problem.

The UK must not sweep these issues aside – it must tackle them head-on. Putting safeguards in place to ensure that regulation is regularly reviewed, responsive, and subject to meaningful democratic and judicial oversight is not a worrying admission that we can't be trusted to regulate well. It is a crucial commitment to building an internet that prioritises rights and freedoms, and to ensuring that regulation, wherever it occurs, serves the interests of citizens, not the interests of the state.