How not to do platform regulation

‘! Get the facts about mail-in ballots’. This week Twitter added a fact-check label to one of Donald Trump’s tweets, which incorrectly claimed that plans for mail-in ballots would lead to voter fraud. Twitter said it was enforcing its civic integrity policy, and that it believed ‘those Tweets could confuse voters about what they need to do to receive a ballot and participate in the election process’. Trump’s response was, first, to claim censorship and bias, and to threaten regulation.

Now Trump has signed an executive order which seeks to reduce the freedom of platforms to moderate content as they see fit, allegedly to ‘prevent censorship’ – despite the fact that it could lead to platforms being forced to take more content down, not less. As many have warned, it would be a mistake to see this order as a serious attempt at platform regulation rather than simply an attempt to stop platforms making decisions that are not aligned with the President’s interests. Since the order was signed, Twitter has also labelled and hidden a tweet by Donald Trump about the protests against police brutality in Minneapolis after the killing of George Floyd. The tweet ‘violated the Twitter rules about glorifying violence’, but remains on the site, with Twitter saying it may be ‘in the public’s interest’ for the tweet to remain accessible. How Trump will respond to this remains to be seen.

From ‘fake news’ laws to internet shutdowns, purported regulation of the internet is used to advance the interests of political leaders around the world. When we advocate for regulation, we must be wary of whose power that regulation challenges, and whose it entrenches. It is crucial that events like this – where a political leader seeks to avoid accountability for what he says online by claiming political bias and threatening platform shutdown – do not undermine the legitimate arguments for platform regulation; and, conversely, that criticism of the executive order does not amount to a free pass for platforms to make their own decisions without any oversight.

Some people’s speech will always be ‘disfavoured’ by platform decisions – to use the executive order’s terminology – whether because it is harmful or simply because it is boring. Search results are ranked by supposed usefulness; ads are targeted using personal information gathered about you; illegal content is removed; posts are surfaced according to how likely they are to attract clicks, likes, or shares. Not labelling a tweet is as political a decision as labelling one; leaving a tweet up is as political as taking it down. What a user sees or can say online is always subject to decisions made by platforms in pursuit of a desired end result – whether that is more users, greater ad revenue, a safer internet, public health, or freedom of expression. The question is who gets to decide what that desired end result should be: a platform, an individual leader, or the body politic?

Current UK proposals for social media regulation, while also citing platform failures, have a very different aim: the ‘duty of care’ proposal seeks to ensure that platforms meet their responsibility to put adequate systems in place to tackle harmful content online, such as hate speech, cyberbullying, and disinformation – rather than trying to stop platforms from implementing such systems. The UK should stand firm in the face of changing US political signals. We should call out platform decisions when they are harmful, and stand by them when they are justified. What the UK can learn from the US example, however, is the absolute need to ensure that the public, politicians from across different parties, and civil society are actively involved in shaping future legislation – to avoid any charge of legislating by fiat.

Knee-jerk reactions and intimidation based on personal political convenience get us nowhere. More power over our online spaces needs to be given to individuals and communities – those most affected by the harms, and with the most to gain from the benefits of the web – rather than letting the regulation debate continue to bounce back and forth between corporate and party-political interests. We need transparency and consistency from the platforms about what actions they are taking and why; and we need serious efforts to work towards a democratic settlement between citizens and platforms – and a positive vision for the internet that takes into account the rights and needs of all users.