System change for system change’s sake

The publication of the Online Safety Bill, after years of debate, scrutiny and consultation, is imminent. By way of teasers, we’ve now had two announcements from DCMS ahead of the publication of the new Draft Bill itself – the extension of the list of priority offences beyond CSEA and terrorism to cover a wide range of further offences, and new plans to ‘stamp out anonymous trolls’.

The first asks platforms to ‘proactively tackle the priority offences…[making] sure the features, functionalities and algorithms of their services are designed to prevent their users encountering [priority illegal content] and minimise the length of time this content is available.’ This means that online platforms will have to put systems in place to stop users seeing a wide array of illegal content – from terrorism content, to content which constitutes fraud, promotes suicide or offers weapons for sale. In the extreme, this means a system which identifies and takes down these types of content, continuously and at scale.

The Bill will also apparently include duties for platforms to ‘give adults the ability to block people who have not verified their identity on a platform…[and] to provide users with options to opt out of seeing harmful content.’

‘Regulate systems not content’ has been the rallying cry of many groups – including us – in calling for a Bill which holds platforms to account for the decisions they are making about their services, rather than one which introduces liability for platforms for content posted by users. The shift in language here could be positive: talk of proactive changes to features and functionalities, and suggestions of changes to platform design that could help reduce abuse, rather than seeing reactive content takedown measures as the only good outcome or the only good solution. 

But celebration at this point can only go so far: just because measures are ‘proactive’ or target ‘systems’ doesn’t mean they are actually good measures.

Why not? Surely any change to a system that might make it safer is a good thing? 

We can easily picture dangerous designs around us. A car, for instance, with a 200mph top speed, concrete airbags, candyfloss seatbelts, and no mirrors or licence plates would raise eyebrows.

On the other hand, you could add in safety measures: a car limited so it can’t go above 40mph, which emits a continuous loud warning noise as you drive to alert others to your presence. You have to enter a PIN to get the car to start, which you must collect every week from your local authority by providing your driving licence. It also automatically sends your speed every 30 seconds to the DVLA so you will definitely be caught if you speed.

The second car is not a good car. I doubt it’s even a safe car. Sometimes it might be safer to go over the speed limit, if it’s an emergency. Sometimes you might need to drive faster than 40mph to get to an urgent destination like a hospital, or to overtake. A loud warning noise might distract you from driving safely, or upset other drivers to the point of distraction. Taking a holistic view of safety means looking at the likely outcomes of design choices, not just whether or not they sound ‘safe’. The questions we should be asking – and demanding answers to, before any system changes are required – are not just ‘will it make things safer?’, but include:

  1. What problem is the system change trying to solve?
  2. What outcome is it trying to achieve?
  3. What level of error is acceptable? 
  4. Can undesirable outcomes be effectively mitigated? 

For instance: previously in the Bill, platforms were required to take ‘proactive’ action against child abuse and terrorism online. This makes sense – not only because these are extremely harmful activities, but because much CSEA and terrorist content can be automatically and successfully detected, by hashing images and comparing them against databases of known CSEA and terrorist content. There is a clear system which platforms can use to address these offences. It is imperfect – but it’s a scalable, technical solution for identifying large quantities of extremely harmful and dangerous content that is also highly likely to be illegal.
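
To make that concrete, here is a minimal sketch of hash-matching at upload time. It is an illustration only: real deployments use perceptual hashes (PhotoDNA-style) that tolerate resizing and re-encoding, whereas the plain cryptographic hash used here only catches exact copies, and the hash list shown is invented.

```python
import hashlib

# Hypothetical set of hashes of known illegal images, standing in for the shared
# industry databases real platforms query. The value below is made up.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def matches_known_content(image_bytes: bytes) -> bool:
    """Return True if the uploaded file's hash appears in the known-content list."""
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES

# At upload time, a platform could quarantine matches for removal and reporting.
uploaded = b"...image bytes from the upload request..."
if matches_known_content(uploaded):
    print("Match against known-content database: block and report.")
else:
    print("No match: continue with normal processing.")
```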

This does not extend to all forms of illegal activity online, much as the Government would like it to. The same combination of characters, emojis and images could be legal in one instance and illegal in another. Context, knowledge and intention (under the Government’s proposed new communications offences) all play a substantial role in whether some forms of harmful content qualify as illegal. Platforms literally cannot make these fine-grained legal distinctions at scale – and we shouldn’t be expecting them to, or else we are effectively outsourcing our already-flawed system of law enforcement to private companies that have neither the ability nor the legitimacy to be making these judgements.

Similarly, a filter for ‘turning off legal but harmful content’ sounds nice in theory: a way to reduce the risk of users being unknowingly exposed to content that might be harmful to them. But what does this look like in practice? Users being promised they are protected from disinformation and abuse, and then being expected to trust or tolerate whatever they see? Users who want protection from dangerous content that blatantly contravenes platforms’ terms of service then not being permitted to see anything that could vaguely be considered harmful to anyone?

There is no apparent assessment of what level of error will be expected, or in which direction: under-moderation, where much harmful content is missed and users are once again unknowingly exposed to risk, or over-moderation, where too much is filtered out and users are cut off from benign or even extremely valuable information. After all, it’s common for existing content moderation practices to demote and demonetise content from marginalised groups.
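
The trade-off can be shown with a toy classifier threshold. The scores and posts below are invented and no real system is this simple, but the direction of error is exactly this kind of choice:

```python
# Toy illustration, not any platform's real system: a classifier scores posts
# from 0 to 1 for 'harmfulness', and the chosen threshold determines which way
# the system errs.
posts = [  # (score, actually_harmful) - made-up data for illustration
    (0.95, True), (0.80, True), (0.62, False), (0.55, True),
    (0.40, False), (0.30, False), (0.20, True), (0.05, False),
]

def error_counts(threshold: float):
    """Count both kinds of error at a given filtering threshold."""
    missed_harm = sum(1 for score, harmful in posts if harmful and score < threshold)
    benign_filtered = sum(1 for score, harmful in posts if not harmful and score >= threshold)
    return missed_harm, benign_filtered

for t in (0.3, 0.6, 0.9):
    missed, benign = error_counts(t)
    print(f"threshold={t}: harmful posts missed={missed}, benign posts filtered={benign}")
```

Lowering the threshold misses less harmful content but filters out more benign posts; raising it does the opposite. Which error is preferred is precisely the question left unanswered.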

Both of these are attempts at something ‘proactive’ and ‘systematic’ but both boil down to the same thing: identify where the harmful content is and then act on it; a content measure looking like a systems change. 

And the right to verify is intended to empower users with more choice about who they interact with – specifically, whether they interact with accounts of users who have not provided some form of verification of their identity to the platform. But this again risks a mismatch between problem and solution: by targeting identity as the problem rather than abusive behaviour, it threatens to normalise the exclusion, mistrust and ostracism of those who need to be anonymous online – while encouraging us to hand over yet more data about ourselves to private companies.

So how do we work out, when we are requiring ‘safety by design’ from platforms, what we should actually be requiring?  

Some people have argued that most system changes are just content measures in disguise, albeit more sophisticated ones because they seek to affect particular types of content (e.g. reduce the amplification of abusive content). And yes, these measures are designed to ultimately have some kind of effect on what content is made or seen or shared or believed.

But there is a distinction between the following principles:

Disinformation is harmful content: so content which is disinformation should be taken down

And

Disinformation increases the risks of people coming to harm (e.g. by justifying violence, by prescribing false medical treatment, by affecting people’s trust) so we should take action to reduce those risks – including through moderation, demotion, labelling, media literacy, promoting authoritative information, reducing virality, &c.  

Both are saying that disinformation is harmful, and both are recommending action to reduce the harm associated with disinformation: but the first locates the harm *in the content*, for which the only solution can be takedown – regardless of any actual impact. The second recognises the harm arises from a complex system including the content, the context, the environment, the speaker and reader. That’s why we need a systemic approach (see here for a good discussion of more content-agnostic systems measures). 
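
As one concrete example of the second framing, a platform can add friction that slows the spread of any message without judging its content; WhatsApp’s forwarding limits are the best-known case. The sketch below is a simplified assumption of how such a cap might work: the limit of five and the data structures are illustrative, not any platform’s real API.

```python
from collections import defaultdict

FORWARD_LIMIT = 5                    # illustrative cap, not a real platform's number
forward_counts = defaultdict(int)    # message_id -> total recipients forwarded to so far

def try_forward(message_id: str, recipient_count: int) -> bool:
    """Allow a forward only while the message is below the virality cap."""
    if forward_counts[message_id] >= FORWARD_LIMIT:
        # Content-agnostic friction: the message isn't judged or removed,
        # it just can't be blasted onward indefinitely.
        return False
    forward_counts[message_id] += recipient_count
    return True

print(try_forward("msg-123", 3))  # True: still under the cap
print(try_forward("msg-123", 3))  # True: now at 6 total recipients
print(try_forward("msg-123", 1))  # False: the cap has been reached
```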

And moreover: there is no neutral option here. In asking for certain measures, we are asserting that some kinds of content are more harmful, or worse, or less desirable than others – and when that goes beyond the bounds of content which has been deemed bad enough to be illegal, people are understandably nervous. But companies already have systems in place that decide or influence what content is made, seen, shared and believed. Legal speech online is not currently ‘free’ or ‘unfettered’, nor does it enjoy ‘free reach’. It’s constrained by the commercial imperatives and opaque decision-making of platforms. Those decisions can raise the risks of harm to users – they can reduce the risks – they can infringe on privacy – they can infringe on freedom.

Asking for regulation is not about adding content measures – it’s about changing the content measures which are inevitably used: moving from a framework where technical decisions are made to make shareholders more money, towards establishing a principled framework to examine those decisions, rooted in democratic values.

There are different kinds of system/design changes platforms can make, which carry different kinds of potential and risks. 

Tiers of system/design changes

      1. What is the point of your platform?

This is the fundamental, business-model question. Is the point to make money? Is the point to make money, but with side-constraints of certain corporate values? Is the point to empower communities? Is the point to experiment with new technologies? 

      2. What can a regular user do on your platform, and how is this facilitated?

Is it a one-to-one messaging service; a video-sharing platform; a community chat forum; a VR meeting space; a document-sharing service? The parameters of these powers are then the technical limits on how a user can do these things on the platform (not what they are allowed to do). How many characters can a user include in a message? Can they broadcast messages to users they don’t know? Can they upload photos from their device, or only take new photos? Can they choose the level or kind of encryption they want? Can they add filters to their videos?
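
In code, this tier is essentially configuration. The sketch below imagines those limits as a settings object; every field name and default is invented for illustration rather than drawn from any real platform.

```python
from dataclasses import dataclass

@dataclass
class FeatureLimits:
    """Hypothetical 'tier 2' parameters: what a user can technically do."""
    max_message_chars: int = 280                # how long can a message be?
    can_broadcast_to_strangers: bool = False    # can users message people they don't know?
    can_upload_from_photo_library: bool = True  # or only newly captured photos?
    user_selectable_encryption: bool = False    # can users choose the level or kind of encryption?
    video_filters_enabled: bool = True          # can they add filters to their videos?

DEFAULT_LIMITS = FeatureLimits()
print(DEFAULT_LIMITS)
```

The point of writing it down this way is that each value is a decision someone at the platform has already made, not a natural fact about the internet.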

      3. What does a user see or experience on the platform, and how is this facilitated?

This includes, but is not limited to, content. There’s the aesthetic design of the platform – the colours, the fonts, what you see on the homepage. Then there’s what content is promoted or recommended to them; what comes up when they search; what is in the ads they are seeing; what hashtags or trends are presented; how content is labelled; whether content autoplays or is tagged as sensitive.

Some of these things are controlled by users themselves setting preferences (e.g. following or subscribing to see particular content); some are controlled by the platform (think the TikTok For You page). Some are personalised to individual users; some are constant across all users.
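
A toy way to see that split (the function and names below are invented, and real feed-ranking is vastly more complicated) is to treat a feed as a mix of user-chosen and platform-chosen sources:

```python
def assemble_feed(followed_posts: list, recommended_posts: list,
                  opted_out_of_recommendations: bool = False) -> list:
    """Combine sources the user chose with additions the platform chose."""
    feed = list(followed_posts)              # user preference: who they follow
    if not opted_out_of_recommendations:
        feed.extend(recommended_posts)       # platform decision: what to promote
    return feed

print(assemble_feed(["post from a friend"], ["recommended clip", "trending video"]))
```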

      4. What can a user access on the platform, and how is this facilitated?

Can they access all kinds of information? Is anything blocked? What can they access without being logged in, or without verifying their identity? What can they see about other users? Can they access content from other users? Can they access data about themselves, or about others?

      5. How does the platform respond to user behaviour, and how is this facilitated?

If someone does something on a platform that is against the platform’s rules, or against the law, what action does the platform take? How are its content moderation decisions made? Who is involved? Is it automated, or does it involve human moderators – and are those moderators trained, supported and paid?
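
One hypothetical shape such a process could take (the thresholds, labels and trusted-flagger flag below are all assumptions for illustration) is an automated first pass that escalates uncertain cases to human moderators:

```python
def route_report(classifier_score: float, from_trusted_flagger: bool) -> str:
    """Decide what happens to a reported post. Purely illustrative thresholds."""
    if classifier_score >= 0.95:
        return "remove automatically, notify the poster, allow an appeal"
    if classifier_score >= 0.50 or from_trusted_flagger:
        return "queue for trained human moderators"
    return "take no action, but log the report for audit"

print(route_report(0.97, from_trusted_flagger=False))
print(route_report(0.60, from_trusted_flagger=False))
print(route_report(0.10, from_trusted_flagger=True))
```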

      6. How does the platform respond to personal attributes of a user, and how is this facilitated? 

What about the above changes if you are a certain age? How is age verification or age assurance carried out? Are characteristics about you inferred from your data and used to change what services you can access, or what ads or content you see? Are biased algorithms being used that are more likely to shadow-ban marginalised groups of creators? 

Some of these are technical decisions (what does the code say?); some of these are process decisions (who is involved at step 2?); some of these are clearly content decisions (what content is ok?). But all of these areas include platform decisions. Some of those decisions, in a liberal democracy, we think are the prerogative of the company providing the service. Some of those decisions should be made by users themselves. And some of those decisions may have such a significant impact on the safety and rights of citizens, that it’s appropriate that there be independent oversight. 

But the further up the chain you go, the more likely you are to be regulating decisions that platforms are making in providing services that might be causing or exacerbating risks to their users. The further down the chain you go, the more likely you are to be regulating user behaviour or activity, using the platform as a proxy to do so – and the closer the attention you need to pay to those pesky side effects, like impacts on user rights.

How do we do that? Under the original model of the Bill, platforms would have to produce risk assessments and evidence, to the regulator’s satisfaction, that their systems were reducing the risks of harm to their users. It would be within the regulator’s purview – specifically so that it could be an ongoing, evolving process – to establish, through research, analysis and consultation, the level of harm occurring; to balance protections and rights and assess proportionality; to decide what specific requirements Ofcom should place on platforms; and to test whether those requirements were working, and change them if they weren’t.

We’ll have to wait to see how far the new Bill wears the increasingly prevalent rose-tinted glasses of regulating for the desired outcome, even where that outcome can’t technically be achieved. But a lack of engagement with how these different tiers of system changes should be approached means it’s likely to be charting a worrying course. There’s still time for a course correction: the real test will be whether our Parliamentary processes are up to the challenge.