“Does it count if I’m hurt in the metaverse?”: Living with Immersive Harm

With the publication of the Online Safety Bill, it has been a landmark few weeks for platform regulation. The Bill is now supposed to keep us safe not just on the internet as we know it, but in whatever our digital futures turn out to be, promising to protect us whether we are politicians on Twitter or kids in the metaverse.

However, political discourse is failing to get to grips with the complexities of the metaverse. 

For starters, all too often people think there is only one metaverse, that Facebook runs it and that it “has been around only a few weeks.” But Meta did not invent the metaverse. The metaverse already exists outside Meta’s purview as a collection of fairly niche virtual reality (VR) spaces populated by gamers and artists, including social VR worlds (imagine a chat room crossed with a playground). The term was coined in the early 90s in Neal Stephenson’s novel Snow Crash, and was picked up in the early noughties to describe games like Second Life, which let users create avatars that led virtual lives very similar to ours. VRChat, one of the most popular examples of a social metaverse, has existed since 2014.

But despite the term being decades old, Meta has taken control of the narrative and is leading the way with its rebrand. Meta’s social VR platform, Horizon Worlds, was released to users in the US and Canada last December. If Mark Zuckerberg has his way, the future will see our physical world – our offices, our pubs, libraries, parks and squares, even our bodies themselves – almost entirely replaced by a virtual equivalent. How far people will really embrace life in the metaverse remains to be seen, but given Meta/Facebook’s growth from a university-campus socialising website to a corporation valued at hundreds of billions of dollars and responsible for global digital and information infrastructure, it’s not impossible that the metaverse will become as integral to people’s lives as social media currently is.

If Meta’s vision for the future comes to pass we can expect the kinds of interactions we currently have on social media to move into this immersive digital reality. Below are three predictions for how this could turn out – and what those scrutinising the Online Safety Bill need to keep in mind: 

Prediction 1: Rampant violence in the metaverse will challenge our current understanding of ‘harm’

The push to make VR increasingly immersive is going to change our understanding of ‘online harms’. Currently, the conversation around online harms focuses on different forms of content: racist tweets, fraudulent advertising, intimate images shared without consent. But VR is going to challenge this way of thinking by bringing harmful behaviours into online spaces that users experience as much more physical. People already report being sexually assaulted in VR, and the development of new technologies may heighten these risks even further if dystopian accessories like suits that simulate physical pain take off.

We need to ensure that our understanding of harm is sophisticated enough to deal with cases where the body is not literally harmed but the user experiences the event as real and physical. And we need to do this in a way that doesn’t create a hierarchy of abuse, in which the existing tendency to dismiss women’s experiences of trauma online is made even worse.

Prediction 2: Anxieties about how people represent themselves through avatars will seed mistrust amongst users and revive misguided debates about banning anonymity

Avatars in the metaverse promise an exciting new era of experimenting with self-expression online. Want to try that new haircut? Be a few inches taller? Be a duck? Spend an hour playing around with your gender presentation? As the metaverse shifts us from representing ourselves with our most flattering pictures to avatars we design, these are questions we might all soon encounter when thinking about our digital representation.

But this opportunity for exploration is also going to be abused. Trolls will revel in digital blackface. People will deliberately disguise themselves as trusted relatives as part of scams. The worst bits of the internet will be just as present in the metaverse. Serious questions need to be addressed about how it is acceptable to represent ourselves online, and how to balance self-expression, respect for others and the need for trust between users in metaverse spaces.

Add to that the weaponisation of claims of abuse, and there are even more reasons to be concerned.

We have seen how online discourse about single-sex spaces in the physical world marginalises and threatens trans people; in the metaverse, we need to be concerned about the possibility of fierce gender policing designed to exclude trans individuals from spaces that match their gender. This would not just be bad for trans people. It would be bad for anyone whose gender presentation or voice casts them as ‘suspicious’, as we already see happening to women whose masculine appearance leaves them vulnerable to transphobic abuse.

When there are issues around identity online, ‘user verification’ is all too often the suggested response. But verification doesn’t stop people from abusing others online. Instead, it becomes a threat to marginalised groups like the LGBTQ+ community and risks creating an online culture where being anonymous is automatically, unjustifiably suspect. These basic facts won’t change in the metaverse. 

Prediction 3: The metaverse will lead to a new era for content moderation, one that risks waves of victim blaming and uncomfortably high levels of surveillance

Platforms are going to have to respond to these complicated challenges somehow. The big change compared to social media as we currently know it is that interaction in the metaverse will shift from shareable content to real-time activity. This shift will have ramifications for what it means to ‘moderate’ these platforms. This will happen in two ways: user controls and top-down moderation processes.

User controls, like personal boundaries, are designed to limit the ways users interact with each other. While these tools will undoubtedly prevent a number of assaults happening in the metaverse, emphasising user controls risks giving rise to victim blaming – echoing real-world women’s safety discourse – where victims are questioned about their safety settings rather than attention falling on perpetrators of abuse. Layering safety features is not a silver bullet for making platforms safer. People will always find ways around these measures to harass and abuse: someone loitering at the edge of your boundary for hours is no less threatening than them standing right next to you.
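
To make concrete why a boundary-style control falls short, here is a minimal, hypothetical sketch of how such a feature could work. The radius, the loitering threshold and all the names here are illustrative assumptions, not a description of Meta’s Personal Boundary or any real platform’s code; the point is only that a plain distance check never registers the edge-loitering behaviour described above.

```python
import math
from dataclasses import dataclass, field

BOUNDARY_RADIUS = 1.2   # metres; illustrative, not any platform's real value
LOITER_SECONDS = 30.0   # hypothetical threshold for 'hovering at the edge'

@dataclass
class Avatar:
    user_id: str
    x: float = 0.0
    z: float = 0.0

def distance(a: Avatar, b: Avatar) -> float:
    return math.hypot(a.x - b.x, a.z - b.z)

@dataclass
class PersonalBoundary:
    """A hypothetical personal-boundary control: it blocks interactions
    inside a radius, but says nothing about behaviour just outside it."""
    owner: Avatar
    edge_time: dict = field(default_factory=dict)  # user_id -> seconds at edge

    def blocks_interaction(self, other: Avatar) -> bool:
        # The classic control: refuse any interaction inside the bubble.
        return distance(self.owner, other) < BOUNDARY_RADIUS

    def update(self, other: Avatar, dt: float) -> None:
        # Track time spent hovering just outside the boundary. The plain
        # distance check above never fires here, which is exactly the gap
        # the prose describes: loitering at the edge goes unaddressed.
        d = distance(self.owner, other)
        if BOUNDARY_RADIUS <= d < BOUNDARY_RADIUS * 1.5:
            self.edge_time[other.user_id] = self.edge_time.get(other.user_id, 0.0) + dt
        else:
            self.edge_time.pop(other.user_id, None)

    def is_loitering(self, other: Avatar) -> bool:
        return self.edge_time.get(other.user_id, 0.0) >= LOITER_SECONDS

# Someone parked just outside the bubble is never 'blocked' ...
victim = Avatar("alice")
boundary = PersonalBoundary(victim)
stalker = Avatar("troll", x=1.3, z=0.0)  # 1.3 m away: outside the radius

for _ in range(40):                      # 40 ticks of one second each
    boundary.update(stalker, dt=1.0)

print(boundary.blocks_interaction(stalker))  # False: the control never fires
print(boundary.is_loitering(stalker))        # True: the harm is still there
```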

Moderation processes are going to have to be reinvented. In VRChat, moderators are able to turn invisible in public spaces to monitor behaviour without others knowing. This offers us a glimpse into what moderating user activity might look like in the future: an increase in top-down surveillance, moderating not only content that we are knowingly posting, but our eye movements, our microexpressions, our heart rates, the directions we start to wander in but then turn back from. Invisible moderators strolling through virtual places is not something to be welcomed. If spaces are moderated like this, then we need to be deeply concerned about how that would impact people’s sense of privacy and the chilling effect that could have on the ways that we feel able to exist in digital spaces.  
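
To give a sense of how much a real-time moderation pipeline could observe, here is a hypothetical sketch of the per-tick telemetry such a system would need to ingest. Every field, threshold and rule is an illustrative assumption rather than a description of any real platform; the point is that none of these signals is ‘content’ a user chose to post.

```python
from dataclasses import dataclass

@dataclass
class BehaviourSample:
    """One tick of the signals a real-time moderation system could watch.
    Hypothetical: no platform is known to ingest exactly this set."""
    user_id: str
    position: tuple             # (x, y, z) in the virtual space
    gaze_target: str | None     # what the user's eyes are fixed on
    heart_rate_bpm: float       # from a headset or wearable sensor
    approached_then_left: bool  # wandered towards a space, then turned back

def flag_for_review(s: BehaviourSample) -> list[str]:
    # Even crude rules over this stream amount to continuous bodily
    # surveillance; the thresholds below are arbitrary placeholders.
    reasons = []
    if s.heart_rate_bpm > 120:
        reasons.append("elevated heart rate")
    if s.gaze_target is not None:
        reasons.append(f"sustained gaze at {s.gaze_target}")
    if s.approached_then_left:
        reasons.append("approach-and-retreat movement pattern")
    return reasons

sample = BehaviourSample(
    user_id="bob",
    position=(4.0, 0.0, -2.5),
    gaze_target="avatar:alice",
    heart_rate_bpm=128.0,
    approached_then_left=True,
)
print(flag_for_review(sample))
# ['elevated heart rate', 'sustained gaze at avatar:alice',
#  'approach-and-retreat movement pattern']
```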

To contend with any of these changes in the metaverse, the Online Safety Bill needs not just to be platform-agnostic but ready to withstand a shift from posted content to real-time activity and interactions. The difficulty is that while this shift would mark a new chapter in the digital age, many of the scenarios described above are exacerbations of what we already witness on the internet rather than things we have never seen before. Building a metaverse that any of us might want to spend time in will mean walking a tightrope: learning lessons from the internet as we know it, without simply replicating our current responses regardless of whether they are appropriate. It is this tightrope that gives us reason to think hard about whether the Online Safety Bill is as ready for the metaverse as policymakers would have us believe.