Still a Long Road Ahead in Fight Against Digital Extremism

With Theresa May speaking at the UN this week, the subject of extremist use of the web is back in the headlines. This follows Tuesday’s Policy Exchange publication ‘The New Netwar’, which presents an analysis of how the Sunni extremist movement uses the web, and draws on that analysis to support a series of recommendations for future policy on extremist and terrorist content online.

The report brings together research by Drs Ali Fisher and Nico Prucha, and a series of policy recommendations by Dr Martyn Frampton. These include regulation and fines for social media companies, greater investment in law enforcement and changes to the law.

The first section of the paper is confident and comprehensive, cataloguing problems in existing research, and convincingly presenting Fisher and Prucha’s long-standing theories of Islamist narratives and dissemination networks. The policy recommendations made later in the report, however, are rather more mixed.

The report’s calls for greater care by researchers and the media over the content they republish are desperately needed, and it makes a strong case for continued and serious dialogue between tech companies and government. But at the crux of the paper’s argument is the echoing of mantras that policy-makers have been repeating for some time: that technology companies should do more, that extremist content shouldn’t be on the internet, and that Silicon Valley brains ought to be building better technology.

What the tech companies ought to do isn’t spelled out. ‘Extremist content’ is treated as a simplistic label. Calls for better technology do not acknowledge the challenges involved in building it. Three distinct but interrelated policy areas stand out: technology, extremist content and regulation.

Technology

One of the key findings is that Islamic State’s output has remained remarkably consistent over the last year, which runs against reports of victories against the terror group on the web. The report uses this evidence to question the commitment of the tech giants to the struggle against extremist content online.

Crucially, all references to “the challenge” in the report refer to the threat of online extremist content. Nothing is made of the challenge of actually policing a platform as large as Facebook, Twitter or YouTube.

For years, the government, law enforcement and think-tankers have harried internet companies to do more, but actual solutions for policing a website with two billion users have been few and far between. Nods to technological innovation – algorithms to police content, image and video databases, AI and so on – both misrepresent the capabilities of this kind of technology and underestimate the effectiveness of community-based content moderation.

One example of this, mentioned in the report, is the success of social media companies in removing content that infringes copyright, an argument made in a Home Affairs Select Committee hearing earlier this year. But there is a huge difference here: something is either copyrighted or it isn’t. Carefully packaged material from copyright holders is easy to cross-reference against content on your website. Ever-changing, organically created and undocumented content that sits on a spectrum of legality is much more difficult.
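To make that contrast concrete, here is a minimal sketch – in Python, with made-up hashes and names, and a plain cryptographic hash standing in for the perceptual hashing systems like PhotoDNA actually use – of why the copyright comparison flatters the problem:

```python
import hashlib

# Hypothetical illustration, not any platform's real pipeline: rights
# holders register fingerprints of their catalogue in advance, so checking
# an upload against known copyrighted material is a simple set lookup.
KNOWN_COPYRIGHTED = {
    "3f2a9c...": "Premier League highlights package",  # made-up entry
}

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of an uploaded file.

    Real systems use perceptual hashes that survive re-encoding and
    cropping; plain SHA-256 is used here only to show the principle.
    """
    return hashlib.sha256(data).hexdigest()

def is_known_copyrighted(upload: bytes) -> bool:
    # Cheap and deterministic: the reference set is curated, documented
    # and supplied up front by the rights holders.
    return fingerprint(upload) in KNOWN_COPYRIGHTED

def is_extremist(upload: bytes) -> bool:
    # No equivalent reference set exists: the material is newly created,
    # constantly repackaged and sits on a spectrum of legality, so there
    # is nothing authoritative to look it up against.
    raise NotImplementedError("no curated reference set to match against")
```

The first check is a lookup against a list somebody else maintains; the second has no such list to consult, which is precisely the gap that appeals to the copyright precedent gloss over.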

The costs of implementing systems like PhotoDNA are also underappreciated: we can expect Facebook to do it (and they have), but what about the myriad tiny sites and blogs without billions in the bank? The web is not just the people you can get round a table in Silicon Valley and shout at.

And, of course, even in the simple arena of copyright this technology still isn’t perfect. It’s still dead easy to find videos of Antonio Valencia’s screamer against Everton this week.

https://youtube.com/watch?v=873BMC0p53s

What a hit.

There is a contradiction between the evidence presented and the recommendations made. For example, “the simple truth”, write Fisher and Prucha, “is that just because researchers cannot find it on Twitter does not mean the jihadist movement is in decline”. This is certainly the case: a myopic focus on Twitter as a single platform for distribution of Islamist material has never been sufficient.

But the increasing difficulty of finding Islamist material on major platforms, and the growing importance of alternative platforms like Telegram (referred to by Fisher as a “multiplatform zeitgeist”), could equally be hailed as a success on the part of Twitter. Indeed, the recommendations make this goal explicit, calling for the big companies to drive extremist content off their platforms.

Extremist Content

The long and short of it is that moderating platforms at this scale is extraordinarily difficult, and more difficult still when dealing with the content described. A simple example emerges from the report itself: one survey question asked respondents to ‘draw the line’ on extreme content – must it, for instance, depict murder or assault, or does hateful speech without incitement to violence qualify? The report recommends that the Commission for Countering Extremism draw up a definition of extremism based on the promotion of violence or hatred.

But what about the thousands of images of tractors and shopping centres circulated by Islamic State and shown in the research supporting the recommendations? Only a small percentage of the content circulated by these extremist groups is actually violent; much of it focuses instead on utopian narratives of state-building and victimhood. What do we do about this material? The language of extremism is nuanced, ever-changing and far from universally violent. This is a challenge not only for law and law enforcement, but for the design of technology.

Regulation

Ultimately, the report’s recommendations are more stick than carrot. The most menacing-looking stick, regulation, is discussed, including the primary question of whether social media companies are de facto publishers of what their users post. The report argues they ought to be seen as such. It is certainly tempting: the surveys commissioned for the report suggest that the British public demand more from social media companies on extremist content.

But again, we must follow the recommendations to their logical conclusions. Making social media companies legally responsible for every piece of content on their platforms would have a catastrophic impact on the companies themselves, and would therefore be fought tooth and nail. We need cooperation, not conflict, if we are to continue to press for change. But we must also ask whether the British public is willing to sacrifice the modes of communication that have made social media companies so successful, that so many of us use on a daily basis, and that form the bedrock of communication in the 21st century.

We now expect to be able to share thoughts, photos and videos at the touch of a button. Pre-moderation – hold on, we’re just checking the photo you’ve tried to post is okay – feels ludicrously out of date. It would be interesting to see how Brits would feel about their communications being pre-screened, though that is clearly outside the scope of this report.

Ultimately, the report presents fascinating new evidence for understanding extremist use of the web, and should remind us that this is a struggle that will continue to grow and evolve in the coming years. What it does not achieve, however, is a sufficiently sophisticated suite of policy responses to address the challenges it identifies.
