Demos Daily: Anti-Social Media

The Black Lives Matter movement continues to organise protests around the world in an effort to end racial discrimination against the black community, often reaching people online through social media. However, social media is also where many of the hate groups that perpetuate racism have organised, using these platforms to spread their messages.

In 2014 Demos produced Anti-Social Media, a report looking at the way racial, religious and ethnic slurs are used on Twitter, and whether it’s possible for a machine to identify hate speech online.

You can read the executive summary below or the full report here.

Executive Summary

How to define the limits of free speech is a central debate in most modern democracies. This is particularly difficult in relation to hateful, abusive and racist speech. The pattern of hate speech is complex. But there is increasing focus on the volume and nature of hateful or racist speech taking place online; and new modes of communication mean it is easier than ever to find and capture this type of language.

How and whether to respond to certain types of language use without curbing freedom of expression in this online space is a significant question for policy makers, civil society groups, law enforcement agencies and others. This short study aims to inform these difficult decisions by examining specifically the way racial and ethnic slurs (henceforth, ‘slurs’) are used on the popular microblogging site, Twitter.

Slurs relate specifically to a set of words, terms, or nicknames which are used to refer to groups in a society in a derogatory, pejorative or insulting manner. Slurs can be used in a hateful way, but that is not always the case. Therefore, this research is not about hate speech per se, but about epistemology and linguistics: word use and meaning.

In this study, we aim to answer the following two questions:
(a) In what ways are slurs being used on Twitter, and in what volume?
(b) What is the potential for automated machine learning techniques to accurately identify and classify slurs?

Method

To collect the data, we scraped the publicly available live Twitter feed (via its stream application programming interface) for all tweets containing one or more candidate slurs over a nine-day period (19 November – 27 November 2012). The list of terms judged to be candidate slurs was crowd-sourced from Wikipedia. The tweets were then filtered to ensure that the slurs were contained in the body of the tweet and were not part of a user’s account name, and then passed through an English-language filter to exclude non-English tweets. In total 126,975 tweets were collected: an average of about 14,100 tweets per day. All of the tweets in our samples were publicly available to any user of Twitter as a live comment (i.e. at the time the tweet was published by the sender).
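The report does not describe its tooling, but the collection pipeline above can be pictured as a simple keyword-and-language filter applied to a stream of captured tweets. The sketch below is a minimal illustration in Python; the file name tweets.jsonl, the placeholder terms and the reliance on a per-tweet language tag are assumptions made for illustration, not details from the study.

```python
import json

# Illustrative placeholder terms; the study's actual list of candidate slurs
# was crowd-sourced from Wikipedia and is not reproduced here.
CANDIDATE_TERMS = ["term_a", "term_b", "term_c"]

def is_relevant(tweet: dict, terms) -> bool:
    """Keep a tweet only if a candidate term appears in the tweet body
    (so a match occurring only in the account name is excluded) and the
    tweet is flagged as English."""
    text = tweet.get("text", "").lower()
    in_body = any(term in text for term in terms)
    # Trusting a per-tweet language tag is an assumption made for brevity;
    # any English-language filter could be substituted here.
    is_english = tweet.get("lang") == "en"
    return in_body and is_english

def filter_stream(path: str, terms=CANDIDATE_TERMS):
    """Read one JSON tweet object per line and yield the relevant ones."""
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            tweet = json.loads(line)
            if is_relevant(tweet, terms):
                yield tweet

if __name__ == "__main__":
    kept = list(filter_stream("tweets.jsonl"))  # hypothetical capture file
    print(f"Kept {len(kept)} candidate tweets")
```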

Using this data set, we ran two types of analysis.

In study 1, we used automated machine classifiers to categorise the data sets. This involved human analysis of a sample to identify categories, followed by training a natural language processing technique to recognise and apply those categories to the whole of the data set automatically.
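The report does not name the classification technique used, so the following Python sketch shows only the general shape of the approach described: train on a small human-annotated sample, then apply the model to the rest of the collection. The scikit-learn pipeline, the invented training sentences and the bag-of-words features are illustrative assumptions rather than details of the study.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# A handful of invented, innocuous examples standing in for the
# human-annotated training sample; the study's categories included
# e.g. 'non-derogatory', 'casual use' and 'targeted abuse'.
labelled_texts = [
    "describing my own community, no insult meant",
    "just using the word casually in conversation",
    "a message aimed at one person and intended to wound",
]
labels = ["non-derogatory", "casual use", "targeted abuse"]

# Bag-of-words features plus a linear classifier: a common baseline for
# short-text classification, not necessarily what the report's team used.
classifier = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("model", LogisticRegression(max_iter=1000)),
])
classifier.fit(labelled_texts, labels)

# The trained model is then applied to the full, unlabelled data set.
unlabelled = ["another tweet pulled from the filtered collection"]
print(classifier.predict(unlabelled))
```

In practice the annotated sample would number in the hundreds or thousands, and the model would be validated against held-out human judgements before being applied to the full data set.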

In study 2, we used human analysts to categorise subset samples of the data. This involved in-depth, iterative, analysis by researchers of small and then larger random samples of the data to reveal a stable set of categories.

Results (a): the volume and nature of racial, religious and ethnic slur use on Twitter

• We estimate that there are approximately 10,000 uses per day of racist and ethnic slur terms in English (about 1 in every 15,000 tweets). The most common terms found in our data set were (in order of prevalence) “white boy”, “paki”, “whitey”, “pikey”, “nigga”, “spic”, “crow”, “squinty” and “wigga”. The distribution was uneven across the terms, with “white boy” appearing in 49 per cent of tweets; of the rest, only “paki” and “whitey” comprised more than five per cent of the total (12 and eight per cent respectively).

• Slurs are used in a very wide variety of ways – both offensive and non-offensive. We identified six distinct ways in which slurs are employed on Twitter: negative stereotype; casual use of slurs; targeted abuse; appropriated; non-derogatory; and offline action / ideologically driven.

• Slurs are most commonly used in a non-offensive, non-abusive manner: to express in-group solidarity or as non-derogatory description. Both human and machine analysis identified non-derogatory use as the largest category of tweets (estimated at 47.5 per cent and 70 per cent of tweets respectively). If casual use of slur terms is included in the human analysis (as in “pikey” being used interchangeably with ‘West Ham supporter’), the proportion rises to about 50 per cent. Both analyses also showed that relatively few tweets – from 500 to 2,000 per day – were directed at an individual and clearly abusive.

• There were very few cases that presented an imminent threat of violence, or where individuals directly or indirectly incited offline violent action. We estimate that, at the very most, fewer than 100 tweets are sent each day that might be interpreted as threatening any kind of violence or offline action. (This does not mean there were no other threats; threats that do not include a slur would not have been captured.)

• Casual use of racial slurs accounts for between 5 and 10 per cent of use. A significant proportion of cases are what we have termed ‘casual use of racial slurs’, meaning the term is used in a way that may not be intended to cause offence (or where the user is unaware of the connotations of the term) but may be deemed offensive by others. The way in which racist language can seep into broader language use and reflect underlying social attitudes is potentially an area of concern.

• Different slurs are used in very different ways. One of the most common terms, “whitey”, is more often used in a non-derogatory, descriptive way than other terms, such as “coon” or “spic”. There are some indications that “paki” is becoming an appropriated term – a significant proportion of its use was by users identifying themselves as of Pakistani descent, despite it remaining in regular use as an ethnic slur.

Even though racist, religious and ethnic slurs tend to be used in a non-derogatory way on Twitter, this does not mean that hate speech is not being used on this platform. Language does not require the use of slurs in order to be hateful. We therefore do not make any broader claims about the prevalence of hate speech on this platform, an issue that warrants further study.

Results (b): what is the potential of automated machine learning techniques to accurately identify and classify racial and ethnic slurs?

Overall, the medium of Twitter provides an unprecedented source of data for studying slurs, and language use more generally. However, context is extremely important in determining the underlying significance and meaning of language, especially in contentious areas. For example, the existing literature (and indeed our research) suggests that racial slurs can be appropriated by the targets of those slurs and used in non-derogatory ways, defined as ‘used without displaying contempt or causing hurt’. There are many versions of this, including humour, satire, and assumed in-group norms (appropriation). This means that the relationship of a speaker to the group concerned is vital, but not always clear in short text-form tweets. This can make purely automated techniques quite difficult to apply.

• Machine classifiers were extremely useful for filtering the data into more manageable data sets. The automated classifiers performed well in the initial task of distinguishing between relevant and irrelevant tweets, i.e. tweets where the terms were being used in a racial or ethnic sense rather than an unrelated sense.

• On more nuanced categories – such as distinguishing between the casual use of slurs and targeted abuse – they performed less well. Some of the categories created for the different types of slur usage involved quite nuanced distinctions. The automated classifiers performed reasonably well at correctly identifying certain cases, although the smaller and more nuanced the category, the worse they performed.

• Qualitative analysis was useful to determine nuanced categories. Qualitative analysis of the data sets allowed the analysts to find more nuanced categories, such as appropriated use. Careful analysis of individual tweets also revealed how significant context is in determining meaning and intent – and how often it is lacking in the short form text of Twitter.

• Even following detailed discussion, human analysts would often still disagree on meaning, intent and purpose. Even where analysts discussed their disagreements over how to classify specific texts (for example, whether a tweet was ‘non-derogatory’ or ‘casual use of a slur’), some disagreement remained. Two analysts working on the same data set agreed 69 per cent of the time when classifying the data. There were several reasons for this, including cultural biases.
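To make the 69 per cent figure concrete, the sketch below shows how agreement between two annotators is typically quantified: raw percentage agreement (the kind of measure quoted above) alongside Cohen’s kappa, a chance-corrected statistic that the report itself does not quote. The labels and annotations are invented for illustration.

```python
from sklearn.metrics import cohen_kappa_score

# Invented example annotations from two analysts over the same ten tweets.
analyst_a = ["non-derogatory", "casual use", "targeted abuse", "non-derogatory",
             "casual use", "non-derogatory", "appropriated", "non-derogatory",
             "targeted abuse", "casual use"]
analyst_b = ["non-derogatory", "non-derogatory", "targeted abuse", "non-derogatory",
             "casual use", "casual use", "appropriated", "non-derogatory",
             "targeted abuse", "casual use"]

# Raw percentage agreement -- the kind of figure quoted in the report (69%).
matches = sum(a == b for a, b in zip(analyst_a, analyst_b))
print(f"Raw agreement: {matches / len(analyst_a):.0%}")

# Cohen's kappa corrects for the agreement expected by chance alone;
# shown here only as a complementary measure, not one taken from the report.
print(f"Cohen's kappa: {cohen_kappa_score(analyst_a, analyst_b):.2f}")
```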

Implications

We limited our data collection to a very short time segment on one social media platform. Even on the basis of this short study, however, social media data collection is clearly an excellent new resource for understanding trends and changes in language use – especially for linguists and those interested in the relationship between language and belief. We recommend that consideration be given to applying similar techniques to other areas of language use: for example, conversations about certain groups and communities. However, it is vital that subject-matter specialists be involved.

Twitter sampling works on the basis of keyword matches. This type of analysis builds systemic bias into the research method. Our use of a crowd-sourced word cloud was a simple way around this problem, but there are no doubt several other, fast-changing terms that we may have missed. It is certainly the case that automated keyword matches are of limited power when it comes to finding genuine cases of serious ethnic slurs or hate speech. Each case is highly contextual, and will often depend on approximations about the individuals involved.

Any conclusions drawn from these or similar data sets need to bear in mind these limitations. In general, therefore, language classifiers are extremely useful as tools to filter and manage large data sets. When combined with careful and detailed qualitative efforts, their use is magnified.

Overall, perhaps the key finding of this paper was the significant proportion of Twitter slurs that were found to be superficially non-derogatory. One working hypothesis is that the language used in such tweets – and the sentiments expressed – reflects social norms within the sender’s personal community. Notwithstanding the absolutes of legislation, these social norms are negotiable, contestable, and contested. For example, it has been argued that, in Britain, it is socially acceptable to use the term “chav” in a prejudicial (and typically insulting) context even though such a term arguably refers to a distinct and identifiable ethnic subgroup of the wider population. Other prejudicial terms were historically deemed acceptable, and are no longer seen as acceptable by society at large, but arguably remain acceptable to significant sub-communities within society. The ways in which slurs of this type may encourage or enable certain behaviours, or reflect certain underlying beliefs in society, deserve further consideration.