General election campaigns are a fertile time for social media research. In the run-up to election day, online platforms become battlegrounds, with candidates, commentators and supporters reacting to events, sharing information and vying to control the debate. This political fracas is especially evident on Twitter, where conversations of all types happen in public – the platform is designed to allow anyone to participate in any argument.
Over the course of the 2017 campaign, Demos will be examining the ways in which supporters of the six largest British parties are interacting – what they’re talking about, who they’re talking to, and which information they’re sharing. We’ll be sharing our findings on the BBC’s Victoria Derbyshire show every Wednesday morning until the election, and regularly updating this blog with a more in-depth look at the analysis supporting each week’s programme.
Assembling a cross-party group of users
In order to look at this election through the eyes of a broad cross-section of party supporters, Demos has selected a group of users on Twitter who describe themselves as supporting one of the six largest parties – Labour, the Conservatives, the Liberal Democrats, the Green Party, the SNP and UKIP. We looked at people who had sent a relevant Tweet, then randomly selected 400 people from each party who had described themselves as a supporter. A full description of our method is provided at the end of this post.
It quickly became clear that this collection of users was discussing the election with gusto. Since the campaign began on the 18th April, the group sent over 860,000 Tweets – a daily average of 12 per person. The graph below shows that Theresa May’s surprise announcement caused a huge spike of activity, with Tweets sent settling down to almost double the previous hourly rate – a pace which is still being maintained at time of writing.
Mapping political conversations
Over the coming month, CASM will be exploring the topics discussed, articles shared and candidates trolled by this group of self-described party supporters. This week, however, we have focused on the shape, rather than the content, of the group’s conversations.
In order to better understand who party supporters were talking to within our dataset, we used a free tool called Gephi to map the connections between users, producing the following graph.
This network diagram shows how users within party groups are talking to each other on Twitter. Nodes represent users, while the lines connecting them represent mentions – every time a user has mentioned another user in our dataset by using their Twitter handle, that Tweet appears above as a line connecting those two users. (It also, of course, shows up on the recipient’s timeline.) These connections are the only factor which determines a user’s position in the graph – the more you talk to a group of users, the closer you will be to them.
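The structure described above – one node per user, one weighted directed edge per mention – can be sketched in a few lines of Python. This is an illustration of the general technique, not the Demos pipeline itself: the tweet data here is invented, and in practice the edge list would be exported from the collected dataset and loaded into Gephi, which accepts CSV edge lists with Source, Target and Weight columns.

```python
import csv
import io
from collections import Counter

# Hypothetical tweets: (author, [handles mentioned in that Tweet]).
# In a real analysis these pairs would come from the collected Tweets.
tweets = [
    ("alice", ["bob"]),
    ("bob", ["alice", "carol"]),
    ("carol", ["alice"]),
    ("dave", ["bob"]),
]

# One directed, weighted edge per (mentioner, mentioned) pair:
# the weight counts how often that user mentioned the other.
edges = Counter()
for author, mentioned in tweets:
    for handle in mentioned:
        edges[(author, handle)] += 1

# Write a Gephi-style edge list for import into the layout tool.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Source", "Target", "Weight"])
for (src, dst), weight in sorted(edges.items()):
    writer.writerow([src, dst, weight])

print(buf.getvalue())
```

A force-directed layout algorithm (such as those built into Gephi) then pulls frequently connected users together, which is what produces the positioning effect described above.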
Conversation between supporters of all parties on Twitter
The tangled web of conversation above gives us some insight into the current political landscape on Twitter. While CASM has added colours to show party affiliation, and we rotated the graph once it had been drawn, the shape of this network has arisen entirely through the interactions between users, and the patterns which have emerged through following this simple rule are striking. Not only have parties tended to clump together, indicating that many users within our collection are talking predominantly to their own party, but the entire system is laid out along something resembling an ideological spectrum – with Labour, the SNP and the bulk of the Greens talking to each other on the left, the Conservatives and UKIP on the right, and the Lib Dems sitting in the middle. Even cross-party conversation, it seems, tends to be taking place amongst those likely to share a broad set of concerns.
The coloured bundles emerging on the graph above are one indicator of what has been termed ‘the Echo Chamber Effect’, examined in detail in a recent Demos paper, which describes the process by which social media users can find themselves talking primarily to people who agree with them. For those who reside within them, echo chambers have the potential to limit discourse and exposure to conflicting views. The denser these groups are, and the closer a user is to the centre of them, the stronger this amplification of opinion is likely to be.
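The notion of how “dense” a group is can be made concrete with a standard graph measure. The sketch below – illustrative only, with toy data rather than anything from the Demos collection – computes the density of a directed mention graph: the fraction of all possible user-to-user links that actually exist. A tightly knit echo chamber sits close to 1; a loose, outward-looking group sits close to 0.

```python
def density(nodes, edges):
    """Density of a directed graph: edges present / edges possible.

    For n nodes there are n * (n - 1) possible directed edges.
    """
    n = len(nodes)
    if n < 2:
        return 0.0
    return len(edges) / (n * (n - 1))

# Two hypothetical four-user party clusters: one tightly knit,
# one loosely connected. Each edge is a (mentioner, mentioned) pair.
tight = {("a", "b"), ("b", "a"), ("a", "c"), ("c", "a"),
         ("b", "c"), ("c", "b"), ("a", "d"), ("d", "a")}
loose = {("w", "x"), ("x", "y"), ("y", "z")}

print(density({"a", "b", "c", "d"}, tight))  # 8/12 ≈ 0.67
print(density({"w", "x", "y", "z"}, loose))  # 3/12 = 0.25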
Conversations between Labour and Conservative supporters on Twitter
These telltale huddles appear above in different intensities, depending on the party involved. Removing other parties from the mix, we can see that Labour and Conservative supporters are relatively widely distributed across the network – there’s also a complex tangle of conversations between them. Both parties still form recognisable groups, showing that there’s still a tendency for people to talk within their own parties, but those groups are fairly fluid and loose-knit. This could indicate that these parties cater for a relatively wide range of concerns – or, at least, a range of concerns relatively evenly spread across Britain’s political spectrum. We will be examining the points at which these networks break down in more detail as the campaign progresses.
Conversations between SNP and UKIP supporters on Twitter
In comparison, the parties at the extremes of the network have tended to form dense, circular clouds of conversation – though there are a few strands of chatter between SNP and UKIP supporters, they’re not talking to each other very much. One aspect of this, of course, is that the two parties have an entirely different focus, and haven’t found sufficient common ground over which to argue; the SNP have engaged more with Labour, for example, to whom they’re more closely ideologically related. However, this does show that both the SNP and UKIP have flocked into close-knit groups, with many users at the centre or outer edges of these groups only talking to others within the party. It is within these groups that the ‘echo chamber effect’ is likely to be strongest.
In some ways, this analysis may seem unsurprising. Individuals on Twitter are not regulated for impartiality, and people have always tended to talk about things, and to people, who interest them; for users describing themselves as supporters, that’s likely to be their party. It should also be noted that, even though our collection is evenly weighted across parties, Twitter users do not constitute a representative demographic sample of British voters, and there are other ways, besides mentioning someone, to interact on the platform. Furthermore, the multicoloured centre of the ‘all parties’ graph also shows that there are some truly cosmopolitan networks on Twitter, with a melee of supporters all trading mentions.
For some parties, however, the shapes of their interactions above should perhaps give pause for thought. Effective, vital politics lies in debate, in engaging with voters and bringing them around to a point of view. Sheer numbers of shares or retweets alone, and attempts at introducing new concepts to political debate, won’t get your message across if you’re only preaching to the converted.
Methodology – selecting 2,400 supporters online
To select a random group of party supporters on Twitter, we used Method52, an analytics platform developed by CASM in partnership with the University of Sussex, to do the following:
- Collected all Tweets sent during the week of the 18th of April, the day on which the election was formally announced, which contained a generic term likely to relate to the election – ‘#GE2017’, for example, or “Jeremy Corbyn”.
- Filtered the users according to their description on Twitter, assembling for each party a list of users who had used a word relevant to that party to describe themselves (e.g. “Conservative” or “Tory”.)
- Trained a Natural Language Processing classifier for each party, aiming to distinguish between users who supported or opposed the party mentioned. These were designed, for example, to distinguish between uses of ‘Labour’ in the phrases “Confirmed Labour supporter” and “Ex Labour – now not sure”.
- Of those users classified as supporting each party, 400 were chosen at random, forming a group of 2,400 users in total.
- Since NLP classifiers of this type are never entirely accurate – we achieved scores of around 75% to 95% accuracy across our parties – this group was then manually verified. Where mislabelled users were found, these were removed and replaced.
- We then used Method52 to collect and analyse all publicly shared Tweets from this final, verified group of users.
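To make the early steps of this pipeline concrete, the sketch below illustrates the keyword filtering of user descriptions (step 2) and the random sampling (step 4) in plain Python. Everything here is hypothetical – the handles, bios, keyword lists and helper names are invented for illustration, and the real work was done in Method52, with the NLP classification step (step 3) sitting between these two stages.

```python
import random

# Hypothetical, deliberately short keyword lists; the real lists were broader.
PARTY_KEYWORDS = {
    "Labour": ["labour"],
    "Conservative": ["conservative", "tory"],
}

def candidate_supporters(users, keywords):
    """Step 2: users whose Twitter bio contains any party keyword.

    This pass only finds users who *mention* the party; the trained
    classifier (step 3) is still needed to separate "Confirmed Labour
    supporter" from "Ex Labour - now not sure".
    """
    keywords = [k.lower() for k in keywords]
    return [u for u in users if any(k in u["bio"].lower() for k in keywords)]

users = [
    {"handle": "@a", "bio": "Confirmed Labour supporter"},
    {"handle": "@b", "bio": "Ex Labour - now not sure"},
    {"handle": "@c", "bio": "Runner, baker, Tory"},
    {"handle": "@d", "bio": "Cat pictures only"},
]

labour_candidates = candidate_supporters(users, PARTY_KEYWORDS["Labour"])
print([u["handle"] for u in labour_candidates])  # ['@a', '@b']

# Step 4: sample supporters at random per party - 400 in the real study,
# 1 here because the toy pool is tiny.
sample = random.sample(labour_candidates, 1)
```

Note how the keyword filter alone would wrongly include “@b”, which is exactly why the classification and manual verification steps matter.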