Reflections on 10 years at Demos

After 10 years at Demos – in normal human years that’s half a century – I’ll be stepping down as Director of CASM, the Centre for the Analysis of Social Media. I’ll still be here, working on several Demos projects. But the day-to-day running will be handed over to my talented colleague Alex.

By my calculation I am by now the single most experienced think-tank employee in the UK – which is a good excuse for a couple of reflections on the sector as a whole, and on our work at CASM in particular.

During one of my first weeks at Demos, a more senior colleague (Charlie Edwards) bowled in and announced to the office that a new website he’d discovered called ‘Twitter’ was going to be the next big thing. I was among the most vocal in telling Charlie he was not only wrong, but also stupid. Because why would anyone want to share in 140 characters what…

That attitude was and still is shared by many think-tanks. A decade ago, think-tanks were among the boldest and most imaginative research institutions in the UK. But as an industry we’ve been slow to adapt to the changes around us. The most exciting new techniques for researching, presenting and disseminating ideas now often come from snazzy start-ups. Think-tanks still, by and large, prefer focus groups and PDFs. We all claim to ‘get beyond the Westminster bubble’, or to be ‘think-and-do-tanks’, but actual examples are rare. There’s nothing wrong with focus groups and PDFs per se, but we’re missing something as a result. Why have so few think-tanks made their research data genuinely open? Or produced more interactive visualisations? Why do no think-tanks have GitHub pages?

A few years later it dawned on me that my colleague Charlie was completely right: social media was not only changing society but also offered new ways of understanding that change. Based on that simple proposition, in 2011 my colleague Carl Miller and I, along with a crack team at the department of informatics at Sussex University, set up CASM. We’d seen exciting new data sets – millions of tweets and posts – but found that the only way to get at them was via expensive, off-the-shelf software designed for advertisers and marketing execs. There was no shortage of data analytics kits that promised the moon – especially unproven ‘sentiment analysis’ and flaky trend or heat maps – but for a decent researcher who cared about rigour, they were essentially useless. Government officials would tell us they’d bought a pricey data analytics ‘suite’ which could collect millions of data points. A few months later they’d be back complaining that no-one knew whether the data was biased, how it was collected, how accurate the machine learning classifiers were, or how to change any of it. (This ‘black box’ problem is still with us, by the way, although less so. And governments still burp out procurement documents that ask the impossible of software: ‘we want to build an algorithm to find and remove hate on the internet’. This is impossible.)

So from 2011 we started to build software, methodologies and research techniques that would give social scientists – who have very different questions, needs and standards of evidence – access to big-data approaches. It remains a work in progress, of course, but I’m proud of all the reports we’ve produced, including technical research on representativity in social media data sets (2015), our early warning about disinformation and conspiracy theories (2011), and our prediction that the radical right would grow, partly thanks to its skilful use of social media to spread its message (2011).

Much has changed since then, and to reflect that, two broad ideas will now animate our work.

First: understanding society today means also understanding the technology available to make sense of it. While it’s never been easier to hear the public’s voice, it’s perhaps never been harder to listen to what they’re actually saying. We’ve moved rapidly from an age of information scarcity to one of abundance: every day there are millions of available data points – bona fide on-the-ground insight mixed in with trolls and fakes and frauds. It is confusing, contradictory, overwhelming, and certainly not amenable to human analysis alone. We need to keep developing new techniques and methods. What happens when our internet-enabled devices (the smart fridge, the smart car, cryptocurrency transactions, Fitbit data) all become amenable to research? What should we collect? How should we ensure reasonable consent? What forms of analysis are useful? How can the results best be communicated to the public? New categories of data are emerging all the time, and each brings a new set of interesting and difficult methodological, technical and ethical questions.

Second, as is often the case during rapid change, there is a growing incompatibility between social problems and the institutions we have created to sort them out. Put simply: we have an old, analogue democracy with norms, rules, institutions, and expectations that increasingly don’t fit with how digital technology works. This can’t last indefinitely.

There are so many examples. We have monopoly and competition rules that don’t really apply to free digital platforms. We have micro-targeted political adverts that fly under the regulator’s radar. We have an education system insufficiently nimble to teach about ‘deep fakes’ and hyper-partisan news feeds. We arrange our police around geographic constabularies, even though a large and growing share of crime takes place online. We have a taxation system that chases after software companies which can base themselves anywhere. Decent local government depends on a powerful local media, which is being replaced by free news services run on private servers on the other side of the world. I could go on.

Nothing seems to work properly anymore. And as machines become smarter, computing power increases, and more devices become internet-enabled, this institutional mismatch will look more and more ludicrous. My fear is that people will lose faith in the idea of politics as a system that gets things done. Who knows what they’d turn to instead?

For a think-tank like ours, which has always been about getting people actively involved in the decisions that affect their lives, this is of supreme importance. The challenge has two parts, really. We need to work out how technology can be regulated and democratised without losing the benefits it can bring to society. But we also need to figure out how democracy and its institutions can be upgraded to become more effective and in tune with the spirit of the age. How must our education system change? How can we reorganise our police forces? What new ways might we find to raise tax? What must manufacturers do to ensure our ‘smart homes’ aren’t at the mercy of the untraceable Russian hacker? Can we design a professional guild for software engineers and algorithm writers?

These are issues that would have been unimaginable, perhaps even slightly absurd, when I first joined Demos. No doubt the issues that will occupy us in 2028 would seem equally ridiculous to us now. But the underlying questions won’t change: how can we retain a progressive vision of change, but in a way that makes sense in people’s lives? What incentives, regulations and design principles do we need to make sure technology extends human capabilities and opportunities to live free and flourishing lives – rather than dependent, narrow ones?