Below the Samaritans Radar


A couple of weeks back those decent people at the Samaritans got themselves into a bit of trouble. They released a Twitter app that automatically scanned the tweets of the people a user follows, looking for words associated with depression and suicide. When it spotted one, the app emailed the user the offending tweet, asking if they felt it was cause for alarm. This innocuous-sounding app was called Samaritans Radar.

This, to put it mildly, did not impress some people on Twitter. Some said it could stigmatise people; that it was unethical; that it was creepy; that it would attract trolls; that there was no opt-out; that it wouldn’t work technically; that it was possibly even illegal under the Data Protection Act. By last weekend, the Samaritans had decided they’d had enough of this pressure, and decided to remove the offending app – to ‘suspend’ it while they decide what to do next.

I think the Samaritans were treated a little unfairly. They’d spotted a couple of years back that there are quite a lot of people feeling depressed, or in a low mood, who aren’t in any kind of formal treatment, and who post about it on Twitter: a sort of cry for help. But because there is so much blustery nonsense on Twitter, these cries often get drowned out.

This app, they hoped, would allow concerned friends or followers to have those tweets brought to their attention again, giving them a second chance to be seen. The Samaritans did their prep, too. Some academic specialists advised on which words and terms the app should look out for, and they even tested it out on some volunteers.

As far as I can read it, and it gets a little technical here, the Samaritans Radar is not in breach of the data protection laws designed to protect our privacy. The Data Protection Act requires ‘data controllers’ – in this case people who get public data from Twitter – to adhere to certain principles about how personal data should be handled and shared. But there is an exemption for individuals who are processing data for ‘personal, family, or household affairs’ (I guess originally intended to stop the regulations reaching into your family holiday spreadsheet – but I’m no lawyer).

Because an individual signs up to Radar and gives the service permission to access the data on their timeline, that individual is technically the ‘data controller’ – and not the Samaritans. More generally, the Data Protection Act was passed all the way back in 1998, when the mass sharing of personal information on public platforms like Twitter was never envisaged. It clearly needs an update.

So I don’t think the Samaritans were breaking the law, but something more finely balanced: the ethics. They hadn’t thought through the possible harm that can arise from collecting and packaging up people’s data, especially when it comes to sensitive cases like this. Although the app isn’t diagnosing illness, it could appear that way.

Imagine if people who follow you on Twitter suddenly start receiving emails which suggest you might be feeling depressed. That could cause you some distress, perhaps even professional or personal problems. After all, it’s not just a harmless email: it’s an email from the Samaritans – the experts! – who’ve built this app with academic specialists. That has a certain added weight, an added significance. And anyone who signs up to the app can get these emails – my boss, my work colleagues, my partner, who knows. (To be fair, the Samaritans did make it simpler for people to opt out, if they are worried by this – although it could be easier still.)

That’s not to mention the dubious efficacy. I’m not convinced that people on Twitter who are genuinely suicidal or depressed post obvious words to that effect; and many of the app’s flagged tweets appeared to rest on a word list drawn from research into the sort of language people use when they are feeling down.

It’s inevitably crude, because of the way people mobilise language. A couple of my own tweets were flagged to other people – because I’d been tweeting about the app. (And although, of course, your friends are vitally important when you’re feeling low, it’s not obvious that someone who follows you on Twitter is a friend in any meaningful sense.)
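The crudeness is easy to see if you sketch the approach in code. Everything below – the word list, the matching rule – is invented for illustration; the Samaritans never published their actual list or logic. The point is simply that matching words can’t distinguish a genuine cry for help from a tweet *about* the app:

```python
# A minimal, hypothetical sketch of word-list flagging.
# The watch list here is invented for illustration only.
WATCH_WORDS = {"depressed", "hopeless", "can't go on"}

def flag_tweet(text: str) -> bool:
    """Return True if the tweet contains any watched word or phrase."""
    lowered = text.lower()
    return any(word in lowered for word in WATCH_WORDS)

tweets = [
    "Feeling utterly hopeless today.",                     # a possible cry for help
    "This app flags anyone who tweets 'depressed'. Creepy.",  # a tweet about the app
    "Lovely weather this morning.",                        # unrelated
]

for t in tweets:
    print(flag_tweet(t), "-", t)
```

Both of the first two tweets are flagged, even though only one might be a cry for help – which is roughly what happened to my own tweets about the app.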

But the Samaritans were right to try – and I hope they aren’t put off by all this trouble. This is a charity with good intentions struggling to figure out how best to help people in this new digital space, not a shadowy spy agency snooping on innocents. Social media has become a vital new place for people to talk about mental health, and the Samaritans need to be there too. According to Joe Ferns, Director of Policy, Research and Development at the Samaritans, there have already been ‘a lot’ of people who’ve had their tweets spotted successfully by concerned friends. Who knows? Perhaps someone’s already had their life saved by this app.

Social media has become one of the great blind spots in health provision. These days, everyone heads online when they are feeling down. There are hundreds of websites, forums, chat rooms dedicated to mental health conditions – including the so-called ‘suicide forums’, where hundreds, perhaps thousands, of people openly discuss suicidal thoughts, and even share advice on how to most efficiently kill yourself.

Some of them can help – nudging people toward proper professional help and advice. Others are destructive, as they create a culture in which people come to consider that suicide is a normal solution to life’s problems. But the reason these sites are popular is because there is nowhere else for people to go if they want to speak openly and honestly to other people – and social media can be a powerful and important way for people to talk about their conditions with others.

Studies have shown that speaking to people who have first-hand knowledge and experience of your own condition helps to improve self-esteem, boost confidence and wellbeing. But it can turn bad, too. People can easily get sucked into these online communities, often while extremely unwell and in need of help; and such communities are less good when those listening and offering advice, information, and even mild encouragement are untrained, and often very ill themselves. That’s why it’s vital that groups like the Samaritans take an interest.

For now, it looks to me like the Samaritans are pondering what to do next. They shouldn’t give up. There are better ways to do this. They could use Twitter’s targeted advertising system, which matches adverts to people who use certain words. Using this, the Samaritans could reach people using worrying phrases and expressions subtly. Targeted advertising is not everybody’s cup of tea, but it would allow the Samaritans to ‘signpost’ their presence in a general, non-personal way, without collecting any data about that person, and in a manner that alerts no-one but the sufferer:

‘If you’re feeling down, there are people you can talk to. Call the Samaritans’.