Samaritans urged to withdraw app over privacy fears

Suicide prevention charity the Samaritans has been urged to withdraw a controversial app over privacy concerns.

The Samaritans Radar app allows users to monitor the Twitter feeds of the people they follow, searching for certain keywords and phrases which may indicate mental health problems.

Once activated, the app uses a specially created algorithm to identify potentially concerning words and phrases.

The process happens without the knowledge of the people who are tweeting, many of whom may have no connection with the charity.

Those being monitored through the app will not be notified unless the person who receives an alert chooses to personally approach the Samaritans.

Since the launch last Wednesday, privacy campaigners, lawyers and data protection experts have been extremely critical of the app, taking to social media to express their concerns.

Some have urged the organisation to withdraw Samaritans Radar, claiming it violates people’s privacy and could enhance the ability of stalkers and online trolls to target vulnerable individuals.

Paul Bernal, an IT and media law lecturer at the University of East Anglia, published a blog post on the issue, asking the Samaritans to “have a rethink”.

He said: “The logic behind the Samaritans Radar approach to privacy is simple: tweets are ‘public’, therefore they’re fair game to be scanned and analysed. Privacy, both in the ‘real’ world and on Twitter, doesn’t work like that. It’s far more complex and nuanced and anyone who thinks in those simple terms is fundamentally misunderstanding privacy.”

So far, the app has more than 3,000 users and is monitoring over 1.64m Twitter feeds.

It looks out for phrases such as “kill myself”, “I want to die”, and “despair”. It also identifies more ambiguous phrases such as “I want to sleep and never wake up” and “I’m worthless”.

After activation, users will receive an email alert if a worrying tweet is spotted. When the user logs in, they will see the tweet and can confirm if it’s a potential concern.

The Samaritans say the verification process helps make the computer algorithm more accurate.

If a user confirms a tweet is worrying, information on how to approach the person is sent out.
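
To illustrate the kind of keyword matching described above, here is a minimal, purely hypothetical sketch in Python. The charity's actual algorithm has not been published; the watch list and matching logic below are assumptions drawn only from the example phrases quoted in this article.

# Hypothetical illustration only: a naive phrase filter of the kind the
# article describes. The real Samaritans Radar algorithm is not public;
# the watch list and matching logic here are illustrative assumptions.

CONCERNING_PHRASES = [
    "kill myself",
    "i want to die",
    "despair",
    "i want to sleep and never wake up",
    "i'm worthless",
]

def flag_tweet(text: str) -> bool:
    # Flag a tweet if it contains any phrase on the watch list.
    lowered = text.lower()
    return any(phrase in lowered for phrase in CONCERNING_PHRASES)

# Tweets flagged this way would trigger an email alert to the app user,
# who then confirms or dismisses it (the feedback step the charity says
# makes the algorithm more accurate).
tweets = ["Lovely weather today", "I'm worthless and tired of it all"]
flagged = [t for t in tweets if flag_tweet(t)]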

The Samaritans confirmed they would not get involved unless requested to do so.

Joe Ferns, executive director of policy, research and development at Samaritans, said the organisation had worked with academic experts as well as young people with mental health problems, Twitter, and Samaritans volunteers in developing the app.

He said: “We are looking into the details of the issues raised, including working with the relevant regulatory authorities, and will continue to take action as needed to address these concerns.”
