‘Happy (and safe) shooting!’: Chatbots helped researchers plot deadly attacks

OpenAI’s ChatGPT, Google’s Gemini, and the Chinese AI model DeepSeek at times provided detailed help in testing carried out in December, during which researchers from the Center for Countering Digital Hate (CCDH) and CNN posed as 13-year-old boys.

Popular AI chatbots helped researchers plot violent attacks including bombing synagogues and assassinating politicians, with one telling a user posing as a would-be school shooter: “Happy (and safe) shooting!” 

Tests of 10 chatbots carried out in the US and Ireland found that, on average, they enabled violence three-quarters of the time, and discouraged it in just 12% of cases. Some chatbots, including Anthropic’s Claude and Snapchat’s My AI, persistently refused to help would-be attackers.

© Examiner Echo Group Limited