Gareth O'Callaghan: AI and suicide — how ChatGPT jailbreaks are putting vulnerable users at risk

AI can comfort and fool us — but evaded safeguards and jailbreak prompts mean chatbots may now cause real harm

ChatGPT generates text responses to user prompts and questions, but it has lately been implicated in a number of suicides, including that of 16-year-old Californian Adam Raine. File photo

Fear and fascination are flip sides of the same coin. Regular readers will know I am fascinated by artificial intelligence. I’m also afraid of this strange science, which some experts say will soon be expanding faster than the speed of light.

Despite all the compelling claims that it will enhance the quality of our health and help us to live longer, AI has in recent times shown a very dark side. It’s evolving in ways that its inventors never considered. Reader discretion is advised here.



© Examiner Echo Group Limited