AI seriously endangers children's mental health

In March, Meta AI was quietly embedded into WhatsApp, reaching around three million Irish users — including an estimated 300,000 children under 16.
There was no opt-out, no parental warning, and no meaningful public consultation. The rollout extended to Facebook, Instagram, and Messenger. Meta claims to have built-in safety checks — but regulators aren’t convinced, especially when it comes to protecting children.
And children have taken to AI with zeal; CyberSafeKids’ latest Trends and Usage report shows that a quarter of 8- to 12-year-olds and more than a third of 12- to 15-year-olds engage with AI chatbots, mostly to look up information but also to do their homework and to chat.
In what looks to be a cynical case of “create the problem, sell the solution”, Mark Zuckerberg has spoken about how AI could help alleviate loneliness in society. However, some studies suggest that excessive social media use, particularly passive consumption of social media, may cause isolation and loneliness.
Research from an EU-wide survey shows that young people in Ireland exhibit higher levels of loneliness than their European counterparts, while one third of young respondents exhibit patterns of social media addiction.
AI chatbots, in their current form, are no cure for loneliness. Unlike real-life friends, who offer a range of emotions, perspectives, and pushback, chatbots often provide uncritical validation — passively echoing the user’s narrative. As a result, delusional or potentially dangerous thoughts can go unchecked and can even be reinforced.
Mark Zuckerberg is quoted as suggesting that chatbots could supplant the need for a therapist in the future. This is an irresponsible suggestion: an AI tool is in no way qualified to provide sound mental health advice.
Concerningly, there have been numerous reports of “chatbot psychosis”: cases in which the AI leads people into worsened mental health episodes or dangerous behaviours, sometimes with devastating real-life consequences. At a recent US Senate panel hearing, devastated parents spoke about how AI chatbots had engaged their children in sexual role play and encouraged self-mutilation, sexual and physical abuse, homicide, and suicide.
With the release of AI into the hands of so many, the incidence of harm is likely to increase dramatically. These systems are designed to seem human and to mimic emotional intimacy, saying things like “I think we’re soulmates”.

This blurring of the lines between reality and fantasy is especially potent for young people because their brains are still in development and they have less mature decision-making, impulse control and emotional regulation abilities, making them more likely to form intense attachments.
Like social media, AI chatbots are designed to be extremely addictive and to foster psychological dependence.
It was hard to avoid recent headlines about how ChatGPT allegedly played a role in encouraging US teenager Adam Raine to take his own life, or about how Meta’s AI allowed conversations of a romantic and sensual nature with children.
What’s most shocking, though no longer surprising to anyone familiar with Big Tech’s attitude to child safety, is that conversations of this nature were, in the case of Meta AI, approved by the company’s legal team and chief ethicist.
Since the story broke, Meta claims to have made changes to its policy. However, Common Sense Media, a leading source of technology recommendations in the US, rates Meta AI’s risk level as “Unacceptable”, due to the likelihood of harmful events occurring and the risks to teen safety.
They have described how experiments conducted since the announced policy changes, using teen-modelled accounts across multiple risk categories such as self-harm, eating disorders, sexual content and mental health crises, produced alarming results once again.
There are other cases, like the Raine v. OpenAI case, that allege AI has brought young users down dangerous and misleading paths, in some cases with tragic outcomes. Many parents may be unaware that their child or teen can now chat with AI companions that have names, and look and act like real teenagers.
In a nutshell, AI chatbots continue to prioritise emotional attachment and dependency over safety. Why? To maximise time spent on the platform and of course, for profit.
This isn’t just about Meta AI — we must ask ourselves whether we believe there really is a genuine commitment to prioritise children’s safety and wellbeing on the platforms they use every day. If not, or if that commitment falls short, which we see happening time and time again, the companies must be held to account.
Earlier this month, whistleblower testimony alleged that Meta has been running co-ordinated, global initiatives to aggressively target young users across all of its platforms for years. And as of last week the Molly Rose Foundation released new research highlighting systemic failures in the safety tools of Meta’s Teen Accounts.
The regulation of social media, including messaging apps, has not kept pace with real-world developments. So far, neither the EU’s Digital Services Act nor Ireland’s Online Safety Code has applied to WhatsApp.
It appears to evade such regulation on the basis that it’s an interpersonal communications platform. And yet it’s possible to have over 1,000 people in a group, and it’s increasingly morphing into a Snapchat-like app with its Chat Lock, Channels, and Disappearing Messages features, as well as the AI companion feature.
This week, the Oireachtas Committee on Artificial Intelligence heard from teenagers and youth representative organisations calling for regulation under the framework of the EU AI Act. This can't come soon enough.
As with social media, AI tools are not built with child protection as a key consideration, and consequently fall far short of being safe by design for young users. We're tired of the tech industry's attempts to add child safeguards as an afterthought, once the damage has been documented.
Ireland must urgently develop and implement comprehensive regulations to safeguard children from the risks posed by AI. Until meaningful protections are in place, the harm and heartbreak caused by these technologies will only continue to mount.
- Alex Cooney is the Chief Executive of CyberSafeKids