Irish experts warn AI algorithms are pushing toxic, violent and sexual content to children online

Artificial intelligence (AI)-powered recommender systems are feeding millions of children toxic content about eating disorders and self-harm, as well as graphic videos of Charlie Kirk's murder.
These algorithms, which generate revenue for social media companies but cause serious harm to young people, should be switched off by default, Noeline Blackwell of the Children's Rights Alliance told the Oireachtas Joint Committee on AI.
Grace French, from the Youth Advisory Panel (YAP), said that as a teenage girl she is routinely exposed to harmful material online.
Ms French says that for every 20 videos she sees on social media platforms, she is pushed content about eating disorders or self-harm, even though she never chooses to watch this material.
She said videos of Charlie Kirk's murder have been pushed by algorithms to many young people she knows, even though they never searched for such content.
"It's horrific. People as young as eight […] are on TikTok and they're seeing that video [of Charlie Kirk's murder] and it can properly damage their mental health," Ms French told the Oireachtas Joint Committee on AI today.
"TikTok needs to be held responsible for that," she said. "Adults are not seeing it on the news but children are. That video was up [on social media platforms like TikTok] for days.
"Why was it up there in the first place?"
James Geoghegan, Fine Gael TD for Dublin Bay South, noted that while adults are protected from violent, graphic content through broadcast regulations, children are routinely exposed to disturbing material on unregulated platforms with no accountability.
Fionn McWeeney of the YAP said society should examine why far-right content is pushed at teenage boys while anorexia and self-harm material is targeted at teenage girls.
Alex Cooney, CEO of CyberSafeKids, said the internet was not designed with children in mind and meaningful protections have been slow to appear.
"Recommender algorithms shape much of what children see online. Platforms like TikTok, YouTube, and Instagram use personal data such as age, location, and interests to create detailed profiles and serve content," she said.
"While this can seem helpful, these algorithms often expose children to harmful material, including stuff they don't want to see, with little transparency.
"We hear about this all the time from children themselves. A 13-year-old girl told us earlier this year: 'Sometimes I can feel nervous when I'm on my phone. It is very easy to come across rude content that you don't want to see'.
"Despite platforms claiming to ban harmful content, disturbing material still slips through, such as explicit videos or graphic violence.
"Last year, the Joint Committee on Children, Equality, Disability, Integration, and Youth recommended that recommender systems be turned off by default for children under 16.
"However, no significant reform has yet been published."
Ms Cooney called on the Government to ban the use of profiling data for recommending content to children.
Clare Daly of CyberSafeKids warned that AI-generated deepfakes are now a growing issue.
"A child's likeness can be used to create explicit content with just 20 images. We've seen reports of children using AI to create sexual deepfakes of peers, leading to real-world harm. These tools remain largely unregulated and the omission of these risks from the AI Act is a missed opportunity."
She noted that the UK has proposed banning the creation or distribution of AI tools for child sexual abuse material (CSAM), while Australia is moving to ban "nudify" apps that manipulate images of minors. She urged Ireland to consider similar laws.
Ms Daly also warned about the rapid spread of AI chatbots.
"According to our latest report, over a quarter of primary school children and a third of 12-15-year-olds are using them.
"These chatbots are often designed to feel human: empathetic, warm, and engaging, increasing the risk of children, particularly more vulnerable children, becoming emotionally attached," Ms Daly said.
"Recent cases taken by the families of children who have tragically taken their own lives, such as Raine v. OpenAI and Garcia v. Character.AI, highlight how children can be encouraged toward self-harm or suicide by interacting with these bots and, in the Raine case, discouraged from seeking help.
"While these cases are still under legal scrutiny, they underline the urgent need for safeguards, built in at the earliest stages rather than added as an afterthought.
"Unlike physical toys, which must pass rigorous safety and compliance checks, AI technologies remain largely unregulated. Children are effectively canaries in the digital coalmine."
Dr Emily Bourke of Belong To, a national LGBTQ+ youth organisation, said two major concerns have emerged: recommender algorithms and weak content moderation.
"Young people want to have the choice to opt in or out of online recommender systems and algorithms, allowing them more control over what they see online and the amount of time they spend on social media.
"While many of the young people we work with speak of the positives of social media, as a place where they can find community and learn about their identity, they also express serious concerns about the content that is pushed to them and their peers. They see hateful content, anti-LGBTQ+ content, daily, and algorithms push it because it gets a reaction from people, despite the harm it causes. They are also troubled by the recent weakening of content moderation by online platforms."
Rob Byrne, a Belong To youth representative, raised concerns about how data collection linked to AI could endanger LGBTQ+ youth.
"The sale of our data to the highest bidder by big tech corporations, who then use the data to push specific targeted advertisements to us, could unintentionally out people [regarding their sexual identity].
"This is another important reason that we should be able to opt out of recommender algorithms," Mr Byrne said.
Reuban Murray, Youth Work and AI Project Officer with the National Youth Council of Ireland (NYCI), said young people are most exposed to the rapid adoption of AI, with youth recruitment in some sectors already down by 13%.
"One in three young people are turning to AI companions for 'social interaction & relationships', a trend that reflects the massive impact of Covid-19 on this cohort's social development," he said.
"We are also watching AI widen the digital divide as youth from poorer households are much less likely to use AI, particularly for tasks such as learning or for help in school.
"We must build a regulatory framework that addresses the dangers and concerns of AI, meets the new challenges it presents and harnesses its many benefits."