Irish experts warn AI algorithms are pushing toxic, violent and sexual content to children online

Artificial intelligence (AI)-powered recommender systems are feeding millions of children toxic content about eating disorders and self-harm, as well as graphic videos of Charlie Kirk’s murder.

These algorithms – which generate revenue for social media companies but cause serious harm to young people – should be switched off by default, Noeline Blackwell of the Children’s Rights Alliance told the Oireachtas Joint Committee on AI.

Grace French, from the Youth Advisory Panel (YAP), said that as a teenage girl she is routinely exposed to harmful material online.

Ms French said that for every 20 videos she sees on social media platforms, she is pushed content about eating disorders or self-harm, even though she never chooses to watch this material.

She said videos of Charlie Kirk’s murder have been pushed by algorithms to many young people she knows, even though they never searched for such content.

“It’s horrific. People as young as eight are on TikTok and they’re seeing that video [of Charlie Kirk’s murder] and it can properly damage their mental health,” Ms French told the Oireachtas Joint Committee on AI today.

“TikTok needs to be held responsible for that,” she said. “Adults are not seeing it on the news but children are. That video was up [on social media platforms like TikTok] for days.

“Why was it up there in the first place?”

James Geoghegan, Fine Gael TD for Dublin Bay South, noted that while adults are protected from violent, graphic content through broadcast regulations, children are routinely exposed to disturbing material on unregulated platforms with no accountability.

Fionn McWeeney of the YAP said society should examine why far-right content is pushed at teenage boys while anorexia and self-harm material is targeted at teenage girls.

Alex Cooney, CEO of CyberSafeKids, said the internet was not designed with children in mind and meaningful protections have been slow to appear.

“Recommender algorithms shape much of what children see online. Platforms like TikTok, YouTube, and Instagram use personal data such as age, location, and interests to create detailed profiles and serve content,” she said.

“While this can seem helpful, these algorithms often expose children to harmful material - including stuff they don’t want to see - with little transparency.

“We hear about this all the time from children themselves. A 13-year-old girl told us earlier this year: 'Sometimes I can feel nervous when I’m on my phone. It is very easy to come across rude content that you don’t want to see'.

“Despite platforms claiming to ban harmful content, disturbing material still slips through, such as explicit videos or graphic violence.

“Last year, the Joint Committee on Children, Equality, Disability, Integration, and Youth recommended that recommender systems be turned off by default for children under 16.

“However, no significant reform has yet been published."

Ms Cooney called on the Government to ban the use of profiling data for recommending content to children.

Clare Daly of CyberSafeKids warned that AI-generated deepfakes are now a growing issue.

“A child’s likeness can be used to create explicit content with just 20 images. We’ve seen reports of children using AI to create sexual deepfakes of peers, leading to real-world harm. These tools remain largely unregulated and the omission of these risks from the AI Act is a missed opportunity."

She noted that the UK has proposed banning the creation or distribution of AI tools for child sexual abuse material (CSAM), while Australia is moving to ban “nudify” apps that manipulate images of minors. She urged Ireland to consider similar laws.

Ms Daly also warned about the rapid spread of AI chatbots.

“According to our latest report, over a quarter of primary school children and a third of 12-15-year-olds are using them.

“These chatbots are often designed to feel human — empathetic, warm, and engaging, increasing the risk of children — and particularly more vulnerable children — becoming emotionally attached," Ms Daly said.

“Recent cases taken by the families of children who have tragically taken their own lives, such as Raine v. OpenAI and Garcia v. Character.AI, highlight how children can be encouraged toward self-harm or suicide by interacting with these bots and, in the Raine case, discouraged from seeking help.

“While these cases are still under legal scrutiny, they underline the urgent need for safeguards, built in at the earliest stages rather than added as an afterthought.

“Unlike physical toys, which must pass rigorous safety and compliance checks, AI technologies remain largely unregulated. Children are effectively canaries in the digital coalmine."

Dr Emily Bourke of Belong To, a national LGBTQ+ youth organisation, said two major concerns have emerged: recommender algorithms and weak content moderation.

“Young people want to have the choice to opt in or out of online recommender systems and algorithms, allowing them more control over what they see online and the amount of time they spend on social media. 

"While many of the young people we work with speak of the positives of social media — as a place where they can find community and learn about their identity — they also express serious concerns about the content that is pushed to them and their peers. They see hateful content, anti-LGBTQ+ content, daily, and algorithms push it because it gets a reaction from people, despite the harm it causes. They are also troubled by the recent weakening of content moderation by online platforms."

Rob Byrne, a Belong To youth representative, raised concerns about how data collection linked to AI could endanger LGBTQ+ youth.

"The sale of our data to the highest bidder by big tech corporations, who then use the data to push specific targeted advertisements to us, could unintentionally out people [regarding their sexual identity].

"This is another important reason that we should be able to opt out of recommender algorithms," Mr Byrne said.

Reuban Murray, Youth Work and AI Project Officer with the National Youth Council of Ireland (NYCI), said young people are most exposed to the rapid adoption of AI, with youth recruitment in some sectors already down by 13%.

“One in three young people are turning to AI companions for ‘social interaction & relationships’ – a trend that reflects the massive impact of Covid-19 on this cohort’s social development,” he said.

“We are also watching AI widen the digital divide as youth from poorer households are much less likely to use AI, particularly for tasks such as learning or for help in school.

"We must build a regulatory framework that addresses the dangers and concerns of AI, meets the new challenges it presents and harnesses its many benefits."
