'It should be illegal,' says survivor of child sexual abuse about AI 'nudification' technology

The State’s specialist service for survivors of child sexual abuse has called for a “total ban” on AI technology that can produce sexual abuse imagery of children and adults, including X’s Grok tool.

The Alders Unit at Children’s Health Ireland (CHI) said the harm caused by the production of these images is “equivalent” to that perpetrated face-to-face.

Similar calls and concerns have been raised by other support organisations, such as Rape Crisis Ireland, the Dublin Rape Crisis Centre, One in Four, CARI, the ISPCC and Women's Aid, as well as by the Irish Internet Hotline and the Sexual Exploitation Research and Policy Institute.

The Government’s own AI Advisory Council said the AI technology needs to be prohibited across the EU and urged the Government to use its presidency of the EU, beginning in July 2026, to seek the support of other member states to achieve this.

Government ministers have said they are awaiting legal advice from the Attorney General on AI ‘nudification’ technology, specifically on whether additional laws are required to criminalise the generation of such imagery and not just the people who use the technology.

TRAUMA TO CHILDREN 

In a statement from CHI, the Alders Unit said these tools pose significant risks to children and young people, from psychological distress to long-term trauma.

It noted government discussions on online safety, but called for “urgent regulatory action, stronger accountability for technology platforms, and a co-ordinated societal response” involving health, education, child protection and digital safety sectors.

Keith O'Reilly, director at the Alders Unit, CHI at Connolly, in Blanchardstown, Dublin, said: “Any child, young person or adult could be impacted by this misuse of technology. Children deserve online environments that are safe by design and that protect their wellbeing, rights and dignity.”

Brigid Maxell, interim director at the Alders Unit, CHI at Tallaght, Dublin, added: “The creation of AI functions that enable the nudification or production of sexual images of children and adults further erodes societal norms regarding child sexual abuse and sexual violence.

“The online availability of such tools, often without safeguards and in contexts where accountability appears limited, is a matter of serious concern.”

TOTAL BAN 

The CHI statement said: “The Alders Unit, as a specialist service in child sexual abuse, supports efforts to uphold children’s rights and calls for a total ban on AI-based functions capable of producing deepfake sexual images of children and adults.

“The harms of online sexual violence are equivalent to that which is perpetrated face-to-face. Such harms can result in long-term re-victimisation, as images remain accessible to potential offenders over time.” 

One 13-year-old girl attending the service said: 

I don’t think you should be able to do it, I think it should be illegal. It’s bad, it’s weird, and some weird guys would want to use that, and they might use it on girls who don’t want to be used in that way.

The statement said parents and guardians play a crucial role in helping to keep their children safe online.

It added: “Talk openly with your child about the risks of sharing images and encourage them to come to you if they see or experience something worrying. If you become aware that your child's images have been 'nudified', this should be reported to An Garda Síochána or Tusla.” 

The Alders Unit works with children and young people aged three to 18 years living in Dublin, Kildare, Cavan, Monaghan, Louth, Meath and Wicklow, as well as their parents or carers.

Its multidisciplinary team includes social workers, psychotherapists, psychologists, and administrators.

In addition to Grok, there have also been calls for urgent action on other AI tools, including so-called 'AI-girlfriend' websites and apps, which allow the production of graphic deepfake pornography of adult women and child sexual abuse material (CSAM).

STATE AI COUNCIL 

Separately, the Government’s AI Advisory Council has said AI technology that digitally ‘nudifies’ women and creates CSAM needs to be banned.

The council recommended to the Government that it use its presidency of the EU, starting in July 2026, to work with other member states to make the necessary legislative changes.

It said Irish law criminalises the “production and/or sharing” of CSAM and separately criminalises the “publication and distribution” of non-consensual intimate images.

It said the “most effective response” to the emergence of technologies which allow for the production and publication of intimate images and CSAM at scale is a harmonised EU-wide approach. It said the EU AI Act “does not address AI technologies being used in this way”.

The statement said: “The AI Advisory Council is of the view AI generation of non-consensual intimate images and CSAM should be prohibited across the EU, as a prohibited practice pursuant to Article 5 of the AI Act.” 

It said Article 112(1) of the AI Act requires the European Commission to carry out an annual assessment of whether the Article 5 prohibited practices list should be amended, and to report its findings to the European Parliament and the Council.

It said this creates a formal mechanism to keep the AI Act’s bans aligned with technological and societal developments.

The statement said: “The AI Advisory Council is of the view that the Irish Government should use its assumption of the EU Presidency in the latter half of 2026 to work with the other EU Member States and encourage the amendment of Article 5 of the AI Act pursuant to the Article 112(1) mechanism, in order to prohibit AI practices which permit users to generate non-consensual intimate images and CSAM.”

GROK ‘UNCHANGED’ 

Meanwhile, the Guardian has reported that X has continued to allow users to post highly sexualised videos of women in bikinis generated by its AI tool Grok, despite the company’s claim to have cracked down on misuse.

The Guardian said it was able to use photographs of fully clothed, real women to create short videos of them stripping to bikinis.

It said it was also possible to post this adult content onto X’s public platform without any sign of it being moderated, meaning the clip could be viewed within seconds by anyone with an account.

The Guardian said this appeared to offer a straightforward workaround to restrictions announced by Elon Musk’s social network this week.

260-FOLD RISE IN AI IMAGES 

Separately, the British Internet Watch Foundation (IWF) warned on Friday that AI tools will become “child sexual abuse machines” without urgent action, saying that “extreme” AI videos were fuelling record levels of child sexual abuse material found online.

It said new data it had gathered showed that 2025 was the “worst year on record” for online CSAM, with a 260-fold increase in photo-realistic AI material.

It said analysts have seen a “frightening” 26,362% rise in photo-realistic AI videos of child sexual abuse, often including real and recognisable child victims.

In 2025, the IWF discovered 3,440 AI videos of child sexual abuse compared to only 13 in 2024.

The IWF said: “Criminals are using the improving technology to create more of the most extreme Category A imagery (material which can even include penetration, bestiality, and sexual torture).

“Of all the AI-generated videos of child sexual abuse discovered by the IWF in 2025, 65% (or 2,233 videos) were so extreme they were categorised as Category A.”
