Gardaí investigating 200 reports of child sexual abuse images generated by Grok
An Garda Síochána is currently investigating 200 child sexual abuse images generated by Grok, the AI chatbot operating on social media platform X.
Detective Chief Superintendent Barry Walsh, of the Garda National Cyber Crime Bureau (GNCCB), told the Oireachtas media committee that there have been 200 reports to date received by gardaí which “are being investigated involving content of child sexual abuse material (CSAM) or indicative of CSAM”.
The Grok chatbot has been at the centre of a backlash for several weeks after it emerged that it had been generating images for X users based on pictures of real people, which were then sexualised.
Amid the resulting firestorm, X has made the image-generation service subscriber-only. However, this is unlikely to stem the tide of produced images.
Supt Walsh told the committee that, while reports have been received by gardaí regarding Grok-generated images, it now has to be shown that the images in question are criminal in nature before further investigation can take place.
“That may lead to the execution of warrants,” he said.
He said that any investigation may not be into X itself, but into the images which have been shared on the platform.
Regarding the investigations into Grok-generated images, the superintendent confirmed that gardaí are acting both on referrals and of the force’s own volition regarding images they have encountered.
He said that the level of referrals for CSAM-related images received by the bureau has been increasing exponentially in recent years. That rise is at least in part attributable to a change in legislation in the US making the reporting of child sexual abuse material mandatory, given most such referrals to gardaí originate with the US National Center for Missing and Exploited Children (NCMEC).
All told, some 25,000 child sexual abuse material images were investigated by gardaí following referrals in 2025, almost double the figure of 13,300 received in 2024, a figure which Fianna Fáil TD Peter “Chap” Cleere said “blows my mind”.
Supt Walsh acknowledged that while some referrals for such material come to the gardaí from the media regulator, Coimisiún na Meán, the vast majority do still come from the American agency, though legislation is currently in train to create “a European NCMEC, for want of a better word for it”.
He admitted that investigations into problematic behaviour online often take a good deal longer than traditional police work, and that the time taken to remove illicit material from a website can vary from “very quickly” to “a lot longer”.
“A very quick prosecution could take a number of months, that’s the reality,” he said, noting that there are often “complexities” to such online cases.
“Often we have to go outside the jurisdiction, that can create a time delay. There are procedures which need to be followed prior to a search. An immediate response is not always possible,” he said.
Regardless, Supt Walsh said Ireland has sufficient legislation currently in place to allow gardaí to investigate the digital crimes they are encountering.
Asked whether or not AI-generated images of a sexual nature, as opposed to photographs taken by humans depicting crimes, are definitely criminal, the superintendent said “my understanding is yes”.
“I haven’t seen any example where it isn’t,” he added.
In his opening statement, Supt Walsh said he believes the issue of AI-generated criminal imagery is unlikely to be confined to Grok alone.
“Conceptually, these are trained models, it seems to me they could be [used in that fashion]”, he said.