Global outrage over Grok AI images on X puts Irish and EU regulators under pressure in their first major test

The Grok AI scandal has exposed how slow regulation, platform design choices and weak enforcement collide with devastating consequences


Before Christmas, the EU’s tech chief said the European Commission was “holding X responsible for undermining users’ rights and evading accountability” when issuing it a €120m fine for breaching the law.

Also before Christmas, Ireland’s digital services commissioner told the Irish Examiner that the laws now in place, and the systemic issues his office is tackling, are all about “tipping the balance back in favour of the user” when dealing with social media sites.

But, as 2026 began, the lofty ambitions of the authorities in Ireland and Europe faced what advocates have called “the first real test” of their powers, as that aim of putting power back with the individual was undermined in a way that sparked a global outcry.

For reasons that are not yet clear, Grok, the AI tool on the Elon Musk-owned social media platform X, last month started to comply with requests to take existing photos of women and children and generate new images of them in bikinis, in sexually suggestive poses, or covered in substances resembling semen.

For a week or so, you could look at the comments under any widely seen post on X featuring a photo of a woman and find anonymous users with blue check marks asking Grok to generate new sexualised images.

These vile requests appeared to be carried out with impunity, and users got what they asked for.

Elon Musk’s ex-partner, and mother of one of his children, had such images generated of her. 

A 12-year-old Stranger Things actress had an image of her altered to put her in a bikini.

British TV presenter Maya Jama put it succinctly in a now-deleted post on X, responding to a photo of her and the comments underneath.

“This came up on my timeline and I know I shouldn’t look but all of the Grok asks underneath... Some of you are real sick bastards and should actually try going out sometimes,” she said.

After years of advocates warning about the dangers of ‘nudify’ apps, such non-consensual image creation and sharing has now gone mainstream.

‘Really dangerous’ 

The damage caused by such fake images is clear, with Women’s Aid chief executive Sarah Benson describing it as another tool in the arsenal available to those who want to abuse women.

“Its reach is endless,” she said. 

Women’s Aid chief executive Sarah Benson

“The consequences of this behaviour are serious. Whether the image is real or not, it looks real. 

"Women have given up their jobs (over such abuse). Stepped back from public life. There has been suicidal ideation in some cases and children have died by suicide.

“Something like this reaches into every part of someone’s life. The emotional weight of it is terrible.” 

Alex Cooney, the chief executive of CyberSafeKids, said it was clear this should not be allowed on a platform that is meant to be regulated, both at home and at European level.

It constitutes an “egregious breach of privacy”, she said.

“They are dangerous,” she said of the images being generated. 

“Really dangerous. They’re sexual exploitation, they have real victims, and real harm. It’s shaping sexual expectations and sexual narratives.” 

Initially, Mr Musk expressed amusement at the trend, posting the crying-with-laughter emoji in response to a picture of a digitally manipulated toaster wearing a bikini. 

He said: “Not sure why, but I couldn’t stop laughing at this one.” 

After global outcry at the harmful nature of the content, he posted later that “anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content”.

Last week, X users began to notice that by going to Grok’s X page (the same way you might go to your own page or a page belonging to a politician or a public figure), you could see all the images it was generating under public posts by simply clicking the “photos” tab on that page.

It would take you a very long time just to scroll through a single minute’s worth of AI-generated content.

And, overwhelmingly, it was AI-generated photos of women in varying states of undress.

The pockets of anger among users at individual images being generated swelled into widespread fury when the extent of the trend could be seen all at once.

Someone at X evidently saw it as a problem, at least to some extent.

But, instead of disabling the undressing feature, X changed the photos tab on Grok’s page so that it showed an error message instead of the pictures.

All of these images were being generated, but there was no one place to see them all in real time anymore.

“Instead of stopping Grok doing it, it’s shielding it from some potential audiences,” Ms Benson said, likening it to a meeting she once attended with Mindgeek, now Aylo, which owns some of the world’s most used pornographic websites.

She said the company showed the steps it was taking to prevent specific content, such as rape, from being searched for on its platforms, but that searching for related words could still yield results.

“It was just the search function, there was nothing to address the actual content itself,” she said, likening it to the X and Grok situation.

For its part, an X spokesperson has said: “We take action against illegal content on X, including child sexual abuse material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary.

"Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content."

Regulation and political will 

Since the "trend" emerged, politicians have lined up to condemn the practice.

Taoiseach Micheál Martin called it “unacceptable” and “shocking”. 

TĂĄnaiste Simon Harris said that any Irish laws that need to be updated to crack down on the creation and sharing of explicit images will be implemented.

AI minister Niamh Smyth sought a meeting with X to discuss the matter, emphasising that both the sharing of non-consensual intimate images and the generation of child sexual abuse material are illegal.

X owner Elon Musk.

Social Democrat TD Gary Gannon said this feature on X was not “accidental or incidental” but the “foreseeable and preventable outcome of design choices”.

“There must be consequences,” he said. 

“Strong fines, criminal charges, where appropriate, and serious scrutiny of any public body that continues to maintain a presence on a platform that has shown such a reckless and disturbing disregard for child safety.” 

Organisations including Women’s Aid and CyberSafeKids have also ceased using X as a result.

CoimisiĂșn na MeĂĄn, when first contacted by this publication on 2 January about the matter, said that the sharing of non-consensual intimate images was illegal and should be reported to gardaĂ­ and the Irish national reporting centre, Hotline.ie.

By the following week, it said it was in contact with the European Commission on the matter, while digital services commissioner John Evans said his office was also in contact with gardaí and that breaches of the law could lead to hefty fines for X.

In a lengthy statement, gardaí said they are “acutely aware of the proliferation of AI generated online material” and urged those affected to come forward.

The pressure appeared finally to tell on X and Grok by Friday, as the function was turned off for the vast majority of users, with image generation and editing made available only to paid subscribers. But it took global condemnation and the threat of significant fines to get there.

Ms Cooney said that regulators here need to ensure they keep up the pressure on such platforms, especially when breaches seem so obvious and egregious. 

“It’s incumbent on our regulator, it shouldn’t take a long time to establish this is illegal,” she said. 

“We have all these laws. This is clearly in breach of Digital Services Act in terms of protection of minors. It shouldn’t be that hard.” 

Ms Benson said there are legislative responses that can be made here but, in terms of the non-consensual sharing of images being illegal, the inclusion of deepfakes within the legislation “hasn’t been tested yet”.

“This is high stakes,” she said. 

“It can be used to destroy a person’s reputation. In Europe, we have the best opportunity to model accountability at that level, but it’s moving too slowly. There’s hugely competing vested interests.” 

When X was fined more than €100m by the European Commission in December, Mr Musk called for the European Union to be disbanded and received support from the highest levels of the Trump administration, which continues to show disdain for the EU.

But, when it comes to his platform having allowed its AI tool to create images of children in bikinis, Mr Musk may well find himself with less support than before.

Regulators in France, the UK, and India, among others, have put pressure on X as global outrage has ramped up.

Both Ms Benson and Ms Cooney said there was no good reason for technology that can remove clothes from pictures of individuals to exist, and that it should be banned outright.

“Regulation is too slow, and there’s too much optionality in the language of them,” Ms Cooney said. 

“This is clear, it’s black and white. It should not be happening. Ban these tools. They serve no good purpose.

“This is the first real test of the regulator’s powers. We cannot be seen to be prevaricating in any way. What are we going to do in the short term to ensure this is stopped?

“If we mess up on this one when it’s so clear cut, it’ll be so much tougher next time when it’s a much greyer area we’re trying to tackle.” 

Additional reporting from The Guardian
