UCC researchers unveil 'world-first tool' aimed at persuading Grok users to avoid creating explicit images

The UCC researchers said educating internet users to not engage with such AI-generated sexual exploitation must be a part of the response.

A “world-first tool” aimed at reducing the kind of harmful engagement with explicit artificial intelligence (AI)-generated images seen in the Grok controversy has been unveiled by researchers at University College Cork (UCC).

The free 10-minute intervention, dubbed ‘Deepfakes/Real Harms’, has been designed to reduce users’ willingness to engage with harmful uses of deepfake technology, such as creating non-consensual explicit content.

“There is a tendency to anthropomorphise AI technology — blaming Grok for creating explicit images and even running headlines claiming Grok 'apologised' afterwards,” lead researcher John Twomey, from the UCC School of Applied Psychology, said.

“But human users are the ones deciding to harass and defame people in this manner. Our findings suggest that educating individuals about the harms of AI identity manipulation can help to stop this problem at source.”

The issue has come to the fore on a large scale in recent weeks, after the Elon Musk-owned X social media site began complying with requests from users asking its AI tool Grok to manipulate photos of women and children, undressing them, putting them in bikinis, or placing them in sexually suggestive poses.

The UCC researchers said that, against this backdrop, educating internet users to not engage with such AI-generated sexual exploitation must be a part of the response.

They found people’s engagement with non-consensual deepfake imagery was associated with the belief in several myths about deepfakes.

These included the belief that the images are only harmful if viewers think they are real, and that public figures are legitimate targets for the creation of such images.

Their 10-minute intervention focused on encouraging what they called “reflection and empathy with victims of AI imagery abuse”, and the researchers said it significantly reduced belief in common deepfake myths.

It also lowered users’ intention to engage with such harmful uses of deepfake technology, according to the researchers.

The research project’s principal investigator, Gillian Murphy, said referring to such content as “deepfake pornography” was deeply misleading, as pornography generally refers to an industry where participation is consensual.

“What we are seeing is the creation and circulation of non-consensual synthetic intimate imagery, and that distinction matters because it captures the real and lasting harm experienced by victims of all ages around the world," she said.

“This toolkit does not relieve platforms and regulators of their responsibilities in tackling this appalling abuse, but we believe it can be part of a multi-pronged approach.”
