How can technology be used to prevent AI-generated sexual images?
Research has estimated that before Grok's nudification tool was put behind a paywall, it was producing up to 6,700 nudified images every hour.
The global outcry over the sexualisation and nudification of photographs — including of children — by Grok, the chatbot developed by Elon Musk’s artificial intelligence company xAI, has led to urgent discussions about how such technology should be more strictly regulated.
But to what extent can technology also be used to prevent this explosion in the generation and sharing of deepfake content of real people, without their knowledge or consent?