Can AI image generators be policed to prevent explicit deepfakes of children? 

As one of the largest ‘training’ datasets has been found to contain child sexual abuse material, Alex Hern asks whether bans on creating such imagery are feasible
Child abusers are creating AI-generated “deepfakes” of their targets in order to blackmail them into filming their own abuse, beginning a cycle of sextortion that can last for years.

In Britain, the Internet Watch Foundation (IWF) said a manual found on the dark web contained a section encouraging criminals to use “nudifying” tools to remove clothing from underwear shots sent by a child. The manipulated image could then be used against the child to blackmail them into sending more graphic content, the IWF said.

© Examiner Echo Group Limited