AI-generated child abuse imagery will proliferate amid rising extortion of minors, report warns
Europol's executive director Catherine De Bolle said the number of cybercriminals entering the market continued to grow steadily last year.
Online sexual extortion of minors is a “rising threat” and AI-generated imagery will further proliferate in the near future, according to a Europol cybercrime report.
The EU police coordination agency said that AI-assisted cybercrime has “only just begun” — not just in the creation of child abuse imagery but also in sophisticated online scams.
Europol’s Internet Organised Crime Threat Assessment 2024 also said that small and medium businesses were “increasingly popular targets” for ransomware attacks by cybergangs.
The agency’s executive director, Catherine De Bolle, said the number of cybercriminals entering the market continued to grow steadily last year, driven by the adoption of new technologies and the increasing complexity of digital infrastructures.
The report said the “growing volume” of child abuse imagery online was posing difficulties for law enforcement.
“Child sexual abuse material keeps proliferating online,” it said. “The production and dissemination of [the material] remains a major concern, with a large portion of material detected now identified as self-generated explicit material.”
It said this material, created by children, especially teenagers, is voluntarily shared among peers and becomes child sexual abuse material when it is passed on to others without the permission of the person who first sent it.
The material can also be obtained by perpetrators through online grooming and sexual extortion.
“In this setting, the perpetrator identifies the victim online, often on gaming platforms or social media,” the report said.
The perpetrators sometimes pretend to be peers seeking a romantic relationship, it said, then turn to blackmail once they receive the first explicit image from the minor.
The report said there are also online groups sharing violent and sexual content, often hosted on end-to-end encrypted communication platforms.
The report said that live-distant child abuse was a “persistent threat”, in which offenders watch child sexual abuse on demand, with one or more paid facilitators perpetrating the abuse on a victim.
It said advances in AI meant the child sexual abuse material created “increasingly resembles genuine material”.
The report said this poses “great challenges” to police. It added that the production of this material does not require high levels of technical expertise, potentially broadening the number and spectrum of perpetrators.
The report said AI could allow predators to overcome language barriers when targeting children and to create fake content depicting a target in order to extort them.





