AI used to generate thousands more child abuse videos in 2025, campaigners warn
The Internet Watch Foundation (IWF) said its analysts found 3,440 AI-generated videos of child sexual abuse in 2025, compared to 13 in 2024.
Paedophiles and other criminals used AI to generate thousands more child sexual abuse videos last year, contributing to record levels of the harrowing material found online, campaigners have warned.
In total, IWF staff dealt with 312,030 confirmed reports of abuse images found online in 2025, up from 291,730 the previous year.
Its research suggests that of the 3,440 AI-generated videos, 2,230 were the most extreme category under UK law, category A, and 1,020 were the second most extreme.
Kerry Smith, IWF chief executive, said: "When images and videos of children suffering sexual abuse are distributed online, it makes everyone, especially those children, less safe.
"Our analysts work tirelessly to get this imagery removed to give victims some hope. But now AI has moved on to such an extent, criminals essentially can have their own child sexual abuse machines to make whatever they want to see.
"The frightening rise in extreme category A videos of AI-generated child sexual abuse shows the kind of things criminals want. And it is dangerous.
"Easy availability of this material will only embolden those with a sexual interest in children, fuel its commercialisation and further endanger children both on and offline.
"Now governments around the world must ensure AI companies embed safety by design principles from the very beginning. It is unacceptable that technology is released which allows criminals to create this content."

The research comes as X announced limits on its AI chatbot Grok's ability to manipulate images, following an outcry over reports that users were able to instruct it to sexualise images of women and children.
The company said earlier this week that it would prevent Grok "editing images of people in revealing clothes" and block users from generating similar images of real people in countries where it is illegal.
Technology Secretary Liz Kendall said she still expects the regulator Ofcom to "fully and robustly" establish the facts. Ofcom welcomed the new restrictions but said its investigation will continue as it seeks "answers into what went wrong and what's being done to fix it".
The IWF has previously said it wants all nudifying software banned, argues that AI companies need to make tools safer before they are made available, and has insisted the Government should make this mandatory.
Children's charity the NSPCC said the IWF's findings were "both deeply alarming and sadly predictable".
Its chief executive, Chris Sherwood, said: "Offenders are using these tools to create extreme material at a scale we've never faced before, with children paying the price.
"Tech companies cannot keep releasing AI products without building in vital protections. They know the risks and they know the harms that can be caused. It is up to them to ensure their products can never be used to create indecent images of children.
"The UK Government and Ofcom must now step in and ensure tech companies are held to account.
"We are calling on Ofcom to use every tool available to them through the Online Safety Act, and for Government to introduce a statutory duty of care to ensure generative AI services are required to build children's safety into the design of their products and prevent these horrific crimes."

Ms Kendall branded it "utterly abhorrent that AI is being used to target women and girls", and insisted the Government "will not tolerate this technology being weaponised to cause harm, which is why I have accelerated our action to bring into force a ban on the creation of non-consensual AI-generated intimate images".
She added: "AI should be a force for progress, not abuse, and we are determined to support its responsible use to drive growth, improve lives and deliver real benefits, while taking action where it is misused.
"That is also why we have introduced a world-leading offence targeting AI models trained or adapted to generate child sexual abuse material. Possessing, supplying or modifying these models will soon be a crime."

The Lucy Faithfull Foundation, which works to support offenders to stop viewing images of child abuse, said it has also seen the number of people using AI to view and make abuse images double in the last year.
Young people who are worried that indecent images of them have been shared online can use the free Report Remove tool at childline.org.uk/remove.

Minister for Safeguarding Jess Phillips said: "This surge in AI-generated child abuse videos is horrifying – this Government will not sit back and let predators generate this repulsive content."

She added: "There can be no more excuses from technology companies. Take action now or we will force you to."