Campaigners have issued a stern warning after AI was used to create thousands of child sexual abuse videos last year, amid record levels of distressing material found online.
The Internet Watch Foundation (IWF) revealed that its analysts found 3,440 AI-generated videos depicting child sexual abuse in 2025, a significant increase from the 13 found in 2024.
Overall, IWF staff processed 312,030 confirmed reports of graphic abuse found on the internet in 2025, up from 291,730 the previous year.
Their research showed that of the 3,440 AI-generated videos, 2,230 fell into Category A, the most extreme classification under UK law, and a further 1,020 fell into the second most serious category.
Kerry Smith, chief executive of the IWF, said: “When images and videos of children who have been sexually abused are circulated online, everyone, especially those children, becomes less safe.
“Our analysts work tirelessly to remove these images and give victims some hope. But now artificial intelligence has advanced to the point where criminals can essentially have their own child sex abuse machines to produce whatever they want to see.
“The alarming rise in AI-generated extreme category A videos of child sexual abuse shows what criminals want. And it’s dangerous.
“The ease with which this material is available only emboldens those with a sexual interest in children, fuels its commercialization and further harms children both online and offline.
“Governments around the world must now ensure that AI companies embed security through design principles from the outset. It is unacceptable to release technology that allows criminals to create this kind of content.”
The research comes as X announced it would limit the ability of its artificial intelligence chatbot Grok to manipulate images, following reports that users could instruct it to sexualize images of women and children, sparking an outcry.
The company said earlier this week it would block Grok from “editing images of scantily clad people” and block users from generating similar images of real people in countries where it’s illegal.
Technology minister Liz Kendall said she still wanted regulator Ofcom to establish the facts “fully and robustly”. While the regulator welcomed the new restrictions, it said its investigation would continue in order to seek “answers to the issues and the steps being taken to address them”.
The IWF has previously said it wants all “nudification” software banned, arguing that AI companies need to ensure the safety of tools before they are made available and insisting that governments should make this mandatory.
Children’s charity the NSPCC said the IWF’s findings were “both deeply shocking and tragically unsurprising”.
Its chief executive, Chris Sherwood, said: “Criminals are using these tools to create extreme materials on a scale we have never encountered before, and children are paying the price.
“Tech companies cannot continue to release AI products without putting in place important safeguards. They know the risks and they know the harm that can be caused. They have a responsibility to ensure that their products are never used to create indecent images of children.
“The UK government and Ofcom must now step in and ensure technology companies are held to account.
“We call on Ofcom to use all the tools provided in the Online Safety Act, and on the government to introduce a statutory duty of care so that AI services build child safety into the design of their products and prevent these horrific crimes.”
Ms Kendall called it “absolutely abhorrent that AI is being used to target women and girls” and insisted the government “will not tolerate this technology being weaponized to cause harm, which is why I have accelerated action to ban the creation of non-consensual AI-generated intimate images”.
She added: “AI should be a force for progress, not a force for abuse, and we are determined to support the responsible use of AI to drive growth, improve lives and deliver real benefits, while taking action where AI is being misused.
“That’s why we have launched a world-leading offense targeting artificial intelligence models that have been trained or adapted to generate child sexual abuse material. It will soon become a crime to possess, provide or modify these models.”
The Lucy Faithfull Foundation, which works with offenders to help them stop viewing child abuse images, said the number of people using artificial intelligence to view and create abuse images had also doubled last year.
Young people concerned about indecent images of themselves being shared online can use the free report removal tool at childline.org.uk/remove
Safeguarding Minister Jess Phillips said: “The rise in AI-generated child abuse videos is alarming – this government will not stand by and allow predators to generate this abhorrent content.”
She added: “Tech companies have no more excuses. Take action now or we will force you to do so.”
