A new law will aim to crack down on “abhorrent” sexual abuse deepfakes after reports of AI-generated child sexual abuse imagery doubled compared with the previous year.
The new law, which the government will introduce on Wednesday as an amendment to the Crime and Policing Bill, will require artificial intelligence models to have safeguards in place to ensure the technology cannot be used to create child sexual abuse material (CSAM).
This comes as data from a child abuse charity Internet Watch Foundation (IWF) shows that reports of AI-generated CSAM have doubled over the past year – from 199 in 2024 to 426 in 2025.
The charity said there had been a “disturbing” increase in AI-generated images of the youngest children, with depictions of 0-2-year-olds rising from five to 92 over the past year.
In its ‘Trends of AI-generated CSAM’ research, IWF found that reported AI-generated content is also becoming more extreme. Category A images – the most serious type that include penetrative sexual activity, sexual activity with an animal or sadistic images – now make up more than half of content, compared to 41 per cent last year.
It said girls were “overwhelmingly” targeted, with 94 per cent of illegal AI-generated images depicting them.
The charity welcomed the government’s announcement, which it said was an “important step” towards ensuring that AI products are safe before they are released.
Kerry Smith, chief executive of IWF, said: “AI tools have made it so that survivors can be re-victimized with just a few clicks, giving criminals the ability to create potentially unlimited amounts of sophisticated, photorealistic child sexual exploitation material.
“New technology needs to have security built into it by design. Today’s announcement can be an important step forward in ensuring that AI products are safe before they are released.”
The proposed new rules will allow the technology and home secretaries to designate “authorized testers”, including AI developers and child protection organizations like IWF. The government said these bodies will be “empowered” to check AI models to ensure that people exploiting children cannot misuse them.
Currently, developers are unable to carry out safety testing on AI models because creating the images involved is itself illegal. This means that images can only be removed after they have been created and shared online.
Last year, 27-year-old Hugh Nelson was jailed for 18 years in what was described as a “historic sentence” after using the AI modelling software Daz 3D to transform ordinary photographs of real children into pornographic images.
Taking commissions from online predators, Nelson created hundreds of illegal images using a plugin that allowed him to transfer real faces onto an AI model.
As part of the new law, the government said it will also create a group of experts in AI and child safety to design necessary safeguards to protect sensitive data and prevent any risk of leaking of illegal content.
Technology Secretary Liz Kendall said the government would “not allow” technological advances to outpace the safety of children.
“These new laws will ensure that AI systems are made secure at the source, preventing vulnerabilities that could put children at risk,” she said. “By empowering trusted organizations to vet AI models, we are ensuring that child safety is designed into AI systems, not an afterthought.”
Minister for Safety and Violence Against Women and Girls Jess Phillips said the measures would stop legitimate AI tools being used to create “hateful” content.