UK seeks to curb AI child sex abuse imagery with tougher testing

Liv McMahon, Technology reporter

A man sits in front of a computer in the dark, his silhouette illuminated by the light of the screen. (Getty Images)

The UK government will allow technology firms and child safety charities to actively test artificial intelligence (AI) tools to ensure they cannot create images of child sexual abuse.

An amendment to the Crime and Policing Bill announced on Wednesday will allow “authorized testers” to evaluate models for their ability to generate unlawful child sexual abuse material (CSAM) before they are released.

Technology Minister Liz Kendall said the measures would “secure AI systems at their source”, although some campaigners say more needs to be done.

This comes after the Internet Watch Foundation (IWF) said reports of AI-generated CSAM have more than doubled over the past year.

The charity, one of the few in the world licensed to actively search for child abuse content online, said it removed 426 reported items between January and October 2025.

That figure was up from 199 in the same period of 2024, the department said.

Its chief executive, Kerry Smith, welcomed the government's proposals, saying they would build on the charity's long-standing efforts to combat online CSAM.

“AI tools have made it possible for survivors to become victims again with just a few clicks, giving perpetrators the ability to create a potentially unlimited amount of sophisticated, photorealistic child sexual abuse material,” she said.

“Today’s announcement could be a vital step in ensuring the safety of artificial intelligence products before they are released.”

Rani Govender, child safety online policy manager at children's charity NSPCC, welcomed the measures as a way to put more accountability on firms to check their models for child safety risks.

“But to make a difference for children, it can't be optional,” she said.

“The government should ensure that AI developers are required to use this provision so that protection against child sexual abuse becomes an integral part of product design.”

“Ensuring the safety of children”

The government said its proposed changes to the law would also allow AI developers and charities to check that AI models have adequate safeguards against generating extreme pornography and non-consensual intimate images.

Child safety experts and organizations often warn that artificial intelligence tools, trained in part on vast amounts of varied online content, are being used to create highly realistic abuse images of children or of non-consenting adults.

Some, including the IWF and child safety charity Thorn, say this risks jeopardizing efforts to police such material by making it difficult to determine whether content is real or AI-generated.

Researchers suggest that demand for these images online is growing, especially on the dark web, and that some of them are created by children.

Earlier this year, the Home Office said the UK would become the first country in the world to ban the possession, creation or distribution of artificial intelligence tools designed to create CSAM, with penalties of up to five years in prison.

Ms Kendall said on Wednesday that “by giving trusted organizations the opportunity to scrutinize their AI models, we are ensuring that children's safety is built into AI systems and not included as an afterthought.”

“We will not allow technological advances to outpace our ability to keep children safe,” she said.

Safeguarding Minister Jess Phillips said the measures would also “mean that legitimate artificial intelligence tools cannot be manipulated to create disgusting material, and more children will be protected from predators as a result.”
