British Tech Companies and Child Safety Officials to Test AI's Ability to Generate Exploitation Content
Technology companies and child safety agencies will receive permission to evaluate whether artificial intelligence systems can produce child exploitation material under new British legislation.
Significant Rise in AI-Generated Harmful Material
The announcement followed revelations from a protection monitoring body that reports of AI-generated CSAM have increased dramatically in the past year, more than doubling from 199 in 2024 to 426 in 2025.
Updated Regulatory Framework
Under the changes, the government will allow approved AI companies and child protection organizations to inspect AI models – the underlying technology for chatbots and visual AI tools – and ensure they have sufficient protective measures to prevent them from creating depictions of child exploitation.
"This is fundamentally about preventing abuse before it happens," declared Kanishka Narayan, adding: "Experts, under rigorous conditions, can now identify the risk in AI models early."
Tackling Legal Obstacles
The changes have been introduced because it is against the law to create and possess CSAM, meaning that AI developers and other parties cannot generate such images as part of a testing process. Until now, officials had to wait until AI-generated CSAM was published online before dealing with it.
The new law aims to avert that problem by stopping the creation of such material at its source.
Legislative Framework
The amendments are being added by the authorities as revisions to the crime and policing bill, which is also establishing a ban on possessing, producing or sharing AI models designed to generate child sexual abuse material.
Practical Impact
This week, the official toured the London base of a children's helpline and listened to a simulated call to advisers featuring an account of AI-based exploitation. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of themselves.
"When I hear about young people facing extortion online, it is a source of extreme frustration for me and of rightful concern for families," he stated.
Alarming Data
A leading internet monitoring foundation reported that instances of AI-generated exploitation content – in the form of web pages that may each contain numerous images – had more than doubled so far this year.
Instances of category A material – the most serious form of abuse – increased from 2,621 images or videos to 3,086.
- Girls were overwhelmingly victimized, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The legislative amendment could "constitute a vital step to ensure AI products are safe before they are launched," stated the head of the internet monitoring organization.
"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, giving offenders the capability to make potentially endless amounts of sophisticated, photorealistic exploitative content," she added. "Material which further commodifies survivors' trauma, and makes young people, particularly girls, less safe online and offline."
Support Session Information
The children's helpline also released details of support sessions where AI has been mentioned. AI-related risks mentioned in the sessions include:
- Employing AI to evaluate weight, body and appearance
- AI assistants dissuading young people from talking to trusted guardians about harm
- Being bullied online with AI-generated content
- Digital blackmail using AI-manipulated images
Between April and September this year, the helpline delivered 367 support sessions where AI, chatbots and related terms were mentioned, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including turning to AI chatbots for support and using AI therapy applications.