British Tech Companies and Child Safety Agencies to Examine AI's Capability to Generate Abuse Content
Technology companies and child protection organizations will be granted authority to evaluate whether AI tools can generate child abuse material under recently introduced UK legislation.
Significant Increase in AI-Generated Harmful Material
The announcement coincided with revelations from a safety watchdog showing that reports of AI-generated CSAM have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.
New Regulatory Framework
Under the amendments, the government will permit designated AI developers and child protection organizations to inspect AI models – the underlying systems for conversational AI and image generators – and verify they have sufficient protective measures to prevent them from producing depictions of child exploitation.
"Fundamentally about preventing exploitation before it occurs," stated Kanishka Narayan, noting: "Specialists, under rigorous protocols, can now identify the risk in AI systems promptly."
Addressing Legal Obstacles
The amendments were needed because it is against the law to create or possess CSAM, meaning that AI developers and other parties could not generate such content even as part of a testing process. Until now, officials had to wait until AI-generated CSAM was uploaded online before addressing it.
This legislation is designed to prevent that problem by helping to stop the creation of those images at the source.
Legislative Structure
The authorities are introducing the changes as amendments to criminal justice legislation, which will also prohibit possessing, producing or distributing AI models designed to generate child sexual abuse material.
Real-World Consequences
The minister recently visited Childline's London base and listened to a mock-up call to counsellors featuring an account of AI-based exploitation. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.
"When I learn about children facing extortion online, it is a cause of intense anger in me and justified concern amongst families," he said.
Concerning Data
A prominent internet monitoring foundation reported that instances of AI-generated exploitation material – where a single report can cover an online page containing numerous files – have significantly increased so far this year.
Instances of the most severe category of exploitation rose from 2,621 visual files to 3,086.
- Girls were predominantly victimized, accounting for 94% of illegal AI images in 2025
- Portrayals of newborns to two-year-olds rose from five in 2024 to 92 in 2025
Sector Reaction
The legislative amendment could "constitute a crucial step to guarantee AI tools are secure before they are launched," commented the head of the online safety foundation.
"Artificial intelligence systems have made it so victims can be targeted all over again with just a simple actions, providing offenders the ability to make possibly endless quantities of sophisticated, lifelike child sexual abuse material," she continued. "Content which additionally exploits victims' trauma, and renders children, especially girls, more vulnerable both online and offline."
Counselling Session Information
The children's helpline also published details of support interactions in which AI was referenced. AI-related harms discussed in the conversations included:
- Employing AI to evaluate weight, physique and appearance
- Chatbots dissuading young people from talking to trusted adults about abuse
- Being bullied online with AI-generated material
- Online extortion using AI-manipulated pictures
Between April and September this year, the helpline conducted 367 counselling interactions where AI, conversational AI and related terms were mentioned, four times as many as in the same period last year.
Half of the AI references in the 2025 interactions related to mental health and wellbeing, including using chatbots for support and AI therapy applications.