UK Technology Firms and Child Safety Officials to Examine AI's Capability to Generate Abuse Content
Tech firms and child protection agencies will be granted permission to assess whether AI systems can generate child abuse images under recently introduced UK legislation.
Significant Rise in AI-Generated Harmful Content
The announcement coincided with findings from a protection watchdog showing that reports of AI-generated CSAM have increased dramatically in the last twelve months, growing from 199 in 2024 to 426 in 2025.
Updated Legal Structure
Under the changes, the government will permit approved AI companies and child protection groups to examine AI models – the foundational systems for conversational AI and visual AI tools – and verify they have adequate safeguards to prevent them from producing depictions of child sexual abuse.
"This is fundamentally about preventing abuse before it occurs," the minister for AI and online safety declared, noting: "Specialists, under strict protocols, can now detect the danger in AI systems early."
Tackling Regulatory Obstacles
The amendments have been introduced because it is illegal to create and possess CSAM, meaning that AI developers and others cannot generate such content as part of a testing regime. Until now, officials could not act until AI-generated CSAM had already been published online.
This law is designed to prevent that problem by enabling the creation of such material to be halted at its source.
Legislative Structure
The government is introducing the changes as revisions to the criminal justice legislation, which will also prohibit possessing, creating or sharing AI models developed to generate exploitative content.
Practical Consequences
This week, the minister visited the London headquarters of a children's helpline and listened to a mock-up call to advisors featuring a report of AI-based abuse. The scenario depicted a teenager seeking help after being extorted with an explicit AI-generated image of himself.
"When I hear about children facing extortion online, it causes me extreme frustration, and families justified concern," he stated.
Alarming Data
A prominent online safety foundation stated that instances of AI-generated exploitation material – such as webpages that may contain multiple files – had significantly increased so far this year.
Instances of the most severe material – the gravest form of abuse – rose from 2,621 images or videos to 3,086.
- Female children were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of newborns to toddlers increased from five in 2024 to 92 in 2025
Industry Reaction
The law change could "represent a vital step to ensure AI products are safe before they are launched," commented the head of the online safety organization.
"Artificial intelligence systems have made it so victims can be victimised repeatedly with just a few clicks, providing offenders the capability to create possibly limitless quantities of sophisticated, lifelike child sexual abuse material," she added. "Content which further exploits survivors' trauma, and makes children, especially girls, more vulnerable both online and offline."
Counselling Session Details
Childline also published details of support sessions where AI has been referenced. AI-related harms mentioned in the conversations include:
- Using AI to evaluate weight, body and appearance
- AI assistants dissuading children from talking to safe adults about harm
- Facing harassment online with AI-generated content
- Digital extortion using AI-manipulated pictures
Between April and September this year, the helpline delivered 367 support sessions where AI, chatbots and related terms were discussed, four times as many as in the equivalent timeframe last year.
Half of the mentions of AI in the 2025 sessions were connected with mental health and wellbeing, including using chatbots for support and AI therapy applications.