Tech firms and child safety agencies will be given the authority to test whether artificial intelligence systems can generate child exploitation material under new British laws.
The announcement came as a child protection monitoring body revealed that cases of AI-generated CSAM have risen sharply in the past year, from 199 in 2024 to 426 in 2025.
Under the amendments, the government will permit approved AI developers and child safety organizations to inspect AI models – the underlying systems behind chatbots and image generators – and ensure they have adequate safeguards to stop them from creating depictions of child exploitation.
"Ultimately, this is about preventing exploitation before it occurs," stated Kanishka Narayan, adding: "Specialists, under rigorous protocols, can now detect the risk in AI models early."
The amendments were introduced because it is illegal to produce and possess CSAM, meaning that AI developers and other parties could not generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM had been published online before acting on it.
The law is designed to prevent that problem by stopping the creation of such images at source.
The authorities are adding the changes as amendments to the criminal justice legislation, which also introduces a ban on possessing, producing or sharing AI systems designed to create child sexual abuse material.
Recently, the minister visited the London headquarters of Childline and listened in on a simulated call to counsellors featuring an account of AI-based abuse. The call depicted a teenager seeking help after being blackmailed with an explicit AI-generated image of himself.
"When I learn about children experiencing extortion online, it is a source of extreme anger in me and justified anger amongst families," he said.
A leading online safety organization reported that instances of AI-generated exploitation content – such as webpages that may include multiple files – had significantly increased so far this year.
Instances of category A material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.
The legislative amendment could "constitute a vital step to guarantee AI tools are safe before they are launched," stated the head of the online safety organization.
"AI tools have made it possible for survivors to be targeted repeatedly with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, photorealistic exploitative content," she continued. "Content which further commodifies victims' suffering, and makes children, especially girls, more vulnerable both on and offline."
Childline also published details of counselling sessions in which AI was mentioned.
Between April and September this year, Childline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned – four times as many as in the same period last year.
Half of the references to AI in the 2025 sessions concerned mental health and wellbeing, including the use of AI chatbots for support and of AI therapy apps.