UK Technology Firms and Child Safety Officials to Examine AI's Ability to Generate Exploitation Images
Technology companies and child protection organizations will be granted permission to evaluate whether artificial intelligence systems can produce child exploitation images under recently introduced UK legislation.
Significant Increase in AI-Generated Harmful Material
The announcement came alongside figures from a safety watchdog showing that reports of AI-generated child sexual abuse material (CSAM) have increased dramatically in the last twelve months, more than doubling from 199 in 2024 to 426 in 2025.
New Legal Framework
Under the amendments, the authorities will permit designated AI companies and child safety groups to examine AI models – the foundational technology behind conversational AI and image-generation tools – and verify that they have adequate safeguards to prevent them from creating depictions of child exploitation.
The measures are "ultimately about stopping abuse before it happens," stated the minister for AI and online safety, adding: "Experts, under strict conditions, can now detect the risk in AI models early."
Addressing Legal Obstacles
The changes address a legal obstacle: because it is illegal to produce and possess CSAM, AI developers and others could not generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM appeared online before they could act.
The law is designed to prevent that issue by helping to halt the production of such images at source.
Legal Structure
The changes are being introduced as amendments to the criminal justice legislation, which also implements a ban on possessing, creating or distributing AI systems designed to generate exploitative content.
Practical Impact
Recently, the official visited the London headquarters of Childline and listened to a mock-up call to counsellors featuring an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.
"When I learn about young people experiencing blackmail online, it is a source of intense anger in me and justified anger amongst families," he stated.
Alarming Data
A prominent internet monitoring organization said that reports of AI-generated exploitation content – each report covering a web page that may contain multiple images – had significantly increased so far this year.
- Instances of category A content – the gravest form of abuse – rose from 2,621 images or videos to 3,086
- Girls were predominantly targeted, making up 94% of prohibited AI depictions in 2025
- Portrayals of infants up to two years old increased from five in 2024 to 92 in 2025
Industry Response
The legislative amendment could "constitute a vital step to guarantee AI tools are safe before they are launched," stated the chief executive of the online safety foundation.
"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few simple actions, giving offenders the ability to create potentially limitless amounts of advanced, lifelike child sexual abuse material," she continued. "Material which additionally commodifies victims' suffering, and makes children, especially girls, less safe on and off line."
Support Session Data
The children's helpline also released details of support sessions in which AI was mentioned. AI-related risks raised in these sessions include:
- Using AI to rate weight, body and appearance
- Chatbots discouraging children from talking to trusted adults about abuse
- Online bullying and harassment involving AI-generated material
- Online blackmail using AI-manipulated images
Between April and September this year, Childline delivered 367 support sessions in which AI, conversational AI and related topics were discussed, significantly more than in the equivalent period last year.
Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.