UK Technology Firms and Child Safety Officials to Test AI's Capability to Create Exploitation Content

Technology companies and child protection organizations will receive permission to evaluate whether artificial intelligence tools can generate child abuse images under recently introduced UK legislation.

Substantial Increase in AI-Generated Illegal Material

The announcement came alongside figures from a safety watchdog showing that reports of AI-generated CSAM have risen sharply in the past year, from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, designated AI companies and child safety organizations will be permitted to inspect AI models – the underlying technology behind conversational AI and image generators – and ensure they have adequate safeguards to prevent them from producing depictions of child exploitation.

The minister for AI and online safety said the measures were "fundamentally about stopping exploitation before it occurs", adding: "Experts, under rigorous conditions, can now identify the danger in AI models promptly."

Addressing Regulatory Obstacles

The changes have been introduced because creating and possessing CSAM is against the law, meaning that AI developers and other parties cannot generate such images as part of a testing process. As a result, authorities previously could not act until AI-generated CSAM had already been published online.

This legislation aims to prevent that problem by enabling experts to halt the creation of such images at the source.

Legal Structure

The government is introducing the changes as amendments to the crime and policing bill, which also establishes a ban on possessing, producing or distributing AI systems designed to generate exploitative content.

Real-World Consequences

Recently, the minister visited the London headquarters of a children's helpline and listened to a mock-up of a call to counsellors involving a report of AI-based abuse. The role-play depicted a teenager seeking help after being blackmailed with an explicit deepfake of themselves, created using AI.

"When I hear about children experiencing extortion online, it is a source of intense anger in me, and of justified anger amongst families," he stated.

Concerning Data

A prominent online safety foundation reported that instances of AI-generated exploitation material – each of which can be a webpage containing numerous images – had more than doubled so far this year.

Cases of category A content – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to toddlers increased from five in 2024 to 92 in 2025

Industry Reaction

The law change could "represent a vital step to ensure AI tools are safe before they are launched," stated the head of the online safety organization.

"Artificial intelligence systems have made it so survivors can be victimised all over again with just a few clicks, providing offenders the capability to make potentially endless quantities of sophisticated, lifelike child sexual abuse material," she added. "Material which further exploits survivors' suffering, and makes children, especially girls, less safe both online and offline."

Support Interaction Information

Childline also released details of support sessions in which AI was referenced. AI-related harms mentioned in the conversations include:

  • Using AI to rate body size, physique and appearance
  • AI assistants dissuading children from consulting safe adults about harm
  • Facing harassment online with AI-generated content
  • Online blackmail using AI-manipulated images

Between April and September this year, Childline conducted 367 support sessions in which AI, chatbots and related topics were mentioned – four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Justin Valenzuela

A seasoned journalist and cultural critic with a passion for uncovering stories that connect communities worldwide.