UK Tech Companies and Child Safety Agencies to Test AI's Ability to Create Abuse Images

Technology companies and child safety agencies will be granted authority to assess whether AI systems can generate child abuse images under new UK laws.

Significant Increase in AI-Generated Illegal Material

The announcement came as a protection watchdog revealed that cases of AI-generated child sexual abuse material have more than doubled in the past year, rising from 199 in 2024 to 426 in 2025.

New Regulatory Structure

Under the changes, the government will permit approved AI developers and child protection organizations to inspect AI systems – the foundational systems for conversational AI and image generators – and verify they have adequate protective measures to prevent them from producing images of child exploitation.

"This is fundamentally about preventing abuse before it occurs," stated the minister for AI and online safety, adding: "Under strict conditions, experts can now identify the danger in AI models promptly."

Addressing Regulatory Obstacles

The changes have been introduced because it is against the law to create or possess CSAM, meaning that AI developers and other parties have been unable to generate such images as part of an evaluation regime. Until now, authorities had to wait until AI-generated CSAM was uploaded online before addressing it.

This law is designed to prevent that issue by helping to stop the creation of those materials at their origin.

Legal Structure

The amendments are being introduced by the government as revisions to the criminal justice legislation, which is also establishing a prohibition on owning, creating or sharing AI systems designed to create child sexual abuse material.

Practical Consequences

This week, the minister toured the London base of a children's helpline and heard a simulated call to counsellors involving a report of AI-based abuse. The call depicted a teenager seeking help after being blackmailed using a sexualised deepfake of himself, created with AI.

"When I hear about children facing extortion online, it causes extreme anger in me, and justified anger amongst parents," he stated.

Alarming Data

A prominent online safety foundation reported that instances of AI-generated abuse content – each of which can involve webpages containing numerous images – had more than doubled so far this year.

Instances of category A material – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
  • Portrayals of infants to two-year-olds increased from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a vital step to ensure AI tools are safe before they are released," stated the chief executive of the online safety foundation.

"Artificial intelligence systems have made it possible for victims to be targeted repeatedly with just a few simple actions, giving offenders the capability to produce potentially endless quantities of sophisticated, lifelike exploitative content," she added. "Content which further commodifies survivors' suffering, and renders young people, particularly female children, less safe online and offline."

Support Session Data

Childline also released details of counselling sessions in which AI was referenced. AI-related harms mentioned in the sessions include:

  • Employing AI to evaluate body size, physique and looks
  • AI assistants dissuading children from talking to trusted guardians about harm
  • Being bullied online with AI-generated content
  • Digital blackmail using AI-faked pictures

Between April and September this year, the helpline delivered 367 counselling sessions in which AI, chatbots and related terms were mentioned, significantly more than in the same period last year.

Half of the references to AI in the 2025 interactions related to mental health and wellbeing, including using AI assistants for support and AI therapy apps.

Shannon Mclaughlin