UK Tech Firms and Child Protection Officials to Examine AI's Ability to Create Abuse Content

Tech firms and child protection agencies will be granted permission to evaluate whether artificial intelligence tools can generate child exploitation material under new UK laws.

Significant Increase in AI-Generated Illegal Content

The announcement came as a safety watchdog revealed that reports of AI-generated CSAM have more than doubled in the past twelve months, rising from 199 in 2024 to 426 in 2025.

Updated Legal Structure

Under the amendments, the authorities will allow designated AI developers and child safety organizations to examine AI models – the underlying technology for chatbots and image generators – and verify they have adequate safeguards to stop them from creating images of child exploitation.

"This is ultimately about preventing abuse before it happens," stated Kanishka Narayan, noting: "Specialists, under rigorous conditions, can now detect the risk in AI models early."

Addressing Legal Challenges

The amendments have been introduced because it is illegal to produce or possess CSAM, meaning that AI developers and others cannot generate such content even as part of a testing process. Previously, authorities had to wait until AI-generated CSAM was published online before they could act.

The law aims to prevent that problem by enabling designated experts to stop the production of such images at the source.

Legislative Framework

The changes are being introduced by the authorities as revisions to the crime and policing bill, which is also establishing a prohibition on possessing, creating or sharing AI systems designed to create child sexual abuse material.

Practical Consequences

Recently, the minister visited the London headquarters of Childline and listened to a mock-up call to advisers involving an account of AI-based abuse. The call portrayed a teenager seeking help after being blackmailed with a sexualised AI-generated image of himself.

"When I hear about children facing blackmail online, it is a source of extreme frustration for me and of justified anger among families," he said.

Alarming Data

A leading internet monitoring organization reported that instances of AI-generated abuse material – including webpages that may each contain numerous images – had more than doubled so far this year.

Cases of the most severe material – the gravest form of exploitation – increased from 2,621 images or videos to 3,086.

  • Female children were predominantly targeted, making up 94% of illegal AI depictions in 2025
  • Portrayals of newborns to two-year-olds increased from five in 2024 to 92 in 2025

Industry Response

The law change could "constitute a crucial step to ensure AI products are safe before they are released," stated the chief executive of the online safety organization.

"AI tools have made it so survivors can be targeted repeatedly with just a few simple actions, giving criminals the capability to create potentially endless quantities of sophisticated, photorealistic child sexual abuse material," she added. "Material which further exploits survivors' trauma, and makes children, particularly girls, less safe on and offline."

Support Session Data

The children's helpline also published data on support sessions in which AI was mentioned. AI-related harms raised in these sessions include:

  • Using AI to assess body size and appearance
  • Chatbots discouraging children from consulting trusted guardians about harm
  • Being bullied online with AI-generated content
  • Online blackmail using AI-faked images

Between April and September this year, the helpline conducted 367 support sessions in which AI, chatbots and related topics were mentioned, four times as many as in the same period last year.

Half of the mentions of AI in the 2025 sessions related to mental health and wellbeing, including using AI chatbots for support and AI therapy apps.

James Pruitt

A passionate journalist and blogger with a focus on Central European affairs, dedicated to uncovering and sharing compelling narratives.