British Tech Firms and Child Protection Officials to Examine AI's Capability to Create Abuse Images

Tech firms and child protection organizations will be granted authority to evaluate whether artificial intelligence tools can produce child abuse material under recently introduced UK laws.

Significant Increase in AI-Generated Illegal Material

The declaration coincided with revelations from a protection monitoring body showing that reports of AI-generated CSAM have more than doubled in the last twelve months, growing from 199 in 2024 to 426 in 2025.

New Regulatory Framework

Under the changes, the government will permit approved AI developers and child protection groups to inspect AI systems – the foundational technology for chatbots and visual AI tools – and verify they have sufficient protective measures to stop them from creating images of child exploitation.

"This is fundamentally about preventing exploitation before it happens," declared the minister for AI and online safety, adding: "Specialists, under rigorous protocols, can now detect these dangers in AI models promptly."

Tackling Legal Challenges

The amendments address a legal obstacle: because it is illegal to create and possess CSAM, AI developers and other parties could not generate such content as part of an evaluation regime. Until now, officials had to wait until AI-generated CSAM appeared online before acting against it.

This law is designed to avert that problem by making it possible to stop the production of such material at source.

Legislative Framework

The amendments are being added by the government as modifications to the crime and policing bill, which is also implementing a ban on owning, creating or sharing AI models designed to create child sexual abuse material.

Real-World Consequences

Recently, the official visited the London headquarters of Childline and listened to a mock-up call to counsellors involving a report of AI-based abuse. The call depicted an adolescent seeking help after being blackmailed with a sexualised deepfake of himself, created using AI.

"When I hear about young people facing blackmail online, it causes extreme frustration in me and justified concern amongst parents," he said.

Alarming Data

A leading internet monitoring organization stated that cases of AI-generated exploitation material – such as online pages that may include numerous images – had more than doubled so far this year.

Cases of the most severe content – the gravest form of abuse – increased from 2,621 images or videos to 3,086.

  • Girls were overwhelmingly targeted, accounting for 94% of prohibited AI images in 2025
  • Portrayals of newborns to toddlers rose from five in 2024 to 92 in 2025

Sector Reaction

The law change could "represent a crucial step to ensure AI products are secure before they are launched," commented the chief executive of the internet monitoring foundation.

"Artificial intelligence systems have made it possible for victims to be victimised all over again with just a few clicks, giving offenders the ability to create potentially endless amounts of sophisticated, lifelike exploitative content," she added. "Content which deepens victims' suffering, and makes children, especially girls, less safe both online and offline."

Counseling Session Data

The children's helpline also released information of counselling sessions where AI has been referenced. AI-related harms mentioned in the conversations include:

  • Using AI to rate body size, shape and looks
  • Chatbots discouraging young people from talking to trusted adults about abuse
  • Facing harassment online with AI-generated material
  • Online blackmail using AI-manipulated images

Between April and September this year, the helpline conducted 367 support interactions where AI, conversational AI and associated topics were discussed, four times as many as in the equivalent timeframe last year.

Half of the AI references in the 2025 sessions related to mental health and wellbeing, including the use of chatbots for support and AI therapy apps.

Daniel Oconnor

Financial analyst with over a decade of experience in Dutch banking sectors, specializing in market trends and regulatory changes.