Britain's plans for a deepfake detection system were unveiled on Thursday as the UK government announced a partnership with Microsoft, academics, and technology experts to create new tools for identifying manipulated and AI-generated content online. The move comes amid growing concern over the rapid spread of deepfakes and their use in fraud, abuse, and misinformation, as advances in generative artificial intelligence make fake images, videos, and audio increasingly realistic.
The government said the initiative aims to establish clear standards for detecting harmful deepfake material and to strengthen public trust in digital content. It follows recent legislative action in the UK that criminalised the creation of non-consensual intimate images, reflecting mounting pressure on governments to address the risks posed by AI-driven deception.
Rising Threat From AI-Generated Deepfakes
While altered media has existed for years, officials warned that the widespread adoption of generative AI tools, accelerated by the popularity of chatbots such as ChatGPT, has dramatically increased both the volume and sophistication of deepfakes circulating online.
Technology minister Liz Kendall warned that criminals are increasingly turning the technology against the public.
“Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls, and undermine trust in what we see and hear,” Kendall said in a statement.
Framework to Set Detection Standards
As part of the initiative, Britain is developing a deepfake detection evaluation framework designed to assess and compare different detection tools and technologies. According to the government, the framework will test how effectively these tools identify harmful content across real-world scenarios, including sexual abuse, fraud, and impersonation.
The framework is intended to give policymakers and law enforcement clearer insight into where current detection technologies fall short, while also setting expectations for industry standards in combating deepfakes.
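The government has not published the framework's technical design. Purely as an illustration of the general idea, the Python sketch below shows one way an evaluation harness could compare detection tools scenario by scenario; all names here (Sample, Detector, evaluate, the scenario labels) are invented for this example and are assumptions, not details of the UK framework.

# Hypothetical sketch only: one way an evaluation harness might compare
# deepfake detectors across harm scenarios. Names and structure are
# illustrative assumptions, not the UK framework's actual design.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Sample:
    media_id: str
    scenario: str      # e.g. "fraud", "impersonation", "intimate-image-abuse"
    is_deepfake: bool  # ground-truth label for the test item

# A detector maps a media item to a predicted label (True = deepfake).
Detector = Callable[[str], bool]

def evaluate(detectors: Dict[str, Detector], samples: List[Sample]) -> None:
    """Print per-scenario accuracy for each detector on a labeled test set."""
    scenarios = sorted({s.scenario for s in samples})
    for name, detect in detectors.items():
        print(f"Detector: {name}")
        for scenario in scenarios:
            subset = [s for s in samples if s.scenario == scenario]
            correct = sum(detect(s.media_id) == s.is_deepfake for s in subset)
            print(f"  {scenario}: {correct}/{len(subset)} correct")

if __name__ == "__main__":
    # Toy labeled items standing in for real-world evaluation media.
    samples = [
        Sample("vid-001", "fraud", True),
        Sample("vid-002", "fraud", False),
        Sample("img-003", "impersonation", True),
        Sample("img-004", "impersonation", True),
    ]
    # Stub detectors; real tools would analyse the media content itself.
    detectors = {
        "always-flag": lambda _id: True,
        "never-flag": lambda _id: False,
    }
    evaluate(detectors, samples)

Breaking results out by scenario, rather than reporting a single overall score, is what would let policymakers see where a given tool falls short, for instance a detector that performs well on fraud clips but poorly on impersonation imagery.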
Sharp Increase in Deepfake Content
Government figures estimate that around 8 million deepfake items were shared in 2025, a sharp rise from approximately 500,000 in 2023. The surge has intensified pressure on regulators worldwide, many of whom are struggling to keep pace with the fast-evolving capabilities of artificial intelligence.
Concerns escalated further this year after Elon Musk’s Grok chatbot was found to generate non-consensual sexualised images of individuals, including children.
Ongoing Regulatory Investigations
In response to those findings, the UK’s communications watchdog and privacy regulator have launched parallel investigations into Grok. The probes are part of broader efforts by British authorities to ensure AI developers comply with existing laws and safeguards, particularly where vulnerable groups are at risk.
The government said collaboration with technology companies such as Microsoft would be critical in developing practical solutions to counter the growing threat posed by deepfakes.