Brian Sims
Editor
BRINGING TOGETHER leading technology companies (including Microsoft), academics and other experts, the Government is set to develop and implement a world-first deepfake detection evaluation framework, establishing consistent standards for assessing all types of detection tools and technologies. This will help to position the UK as a global leader in tackling harmful and deceptive deepfake content.
The framework will evaluate how technology can be used to assess, understand and detect harmful deepfake materials, no matter their origin. By testing leading deepfake detection technologies against real-world threats (such as fraud and impersonation), the Government and law enforcement will have a clearer picture than ever before of where gaps in detection remain.
Once the testing framework is established, it will then be used to set clear expectations for industry on deepfake detection standards.
In 2025 alone, an estimated eight million deepfakes are thought to have been shared, up from 500,000 in 2023.
Growing risk
Every person in the UK faces a growing risk from harmful deepfakes: Artificial Intelligence (AI)-generated fake images, videos and audio designed to deceive. These materials are often used by criminals to steal money and spread harmful content online.
Criminals are already using this technology to impersonate celebrities, family members and trusted political figures in sophisticated scams, with tools to create convincing fake content being cheaper and more widely available than ever before. Little-to-no technical expertise is required.
Jess Phillips, the Minister for Safeguarding and Violence Against Women and Girls, explained: “A grandmother deceived by a fake video of her grandchild. A young woman whose image is manipulated without consent. A business defrauded by criminals impersonating executives. This technology doesn’t discriminate. The devastation of being ‘deepfaked’ without consent or knowledge is unmatched.”
Phillips continued: “For the first time, this framework will focus on the injustice faced by millions in order to seek out the tactics of vile criminals and close loopholes to stop them in their tracks so they have nowhere to hide. Ultimately, it’s time to hold the technology industry to account and protect members of the public who should not be living in fear.”
Weaponised by criminals
Tech Secretary Liz Kendall commented: “Deepfakes are being weaponised by criminals to defraud the public, exploit women and girls and undermine trust in what we see and hear. We are working with technology, academia and Government experts to assess and detect harmful deepfakes. The UK is leading the global fight against deepfake abuse. Those who seek to deceive and harm others will have nowhere to hide.”
Further, Kendall noted: “Detection is only part of the solution. That’s why we’ve criminalised the creation of non-consensual intimate images and are going further to ban the nudification tools that fuel this abuse.”
Recently, the Government led on (and funded) the Deepfake Detection Challenge, which was hosted by Microsoft. Across four days, more than 350 participants took part, including representatives from Interpol, members of the Five Eyes community and ‘big tech’.
Teams were immersed in high-pressure, real-world scenarios, challenging them to identify real, fake and partially manipulated audiovisual media. These scenarios reflected some of the most pressing national security and public safety risks: victim identification, election security, organised crime, impersonation and fraudulent documentation.
More than a technical challenge, the event brought together global experts united by a common mission: strengthening the UK’s ability to detect and defend against malicious synthetic media.
Criminal offence
Andrea Simon, director of the End Violence Against Women Coalition, observed: “We successfully campaigned for the new law that makes creating non-consensual sexually explicit deepfakes a criminal offence and welcome this move to better protect victims of this harmful abuse.”
Simon continued: “The onus cannot be on victims to detect and report abuse and battle with online platforms to have this violating material taken down.
“It’s positive to see the Government’s and the regulator’s efforts to combat this global issue, but we know it’s the platforms themselves who could – and should – be doing so much more. It’s essential that the response to deepfake abuse and other forms of image-based abuse is on the front foot as these harms evolve.”
Deputy Commissioner Nik Adams of the City of London Police informed Security Matters: “This new framework is a strong and timely addition to the UK’s response to the rapidly evolving threat posed by AI and deepfake technologies. As the national lead force for fraud, the City of London Police has witnessed first-hand how criminals are increasingly exploiting AI as a powerful tool in their arsenal to deceive victims, impersonate trusted individuals and scale harm at an unprecedented speed.”
Adams went on to comment: “By rigorously testing deepfake detection technologies against real-world threats and setting clear expectations for industry, this framework will significantly bolster law enforcement’s ability to remain ahead of the offenders, protect victims and strengthen public confidence as these technologies continue to evolve.”
Plan for Change commitment
This work forms part of the Government’s Plan for Change commitment to make Britain’s streets and communities safer, ensuring people are protected from evolving threats both online and offline.
The Government has fast-tracked work in order to bring into force legislation making it illegal for anyone to create or request deepfake intimate images of adults without consent.
Planned measures will ban ‘nudification’ tools, criminalising those who design and supply them. These actions form part of the Government’s ambition to halve violence against women and girls within a decade.
Western Business Media Limited
Dorset House
64 High Street
East Grinstead
RH19 3DE
UNITED KINGDOM