
Deepfake threat: Only 0.1% can spot AI-generated fakes
New research from iProov highlights the significant challenge posed by deepfake technology, revealing that only 0.1% of people tested could accurately identify AI-generated deepfakes. The findings expose how susceptible both individuals and organisations are to identity fraud and misinformation.
The study tested 2,000 consumers in the UK and US, offering insight into people's limited ability to distinguish real from synthetic content. It highlighted a particularly concerning trend: older generations appear more vulnerable, with 30% of those aged 55-64 and 39% of those aged 65 and above having never heard of deepfakes.
Andrew Bud, founder and CEO of iProov, commented on the findings, stating, "Just 0.1% of people could accurately identify the deepfakes, underlining how vulnerable both organisations and consumers are to the threat of identity fraud in the age of deepfakes. And even when people do suspect a deepfake, our research tells us that the vast majority of people take no action at all. Criminals are exploiting consumers' inability to distinguish real from fake imagery, putting our personal information and financial security at risk. It's down to technology companies to protect their customers by implementing robust security measures. Using facial biometrics with liveness provides a trustworthy authentication factor and prioritises both security and individual control, ensuring that organisations and users can keep pace and remain protected from these evolving threats."
Further compounding the issue, the research showed that deepfake videos are especially difficult to identify: participants were 36% less likely to correctly recognise a synthetic video than a synthetic image. This points to significant potential for video-based fraud in identity verification contexts.
Professor Edgar Whitley, a digital identity expert at the London School of Economics and Political Science, noted, "Security experts have been warning of the threats posed by deepfakes for individuals and organisations alike for some time. This study shows that organisations can no longer rely on human judgment to spot deepfakes and must look to alternative means of authenticating the users of their systems and services."
iProov's survey also found persistent overconfidence in detection abilities, particularly among young adults aged 18-34, despite low rates of correct identification. There was also a widespread belief that social media platforms, such as those operated by Meta, along with TikTok, are significant sources of deepfakes, a perception that is eroding trust in online media.
Some 74% of respondents expressed concern over the societal impact of deepfakes, with misinformation ranked as a top concern. This worry was even more pronounced among older adults: up to 82% of those aged 55 and over expressed anxiety about the spread of false information.
The report suggests that fewer than a third of people take action when they encounter suspected deepfake content, with many unsure how to report it. Notably, only 11% critically analyse the source and context of information to assess its authenticity.
The rapid rise of deepfake technology, including the 704% surge in face swaps noted in iProov's 2024 Threat Intelligence Report, underscores the urgent need for advanced technological countermeasures. Countering the deepfake threat will require collaboration between technology providers, platforms, and policymakers to strengthen security measures and keep digital environments safe.