
Deepfake threat grows as public & business struggle to keep up

Wed, 15th Oct 2025

Deepfake scams are on the rise as both individuals and businesses find themselves exposed to increasingly convincing fraudulent audio and video content.

Public awareness lag

Recent survey data reveals that 72% of people report daily concern about falling victim to deepfake videos, which can be used to extract money or sensitive information. However, there is a notable divide in public awareness: 75% of respondents in countries such as Spain and Germany have never encountered the term "deepfake," despite the growing prevalence of such attacks globally.

Business exposure to deepfake threats is also expanding. In 2024, 49% of organisations reported experiencing attacks that involved both fake audio and video - a significant increase compared to 29-37% just two years ago, according to industry findings.

Content verification issues compound the problem, with findings from UNESCO stating that two in three content creators are distributing material to large audiences without first verifying its authenticity. More than half of social media users share news without checking its accuracy, thus contributing to the widespread circulation of manipulated media. Although 83% of those surveyed have heard the term "deepfake," the majority cannot reliably identify it.

Threats and impact

In response to the growing risk, key government agencies including the FBI, NSA, and CISA have jointly described deepfakes as "a serious problem for everyone who uses the internet." Similarly, the U.S. Department of Homeland Security warned: "Any fake video can be turned into a weapon to hurt people."

Examples of these threats range from fraud in personal family communications, where scammers exploit video likenesses of relatives to solicit emergency funds, to workplace schemes in which fake executive instructions lead to significant financial losses for companies. Deepfakes have also been used to circulate false political messages during elections, with the potential to influence voting before authentic information can be disseminated.

Private individuals, journalists, and public figures have similarly been targeted by manipulated media campaigns, sometimes resulting in reputational damage or harassment.

"I remember exactly when this hit me. I was checking user content for a client's website, and I saw this video that just felt... wrong. The lighting looked perfect, the voice sounded clear, but something about how the person's mouth moved didn't match up."

Maria Rosey, the software engineer and founder of Techhoor reporting these findings, expanded on the experience: "I dug deeper. It was completely fake.

"The terrifying part? That video had already been watched 10,000 times. People believed it. They were sharing it everywhere. Making real decisions based on something that never happened. That's when I knew we had a massive problem."

She highlighted the lack of accessible solutions, stating: "The tools to catch deepfakes exist. But they're not reaching the people who actually need them. Big companies pay thousands of dollars for detection software. Free tools are so complicated that you'd need a computer science degree. And meanwhile, your aunt is sharing manipulated videos in the family group chat because nobody ever taught her the warning signs."

Detection advice

Based on over a decade of experience building technology for public use, she offered practical advice for spotting deepfakes. She recommends watching out for unusual visual cues such as unrealistic blinking, inconsistent lighting, blurry edges, or unnatural skin tone. Audio indicators include robotic-sounding voices, mismatched background noise, and voice-mouth alignment issues.
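One of these visual cues, unnatural blinking, is something that can also be checked programmatically. The sketch below is a minimal illustration of the eye aspect ratio (EAR), a standard blink-analysis measure; it assumes eye-landmark coordinates are already available from any face-landmark detector, and the sample coordinates are illustrative only, not drawn from any tool mentioned in the article.

```python
# Minimal sketch: eye aspect ratio (EAR), a common blink-analysis cue.
# Assumes six eye-landmark points per frame from a face-landmark detector
# (e.g. dlib or MediaPipe); the coordinates below are made up for illustration.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye is a (6, 2) array of landmark points ordered around one eye.

    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops sharply during a
    blink, so footage whose EAR never dips may warrant closer inspection.
    """
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Example with made-up coordinates for an open eye:
open_eye = np.array([[0, 5], [3, 8], [7, 8], [10, 5], [7, 2], [3, 2]], dtype=float)
print(round(eye_aspect_ratio(open_eye), 2))  # ~0.6; values near 0 indicate a closed eye
```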

Before sharing content, she advises users to pause and consider its origin, use reverse image search tools on key frames, verify sources, and trust their instincts if something appears amiss.
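For readers comfortable with a little scripting, pulling key frames out of a suspect clip for a reverse image search can be automated. The following is a minimal sketch assuming the opencv-python package is installed; the file name and two-second sampling interval are arbitrary placeholders, not part of the advice above.

```python
# Minimal sketch: save a frame every couple of seconds from a video so the
# images can be run through a reverse image search service.
import cv2

def extract_key_frames(path: str, every_seconds: float = 2.0, out_prefix: str = "frame") -> int:
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0          # fall back if FPS metadata is missing
    step = max(1, int(round(fps * every_seconds)))   # frames to skip between grabs
    index, saved = 0, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            cv2.imwrite(f"{out_prefix}_{saved:03d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved

# "suspect_clip.mp4" is a placeholder file name.
print(extract_key_frames("suspect_clip.mp4"), "frames saved for reverse image search")
```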

She also pointed to gaps in existing detection tools. Key features she believes are missing include instant feedback, mobile compatibility, explanations of why a video or audio clip has been flagged as fake, strict privacy safeguards, and free availability. "If your 60-year-old mom can't figure it out in 30 seconds, it's not designed well enough," she said. "Tech companies need to stop building only for young tech nerds and start serving everyone, including elderly people, folks in developing countries, and communities where most people don't know much about tech."

Broader impact

According to the United Nations, misinformation and deepfakes have exacerbated discrimination, hate speech, and racism. A UNESCO study found that 87% of people believe misinformation has affected politics in their country, while 88% support government regulatory action targeting social media companies.

The NSA, Department of Defense, and other federal agencies have documented how deepfakes can "mess up critical systems, spread lies, and trick people into handing over money and secrets."

"This isn't a future problem, it's happening right now, in every country, in every language," she said. "One viral deepfake can start riots in your city, destroy someone's business, or empty your bank account. We can't ban AI, but we can give everyone the tools to protect themselves from it."

Actions for industry

The software engineer called on the technology industry to accept greater responsibility, urging: "The same companies that built the AI systems creating deepfakes have a responsibility to build detection tools that regular people can use. We need standards across the whole industry, tools anyone can access for free, and massive campaigns to teach people what to watch out for. This should be as common as teaching people how to spot phishing emails."

Rosey also pointed to signs of movement in the institutional sector, with election monitoring groups investing in real-time verification tools and successfully intercepting falsified political videos before they achieved widespread impact during election periods in several countries.
