IT Brief New Zealand - Technology news for CIOs & IT decision-makers

AI supercharges industrial-scale online romance scams

Wed, 11th Feb 2026

Romance scams are becoming more industrialised as criminal groups adopt artificial intelligence tools that remove traditional warning signs and make fraud harder to spot, according to Tenable researcher Satnam Narang.

He said the most profitable schemes now combine emotional manipulation with investment fraud, a trend law enforcement and consumer protection agencies have tracked for years. Reported losses from investment scams reached USD 5.7 billion in 2024, he noted, and the true figure is likely higher because many victims do not report crimes tied to romantic deception.

"2026 marks our entry into a dark age of romance scams," Narang said.

His comments reflect a broader shift in online fraud. Romance scams once relied on small-scale, manual outreach and crude impersonation. Many operations now run like call centres, using scripts, performance targets, and specialised roles across teams.

AI at scale

Narang pointed to large language models that generate fluent messages and keep stories consistent over time. This reduces familiar red flags such as poor grammar or stilted phrasing and allows scammers to maintain more conversations at once.

Both paid and free AI systems are now part of the toolkit, he said. Mainstream services include Google's Gemini, OpenAI's ChatGPT, and Anthropic's Claude, while open-source alternatives such as DeepSeek and Qwen are also being used. Because open-source models can run on private infrastructure and be modified, they can bypass built-in restrictions that might otherwise block content linked to fraud.

"The availability of some of the most powerful frontier artificial intelligence (AI) large language models (LLMs) from Google's Gemini, OpenAI's ChatGPT and Anthropic's Claude, has become digital gold for scammers. For the price of a cup of coffee, scammers can leverage these tools to generate linguistically perfect, emotionally resonant messages designed to ensnare victims. Even when restricted from frontier model services, scammers turn to free, open-source models like DeepSeek and Qwen, which now operate at near-parity with their paid counterparts," Narang said.

The shift matters because early-stage "grooming" has typically been the longest part of a romance scam, requiring patience, repetition, and a sustained intimate tone. AI-generated messaging can speed up that work and keep interactions consistent over weeks or months.

Deepfake calls

Narang also flagged the growing role of video deception. Scammers have long avoided live calls and video chats because real-time interaction increases the risk of exposure. Some networks now use face-swapping deepfake tools during video calls, undermining the common safety advice that a video call can verify identity, he said.

He described dedicated "AI Rooms" within scam operations that run the technology. The approach uses software that can intercept video feeds on consumer apps and overlay an artificial face on the caller. Visual artefacts can be masked with poor lighting or explained away as internet problems.

Narang noted that a firsthand Wired account described these rooms inside scam compounds, including the use of real-time face-swapping to reinforce false identities.

"Authorities have known about these scam compounds for years. However, a recent firsthand account published by Wired exposes the scale of their evolution. The report details a meticulous AI strategy including a dedicated 'AI Room' where deepfake technology facilitates face-swapped video calls to 'prove' the scammer's identity," Narang said.

Investment pivot

In Narang's account, romance is increasingly used as an entry point to investment fraud, often referred to as "pig butchering". The scheme relies on building trust, then steering victims to transfer funds to a platform presented as an investment service. Victims may be shown screenshots of profits or dashboards that appear to show rising balances.

The financial pitch may be direct or framed as caring advice from a partner. Once money becomes part of the interaction, victims should assume the relationship is fraudulent, he said.

"Don't be swayed by screenshots of earnings or claims of insider expertise. If a match brings up investments, whether aggressively or 'coyly', it is a scam. Cut contact, unmatch, and report," Narang said.

Human cost

Narang linked the technical evolution to an alleged labour model inside scam compounds in parts of South-East Asia. Journalists and rights groups have previously reported trafficking and forced labour in facilities that run online fraud, and Narang said these sites operate with quotas and tight supervision.

"These scams are the engine of a multi-billion-dollar industry, often built on the backs of trafficking victims. Within specialised compounds, these individuals work high-pressure 'sales floors', where quotas play a major role, where bells and gongs ring out to celebrate the theft of a victim's life savings," Narang said.

He said improvements in AI-generated audio, video, and imagery will make these deceptions harder to distinguish from legitimate contact. That is likely to increase pressure on dating platforms, social networks, and payment firms to detect patterns of organised abuse. It also heightens the need for consumer awareness, because the first line of defence often remains behavioural rather than technical.

"As the LLMs continue to improve their audio, video and image generation, these deceptions are going to become nearly indistinguishable from reality," Narang said.