Cybercriminals using ChatGPT to generate scams – report
Cybercriminals are using ChatGPT to create deepfake chatbots, phishing campaigns and malware, according to a new report.
The global research team from Norton released its quarterly Consumer Cyber Safety Pulse Report, detailing how cybercriminals can use artificial intelligence to create realistic and sophisticated threats. The latest report includes an analysis of how large language models can enhance cybercriminal tactics.
ChatGPT has captured the internet’s attention, with millions using the technology to write poems, craft short stories, answer questions and even ask for advice. Meanwhile, cybercriminals are exploiting its ability to produce human-like text that adapts to different languages and audiences to create malicious content.
Cybercriminals can now quickly and easily craft email or social media phishing lures that are even more convincing, making it more difficult to tell what’s legitimate and what’s a threat. In addition to writing lures, ChatGPT can also generate code. Just as ChatGPT makes developers’ lives easier with its ability to write and translate source code, it can also make cybercriminals’ lives easier by making scams faster to create and more difficult to detect.
“I’m excited about large language models like ChatGPT, but I’m also wary of how cybercriminals can abuse them,” says Kevin Roundy, Senior Technical Director of Norton.
“We know cybercriminals adapt quickly to the latest technology, and we’re seeing that ChatGPT can be used to quickly and easily create convincing threats,” he says.
“It’s getting harder than ever for people to spot scams on their own, which is why Cyber Safety solutions that look at more aspects of our digital lives – from our mobile devices to our online identity, and the wellbeing of those around us – are needed to keep us safe in our evolving world.”
In addition to using ChatGPT for more efficient phishing, Norton experts warn bad actors can also use it to create deepfake chatbots. These chatbots can impersonate humans or legitimate sources, like a bank or government entity, to manipulate victims into handing over personal information, which attackers can then use to access accounts, steal money or commit fraud.
To stay safe from these new threats, Norton experts recommend:
Avoiding chatbots that don’t appear on a company’s website or app, and being cautious about providing any personal information to someone you’re chatting with online.
Thinking before you click on links in response to unsolicited phone calls, emails or messages.
Updating your security solutions and making sure they include a full set of security layers that go beyond known-malware recognition, such as behaviour detection and blocking.
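To illustrate why scrutinising links before clicking matters, the sketch below flags a few common red flags in URLs, such as a raw IP address in place of a domain and lookalike brand names. The rules and the brand list are illustrative assumptions, not Norton’s detection logic; real security products use far broader signals, including behaviour detection.

```python
import re
from urllib.parse import urlparse

# Hypothetical red-flag heuristics -- illustrative only, not Norton's methodology.
SUSPICIOUS_TLDS = {"zip", "xyz", "top"}               # assumption: commonly abused TLDs
LOOKALIKE_BRANDS = ("paypa1", "g00gle", "micr0soft")  # character-substitution lookalikes

def suspicious_url(url: str) -> bool:
    """Return True if the URL trips any of these naive phishing heuristics."""
    host = urlparse(url).hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):      # raw IP address as host
        return True
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:        # abused top-level domain
        return True
    if any(brand in host for brand in LOOKALIKE_BRANDS):  # brand lookalike in hostname
        return True
    return False

print(suspicious_url("http://192.168.4.7/login"))          # True: IP-address host
print(suspicious_url("https://paypa1-secure.com/verify"))  # True: lookalike brand
print(suspicious_url("https://example.com/account"))       # False
```

A passing result from such a filter proves nothing, which is the report’s point: AI-written lures are designed to look legitimate, so layered defences matter more than any single check.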
Year in Review: Norton blocked over 3.5 billion threats in 2022
The latest Pulse Report also revealed that throughout 2022, Norton thwarted over 3.5 billion threats, or around 9.6 million threats per day.
In 2022, Norton blocked 90.9 million phishing attempts, 260.4 million file threats, 1.6 million mobile threats and 274,000 ransomware attacks. Norton AntiTrack blocked over 3 billion trackers and fingerprinting scripts.
In the last quarter alone, Norton blocked over 787.7 million threats, or around 8.5 million threats per day.
From October through December 2022, Norton blocked 27 million phishing attempts, 49.4 million file threats, 770,000 mobile threats and 46,000 ransomware attacks. Norton AntiTrack blocked over 1 billion trackers and fingerprinting scripts.
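The per-day figures quoted above follow from simple division of the reported totals by the number of days in each period, as a quick sketch shows:

```python
# Sanity-check the per-day averages implied by the report's totals.
yearly_threats = 3_500_000_000            # threats blocked in 2022
per_day_year = yearly_threats / 365
print(round(per_day_year / 1e6, 1))       # millions per day across 2022

quarterly_threats = 787_700_000           # threats blocked Oct-Dec 2022
per_day_quarter = quarterly_threats / 92  # 92 days in October-December
print(round(per_day_quarter / 1e6, 1))    # millions per day in the quarter
```

The yearly total works out to roughly 9.6 million threats per day, matching the report; the quarterly total comes to about 8.6 million per day, consistent with the report’s rounded “around 8.5 million.”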