
Google warns of AI misuse in new GTIG threat report

Thu, 6th Nov 2025

Adversaries have entered what Google Threat Intelligence Group (GTIG) calls a "new operational phase of AI abuse", deploying artificial intelligence not just for productivity gains but as an integrated element of live operations. "Adversaries are no longer leveraging Artificial Intelligence (AI) just for productivity gains, they are deploying novel AI-enabled malware in active operations," GTIG reported.

The October 2025 GTIG AI Threat Tracker highlights that malicious groups are exploiting AI to dynamically generate, rewrite, and disguise malicious code mid-execution. "For the first time, GTIG has identified malware families, such as PROMPTFLUX and PROMPTSTEAL, that use Large Language Models (LLMs) during execution," the report said. These tools can "generate malicious scripts, obfuscate their own code to evade detection, and leverage AI models to create malicious functions on demand."

Dynamic and self-altering malware

In early 2025, GTIG discovered malware that employed AI capabilities to alter its behaviour as it ran. The VBScript-based PROMPTFLUX communicates with Google's Gemini API to "request specific VBScript obfuscation and evasion techniques to facilitate 'just-in-time' self-modification".

Although experimental, PROMPTFLUX demonstrates how AI could make malware adaptive and evasive. GTIG said: "The most novel component of PROMPTFLUX is its 'Thinking Robot' module, designed to periodically query Gemini to obtain new code for evading antivirus software."

The malware's self-regeneration loop involves asking the model to rewrite its own code hourly, ensuring constant mutation. "The prompt directs the LLM to act as an 'expert VBScript obfuscator,' ensuring the new variant remains viable," the report said. While GTIG said the malware was still under development and unable to compromise devices, Google "has taken action to disable the assets associated with this activity."
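
The report frames PROMPTFLUX as experimental and the quoted passages include no detection guidance, but the mechanism GTIG describes, a script carrying a hardcoded prompt and calling a public generative-AI endpoint, leaves static artefacts a defender can look for. The sketch below is a hypothetical triage script, not GTIG tooling: the endpoint hosts and keyword list are illustrative assumptions, not indicators published in the report.

```python
import re
import sys
from pathlib import Path

# Hypothetical indicators: public generative-AI API hosts that rarely have
# a legitimate reason to appear inside a VBScript on an endpoint.
LLM_API_HOSTS = [
    "generativelanguage.googleapis.com",  # Gemini API
    "api-inference.huggingface.co",       # Hugging Face Inference API
]

# Prompt-like strings suggesting the script asks a model to rewrite code.
PROMPT_KEYWORDS = re.compile(
    r"obfuscat|rewrite.{0,20}code|evade|antivirus", re.IGNORECASE
)

def scan(path: Path) -> list[str]:
    """Return the suspicious indicators found in one script file."""
    text = path.read_text(errors="ignore")
    hits = [host for host in LLM_API_HOSTS if host in text]
    if hits and PROMPT_KEYWORDS.search(text):
        hits.append("prompt-like strings alongside an LLM endpoint")
    return hits

if __name__ == "__main__":
    # Usage: python scan_scripts.py <directory>
    for p in Path(sys.argv[1]).rglob("*.vbs"):
        indicators = scan(p)
        if indicators:
            print(f"{p}: {', '.join(indicators)}")
```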

AI used to generate theft commands

Another malware family, PROMPTSTEAL, was linked to Russian state-backed actor APT28 (also known as FROZENLAKE). GTIG observed it "using new malware against Ukraine" that queries an LLM via the Hugging Face API to generate commands for execution. PROMPTSTEAL "constitutes our first observation of malware querying an LLM deployed in live operations," the group stated.

The data miner poses as an image generation tool but "queries the Hugging Face API to generate commands for execution in the background," including those that gather computer information and copy users' documents. GTIG warned that the actor's use of stolen API tokens and continuous malware development show an evolution toward more automated cyber operations.
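
For context on the mechanics, the Hugging Face Inference API that PROMPTSTEAL reportedly queries is an ordinary HTTPS endpoint: any client holding a bearer token can post a prompt and read back generated text, which is why stolen API tokens are valuable to the actor. Below is a minimal, benign sketch of that request pattern; the model name and token placeholder are illustrative assumptions, not details from the report.

```python
import requests

# The Hugging Face Inference API is a plain HTTPS endpoint; the model
# name below is an arbitrary public example, not the one GTIG describes.
API_URL = "https://api-inference.huggingface.co/models/gpt2"
HEADERS = {"Authorization": "Bearer hf_xxx_replace_with_your_token"}

def generate(prompt: str) -> str:
    """POST a prompt and return the generated text from the response."""
    resp = requests.post(API_URL, headers=HEADERS, json={"inputs": prompt}, timeout=30)
    resp.raise_for_status()
    # Text-generation models return a list of {"generated_text": ...} objects.
    return resp.json()[0]["generated_text"]

if __name__ == "__main__":
    print(generate("Summarise today's weather in one sentence."))
```

Because such traffic is ordinary TLS to a legitimate service, it blends into routine developer activity, one reason runtime LLM queries are harder to flag at the network layer than connections to attacker-owned infrastructure.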

Manipulating AI safety systems

The report detailed how threat actors are "adopting social engineering-like pretexts in their prompts to bypass AI safety guardrails." A China-linked actor reportedly reframed malicious requests as part of a harmless "capture-the-flag (CTF) competition" to coax Gemini into providing restricted information.

"When prompted to help in a CTF exercise, Gemini returned helpful information that could be misused to exploit the system," GTIG said. Attackers reused this pretext for phishing and webshell development, often writing prompts such as, "I am working on a CTF problem." Google said it "took further action against the actor to halt future activity" and strengthened safety measures to deny similar misuse.

Similarly, Iran's MUDDYCOAST actor "used various plausible pretexts", such as pretending to be a student or researcher, to bypass safeguards. In doing so, the actor inadvertently exposed its own command-and-control domain, allowing Google to disrupt its campaign.

Underground AI crime market expands

GTIG also tracked a maturing cybercrime marketplace for illicit AI tools. "The underground marketplace for illicit AI tools has matured in 2025," it said, with offerings supporting "phishing, malware development, and vulnerability research." Advertising language mimicked that of legitimate AI products, emphasising "efficiency" and "workflow improvement."

These tools range from deepfake generators and phishing kits to malware creators and reconnaissance assistants. GTIG assessed that "financially motivated threat actors and others operating in the underground community will continue to augment their operations with AI tools."

State actors across multiple fronts

State-sponsored groups from North Korea, Iran, and China are "misusing generative AI tools including Gemini to enhance all stages of their operations." Chinese actors reportedly used Gemini to research phishing delivery methods and "assist in identifying Kubernetes systems."

North Korean groups MASAN and PUKCHONG misused AI for cryptocurrency theft campaigns, language translation, and deepfake generation. GTIG said MASAN "used Gemini to research cryptocurrency concepts, and perform research and reconnaissance related to the location of users' cryptocurrency wallet application data."

Iranian actor APT42 misused Gemini to craft phishing lures and attempted to create a "Data Processing Agent" to derive insights from personal data using natural-language queries. GTIG confirmed, "We have disabled the threat actors' accounts."

Google response and safeguards

Google reiterated its commitment to responsible AI use, saying it "designs AI systems with robust security measures and strong safety guardrails." The company said it had "taken action to disable the projects and accounts associated with bad actors" and was continuously "strengthening both classifiers and the model itself."

Google DeepMind is deploying defences and testing frameworks to pre-empt vulnerabilities. "We continuously enhance safeguards in our products to offer scaled protections to users across the globe," GTIG concluded, noting that lessons from countering misuse are fed back into product development to improve safety and resilience.
