OpenAI disrupts five covert influence operations in three months

Sun, 2nd Jun 2024

OpenAI has taken strides in combating covert influence operations (IO) in a bid to ensure the ethical and transparent use of artificial intelligence. The company disrupted five such operations over the past three months, revealing how malicious actors attempted to misuse its AI models for deceptive activity. None of these campaigns, however, achieved significantly greater audience engagement as a result of using OpenAI’s services.

Committed to enforcing policies that prevent abuse, OpenAI has focused on detecting and disrupting IO efforts that aim to manipulate public opinion or influence political outcomes without revealing the true identities or intentions of those behind them. The company has also published a trend analysis of these malicious actors, shedding light on behaviour patterns on both the attacker and defender sides.

Among the disrupted operations, a previously unreported one from Russia, dubbed “Bad Grammar,” primarily targeted Ukraine, Moldova, the Baltic States, and the United States. Operatives behind Bad Grammar used AI models to debug code for a Telegram bot and create short, politically motivated comments in Russian and English for publication on Telegram.

Another disrupted operation, also originating from Russia and known as “Doppelganger,” involved using AI to generate comments in multiple languages, including English, French, German, Italian, and Polish. These were posted on platforms such as X and 9GAG. Additionally, Doppelganger operatives translated and edited articles, created social media content, and converted news articles into Facebook posts.

A Chinese network referred to as “Spamouflage” utilised AI models to research public social media activity, generate texts in several languages, and debug code for managing various online platforms, including an undisclosed domain. Their output was disseminated across numerous platforms, from X to Blogspot. In another case, an Iranian operation named “International Union of Virtual Media” (IUVM) used AI to generate and translate long-form articles, headlines, and website tags for publication on its affiliated websites.

Lastly, a commercial entity based in Israel, known as STOIC, undertook an operation code-named “Zero Zeno.” This initiative leveraged OpenAI models to produce articles and comments, which were subsequently posted on platforms like Instagram, Facebook, and X.

Throughout these disruptions, the content produced by the various operations spanned a wide range of political and social issues, including Russia’s invasion of Ukraine, the conflict in Gaza, and elections in India. Criticism of the Chinese government by both Chinese dissidents and foreign governments was also a recurring theme. Notably, none of these campaigns demonstrated a marked increase in audience engagement: all scored low on the Brookings Breakout Scale, a six-point measure of an influence operation’s reach, indicating limited impact on authentic communities.

Beyond content generation, the investigations revealed that threat actors used OpenAI services to enhance productivity, summarise social media posts, and debug code. AI was not relied upon exclusively, however; it was used alongside traditional methods such as manually written texts and memes.

On the defensive front, OpenAI has highlighted the advantages AI offers in detecting and disrupting such operations. The company’s models, designed with safety in mind, frequently refused to generate the content threat actors wanted. AI-powered tools also made investigations more efficient, reducing the time required from weeks or months to days.
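
To illustrate the kind of AI-assisted triage such investigations can involve, the following is a minimal sketch using the OpenAI Python SDK. The model choice, prompt, and SUSPICIOUS/BENIGN labels are illustrative assumptions, not a description of OpenAI's internal tooling.

# Illustrative sketch only: the prompt, labels, and model choice are
# assumptions, not OpenAI's actual investigative tooling.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

TRIAGE_PROMPT = (
    "You assist an analyst triaging social media comments for signs of "
    "coordinated inauthentic behaviour. Reply with exactly one word, "
    "SUSPICIOUS or BENIGN, weighing repetitive phrasing, political "
    "framing, and unnatural language."
)

def triage_comment(comment: str) -> str:
    """Ask a model to flag a single comment for human review."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice for cheap bulk triage
        messages=[
            {"role": "system", "content": TRIAGE_PROMPT},
            {"role": "user", "content": comment},
        ],
        temperature=0,  # deterministic labels for consistent triage
        max_tokens=5,
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    samples = [
        "Great recipe, thanks for sharing!",
        "Everyone knows the election was rigged by foreign agents. Wake up!",
    ]
    for text in samples:
        print(f"{triage_comment(text):>10}  {text[:60]}")

A label from a sketch like this would only queue content for human review; attribution and any disruption decision remain analyst judgement calls.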
