
OpenAI: Russia, China, Iran, and Israel Use Its Tools for Influence Campaigns


OpenAI identified and removed five covert influence operations based in Russia, China, Iran, and Israel that were using its artificial intelligence tools to manipulate public opinion, the company said on Thursday.

In a new report, OpenAI detailed how these groups, some of which are linked to known propaganda campaigns, used the company’s tools for a variety of “deceptive activities.” These included generating social media comments, articles, and images in multiple languages, creating names and biographies for fake accounts, debugging code, and translating and proofreading texts. In their attempts to sway public opinion, the networks focused on a range of issues, including defending the war in Gaza and Russia’s invasion of Ukraine, criticizing Chinese dissidents, and commenting on politics in India, Europe, and the U.S. While these influence operations targeted a wide range of online platforms, including X (formerly known as Twitter), Telegram, Facebook, Medium, Blogspot, and other sites, none of them “managed to engage a substantial audience,” according to OpenAI analysts.

The report, the first of its kind released by the company, comes amid global concerns about the potential impact AI tools could have on the more than 64 elections happening around the world this year, including the U.S. presidential election in November. In one example cited in the report, a post by a Russian group on Telegram read, “I’m sick of and tired of these brain damaged fools playing games while Americans suffer. Washington needs to get its priorities straight or they’ll feel the full force of Texas!”

The examples listed by OpenAI analysts reveal how foreign actors largely appear to be using AI tools for the same types of online influence operations they have been carrying out for a decade. They focus on using fake accounts, comments, and articles to shape public opinion and manipulate political outcomes. “These trends reveal a threat landscape marked by evolution, not revolution,” Ben Nimmo, the principal investigator on OpenAI’s Intelligence and Investigations team, wrote in the report. “Threat actors are using our platform to improve their content and work more efficiently.”


OpenAI, which makes ChatGPT, says it now has more than 100 million weekly active users. Its tools make it easier and faster to produce large volumes of content, and they can be used to mask language errors and generate fake engagement.

One of the Russian influence campaigns shut down by OpenAI, dubbed “Bad Grammar” by the company, used its AI models to debug code for a Telegram bot that created short political comments in English and Russian. The operation targeted Ukraine, Moldova, the U.S., and the Baltic States, the company says. Another Russian operation known as “Doppelganger,” which the U.S. Treasury Department has linked to the Kremlin, used OpenAI’s models to generate headlines, convert news articles into Facebook posts, and create comments in English, French, German, Italian, and Polish. A known Chinese network, Spamouflage, also used OpenAI’s tools to research social media activity and generate text in Chinese, English, Japanese, and Korean that was posted across multiple platforms, including X, Medium, and Blogspot.

OpenAI also detailed how a Tel Aviv-based Israeli political marketing firm called Stoic used its tools to generate pro-Israel content about the war in Gaza. The campaign, nicknamed “Zero Zeno,” targeted audiences in the U.S., Canada, and Israel. On Wednesday, Meta, the parent company of Facebook and Instagram, said it had removed 510 Facebook accounts and 32 Instagram accounts tied to the same firm. The cluster of fake accounts, which included accounts posing as African Americans and students in the U.S. and Canada, often replied to prominent figures or media organizations with posts praising Israel, criticizing antisemitism on campuses, and denouncing “radical Islam.” The campaign appears to have gained little meaningful engagement, according to OpenAI. “Look, it’s not cool how these extremist ideas are, like, messing with our country’s vibe,” reads one post cited in the report.

OpenAI says it is using its own AI-powered tools to investigate and disrupt these foreign influence operations more efficiently. “The investigations described in the accompanying report took days, rather than weeks or months, thanks to our tooling,” the company said on Thursday. It also noted that despite the rapid evolution of AI tools, human error remains a factor. “AI can change the toolkit that human operators use, but it does not change the operators themselves,” OpenAI said. “While it is important to be aware of the changing tools that threat actors use, we should not lose sight of the human limitations that can affect their operations and decision making.”
