ChatGPT Already Used to Spread Misinformation, OpenAI Says


(TNS) — Political leaders have long warned that artificial intelligence could supercharge online disinformation. Now OpenAI has detailed how operations in Russia, Iran, Israel and elsewhere are already using its software to do just that, pushing out false and misleading information about the wars in Ukraine and Gaza and other topics.

In a blog post, the San Francisco company detailed how its chatbots were used to help post political spam on Telegram channels, spread Russian propaganda on X, and generate entire articles posted online. The company also described the steps it is taking to stop the practice.

The report comes as AI is poised to potentially disrupt elections this year in the United States and around the world. On Wednesday, at a daylong summit in San Francisco, California Gov. Gavin Newsom asked AI experts onstage whether AI-powered falsehoods could spell the end of free and fair elections as we know them.


To that end, the U.S. Federal Communications Commission has started the process of requiring political ads that use AI to be labeled as such. California lawmakers are carrying a host of bills this session to limit the use of deepfakes and other forms of AI to influence upcoming elections.

One operation traced to Russia used OpenAI’s tools to run a comment spamming bot on the messaging app Telegram that created posts in Russian and English aimed at users in the U.S., Ukraine and the Baltic states.

That operation employed about a dozen accounts to rapidly post content critical of U.S. support for Ukraine. Some of the posts inadvertently included automated responses from ChatGPT refusing to generate the requested content, revealing that the posts were AI-generated.

Another group of accounts run out of Israel mounted an influence campaign across X, Facebook, Instagram and YouTube. OpenAI said that campaign created fake accounts equating pro-Palestinian protesters with antisemitism and terrorism, and portraying Qatari investments in the U.S. as “a threat to the American way of life.”

The ChatGPT maker banned another group of accounts it said were linked with a Chinese outfit known as Spamouflage, which has been linked by Microsoft and government agencies to the Chinese government.

OpenAI said that operation used its software and fake accounts on X to criticize dissidents and other critics of the Chinese state, sometimes creating entire fake comment chains to boost the circulation of posts online.

The company said its teams constantly monitor online accounts for signs that its technology is being used to conduct influence campaigns, calling them out publicly when they appear.

OpenAI also said it shares information with other AI companies and researchers about these kinds of campaigns, and uses its own AI tools to detect spam and other suspicious content, among other tactics.

Numerous Russian military intelligence officers were charged in 2018 for their role in disseminating misleading and leaked information online and across social media in a campaign to undercut Hillary Clinton’s campaign and sow political discord. Those efforts did not involve AI.

©2024 the San Francisco Chronicle, Distributed by Tribune Content Agency, LLC.