Meta shut down campaigns spreading AI-generated disinformation
Meta has detected and disrupted six new covert influence operations, from countries including China, Iran, and Israel, that used content generated by artificial intelligence to spread disinformation.
The tech giant, which also has ongoing research into a covert influence operation from Russia known as Doppelganger, shared its findings on the new cross-internet campaigns in its quarterly adversarial threat report for the first quarter of 2024, released Wednesday. It is among many companies tackling the problem of disinformation and deepfakes with the advent of generative AI.
Many of the influence campaigns were removed early, before they could build up audiences of real users, Meta said, adding that it hasn’t seen tactics that would prevent it from shutting down the networks. The company said it has “observed” AI-generated photos and images, video news readers, and text. However, Meta said it has yet to see a trend of threat actors using photorealistic AI-generated content of politicians.
“Right now we’re not seeing gen AI being used in terribly sophisticated ways,” David Agranovich, Meta’s policy director of threat disruption, said Tuesday during a press briefing, according to Bloomberg. “But we know that these networks are inherently adversarial. They’re going to keep evolving their tactics as their technology changes.”
Meta did not immediately respond to a request for comment.
In its most recent report, Meta said it found that threat actors are still using generative adversarial networks, or GANs, to create profile pictures for fake accounts, but that the company can detect the inauthentic networks behind these accounts.
Meta said it found a network originating in China that shared poster images, likely generated by AI, for a fictitious pro-Sikh activist movement. In all, the company removed 37 Facebook accounts, 13 Pages, five Groups, and nine Instagram accounts tied to the network, which it said targeted the Sikh community around the world, including in Australia, India, and the UK.
Meta said it found a coordinated inauthentic behavior, or CIB, network based in Israel that posted comments, likely generated by AI, about the Israel-Hamas war and Middle Eastern politics on pages belonging to media organizations and public figures. The comments, Meta said, were mostly written in English, and included “praise for Israel’s military actions” and “criticism of campus antisemitism, the United Nations Relief and Works Agency (UNRWA), and Muslims claiming that ‘radical Islam’ poses a threat to liberal values in Canada.”
Meta removed 510 Facebook accounts, 11 Pages, one Group, and 32 Instagram accounts originating from Israel; the network mostly targeted audiences in the U.S. and Canada. Meta also found Facebook and Instagram accounts originating in Iran that targeted Israel and mostly posted in Hebrew, criticizing Hamas and supporting ultra-conservative Israeli policies.
In addition to these networks, Meta removed 50 Facebook accounts and 98 Pages belonging to a network based in Bangladesh; 104 Facebook accounts, 39 Pages, and seven Instagram accounts based in Croatia; and 1,326 Facebook accounts, 80 Pages, one Group, and one Instagram account from a network of unknown origin targeting audiences in Moldova and Madagascar.