Generative AI Muddies the Election 2024 Outlook, and Voters Are Worried
“It’s important you save your vote for the November election,” a voice that sounded an awful lot like President Joe Biden’s told Democrats in New Hampshire during a January robocall that discouraged them from voting in that month’s primary.
But it wasn’t Biden. It was an AI-generated voice that one of the men behind it now says was intended to draw attention to how AI can be harnessed to influence voter behavior.
Technology has long been used to sway voters. In the last two presidential elections, the vehicle was primarily social media, where manipulated content spread like wildfire. Videos of former House Speaker Nancy Pelosi, for example, went viral after they were slowed down to make her speech sound slurred. By 2023, nearly two-thirds of US internet users said misinformation and fake news were widespread on these platforms.
The advent of generative AI tools, which can easily create realistic text, images and videos (and audio like the fake-Biden call), only exacerbates the potential for misinformation in 2024. It’s a new reality that government, tech companies and voters will be grappling with in the coming months.
It’s something that software giant Adobe, maker of Photoshop, is mindful of. Last week, Adobe released the results of a study, Future of Trust, which asked 6,000 consumers in the US, the UK, France and Germany about online misinformation and generative AI. The study found that a majority of respondents are concerned about misinformation, particularly in the context of elections.
Adobe itself has a gen AI tool, Firefly, part of a growing landscape that includes chatbot and image-generation options from the likes of Anthropic, Google, Microsoft and OpenAI. As these tools become more sophisticated, offering the ability to create ever more lifelike images and videos, they increase the potential for creativity as well as for misuse. The tech companies behind them have built guardrails to limit the creation of harmful content, but users have found loopholes. The cat-and-mouse game will continue through Election Day 2024 and beyond.
(For more on generative AI tools, along with all the latest AI news, tips and explainers, see CNET’s new AI Atlas guide.)
Misinformation 2.0
According to a July 2023 report from the Brennan Center for Justice, a nonpartisan law and policy institute at New York University, gen AI tools help blur the lines between organized disinformation campaigns and recipients' worldviews. That is, they make it easier to tell voters what they want to hear. And though traditional disinformation posts on social media can be outed by, say, grammatical errors or strange turns of phrase, gen AI can help bad actors sound much more convincing.
These actors can also use large language models — the engines behind AI chatbots — to generate millions of posts and create false impressions of widespread belief in certain narratives. And they can use chatbots to personalize interactions based on voter characteristics.
Not surprisingly, the Adobe survey found that voters believe that deepfakes, or manipulated media, will once again be used to influence what happens at the polls.
Adobe found that 84% of US respondents are worried that online content is vulnerable to manipulation, and are therefore concerned about election integrity. Meanwhile, 70% believe it's becoming difficult to verify whether online content is trustworthy, and 76% say it's important to know whether content was generated by AI. Social media companies like Meta, TikTok and YouTube now require users to label digitally generated and edited content. And generative AI tools like Adobe Firefly, Dall-E 3 and Copilot attach information about where a photo, video or document originated and who created it, as well as details about any subsequent alterations.
Potential solutions
Following the faked Biden robocall in New Hampshire, the US Federal Communications Commission ruled that robocalls using AI-generated voices are illegal. But there's still a lot of work to do.
The Brookings Institution, a nonprofit think tank, says the government should invest in media literacy programs to help students learn how to distinguish factual content from misinformation. States like California, Colorado and Illinois have already implemented such programs. (84% of Adobe respondents in the US agreed that children should be taught media literacy in school.)
Brookings also called for efforts to reduce the ability of foreign interests to spread misinformation in US elections.
In Adobe’s survey, 83% of US respondents said they believe the government should also work with tech companies to protect election integrity.
The Brennan Center suggests lawmakers focus their efforts on regulating AI. It also wants to see developers refine election-related AI filters, and it wants social media platforms to develop policies to better balance political discourse with the potential harm from disinformation.
How to protect yourself
You can do your part. AI-generated content, especially images and video, often has telltale signs you can look for, such as distorted hands, garbled text or inconsistent lighting and shadows. Also check where the content came from: Is it a reputable source?
There are fact-checking tools, too, like digital watermarks on social media or Adobe’s Content Credentials, which identifies where content comes from and whether it’s AI-generated. (In Adobe’s survey, 88% of US respondents said they believe it’s essential to have tools to verify the trustworthiness of online content.)
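For the technically curious, here's a simplified sketch of what peeking at an image's embedded metadata can look like in Python, using the Pillow library. To be clear, this is an illustration, not Adobe's Content Credentials system: Content Credentials rely on cryptographically signed C2PA manifests and dedicated verification tools, and the file name below is hypothetical.

```python
# A minimal sketch: read an image's embedded EXIF metadata for
# provenance hints, such as the software used or the creation date.
# NOTE: This is not a Content Credentials check. C2PA manifests are
# cryptographically signed and require dedicated verification tools.
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_metadata(path: str) -> dict:
    """Return a readable dictionary of any EXIF tags embedded in the image."""
    with Image.open(path) as image:
        exif = image.getexif()
        # Map numeric EXIF tag IDs to human-readable names where known.
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

if __name__ == "__main__":
    tags = inspect_metadata("photo.jpg")  # hypothetical file name
    if not tags:
        print("No EXIF metadata found. That alone proves nothing either way.")
    for tag, value in tags.items():
        print(f"{tag}: {value}")
```

Metadata like this is trivial to strip or alter, which is exactly why signed provenance systems such as Content Credentials exist in the first place.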
Residents of New Mexico and North Carolina can access state-run resources to help fact-check information about local elections. But no such resources are yet available on a national level.
We’re also starting to see voter education campaigns emerge.
In March, the nonprofit AIandYou released one such campaign, Behind the Headlines, in partnership with LeanIn.Org, Voto Latino and TelevisaUnivision. It targets women and Black and Hispanic communities and seeks to educate these voters about AI’s potential impact on misinformation and the electoral process.
In the age of generative AI, we’re going to need all the help we can get.
Editors’ note: CNET used an AI engine to help create several dozen stories, which are labeled accordingly. The note you’re reading is attached to articles that deal substantively with the topic of AI but are created entirely by our expert editors and writers. For more, see our AI policy.