Impact of Artificial Intelligence on Elections
Overview: Artificial Intelligence and Elections
Artificial intelligence (AI) refers to the “capability of computer systems or algorithms to imitate intelligent human behavior.” The term dates back to the 1950s, and AI capabilities have grown alongside improvements in computing overall. In recent years, AI sophistication has accelerated rapidly while the cost of accessing it has declined.
AI is already integrated into many aspects of the U.S. economy, and many people interact with AI tools in their daily lives. For example, banks use AI to assist in a variety of functions, including fraud protection and credit underwriting. Email services like Gmail use AI to block unwanted spam messages, and e-commerce websites like Amazon use AI to connect individual shoppers with the products they are seeking. Yet despite this integration in the background of modern life, AI recently entered the public consciousness more directly with the release of OpenAI’s ChatGPT, a chatbot that creates written content, and related “generative AI” tools that can produce highly realistic audio recordings, photos, and videos. When AI is used to imitate well-known people, particularly for nefarious purposes, the result is known as a “deepfake.”
From an election perspective, these advances in AI technology are likely to impact the overall information environment, cybersecurity, and administrative processes. Although this emerging technology has garnered attention largely for its risks, as we outline below, there are also significant potential benefits to leveraging AI in our election environment.
Information Environment
An election’s information environment is the most obvious area in which AI could have an impact. AI tools empower users to generate and distribute false information quickly, convincingly, and at very low cost. The content itself can range from sophisticated deepfake videos created with generative AI programs to simpler text- or image-based disinformation.
In terms of distribution, AI tools can enable bots to flood social media sites with an overwhelming wave of misinformation. AI technology can also be paired with other communication methods such as cell phones to create and deliver targeted deceptive robocalls. This misinformation could be technical, such as lies about changes to voter eligibility or the return deadline for absentee ballots, or political, such as a video of a candidate speech that never actually happened.
Such AI-generated misinformation has already entered our political sphere. In the lead-up to the 2024 New Hampshire Democratic primary, many voters received a robocall discouraging them from voting, delivered in a voice that sounded like President Joe Biden’s. Investigators determined that the audio recording was generated by AI and commissioned by a supporter of Dean Phillips, one of Biden’s opponents in the primary election.
On the Republican side, a Super PAC supporting Ron DeSantis for president ran a television advertisement featuring language written by Donald Trump in a social media post and read by a voice that sounded like Trump but was generated by AI. Unlike the Biden robocall, the message in the ad accurately represented something communicated by Trump, but the audience was still left with the false impression that Trump had made the statement in a recorded speech.
While each of these examples was quickly identified as a deepfake and deconstructed extensively in the media, they are harbingers of future deceptive campaign tactics, particularly in lower-profile races with fewer resources and dimmer media spotlights. Even in high-profile races with narrow margins, the use of such deceptive tactics in a key swing state could have a significant impact, especially since false information spreads quickly on social media.
Current responses to this threat are multipronged. In the private sector, major tech companies released an agreement in February 2024 outlining their approach to managing the threat of deceptive AI related to elections. The accord emphasizes voluntary measures to identify and label deceptive AI content and supports efforts to educate the public about AI risks. At the same time, individual companies have taken steps to limit, or in some cases bar, the use of their products or platforms for political purposes.
Meanwhile, in the public sector, federal and state lawmakers have introduced new bills and regulations to prohibit the use of AI in political communications or to require greater public disclosure when it is used. State and local election officials have also mobilized across the country by holding tabletop exercises to practice their responses to various potential disruptions, including AI-generated misinformation.
Cybersecurity
Cyberattacks, which seek to steal confidential information, alter information within a system, or shut down a system, have long been a risk to elections. For example, a cyberattack on a voter registration database could increase the risk of identity theft, while taking an election office website offline could prevent the timely reporting of results. Cyberattacks can also be paired with coordinated disinformation campaigns that amplify the consequences of a minor cyber incident to damage public perception of an election’s integrity.
Recognizing this risk, the federal government designated elections as “critical infrastructure” in 2017 and, in 2018, tasked the Cybersecurity and Infrastructure Security Agency (CISA) with oversight of the nation’s election cybersecurity. While cyberattacks, particularly from overseas actors, remain a pressing threat, CISA has recognized that “generative AI capabilities will likely not introduce new risks, but they may amplify existing risks to election infrastructure.”
For example, efforts to access confidential data often include attacks on voter registration databases that contain personally identifiable information such as names, birthdates, addresses, driver’s license numbers, and partial social security numbers. In 2016, hackers gained access to voter registration databases in Arizona through a phishing email to a Secretary of State’s office staffer and in Illinois through a structured query language (SQL) injection—a common technique used to take malicious action against a database through a website. More recently in Washington, D.C., hackers stole voter data and offered it for sale online in October 2023. Similarly, 58,000 voters from Hillsborough County, Florida, had their personal information exposed after an unauthorized user copied files containing voter registration data.
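To make the technique concrete, the minimal Python sketch below (a generic illustration, not drawn from any actual incident; the table and data are hypothetical) shows how input spliced directly into a database query can be abused, and how the standard parameterized-query defense blocks the same payload:

```python
import sqlite3

# A stand-in voter registration table with fake data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE voters (name TEXT, dob TEXT)")
conn.execute("INSERT INTO voters VALUES ('Jane Doe', '1980-01-01')")

def lookup_unsafe(name: str):
    # VULNERABLE: user input is spliced directly into the SQL string, so
    # a crafted "name" can rewrite the query itself.
    query = f"SELECT * FROM voters WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def lookup_safe(name: str):
    # SAFE: a parameterized query treats the input strictly as data, so
    # the same payload matches nothing instead of altering the query.
    return conn.execute("SELECT * FROM voters WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"
print(lookup_unsafe(payload))  # dumps every row in the table
print(lookup_safe(payload))    # returns [] -- the payload is just a string
```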
Beyond the theft of personal information, cybercriminals can disrupt access to a system or website through a distributed denial-of-service (DDoS) attack. These attacks seek to overwhelm a website with high volumes of traffic generated by bots working in tandem, ultimately knocking the site offline. The Mississippi Secretary of State’s site suffered a DDoS attack on Election Day in 2022, which prevented public access to the website. While DDoS attacks disrupt the flow of information, they are relatively limited in impact and generally do not affect the voting process or ballot integrity.
Although it is clear that AI has accelerated the sophistication and power of cyberattacks, it is important to recognize that it can also bolster our cyber defenses. The technology can help improve threat detection and remediation, enhance the efficiency of the cyber workforce by handling routine tasks, and strengthen data security. While it may be easier for election officials to identify the threats posed by emerging technologies like AI, they must also look for appropriate ways to incorporate AI into their cybersecurity strategies.
Even if election offices start to incorporate AI into their cyber defense, a number of existing cybersecurity best practices can help reduce the threat of AI attacks. Multifactor authentication, strong passwords, email authentication protocols, and cybersecurity training for staff can help stop AI-generated phishing and social engineering attacks. According to CISA, “election officials have the power to mitigate against these risks heading into 2024, and many of these mitigations are the same security best practices experts have recommended for years.”
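As one concrete illustration of multifactor authentication, the short sketch below uses the third-party pyotp library to demonstrate time-based one-time passwords, the rotating six-digit codes generated by an authenticator app. The workflow shown is a generic example, not a description of any particular election office’s setup:

```python
# Requires the third-party "pyotp" package: pip install pyotp
import pyotp

# Each staff account is provisioned a shared secret, typically displayed
# once as a QR code and stored in an authenticator app on a phone.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # time-based codes that rotate every 30 seconds

# At login, a stolen or phished password alone is not enough: the attacker
# would also need the current code from the enrolled device.
submitted_code = totp.now()  # stands in for what the user types from the app
print(totp.verify(submitted_code))  # True only while the code is current
```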
Election Administration
One of the hallmarks of the election system in America is decentralization. Elections are administered at the state and local level, with more than 10,000 election jurisdictions across the country. This design can instill confidence in the process by maintaining a sense of connection between voters and the individuals administering their local elections, while making it more difficult for bad actors to disrupt elections at scale. At the same time, this approach can also create operational inefficiencies and workforce challenges, some of which could be eased by tools employing AI technology.
One example of where AI could help election administrators is in the tabulation of hand-marked paper ballots. Approximately 95 percent of U.S. voters will cast a paper ballot in 2024, and most of those votes will be cast by filling in a bubble or checking a box with a pen. The rest will use ballot-marking devices where the voter makes the selections on a machine that then generates a paper ballot with the voter’s choices. Both types of paper ballots are then processed through an optical scanner that records the votes.
Invariably, some hand-marked ballots will be unreadable by the optical scanner, for reasons ranging from physical damage to unclear markings. When this happens, the ballot must be reviewed by election workers and either replicated on a new, readable ballot or hand counted, depending on the jurisdiction.
Researchers from three universities are developing a system that uses AI to assist in this process of reviewing hand-marked paper ballots. Specifically, the technology would serve as a check on the primary ballot scanner by identifying ballots for election workers to analyze more closely due to ballot anomalies such as marks outside the typical voting area or bubbles that are lightly filled in. They are also exploring how the technology could help identify fraudulent ballots completed by a single individual.
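While the researchers’ actual system is more sophisticated, the toy Python sketch below illustrates the underlying idea of flagging ambiguous marks: it measures how much of a scanned bubble is filled with ink and routes borderline cases to a human. All thresholds and image values here are hypothetical:

```python
import numpy as np

# Hypothetical thresholds: pixels darker than INK_THRESHOLD count as marked,
# and fill ratios between the two bounds are too ambiguous to score by machine.
INK_THRESHOLD = 128            # grayscale: 0 = black, 255 = white
EMPTY_BELOW, FILLED_ABOVE = 0.15, 0.60

def classify_bubble(region: np.ndarray) -> str:
    """Classify one bubble's grayscale pixel crop by its ink coverage."""
    fill_ratio = float((region < INK_THRESHOLD).mean())
    if fill_ratio >= FILLED_ABOVE:
        return "filled"
    if fill_ratio <= EMPTY_BELOW:
        return "empty"
    return "flag for human review"   # a light or partial mark

# Simulated 20x20 bubble crops: blank paper, a solid mark, and a faint one.
blank = np.full((20, 20), 255)
solid = np.full((20, 20), 10)
faint = np.full((20, 20), 255)
faint[3:14, 3:14] = 100              # partial, lightly inked region

for name, region in [("blank", blank), ("solid", solid), ("faint", faint)]:
    print(name, "->", classify_bubble(region))
```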
Signature verification is another process where AI is already improving the efficiency of election administration. To confirm that a ballot belongs to the intended voter, many states require signature verification for mail-in ballots: the signature on the envelope containing the ballot is compared against the voter’s signature on file. This time-consuming and labor-intensive process can be expedited by technology that flags only those signatures requiring additional human review. As of 2020, at least 29 counties were using AI for this purpose.
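The sketch below illustrates the triage logic such a tool might use, with random feature vectors standing in for the output of a trained signature-matching model; the similarity cutoff is hypothetical, and the key design point is that the software only ever accepts or escalates, never rejecting a ballot on its own:

```python
import numpy as np

AUTO_ACCEPT = 0.90  # hypothetical cutoff; real systems tune this carefully

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def triage_signature(on_file: np.ndarray, envelope: np.ndarray) -> str:
    """Route an envelope by similarity of signature feature vectors.

    The vectors stand in for the output of a trained signature-embedding
    model; only the routing logic is sketched. The machine can accept or
    escalate, but it never rejects a ballot on its own.
    """
    if cosine_similarity(on_file, envelope) >= AUTO_ACCEPT:
        return "accept"
    return "send to human review"

rng = np.random.default_rng(0)
reference = rng.normal(size=128)                           # on-file signature
close_match = reference + rng.normal(scale=0.1, size=128)  # same signer
mismatch = rng.normal(size=128)                            # different signer

print(triage_signature(reference, close_match))  # "accept"
print(triage_signature(reference, mismatch))     # "send to human review"
```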
An important consideration for the use of AI in these types of election administration tasks is maintaining human touchpoints throughout the process. AI tools are not perfect, so humans must stay involved, especially when AI helps determine a voter’s eligibility to cast a ballot or whether a mail-in ballot will be counted based on signature verification. It will also be important for election offices to be transparent with the public about the vendor behind a tool, how the tool is used, and any plans for addressing problems that arise.
Public record requests provide a final example of AI deployment in election administration, one that succinctly illustrates both the potential risks and benefits of the technology. Since 2020, election offices have experienced an uptick in public record requests. While there are certainly legitimate reasons for seeking access to public documents and encouraging government transparency, there are also ample opportunities to make records requests in bad faith as a way of bogging down the system. Unfortunately, AI could worsen the problem: a bad actor could use AI tools to rapidly generate and disseminate requests across multiple jurisdictions, diverting resources in understaffed election offices and disrupting election processes.
Yet, by the same stroke, AI can help increase transparency while keeping local election officials focused on running elections. Local officials can use AI to process records requests and initiate the search for relevant records, ultimately increasing government efficiency and transparency. A number of federal agencies, including the State Department, Justice Department, and Centers for Disease Control and Prevention, are already experimenting with AI tools to assist in managing and fulfilling public-record requests. Local election offices could benefit from similar tools to help manage the influx of requests while maintaining focus on their primary function of administering secure and trustworthy elections.
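As a simple illustration of how such tooling might help on the intake side, the sketch below uses Python’s standard library to flag near-duplicate request text, a signature of templated, mass-generated campaigns, so staff can answer a batch once. The threshold and sample requests are hypothetical, and a production system would use more robust text-similarity methods:

```python
from difflib import SequenceMatcher

DUPLICATE_THRESHOLD = 0.85  # hypothetical similarity cutoff

def find_near_duplicates(requests: list[str]) -> list[tuple[int, int, float]]:
    """Return index pairs of requests whose text is suspiciously similar."""
    flagged = []
    for i in range(len(requests)):
        for j in range(i + 1, len(requests)):
            ratio = SequenceMatcher(None, requests[i], requests[j]).ratio()
            if ratio >= DUPLICATE_THRESHOLD:
                flagged.append((i, j, round(ratio, 2)))
    return flagged

inbox = [
    "Please provide all cast vote records for the November 2022 election.",
    "Please provide all cast-vote records for the Nov. 2022 election.",
    "Requesting the poll worker training manual used in the last primary.",
]
# The first two read as one templated request with cosmetic edits, so staff
# could research and answer them once rather than twice.
print(find_near_duplicates(inbox))  # e.g., [(0, 1, 0.91)]
```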
Policy Responses
In response to advancing AI technology, policymakers at the federal and state level are primarily focused on minimizing the impact of AI-driven election disinformation. The proposals they draft often seek to prohibit the use of AI for deceptive purposes in elections or require disclosure of the use of AI in campaign speech.
At the federal level, members of Congress have introduced at least five bills aimed at restricting AI in elections, two of which were approved by the Senate Rules Committee on May 15. Independent agencies like the Federal Communications Commission (FCC) and Federal Election Commission (FEC) have also taken, or are considering, action under their existing regulatory authority. Meanwhile, 17 states now have laws on the books that ban the use of AI in certain election circumstances or establish disclosure requirements.
These legislative moves indicate clear momentum in favor of continued government action around AI, and public opinion generally favors such action across party lines. However, questions remain about the constitutionality of these laws, as well as their effectiveness at limiting the impact of election disinformation. In the following section, we examine these efforts to minimize AI-driven electoral harms through legislation and regulation that establishes disclosure requirements and prohibitions. We also assess how a less restrictive legislative proposal in Congress and a recent decision by the U.S. Election Assistance Commission (EAC) could provide lessons for an alternative approach that empowers local election officials and emphasizes public education and individual responsibility.
Prohibition and Disclosure
In response to the potential impacts of AI on elections, policymakers have generally proposed or enacted laws and regulations that either ban the use of AI for certain purposes or require a disclosure indicating that AI was used to produce the image, video, or audio in an election communication. Among the states that have enacted legislation related to AI and elections, requiring a disclosure is the most common approach.