New Hampshire robocalls stir up new firestorm over artificial intelligence
A DECEPTIVE CAMPAIGN tactic aimed at Granite State voters has added more fuel to an already angst-ridden conversation about the place of artificial intelligence in government, commerce, and civic life – accelerating regulatory changes and prompting new legal challenges across New England and the country.
New Hampshire Democrats received calls before their January primary asking them not to vote until the November general election – the caller purportedly President Joe Biden himself. It wasn’t a human impersonator on the other end of the line, but an AI-generated voice that sounded like the president. Steve Kramer, a veteran political consultant working for a rival candidate, admitted to being behind the calls and expressed no remorse in an interview with NBC News.
Outcry was swift. New Hampshire authorities sent cease-and-desist orders to the two Texas companies they believe were involved in transmitting the messages – Lingo Telecom and Life Corporation. The League of Women Voters of the United States and its New Hampshire chapter filed suit in federal court and are now seeking an injunction to stop Kramer and the companies from producing, generating, or distributing AI-generated robocalls, text messages, or any form of spoofed communication impersonating any person, without that person’s express consent.
“These types of voter suppression tactics have no place in our democracy,” Celina Stewart, chief counsel at the League of Women Voters of the United States, said in a statement. “Voters deserve to make their voices heard freely and without intimidation.”
The Federal Communications Commission had released a notice of inquiry in November seeking input on the implications and use of AI in consumer communications. The Telephone Consumer Protection Act, the main tool federal authorities use to regulate junk calls, prohibits using artificial or prerecorded voices to deliver messages or calls without the prior consent of the person receiving them. Yet the FCC seemed to be mulling whether future rules should account for the possibility that artificial intelligence may at some point be able to interact with callers.
In response, a coalition of 26 state attorneys general, including Massachusetts Attorney General Andrea Campbell, called for the federal government to restrict the use of artificial intelligence in marketing phone calls. They offered a particularly sharp critique of any possible carveout that would leave people receiving unwanted calls simply because AI might be able to imitate something approaching a live agent.
Just two weeks after the New Hampshire primary, the FCC ordered the person behind the Biden robocalls to stop the “illegal effort,” and two days later it issued a declaratory ruling making clear that “the TCPA’s restrictions on the use of ‘artificial or prerecorded voice’ encompass current AI technologies that generate human voices.” In the decision, the FCC specifically cited the New Hampshire robocalls as a troubling instance of voter interference through artificially generated voice calls.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters,” FCC Chairwoman Jessica Rosenworcel said in a statement accompanying the rule change. “We’re putting the fraudsters behind these robocalls on notice.”
The League of Women Voters argues that the robocalls also run afoul of the Voting Rights Act, quite apart from their telecommunications law implications.
Since the FCC decision was released, Campbell has sounded alarms over the use of AI to deceive consumers in the Bay State, issuing an advisory last month reminding developers and suppliers of their obligations under state consumer protection laws.
Major tech companies, including Amazon, Google, Meta, Microsoft, and OpenAI, signed a pact in February to voluntarily adopt “reasonable precautions” to prevent artificial intelligence from being used to disrupt democratic elections. But federal government action beyond the FCC regulations has been glacial, leaving many states, including Massachusetts, grappling with the issue through their legislatures.
The issue of explicitly preventing the use of AI in campaign advertisements has had a rocky reception before federal election authorities. The consumer advocacy nonprofit Public Citizen, founded in 1971 by Ralph Nader, petitioned the Federal Election Commission to prohibit deliberately deceptive artificial intelligence in campaign ads.
In considering the petition in August 2023, FEC Commissioner Allen Dickerson expressed overt irritation at what he insinuated was an attempt to “drive press coverage or public advocacy” through the petition process.
“I’ll note that there’s absolutely nothing special about deep fakes or generative AI – the buzzwords of the day – in the context of this petition,” he said. “If the statute reaches fraudulent attempts to show that an opponent ‘said or did something they did not do,’ it shouldn’t matter how the fraud is accomplished. Lying about someone’s private conversations or posting a doctored document, or adding sound effects in post-production, or manually airbrushing a photograph, if intended to deceive, would already violate our statute.”