11 States Now Have Laws Limiting Artificial Intelligence, Deep Fakes, and Synthetic Media in Political Advertising – Looking at the Issues
Artificial Intelligence was the talk of the NAB Convention last week. Seemingly, no session took place without some discussion of the impact of AI. One area that we have written about many times is the impact of AI on political advertising. Legislative consideration of that issue exploded in the first quarter of 2024, as over 40 state legislatures considered bills to regulate the use of AI (or “deep fakes” or “synthetic media”) in political advertising – some purporting to ban the use entirely, with most allowing it if the ad is labeled to disclose to the public that the images or voices they are seeing or hearing did not actually occur in the way they are portrayed. While over 40 states considered legislation in the first quarter, only 11 have thus far adopted laws covering AI in political ads, up from 5 in December, when we reported on the legislation adopted in Michigan late last year.
The new states that have adopted legislation regulating AI in political ads in 2024 are Idaho, Indiana, New Mexico, Oregon, Utah, and Wisconsin. These join Michigan, California, Texas, Minnesota, and Washington State, which had adopted such legislation before the start of this year. Broadcasters and other media companies need to carefully review all of these laws. Each of these laws is unique – there is no standard legislation that has been adopted across multiple states. Some carry criminal penalties, while others simply impose civil liability. Media companies need to be aware of the specifics of each of these laws to assess their obligations as we enter an election season in which political actors seem to be getting more and more aggressive in their attacks on candidates and other political figures.
While some of the laws are relatively clear that they are meant to govern only the creators of the political ad (see, e.g., the laws in Texas and Wisconsin), others are worded more broadly and apply not only to those who create the ads but also to those who disseminate them, potentially making media companies, including broadcasters, liable for transmitting ads containing AI (or for not properly labeling such ads). Most, but not all, of the states that have adopted laws include some exemption for broadcasters or other media companies that are simply paid to run political ads (thus far, Minnesota’s law contains no such exemption). But some, for example the law adopted in New Mexico (and the law for online publishers in Michigan), require that, to qualify for the exemption, the media entity adopt and make available to advertisers a policy prohibiting the use of AI in political ads that do not comply with the disclosure or other requirements imposed by the state law. Thus, media companies operating in these states need to adopt and disseminate such policy statements.
We have worked with media companies and media organizations in many states, advising them about pending legislation. Among the biggest concerns have been states that propose an exemption for media companies paid to run political ads but require that the station make “reasonable” or “good faith” efforts to weed out ads that are not properly labeled or that otherwise violate the proposed state law. Another formulation takes concepts from the law of defamation and grafts them onto the AI laws, providing an exemption unless the media company “knows or should know” of the use of a “deep fake” or “synthetic media” in the advertising. These statutes raise significant compliance issues.
First, under FCC rules and the Communications Act, broadcasters and local cable operators are forbidden from censoring any ads from candidates or their authorized committees. This “no censorship” provision requires broadcasters to run candidate ads as the candidate has produced them – no matter how objectionable the content of those ads may be. Because broadcasters and local cable companies cannot censor the ad, they are not liable for its content (for more on the “no censorship” rules, see our articles here and here). The FCC and the courts have even required broadcasters to run racist content or graphic anti-abortion ads that may be disturbing to some viewers – even in programming that may be targeted at children. Thus far, informal FCC guidance has suggested only that the no censorship obligation can be overcome by ads that are legally obscene, or possibly by ones that endanger public safety (e.g., ads that contain an EAS tone). Thus, the no censorship provision could require broadcasters to run candidate ads containing AI even if the ads don’t comply with the labeling requirements imposed by a state statute.
An even broader issue arises because it may be impossible to determine when AI has been used in the generation of a political ad. From articles that I have read and discussions that I have had with people familiar with AI technology, many audio ads created with AI are already almost impossible to identify. Identifying video ads that use AI is also extremely difficult, with detection tools returning many “false positives” – flagging ads as using AI that in fact do not. These deep fake video ads will only get better, making them even harder to identify. How is a broadcaster, particularly a small broadcaster without access to sophisticated technologies, supposed to identify such ads? In these days when some candidates are all too ready to declare even true stories “fake news,” if a broadcaster could be held liable for running ads that may have used AI, it would seemingly become routine for candidates, every time an attack ad is run, to claim that the ad should be pulled because it contains AI, leaving broadcasters with the impossible task of determining whether or not to pull it from the airwaves. Many may shy away from all political advertising to avoid being put in that position (though broadcasters cannot reject advertising from federal candidates because of the Communications Act’s “reasonable access” provision).
This same quandary may well arise as defamation claims are made about ads containing AI (see our discussion of that issue here). But adding state criminal or civil liability on top of potential defamation claims creates a whole new risk for media companies. There may also be defenses available to a defamation claim that would not be available under some of these state laws (for instance, in one of our articles on the issue of media liability for AI in advertising, we note the questions raised by an ad using a candidate’s voice to read that candidate’s actual social media posts – is putting the words of the candidate into their own voice really defamatory?).
Whether defenses to these state laws can be successfully raised, including challenges to their constitutionality, remains to be seen. Many of the statutes are very broad, seemingly banning any use of AI, or requiring its disclosure, even when AI is used only for technical production purposes rather than to imitate a candidate or other political figure. In addition, why are AI “synthetic media” uses banned, while Photoshop, selective editing, or the use of actors to portray a candidate are not covered by many of these bills? Because this has been such an active area of legislative effort in the last few months, media companies must pay attention to the legislative frenzy in this area. Plus, there are federal bills pending but not yet adopted, which we will try to cover in a subsequent post. Stay alert to all this activity to see how it may affect your operations in this most active political year.