The FCC and Congress Advance Proposals to Regulate Artificial Intelligence in Political Advertising


We’ve written several times (see for instance our articles here, here, and here) about the action in state legislatures to regulate the use of artificial intelligence in political advertising – with approximately 17 states now having adopted laws or rules, most requiring the labeling of “deep fakes” in such ads, and a few banning deep fakes entirely.  Action on the federal level seems to be picking up, with two significant developments in the last week.  This week, FCC Chairwoman Jessica Rosenworcel issued a Press Release announcing that the FCC would be considering the adoption of rules requiring broadcasters and other media to include disclaimers when AI is used in political advertising.  Last week, the Senate Committee on Rules and Administration considered three bills addressing similar issues.  These actions, along with a long-pending Federal Election Commission proceeding to consider labeling obligations for federal election ads (see our article here), are the federal government’s attempts to address this issue – though, given the time left before the election, none of these proposals appears likely to have a significant effect during the current election cycle.

At the FCC, according to the Chairwoman’s Press Release, a draft Notice of Proposed Rulemaking is circulating among the Commissioners for their review.  The proposal would require broadcasters, local cable companies, and other regulated entities with political broadcasting obligations under FCC rules to include mandatory disclosures on political ads when AI is used.  The disclosures would be required both on the air and in writing in a station’s FCC-hosted online public inspection file.  While the text of the NPRM is not yet public, the Press Release did provide some specifics as to the questions that would be asked in this proceeding.

According to the Press Release, issues to be addressed would include:

  • Whether to apply the disclosure rules to both candidate and issue advertisements;
  • What specific definition of AI-generated content should be used; and
  • Whether to apply the disclosure requirements to broadcasters and to other entities that engage in origination programming, including cable operators, satellite TV and radio providers, and section 325(c) permittees (companies originating programming in the US and providing that programming to cross-border broadcast stations to be transmitted back into the US to a US audience).

As we recently wrote in discussing the activity underway in various states, there are many important questions that need to be addressed in any rules, whether legislative or administrative, that govern the use of AI in political advertising.  Among the questions listed above is one of the most important – how to define “artificial intelligence.”  Artificial intelligence is a very broad term that can sweep in many technologies – some of which are used all the time in the production of media content without any nefarious motivation.  In video production, AI technologies can help with adjusting brightness, balancing colors, and ensuring that audio and video are properly synchronized, none of which would usually matter to the public.  In adopting laws on AI in political ads, many states have been careful to limit the reach of their laws to “deep fakes” – content portraying someone in a situation that never really occurred.  A deep fake can involve inventing a scene entirely or changing what was said or done to make it appear that something happened that did not.  While we may think that we know what these bills are intended to address, arriving at specific language defining when legal obligations apply and when they don’t is not easy.

Even if an appropriate definition can be crafted, the circumstances in which the AI is used are also important in any regulation.  Was the deep fake used to attack a candidate for political purposes, or was it used for purposes of satire or parody – or in a news story (perhaps even a story about the proliferation of deep fakes)?  Appropriate exceptions for these uses are contained in most well-drafted state bills, and hopefully are under consideration by the FCC.

Perhaps most importantly, the obligation to root out the use of AI needs to be properly assigned.  As we noted in our article about state legislation, the obligation to identify deep fakes cannot be placed on broadcasters and local cable systems, as there simply is no technology that they can use to identify deep fakes quickly and with complete accuracy.  I have been told that the tools now available to identify AI tend to over-flag ads – identifying ads as problematic that really are not – perhaps because of all the innocuous uses of AI technologies in video production and transmission.  If broadcasters face liability for running an ad that lacks the proper disclaimers about the use of AI, many innocent ads may be rejected, inhibiting political speech.  Potential liability would also incentivize candidates being attacked to claim that attack ads are deep fakes or “fake news,” as broadcasters who cannot accurately judge whether an ad contains AI will be more likely to reject challenged ads to avoid that liability.

Recent press articles highlight the problems with the identification of deep fakes.  Last weekend, this article from the Washington Post detailed how AI-detection systems that claim to be very accurate may, in fact, not be so accurate when used in the real world.  Another recent article lauded journalists in India for their efforts to root out election deep fakes in that country – though the article noted that these journalists succeeded only through cooperative programs using resources provided by academics and other organizations, and that accurate detection and verification of deep fakes often took weeks.  In the US political advertising environment, where ads change weekly – or even daily – taking weeks to identify deep fakes does little good.  And, even if the technology and resources to identify deep fakes are available to big companies with deep pockets, the vast majority of US broadcasters do not have that access or those resources.  How will a little radio station in rural America be able to tell whether an ad attacking a school board or city council candidate is accurate?  It is not unrealistic to assume that deep fakes will be used even in these local races.  See this article, also from the Washington Post (and an AP story here about the same incident), detailing how it took law enforcement months to determine that an audio recording purporting to document a high school administrator disparaging certain students was a deep fake – highlighting that the technology simply is not there for a broadcaster to determine when deep fakes are present in political advertising.  Stations will be at risk, with no easy way to determine when the risk is real, likely leading to less political speech reaching the public.

In many of the state laws that we have been engaged to review for media companies and broadcast associations, state legislators have determined that, because of these concerns, liability must be placed on those who create the deep fake ads, rather than on the broadcasters and other media that distribute those ads.  The FCC, of course, does not have jurisdiction over advertisers – it can regulate only the media companies subject to its rules, and only to the extent that Congress has provided.  Commissioner Carr has already issued a statement questioning whether the FCC’s authority is really broad enough to support the imposition of any new AI identification requirement.  If the FCC concludes that it does have this authority, hopefully any action that comes from this week’s Press Release will recognize that broadcasters cannot be asked to determine when an AI disclosure is needed – the most that they can be expected to do is ask a sponsor about its use of AI and rely on the sponsor’s disclosures, just as broadcasters do now when asking for the identity of the sponsor of a political ad.

We will be waiting to see how these issues are addressed when the NPRM is released for public comment.  If the NPRM is approved by a majority of the Commissioners and released to the public, interested parties will need to be given adequate time to comment and to reply to the arguments of others.  After comments and replies are filed, the Commission must review the record and formulate a final decision.  All of that takes time, as required legal processes must be followed – so action in time to take effect during this election period seems unlikely.

As noted above, Congress is also looking at AI in political ads.  This past weekend, in our look back at the prior week’s regulatory activity relevant to broadcasters, we wrote about the Senate Committee on Rules and Administration meeting to review three bills addressing the effect of artificial intelligence on elections.  The first bill, the Protect Elections from Deceptive AI Act, would prohibit the distribution of deceptive AI-generated audio or visual media relating to federal elections, with exceptions for use in bona fide newscasts by broadcasters and cable and satellite television providers if a disclaimer is used at the time of broadcast.  The second bill, the AI Transparency in Elections Act of 2024, would require the use of disclaimers in political advertisements that include any AI-generated media.  The final bill, the Preparing Election Administrators for AI Act, would require the Election Assistance Commission to develop voluntary guidelines for election administrators regarding the use and risks of AI in the upcoming 2024 elections.  All three bills were voted out of committee – the first two with two Republicans objecting, claiming that the language of those bills was too vague to determine what was prohibited, and arguing that the vagueness could violate free speech rights.  Those two bills raise some of the same concerns that we note above, and we will address them in more detail in a later post.

The Committee’s approval of these bills is only the first step in the legislative process.  These bills will become law only if approved by the full Senate and the House of Representatives, and then signed by the President.  In an election year like this one, with Congress focused on getting done only what it absolutely must so that members can spend time campaigning, it will be hard for there to be any substantial action on these bills before the conclusion of this legislative session at the end of the year.

We will be following these issues as they develop, as you should, to determine how they will affect your operations in the political broadcasting arena. 


