Information on Four Artificial Intelligence Bills in New Hampshire


So we all know the story, but for those who don't, here is the short version:

Political consultant Steve Kramer hired a New Orleans magician to make some recordings using an AI program. Those recordings imitated Joe Biden’s voice and told voters not to vote in the New Hampshire primary. Allegedly.

The FCC hit the guy with a major fine, which John Henson told you all about.

This episode underscores the importance of the following four legislative measures currently making their way through New Hampshire's legislative process (in no particular order):

HB 1432-FN: Prohibiting Fraudulent Use of Deepfakes

House Bill 1432-FN establishes the crime of fraudulent use of deepfakes, sets penalties for violations, and creates a private right of action for those harmed. This bill aims to combat the misuse of deepfake technology by criminalizing the creation and distribution of deceptive media that can harm individuals’ reputations or lead to financial damage. The bill also includes provisions to prevent lobbyists convicted of such offenses from registering. The effective date on this one is January 1, 2025.

In this bill, AI and deepfake are defined as follows:

AI: The capability of a machine to exhibit human-like cognitive functions, such as reasoning, learning, planning, and creativity. AI systems can adapt their behavior based on past actions and can operate under unpredictable conditions with minimal human oversight.

Deepfake: A video, audio, or other media where a person’s face, body, or voice has been digitally altered to make them appear as someone else, say things they haven’t said, or do things they haven’t done.

A person is guilty of a Class B felony if they knowingly create, distribute, or present deepfakes intended to embarrass, harass, defame, extort, or otherwise cause financial or reputational harm to an identifiable individual. An additional offense is established if the deepfake results in the wrongful arrest of the individual depicted.

Additionally, “[a] person may bring an action against any person who knowingly uses any likeness in video, audio, or any other media of that person to create a deepfake for the purpose of embarrassing, harassing, entrapping, defaming, extorting, or otherwise causing any financial or reputational harm to that person for damages resulting from such use.”

The bill outlines exceptions where the provisions don’t apply, including:

  • Interactive computer services as defined in 47 U.S.C. § 230 for content provided by another party.
  • News reports that acknowledge questions about the authenticity of the deepfake.
  • Election communications where disclaimers are maintained.
  • Satirical or parodic content where the impersonation relies on human abilities rather than AI.

HB 1500-FN: Unlawful Distribution of Misleading Synthetic Media

Currently in interim study, HB 1500-FN addresses the unlawful distribution of misleading synthetic media, including AI-generated content. This bill specifically targets synthetic media used without consent, without clear labeling, and with intent to deceive. It includes stricter penalties for distributing such media close to elections with the intent to influence outcomes or harm candidates.

HB 1500-FN defines “synthetic media” as any media, including text, images, videos, or sounds, that is fully or partially created or modified through the use of artificial intelligence algorithms. This broad definition captures various forms of AI-generated content that can be manipulated to mislead or deceive.

Under the proposed legislation, a person is guilty of unlawful distribution of misleading synthetic media if they:

  1. Distribute or make publicly available synthetic media purported to be of or by an identifiable person without that person’s consent.
  2. Fail to display a conspicuous notice within the synthetic media identifying it as such.
  3. Intend to mislead others about the acts of the identifiable person.

A violation is a Class A misdemeanor.

A person is guilty of unlawful distribution of election-related misleading synthetic media if they:

  1. Distribute or make publicly available synthetic media purported to be of or by an identifiable person within 90 days of a state, county, or local election.
  2. Do so without the consent of the identifiable person.
  3. Fail to include a conspicuous notice identifying the media as synthetic.
  4. Intend to injure a candidate or influence the election result.

A violation is a Class B felony for repeat offenders within five years or when the distribution is made to provoke violence. In other cases, it’s a Class A misdemeanor.

HB 1596-FN: Disclosure of Deceptive AI Usage in Political Advertising

Currently in the conference committee stage, HB 1596-FN mandates disclosures for any AI-generated content used in political advertising. This bill requires clear labels on synthetic media to inform the public when AI is used to create or manipulate media in political campaigns. The effective date on this one is August 1, 2024.

HB 1596-FN defines the following:

  • Synthetic Media: Any image, audio recording, or video recording of an individual’s appearance, speech, or conduct created or manipulated using generative adversarial network techniques or other digital technologies to produce a realistic but false depiction.
  • AI: The capability of a machine to perform cognitive tasks such as reasoning, learning, planning, and creativity, with the potential to adapt its behavior based on previous actions.
  • Generative AI: AI technology that generates text, images, or other media in response to prompts.
  • Deepfake: Media in which a person’s face, body, or voice is digitally altered to depict them as someone else or to show them saying or doing things they never did.

The bill prohibits the distribution of synthetic media within 90 days of an election if the media:

  • Purports to depict a candidate, election official, or party.
  • Lacks the consent of the depicted individual.
  • Does not include a conspicuous notice identifying the media as synthetic.
  • Is intended to mislead the public about the actions or statements of the depicted individual.

The bill mandates that any synthetic media distributed must include a clear disclosure stating: “This [image/video/audio] has been manipulated or generated by artificial intelligence technology and depicts speech or conduct that did not occur.” The disclosure requirements include:

  • For visual media, the text must be easily readable and appear for the entire duration of the video.
  • For audio media, the disclosure must be spoken clearly at the beginning, end, and at intervals not exceeding two minutes.

Candidates or election officials depicted in deceptive and fraudulent deepfakes can seek:

  • Injunctive relief to prohibit the publication of such media.
  • General or special damages, including reasonable attorney’s fees and costs, from the sponsor of the deepfake.

The bill outlines exceptions where the disclosure requirement does not apply:

  • Interactive computer service providers under 47 U.S.C. § 230.
  • Media entities like radio, television, newspapers, and online platforms reporting on deepfakes as part of bona fide news coverage, provided there is a clear acknowledgment of the questionable authenticity.
  • Media entities publishing election communications paid for by a sponsor, provided the sponsor’s disclaimer is not altered.
  • Satirical or parodic content that does not rely on AI for impersonation.

HB 1688-FN: Regulating AI Use by State Agencies

Adopted by both legislative bodies, HB 1688-FN regulates AI use by state agencies, prohibiting its use for manipulation, discrimination, or surveillance of the public. This bill sets strict guidelines for AI deployment, ensuring human oversight and transparency in AI-generated decisions affecting citizens’ rights and freedoms. The effective date on this one is July 1, 2024.

The bill begins by defining the following:

  • Artificial Intelligence (AI): The capability of a machine to perform tasks typically requiring human intelligence, such as reasoning, learning, and planning.
  • Generative AI: AI that can produce text, images, or other media in response to prompts.
  • Deepfake: Media where a person’s face, body, or voice is digitally altered to create realistic but false representations.
  • State Agency: Any governmental entity at the state level, including departments, commissions, boards, offices, law enforcement, and the legislative and judicial branches.

The regulations apply to all computer systems operated by state agencies, with specific exceptions for:

  • Research systems at state-funded institutions of higher learning.
  • Consumer systems in common personal use, like facial recognition on smartphones.

The bill prohibits the following uses of AI by state agencies:

  1. Discrimination: AI cannot be used to classify individuals based on behavior, socio-economic status, or personal characteristics if it results in unlawful discrimination.
  2. Surveillance: Real-time and remote biometric identification systems for surveillance in public spaces, such as facial recognition, are banned unless used by law enforcement with a warrant.
  3. Deceptive Deepfakes: Creating and using deepfakes for deceptive or malicious purposes is prohibited.

Some uses of AI are allowed, but they come with the following restrictions:

  • Human Oversight: Any AI-generated recommendation or decision that cannot be reversed must be reviewed by a human before implementation.
  • Specific Situations: This review requirement applies to decisions impacting individual rights, biometric identification, critical infrastructure management, law enforcement actions, and legal interpretations, including sentencing.
  • AI-Generated Content Disclosure: Content produced by generative AI must be disclosed as such if it has not been reviewed and potentially edited by a human.
  • User Awareness: Users interacting with AI systems must be informed that they are dealing with AI.


