
Massachusetts Attorney General Issues Advisory on Artificial Intelligence


On April 16, 2024, the Massachusetts Attorney General (AG) published an Attorney General Advisory on the Application of the Commonwealth’s Consumer Protection, Civil Rights, and Data Privacy Laws to Artificial Intelligence. The Advisory warns businesses “that existing state laws and regulations apply” to artificial intelligence (AI) and algorithmic decision-making systems in the same way they apply in any other context.


Background


The Advisory begins by setting out the context for its guidance, acknowledging AI’s “opportunities” while emphasizing its risks. It then lists a series of consumer harms from AI, including that (1) “consumers are harmed when AI does not function as intended or does not meet minimum quality and efficiency standards,” (2) AI developers “continue to market and sell AI systems knowing these shortfalls and that they may cause harm to consumers,” and (3) “AI systems are being deployed in ways that can deceive consumers and the public.”


Guidance


Against this backdrop, the Advisory provides guidance on how Massachusetts consumer protection, data security, and antidiscrimination laws apply to AI systems. The Advisory identifies several activities as violations of the Massachusetts consumer protection law:


  • Falsely advertising the quality, value or usability of AI systems;
  • Misrepresenting the reliability, manner of performance, safety or condition of an AI system, including unsubstantiated claims about its capabilities;
  • Misrepresenting a person’s audio or video content for the purpose of deceiving another, including the misleading use of deepfakes, voice cloning or chatbots;
  • Supplying an AI system that is defective, unusable or impractical for the advertised purpose; and
  • Offering an AI system for sale that is not fit for its ordinary purpose or the specific purpose for which it is offered.


The Advisory then quotes a regulation that categorizes any violation of state laws “meant for the protection of the public’s health, safety, or welfare” as a violation of the consumer protection law.1 Similarly, the Advisory states that violations of federal consumer protection laws may constitute violations of state law, highlighting statements from the Federal Trade Commission that the deceptive use of AI, including AI that “impersonates a government, business, or their officials,” is unlawful.


The Advisory makes clear that the state data security law applies to AI systems, meaning AI systems must safeguard any protected personal information they use, and must comply with the law’s breach notification requirements.


Regarding state antidiscrimination laws, the AG advises that it is unlawful for an AI system, including an algorithmic decision-making tool, to use “discriminatory inputs” or “produce[] discriminatory results,” including results that disadvantage individuals based on protected characteristics, even absent discriminatory intent. The Advisory offers a specific example, advising that creditors covered by the federal Equal Credit Opportunity Act must give accurate and specific reasons to consumers whose loan applications are denied, including when the creditor uses AI systems.


Implications


The Advisory is a reminder that states do not need new legislation to regulate AI: state consumer protection, data security, and antidiscrimination laws already apply to the use of AI systems. The Advisory also signals that the Massachusetts AG is looking closely at situations where AI may negatively affect consumers. The AG raises particular concern with (1) generative AI outputs that deceive consumers, including deepfakes and chatbot hallucinations, and (2) algorithmic decision-making tools that rely on biased inputs or produce disparate results. To launch an investigation, the AG needs only a reasonable belief that a person has engaged in an unfair or deceptive act. All businesses, including those that use or sell generative AI or algorithmic decision-making tools, should ensure that their AI systems comply with state law.


Eric Gold is the former Chief of the Health Care Division in the Massachusetts AG’s Office.


1 In Klairmont v. Gainsboro Restaurant, Inc., 465 Mass. 165, 174 (2013), the Massachusetts Supreme Judicial Court “read the regulation as being bound by the scope of” the consumer protection law.



