Colorado enacts consumer protections for artificial intelligence

On May 17, 2024, Colorado enacted SB 24-205 (the “Act”) concerning consumer protections in interactions with artificial intelligence (AI) systems. The Act requires developers of “high-risk” AI systems—defined as AI systems that make, or are a substantial factor in making, “consequential” decisions relating to education, employment, financial or lending services, housing, or insurance, among other areas—to take reasonable care to protect consumers from “any known or reasonably foreseeable risks of algorithmic discrimination” that could arise from the use of such systems. The Act also grants the Attorney General (AG) rulemaking authority to implement and enforce its requirements.

Beginning February 1, 2026, developers must provide deployers with comprehensive documentation comprising a general statement of the AI system’s reasonably foreseeable uses and known harmful or inappropriate applications, high-level summaries of the training data, known or reasonably foreseeable limitations and risks, the system’s purpose, and its intended benefits and uses. Developers must also disclose how the AI system was evaluated for performance and for mitigation of algorithmic discrimination, the data governance measures applied to its data sets and sources, the intended outputs, the measures taken to mitigate risks, and guidelines on the proper use and monitoring of the system.

Developers must also publish a statement, on their website or in a public use-case inventory, summarizing the types of high-risk AI systems they have developed or substantially modified and how they manage the potential risks of algorithmic discrimination associated with those systems. Additionally, deployers of high-risk AI systems must notify consumers when such a system is involved in making a consequential decision, allow consumers to correct inaccurate personal data, and establish an appeal process for adverse determinations that, where technically feasible, allows for human review.

Finally, developers of high-risk AI systems must disclose any known or reasonably foreseeable risks of algorithmic discrimination to the AG and to all known deployers or other developers of the system. Such disclosure must occur without unreasonable delay and no later than 90 days after the developer becomes aware of the risk. The Act also authorizes the AG to request documentation or statements from developers to ensure compliance.


