Colorado passes first-in-U.S. AI regulations
With the stroke of Gov. Jared Polis’ reluctant pen, Colorado this month became the first U.S. state to pass a law expressly regulating the use of artificial intelligence — a milestone that supporters said was an imperfect starting point to establish oversight of an emerging industry.
The state’s new law broadly targets the risk of discrimination when companies use AI, while requiring basic levels of transparency.
“I think Colorado’s AI act is a major shift in the approach to AI oversight in the United States,” said Duane Pozza, a Washington, D.C.-based lawyer who specializes in emerging technology and artificial intelligence. “… This is the first law to try to regulate AI — or at least certain kinds of AI — more comprehensively and really impose a lot of requirements on what gets defined as high-risk AI systems.”
Starting in early 2026, the state will require certain companies that use AI to make “consequential” decisions to disclose the use and purpose of the technology to consumers, job applicants and others who interact with it. The law is intended to help Coloradans who may be screened by an AI tool after they’ve applied for a job, a financial service, an educational program, or a home or apartment.
Under the law, a job applicant screened by an AI tool will be informed that they're dealing with a machine and told why the tool is being used. If someone is rejected for a job or apartment, they'll also be given an explanation.
The developers of such AI tools also will have to disclose more information about the systems, such as how they’re tested for biases. The measure broadly seeks to limit AI systems’ ability to discriminate against certain people or groups, supporters said, though it doesn’t change existing discrimination law.
“Depending on how you program the decision-making, (the AI) could take out certain names, it could take out certain people because of race. Bias is inherently happening in some of this stuff,” said state Sen. Robert Rodriguez, a Denver Democrat who sponsored the bill.
“All we’re asking,” he added, is that companies “notify somebody who’s interacting with it, and that you do risk assessments and other things just to show you’re updating (the AI) when you catch harms. Because it’s going to happen.”
Colorado’s law and similar attempts to pass legislation in other states have faced battles on many fronts, including between civil rights groups and the tech industry. Some lawmakers have been wary of wading into a technology few yet understand, and governors have worried about being the odd-state-out and spooking AI startups. Polis expressed worries about stifling AI innovation.
States have debated many more bills this year aimed at narrower slices of AI, such as the use of deepfakes in elections or to make pornography.
When it comes to more comprehensive regulation, similar measures to Colorado’s have failed in states including Washington and Connecticut, whose proposal served as a model for Colorado’s initial version. Another bill in California has survived so far.
Colorado’s new law is likely among the first broader AI laws in the world, said Rep. Brianna Titone, an Arvada Democrat who co-sponsored the bill with Rodriguez and Democratic Rep. Manny Rutinel. The European Union also approved AI regulations this year.
Senate Bill 205 was repeatedly rewritten on its journey through the State Capitol. Amid opposition from the tech industry, rumors swirled that it would be vetoed, until the governor’s office announced on the night of Friday, May 17, that Polis had signed it.
Still, he expressed doubts: In a statement accompanying his signature, the governor wrote that he was “concerned about the impact this law may have” on the tech industry and AI developers. He said he hoped lawmakers would improve the measure before it goes into full effect on Feb. 1, 2026.
Rodriguez, the Senate’s majority leader, said he didn’t think the bill needed to be further tweaked next year, though he and Titone both said additional AI regulations doubtless would be needed in the years to come.
The legislature this year passed a companion bill that would expand an existing task force to examine artificial intelligence.
“Extremely modest” requirements?
When the bill was introduced, it was viewed with skepticism from across the political spectrum. Scores of companies, industry associations and labor groups registered a formal position on the bill as legislators debated it; none said they supported it.
Labor groups were concerned the proposed law could be used to circumvent existing discrimination protections. Consumer advocates wanted tighter rules. Tech and industry groups opposed it as stifling a growing industry.
Those groups argued that the federal government should take the lead to ensure a unified approach.
Following amendments, labor groups and consumer advocates largely dropped their open opposition and were more comfortable with the bill, if not overly enthusiastic about it.
“It has some basic disclosure provisions that would at least provide some sunlight on the shadowy world of AI-driven decisions,” said Matt Scherer, the senior policy counsel for workers’ rights and technology at the Center for Democracy and Technology in Washington, D.C.
“Beyond that, the bill’s requirements are extremely modest,” he said. “And I would say even the disclosure provisions are far short of what public interest groups have been pushing for. But at least it’s a good baseline.”
Scherer said the bill includes exemptions for smaller businesses and allows companies to withhold information they consider a trade secret. He also argued that the bill’s enforcement provisions — which would ultimately allow the Colorado attorney general to intervene against a company that violates the law — should be strengthened.
Arguing against state-by-state approach
Tech companies and industry groups, meanwhile, remain opposed. Some opponents, including the national Consumer Technology Association, sent Polis letters requesting that he veto the bill.
Doug Johnson, the association’s vice president of emerging technology policy, said in an interview Wednesday that he was concerned the bill would limit an emerging technology and industry.
“We are in a brand new era of primordial soup,” said Logan Cerkovnik, founder of Colorado-based Thumper.ai, referring to the field of AI. “Having overly restrictive legislation that forces us into definitions and restricts our use of technology while this is forming is just going to be detrimental to innovation.”
Johnson and others advocated for Congress to take the lead on a federal approach.
“Overall, the effort on policymaking here needs to be focused on getting a balanced and appropriate AI policy that puts rules and guardrails in place at the national level,” Johnson said. “… We do not need a fractured, state-by-state approach.”
Rodriguez and Titone both said they didn’t want to wait for a sluggish Congress to act, particularly as AI has seemingly burst forth from the realms of science fiction into reality over the past year.
Scherer dismissed the “patchwork argument” as one tech companies use to duck stricter regulations.
And anyway, he argued, Colorado’s law won’t extend some sort of iron grip over AI.
“My bottom line on this is it’s a foundation upon which to build,” he said. “And it is definitely not a high-water mark that should be whittled down. It’s a floor, not a ceiling.”
The Associated Press contributed to this story.