AI industry pushes back on Colorado bill


DENVER (KDVR) — Artificial intelligence is the target of several bills making their way through the Colorado legislature, including one that seeks to protect consumers from AI.

The bill specifically targets “high-risk” AI systems, broadly defined as systems that use artificial intelligence to direct or operate critical infrastructure, safety components, education and other areas where a failure or bias in the AI could dramatically affect individuals’ quality of life.


The bill does not, however, enumerate which specific systems qualify as “high-risk.”

As currently written, the bill would require developers of high-risk AI systems to disclose when such systems are in use, describe the data used to train the AI, implement a risk-management program to prevent algorithmic discrimination and identify when content is artificially generated or manipulated.

The measure is sponsored by Sen. Robert Rodriguez, the Senate majority leader, who represents portions of Arapahoe, Denver and Jefferson counties.

“Every bill we run is going to end the world as we know it. That’s a common thread you hear when you run policies,” Rodriguez said Thursday to the Associated Press. “We’re here with a policy that’s not been done anywhere to the extent that we’ve done it, and it’s a glass ceiling we’re breaking trying to do good policy.”

One AI entrepreneur said the landscape of the artificial intelligence industry has changed dramatically since legislators began to contemplate regulating it.

“The technology is literally changing on a weekly basis, just like the early days of the World Wide Web,” said Kyle Shannon, co-founder of the AI Salon. “I also know from experience how stifling a bill like this would have been in the mid-90s. And I know that over-regulating AI right now is going to put Colorado businesses at a significant disadvantage.”

Business leaders have said they understand the motive for the measure, but argue that regulations should focus on how the technology is used and examine the legal ramifications more closely.

Over 400 AI-related bills are being debated this year in statehouses nationwide, most targeting one industry or just a piece of the technology — such as deepfakes used in elections or to make pornographic images.

Some entrepreneurs have pushed back on the bill, saying the rules would be difficult to follow and that it could hurt Colorado businesses more than it would help. Many say they do not oppose AI regulations but want them to be based on how the technology is used — not on how it is made.

Bill seeks to ensure discrimination isn’t spread by AI

Many lawmakers have sought to regulate one of the technology’s most troubling dilemmas: AI discrimination. Examples include an AI system that failed to accurately assess Black medical patients and another that downgraded women’s resumes while filtering job applications.

Still, up to 83% of employers use algorithms to help in hiring, according to estimates from the Equal Employment Opportunity Commission.

If nothing is done, there will almost always be bias in these AI systems, explained Suresh Venkatasubramanian, a Brown University computer and data science professor who’s teaching a class on mitigating bias in the design of these algorithms.

“You have to do something explicit to not be biased in the first place,” he said.

These AI proposals, mainly in Colorado and Connecticut, are complex, but their core thrust is that companies would be required to perform “impact assessments” for AI systems that play a large role in making decisions about people in the U.S. Those reports would include descriptions of how AI figures into a decision, the data collected and an analysis of the risks of discrimination, along with an explanation of the company’s safeguards.

The Colorado bill, as written, would be enforced by the state attorney general’s office. Between July 1, 2025, and June 30, 2026, the attorney general would issue notices to alleged violators, and if the violation can be cured, the office would give the violator 60 days to fix it before bringing an enforcement action.

Under the bill, companies that use AI would not have to routinely submit impact assessments to the government. Instead, they would be required to disclose to the attorney general if they found discrimination; no government or independent organization would be testing these AI systems for bias.

The measure is scheduled to be discussed by the Senate Judiciary Committee on Wednesday afternoon.

The Associated Press contributed to this report.


