
Colorado bill to regulate generative artificial intelligence clears its first hurdle at the Capitol


A Colorado bill that would require companies to alert consumers whenever artificial intelligence is used, and that would add consumer protections around the budding AI industry, cleared its first legislative hurdle late Wednesday, even as critics testified it could stifle technological innovation in the state.

At the end of the evening, most sides seemed to agree: The bill still needs work. 

It passed 3-2 out of the Senate Judiciary Committee along party lines.

“This bill isn’t about changing the world right now,” said Senate Majority Leader Robert Rodriguez, a Denver Democrat and the sole sponsor of Senate Bill 205. “It’s always been about providing a framework for accountability, for biases and discrimination and just making sure that people know when they’re interacting with it.”

Senate Majority Leader Robert Rodriguez explaining his proposed Consumer Protections for Artificial Intelligence bill before the Colorado Senate Judiciary Committee hearing on April 24, 2024. (Tamara Chuang, The Colorado Sun)

The proposed regulations are aimed at companies using AI in software and services that could discriminate against consumers seeking jobs, housing, health care and more. The bill also purposely coincides with a similar measure moving through Connecticut. But opponents, largely from the business and technology community, said any regulation of what’s called “generative AI” at this early stage would stifle innovation and cause companies to leave the state.

“The technology is literally changing on a weekly basis, just like the early days of the World Wide Web. It’s mind numbing how fast it’s moving,” said Kyle Shannon, CEO and founder of Denver-based Storyvine, which is using generative AI to add transcription and summaries to its videos. “I also know from experience how stifling a bill like this would have been in the mid-’90s. And I know that overregulating AI right now is going to put Colorado businesses at a significant disadvantage.”

The newer generative AI became a public phenomenon on Nov. 30, 2022, when OpenAI debuted a new chatbot, ChatGPT, which provided the most human-like responses to questions yet seen. OpenAI is also behind DALL-E, which can generate complex images from just a few prompts. The company has drawn plenty of criticism, partly for using the open internet and potentially copyrighted material to train its AI systems.

But other Big Tech companies have come under fire as well for training their AI perhaps a little too much in one direction. When Google debuted its updated AI system Gemini in February, it wound up apologizing for historical inaccuracies, such as depicting the U.S. founding fathers or Nazi-era German soldiers as people of color.

The proposed bill closely resembles one moving forward in Connecticut, which after several changes was pushed through by Senate Democrats and passed that state’s Senate on Wednesday night, though reportedly without the blessing of its governor, also a Democrat. The Connecticut bill also had support from IBM and Microsoft.

In fact, Rodriguez said he’d worked with colleagues in Connecticut on uniform language in order to start building a national guideline, instead of each state having its own take on how to protect consumers from any of AI’s ills. Revisions made in Connecticut were part of an amendment Rodriguez introduced Wednesday night. 

“There will be some changes coming to the bill,” Rodriguez testified. “All that we’re asking for companies to do (is put) in place a notice to consumers, (perform) risk assessments on their tools and have an accountability report when something goes wrong that results in discrimination. That’s what this bill does.”

Opponents of the bill said that regulation is needed in some form, but that this is not the right bill.

“The bill regulates AI means that might do harm, rather than the harms that are actually done by some means,” testified Michael Boucher, founder of Santa Technologies in Lafayette. He noted that singer Taylor Swift would likely be just as upset with a human who tweaked her image using digital-photo editing software as she would if someone used AI to create fake pornographic images of her. 

Kouri Marshall, with the Chamber of Progress, a progressive organization focused on technology and social justice, said that tech companies must invest, and are investing, in detection and mitigation products to prevent potential harms to society.

“As a Black American, I know all too well the ugly head of discrimination and what that feels like,” Marshall testified Wednesday. “However, senators, pinpointing the sorts of catalysts of discriminatory outcomes of AI systems is not always possible, nor is consistently determining who or what is responsible for the act of discrimination. Unfairly biased outcomes are problematic for developers, deployers and users like all of us in this room.” 

He suggested it would be better to strengthen existing laws, like civil rights protections, to make sure vulnerable citizens are protected online and offline.

Several groups also asked the committee to amend the bill, including representatives from the Colorado Attorney General’s Office, ACLU, Colorado Technology Association and the Governor’s Office of Information Technology. 

“Legislating in an area where technology evolves daily is inherently challenging and fraught with unintended consequences. What may seem like a sensible regulation today could quickly become obsolete or counterproductive tomorrow,” testified Michael McReynolds, senior manager of government affairs at OIT. With a federal effort also underway, he added, “we do have a concern that the state could get too far ahead or sideways of those upcoming rules.”

McReynolds said his office would be happy to support the creation of a task force that would include industry and others affected by the law to come up with a bill, much like the process used in the past to craft the state’s privacy laws and other measures.

Colorado Senate Judiciary Committee hearing on Senate Bill 205, the Consumer Protections for Artificial Intelligence bill, on April 24, 2024. (Tamara Chuang, The Colorado Sun)

While the bill now moves to the Senate for a first vote, there was skepticism that it would be passed this session, which ends in two weeks. 

“I do worry that bringing it with just 13 days left in session might be a big lift just because we already have many laws in the books that make discrimination illegal, whether it’s from a company, a person, or technology that the company is using,” said Sen. Kevin Van Winkle, R-Highlands Ranch, who voted no. “In just the definition section alone, it’s clear from the hearing today that definitions like algorithmic discrimination, high-risk artificial intelligence versus simple artificial intelligence or just a definition of artificial intelligence. … There’s a lot to it.” 

Rodriguez, who also worked on the state’s data privacy bill that is considered one of the strongest in the nation, referenced that law in his closing comments. The privacy law was three years in the making, including a year during which the attorney general’s office finalized rules.

“We were in a similar place — the bill goes too far, the bill doesn’t go far enough. It’s a hard thing to navigate and at the time when we passed the consumer privacy law, we were only the third state and the second in that year. And that’s what we’re attempting to do here is to set the groundwork,” he said. The proposed AI bill would take effect Jan. 1, 2026, if it passes. “And if there’s any conflicts that can’t be (resolved), we have another year to come back and do tweaks to this bill to fix that.” 

Senate Bill 205 will now be debated in the full Senate. It’s unclear when that will happen, but Colorado’s legislative session ends May 8.




