CT must protect against harm from AI, lawmakers say


After facing pushback from Gov. Ned Lamont, tech insiders and state officials, Sen. James Maroney said he is “nearing completion” on a rewrite of a sweeping artificial intelligence bill that seeks to shield residents from discriminatory algorithms and criminalize the spread of certain AI-generated pornographic and political material.

The Senate signaled Monday that the next draft of the bill will sacrifice key regulatory measures to assuage concerns from small businesses, big tech, the governor and the Department of Economic and Community Development, all of whom fear that too much regulation could push tech companies out of the state.

Flanked by advocates, Senate leaders and colleagues at a press conference Monday, Maroney said that the state must pass the AI protections this session.

“We have an obligation to act now,” Maroney said. “We know that there are harms that we need to address.”

The proposal would establish a framework to regulate AI algorithms that often work surreptitiously in the public and private sectors to decide anything from credit scores to social services interventions.

It would also criminalize the spread of non-consensual AI-generated intimate images, also known as deepfake pornography, and AI-generated election material that falls under the bill’s category of “deceptive media.”

Maroney said Monday that the bill will also move AI innovation and adoption forward, by funding AI education, training and research, and requiring state agencies to determine how they may use AI to improve performance.

With 16 days left in the legislative session, Senate President Pro Tempore Martin Looney said that passing Maroney’s AI bill is a top priority.

“We believe that this has to come to a vote. It’s such a critical issue,” Looney said. “This is fundamental.”

A version of the AI bill cleared the Judiciary Committee Monday in a bipartisan vote that was not without reservations from lawmakers who said they want to see changes to the language.

“This bill does remain something of a work in progress,” Judiciary Committee Co-Chair Rep. Steven Stafstrom said. “We’ll support it today to continue the conversation, but I do certainly hope that amendments are made before this sees the House.”

House Speaker Matt Ritter told reporters last week that he was “sympathetic” to arguments from the governor’s office and DECD that some provisions of the bill may be “onerous” on smaller tech startups.

After speaking with the Lamont administration, Maroney said he plans to scrap certain provisions in the original bill that applied to general-purpose AI model developers. Maroney said the now-deleted section “was causing the most angst among a lot of large companies.”

According to a summary of the proposal prepared by the Office of Legislative Research, the section would have required the developers, by Jan. 1, 2026, to establish a policy regarding federal and state copyright laws and “create, maintain, and make publicly available a detailed summary on the content used to train the general-purpose AI model,” in addition to other specified technical documentation and information.

While Maroney said he agrees with the Lamont administration and tech representatives that it “may be too soon for that section of the bill,” Maroney said he hopes to establish a task force to resurrect the provision and “get that section right” next year.

Maroney said the next draft will also address small business concerns. The next iteration of the bill, he explained, will establish a partnership with the Connecticut Academy of Science and Engineering to develop an AI compliance checker and an algorithmic impact assessment, based on existing international models, that allow developers and deployers to check their systems against current law.

He said the new language will also shield deployers from some of the bill’s risk management framework and other provisions as long as they use non-customized “off the shelf” AI that has been tested by developers.

Maroney said he hopes the legislation will encourage tech companies to abandon Mark Zuckerberg’s famous “move fast and break things” mantra, and instead slow down and identify potential harms before releasing algorithmic technology to the public.

“We need to look at things, make sure they’re safe, make sure there aren’t any disparate impacts, test them, and then release them to the public,” Maroney said.

Maroney said the state wants to partner with companies to “do this together the right way.”

“We’re looking for ways to help small businesses come into compliance and build trustworthy AI because we know that’s how they’ll flourish,” Maroney said. “We know in the long run that’s really going to be cheaper and better for them.”

Matthew Wallace, the president and CEO of VRSim, a Connecticut company that has built training simulations using virtual reality for the last 20 years, said that the proposed AI bill will “foster growth, not inhibit it.”

“Regulation is an important way to avoid unintended consequences of technology,” Wallace said. “We’re building AI functions right now into our products. We like the concept of regulation. We like the concept of certainty. We like the concept of understanding where the guidelines go (and) what the requirements should be.”

Senate Majority Leader Bob Duff said that data privacy issues, social media harms and other problems have proliferated as a result of a hands-off mentality federal and state governments adopted as the internet became mainstream in the 1990s.

Duff said policymakers cannot afford to take the same laissez-faire approach with AI.

“Our job here is not to repeat the sins of the past. Our job is to make sure that we are putting the necessary guidelines and guardrails (and) parameters around this technology now,” Duff said.

“We shouldn’t be afraid to embrace this technology, but we also should not be afraid to put the rules of the road on there,” Duff added. “I think that will actually help our state and help our country.”
