Flurry of California Legislation Takes Aim at AI Misuse


(TNS) — If you ask Sen. Tom Umberg, the legislature’s role in regulating artificial intelligence is multi-faceted.

It’s no secret that California is expected to play an outsized role in AI regulation. After all, the state is home to many of the world’s largest AI companies, the governor’s office boasts.

But it’s a balancing act, Umberg says: addressing concerns about bias and transparency in the AI space while encouraging innovation and startups.


“At one point, (the California Legislature) had 55 bills dealing with AI, mostly focused on risks. But we are creating both the regulatory entity that will provide guidance as well as some safety mechanisms to make sure that the risks that are inherent in AI are mitigated,” said Umberg, a Santa Ana Democrat who chairs the Senate Judiciary Committee, which hears many AI-related bills.

Gov. Gavin Newsom, speaking at an AI event in San Francisco last week, also warned against over-regulating AI, Politico reported.

“I don’t want to cede this space to other states or other countries,” Newsom reportedly said. “If we over-regulate, if we overindulge, if we chase the shiny object, we could put ourselves in a perilous position.”

So what are the bills legislators are considering this year?

Bills still in consideration — after clearing a major legislative deadline last month — tackle deepfakes (digitally altered photos or videos that seem realistic but are, in fact, fake and can be extremely harmful), data transparency and election security, among other things.

One still in play is AB 1856 from Assemblymember Tri Ta, R-Westminster, which would create misdemeanor penalties for knowingly distributing pornographic deepfake videos or photos of someone without that person’s consent.

The bill is supported by the California State Sheriffs’ Association, which argues that AI technology has “exacerbated the prevalence and severity of revenge porn,” which is commonly used by an ex-partner to embarrass, coerce or otherwise harm someone. But the California Public Defenders Association opposes it, arguing that it could violate First Amendment protections.

Another, from Assemblymembers Marc Berman, D-Menlo Park, and Gail Pellerin, D-Santa Cruz, is billed as an election integrity bill. It would require large online platforms to block the posting of digitally altered false images, videos or audio recordings that purport to show a candidate saying or doing something they did not actually say or do.

Earlier this year, voters in New Hampshire received a robocall that sounded like the voice of President Joe Biden encouraging them not to participate in the primary elections, according to the state’s attorney general.

“We have entered the age of AI-generated disinformation, which poses a severe risk to our elections and our democracy,” said Berman. “Deepfakes are a powerful and dangerous tool in the arsenal of those who want to wage disinformation campaigns, and they have the potential to wreak havoc on our democracy by attributing speech and conduct to a person that is false or that never happened.”

Other bills include:

Data transparency: AB 2013 would require developers of AI systems to disclose information about the data used to create the system or service. Authored by Assemblymember Jacqui Irwin, D-Thousand Oaks, the bill aims to enhance consumer protection, according to the bill’s analysis, by giving consumers a greater understanding of how these AI systems and services work.

“Consumers may use this knowledge to better evaluate if they have confidence in the AI system or service, compare competing systems and services or put into place mitigation measures to address any shortcomings of the particular system or service,” Irwin said in the analysis.

Safeguards for large-scale systems: SB 1047 would require developers of powerful, large-scale AI models, and of the technology used to train those models, to implement certain safety and security safeguards.

SB 1047 would also create CalCompute, “a public AI research cluster that will allow startups, researchers and community groups to participate in the development of large-scale AI systems,” according to the bill’s analysis.

“By focusing its requirements on the well-resourced developers of the largest and most powerful frontier models, SB 1047 puts sensible guardrails in place against risk while leaving startups free to innovate without any new burdens,” said Sen. Scott Wiener, D-San Francisco. “We know this bill is a work in progress, and we’re actively meeting with stakeholders and seeking constructive input.”

California Artificial Intelligence Research Hub: SB 893 would establish a new entity to “facilitate collaboration between government agencies, academic institutions and private sector partners to advance artificial intelligence research and development,” according to the bill.

The idea behind the hub from Sen. Steve Padilla, D-San Diego, is to facilitate innovation in AI development; ensure AI technologies are prioritizing fairness and transparency; provide researchers with access to data, training and education; and support AI development through the building of a public computing infrastructure and ensuring access to existing commercial infrastructures, among other things.

“Emerging AI technologies are costly and energy intensive and require broad-based coordination among institutions and other sectors,” said Padilla. “Shared resources will be vital to the continued development of AI technology in California. The creation of the California Artificial Intelligence Research Hub allows us to pool and leverage the state’s financial resources and the intellectual firepower of our academic sector to democratize AI and stop it from becoming monopolized by proprietary interests alone — the tech titans.”

Staff writer Hanna Kang contributed to this report.

©2024 MediaNews Group, Inc. Visit ocregister.com. Distributed by Tribune Content Agency, LLC.
