
Senate AI Group Recommends $32 Billion in Spending


A bipartisan Senate group wants the government to spend at least $32 billion on AI.

That funding, the senators said in a report issued Wednesday (May 15), would be spread over three years to help develop artificial intelligence (AI) and create safeguards around the fast-growing technology.

Speaking to the Associated Press (AP), the four senators — two Democrats and two Republicans — stressed the need to reach a consensus on AI as other countries invest heavily in the technology and products like ChatGPT boom in popularity.

Sen. Todd Young (R-Ind.) said it was the development of that model that made the panel recognize that “we’re going to have to figure out collectively as an institution” how to grapple with the technology.

“In the same breath that people marveled at the possibilities of just that one generative AI platform, they began to hypothesize about future risks that might be associated with future developments of artificial intelligence,” Young said.

As the AP and other publications note, getting any sort of AI legislation passed by a divided Congress in an election year will be an uphill fight.

“It’s complicated, it’s difficult, but we can’t afford to put our head in the sand,” said Senate Majority Leader Chuck Schumer (D-N.Y.), who formed the group last year.

The report comes amid other efforts to promote AI safety around the world. Last week, the U.K.’s AI Safety Institute released “Inspect,” a software library that lets everyone from startups, academics and AI developers to international governments test specific capabilities of individual AI models and then generate a score based on the results.

The institute says Inspect is the first AI safety testing platform overseen by a government-sponsored body and released for wider use.

“As part of the constant drumbeat of U.K. leadership on AI safety, I have cleared the AI Safety Institute’s testing platform — called Inspect — to be open sourced,” said Michelle Donelan, the U.K.’s secretary of state for science, innovation and technology.

And earlier this month, PYMNTS explored Europe’s efforts to regulate AI in a conversation with Lars Nyman, chief marketing officer at CUDO Compute.

“Big picture, Europe leans toward stricter regulations and ethical considerations — it’s a similar picture as that of privacy and data protection, where U.S. and EU diverge significantly,” Nyman told PYMNTS.

“The EU has a more comprehensive approach with binding rules for various AI applications impacting social and economic aspects. [The] U.S. is more focused on industry-driven innovation with lighter regulations. Individual agencies are starting to develop guidelines for specific sectors, though.”


