Rapid growth of AI prompts legislation to expand safety measures and create an NSA center



The Secure Artificial Intelligence Act of 2024 would improve the tracking and processing of security and safety incidents and risks associated with Artificial Intelligence (AI).

Specifically, the legislation aims to improve information sharing between the federal government and private companies by updating cybersecurity reporting systems to better incorporate AI systems. The legislation would also create a voluntary database to record AI-related cybersecurity incidents including so-called “near miss” events.

U.S. Sens. Mark R. Warner of Virginia, Chairman of the Senate Select Committee on Intelligence, and Thom Tillis of North Carolina, who are the bipartisan co-chairs of the Senate Cybersecurity Caucus, introduced the legislation today.  

As the development and use of AI grow, so does the potential for security and safety incidents that harm organizations and the public. Efforts within the federal government led by the National Institute of Standards and Technology (NIST) and the Cybersecurity and Infrastructure Security Agency (CISA) play a crucial role in tracking cybersecurity vulnerabilities through the National Vulnerability Database (NVD) and the Common Vulnerabilities and Exposures (CVE) Program, respectively. The National Security Agency (NSA), through its Cybersecurity Collaboration Center, also provides intelligence-driven cybersecurity guidance for emerging and chronic cybersecurity challenges through open, collaborative partnerships. However, these systems do not currently reflect the ways in which AI systems can differ dramatically from traditional software, including the ways in which exploits developed to subvert AI systems (a body of research often known as “adversarial machine learning” or “counter-AI”) often do not resemble conventional information security exploits.

The legislation updates current standards for cyber incident reporting and information sharing at these organizations to include and better protect against the risks associated with AI. The bill also establishes an Artificial Intelligence Security Center at the NSA to drive counter-AI research, provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.

“As we continue to embrace all the opportunities that AI brings, it is imperative that we continue to safeguard against the threats posed by — and to — this new technology, and information sharing between the federal government and the private sector plays a crucial role,” Warner said. “By ensuring that public-private communications remain open and up-to-date on current threats facing our industry, we are taking the necessary steps to safeguard against this new generation of threats facing our infrastructure.”

The Secure Artificial Intelligence Act would:

· Require NIST to update the NVD and require CISA to update the CVE Program or develop a new process to track voluntary reports of AI security vulnerabilities;

· Establish a public database to track voluntary reports of AI security and safety incidents;

· Create a multi-stakeholder process that encourages the development and adoption of best practices that address supply chain risks associated with training and maintaining AI models; and

· Establish an Artificial Intelligence Security Center at the NSA to provide an AI research test-bed to the private sector and academic researchers, develop guidance to prevent or mitigate counter-AI techniques, and promote secure AI adoption.

“Safeguarding organizations from cybersecurity risks involving AI requires collaboration and innovation from both the private and public sector,” Tillis said. “This commonsense legislation creates a voluntary database for reporting AI security and safety incidents and promotes best practices to mitigate AI risks. Additionally, this bill would establish a new Artificial Intelligence Security Center, within the NSA, tasked with promoting secure AI adoption as we continue to innovate and embrace new AI technologies.”

Christopher Padilla, vice president of Government and Regulatory Affairs at IBM Corp., said the company is proud to support the legislation.

“We commend Sen. Warner and Sen. Tillis for building upon existing voluntary mechanisms to help harmonize efforts across the government. We urge Congress to ensure these mechanisms are adequately funded to track and manage today’s cyber vulnerabilities, including risks associated with AI,” Padilla said. 

ITI President and CEO Jason Oxman said AI systems must be safe and secure, which is paramount to building public trust in the technology.

“ITI commends U.S. Senators Warner and Tillis for introducing the Secure Artificial Intelligence Act, which will advance AI security, encourage the use of voluntary standards to disclose vulnerabilities, and promote public-private collaboration on AI supply chain risk management. ITI also appreciates that this legislation establishes the National Security Agency’s AI Security Center and streamlines coordination with existing AI-focused entities,” Oxman said.

One company alone cannot tackle AI security, according to Jason Green-Lowe, Executive Director of the Center for AI Policy.

“AI developers have much to learn from each other about how to keep their systems safe, and it’s high time they started sharing that information. That’s why the Center for AI Policy is pleased to see Congress coordinating a standard format and shared database for AI incident reporting. We firmly support Sens. Warner and Tillis’s new bill,” Green-Lowe said.
