
Artificial Intelligence Legislative Outlook: Spring 2024 Update


Last October, the R Street Institute produced an “Artificial Intelligence Legislative Outlook,” which summarized many of the federal bills and legislative frameworks proposed to govern algorithmic systems and processes. Since then, artificial intelligence (AI)-related legislative proposals have grown steadily at all levels of government—according to one tracking system, there are 685 bills currently pending across the United States, 102 of which are federal bills.

This legislative update focuses on three of the most important broad-based federal AI governance proposals introduced recently. In a positive development, most of the major new bills avoid formal AI licensing schemes or big technocratic bureaucracies. Those two regulatory ideas should be non-starters for various reasons. Licensing systems would open the door to burdensome, bureaucratic compliance requirements that could stifle AI innovation, thus hurting competition and global competitiveness more broadly. Further, the costly and controversial nature of top-down licensing schemes and new regulatory agencies would generate considerable opposition and result in protracted policy battles. This would delay—and potentially even derail—efforts to achieve legislative consensus.

The Triumph of “Soft Law”

Recent AI governance proposals seem to appreciate those problems, focusing instead on tapping existing “soft law” governance approaches (albeit with various strings attached). Soft law refers to non-binding governance tools and mechanisms like multi-stakeholder processes, voluntary best practices, industry standards, third-party oversight mechanisms, and government guidance documents. Over the past few decades, soft-law mechanisms have become the dominant governance approach for the internet and digital markets because such processes can evolve rapidly and flexibly to address a variety of fast-moving policy concerns.

However, government actors play an important role in shaping soft law. Over the past 20 years, two agencies within the U.S. Department of Commerce—the National Telecommunications and Information Administration (NTIA) and the National Institute of Standards and Technology (NIST)—have done important work to facilitate ongoing multi-stakeholder efforts to develop standards and best practices that address a variety of complicated digital governance issues.

Specifically, NIST has developed a comprehensive AI Risk Management Framework (AI RMF), a voluntary, consensus-driven, and iterative governance framework to help AI developers manage risks over time in consultation with various other stakeholders. The AI RMF builds on earlier NIST governance frameworks for cybersecurity and privacy.

These soft-law governance approaches are important because many new AI legislative proposals seek to leverage them in some way—especially the NIST AI RMF. In other words, these new bills blend hard-law and soft-law elements to address algorithmic policy concerns. This could be a useful way to address AI policy in a more agile and iterative fashion, but if the proposals move too far in the hard-law direction, it could derail the benefits of the more flexible soft-law governance mechanisms developed over the past decade.

Comparing Three New Comprehensive AI Governance Frameworks

Three notable new Senate AI-related bills meld hard- and soft-law elements for AI governance. Introduced in late February by Sens. Mark Warner (D-Va.) and Marsha Blackburn (R-Tenn.), the “Promoting United States Leadership in Standards Act of 2024” would require NIST to submit a report to Congress that identifies current U.S. participation in standards development activities for AI and other critical technologies. NIST would also be required to establish a new portal that identifies relevant international standardization efforts. The measure would also create a $10 million pilot program for hosting AI-related standards meetings.

This relatively uncontroversial set of proposals would enhance the current approach NIST and the NTIA use to build multi-stakeholder consensus through voluntary best practices and global standards. The measure has attracted widespread industry support.

A newer bill introduced last week by Sens. Maria Cantwell (D-Wash.) and Todd Young (R-Ind.) would go much further than the Warner-Blackburn bill. The “Future of Artificial Intelligence Innovation Act of 2024” formally blesses the newly created U.S. AI Safety Institute housed within NIST. Launched in February by the Department of Commerce, this public-private partnership aims to create collaborative standards and best practices for AI safety. While the Cantwell-Young bill stresses that this system should remain rooted in voluntary standards, it pushes NIST to work with other federal agencies to coordinate metrics, benchmarks, and evaluation methodologies for AI safety on a variety of matters. The bill also calls for more information sharing with international allies on AI safety-related concerns and, like the Warner-Blackburn bill, calls for efforts to promote common global AI standards.

The Cantwell-Young measure includes public-private partnership testbeds to evaluate system safety and make beneficial AI discoveries more widely available. It would also create new “grand challenge” prize competitions to encourage algorithmic and robotic innovation. The bill includes some other industrial policy efforts to promote various types of AI development. Importantly, the bill also calls for a new government report on “identifying regulatory barriers to innovation,” which would include “significant examples of Federal statutes and regulations that directly affect the innovation of artificial intelligence systems, including the ability of companies of all sizes to compete in artificial intelligence” and would also “account for the effect of voluntary standards and best practices developed by the Federal Government.”

Finally, the “Artificial Intelligence Research, Innovation, and Accountability Act of 2023,” introduced late last year by Sens. John Thune (R-S.D.) and Amy Klobuchar (D-Minn.), also builds on the NIST AI RMF but goes much further than the Cantwell-Young bill. The Thune-Klobuchar bill would establish a tiered evaluation process for “critical-impact” AI systems versus slightly less sensitive “high-impact” AI systems. The measure would require different levels of government oversight for each bucket of AI applications.

In the process, the best practices and voluntary standards NIST has developed through multi-stakeholder processes would take on greater importance and carry the threat of fines or regulatory actions if developers did not follow new federal self-certification policies to ensure compliance with those standards. The Thune-Klobuchar bill also includes labeling requirements for digital platforms to clearly indicate whether they use generative AI to create content for users.

Where to Draw the Hard vs. Soft Law Line

Again, all three measures blend hard-law and soft-law governance techniques to varying degrees. The bills all push for the continued development of voluntary best practices for AI safety but also envision an expanded role for government in helping to formulate or steer those policies. This leaves a lot of policy discretion to NIST and the NTIA to determine the scope and nature of these standards. The Warner-Blackburn bill is the most open-ended and least restrictive approach, while the Thune-Klobuchar bill is the most detailed and potentially regulatory approach. The Cantwell-Young bill falls in between.

This move to expand public-private partnerships on AI standards and have government steer soft-law processes more aggressively worries some analysts, who fear that the added government meddling in technical standards will foster a cozy relationship between large companies and government bureaucrats. “Regulatory capture” is a legitimate concern with a long history in the information technology sector because powerful interests have repeatedly used regulatory systems to limit choice, competition, and innovation. Moreover, every step lawmakers take to “add teeth” to the NIST framework moves it closer to becoming a formal and potentially cumbersome regulatory regime instead of the voluntary and highly flexible process that has made it so popular thus far.

More worrisome is the potential for expanded government influence over algorithmic systems to be “weaponized,” such that the government will lean on AI innovators to punish disfavored speech or content from particular people or organizations. It is one thing for government officials to encourage the development of AI safety best practices, but it is quite another for policymakers to convert that process into a backdoor regulatory regime—which would not only undermine innovation but also potentially lack legal accountability.

Despite these dangers, the approach envisioned by these new Senate bills would be far less onerous in practice than more sweeping proposals to mandate AI licensing schemes through large new regulatory bureaucracies. The bills are also preferable to efforts that would mandate algorithmic audits or impact assessments, thereby creating costly paperwork requirements and raising a variety of speech- and intellectual property-related concerns.

Importantly, these bills would also see the legislative branch impose at least a small degree of constraint on the Biden administration’s overzealous efforts to blaze its own trail on AI regulation. With its recent 100-plus-page AI executive order and other major actions, the administration seems intent on unilaterally crafting far-reaching AI regulations without formal authorization from Congress.

Is Broad-based Federal AI Legislation Possible?

It remains unclear whether these new measures can advance with the legislative clock ticking faster during this election year and many other policy priorities up for debate. The complexity of AI policy and these bills also makes action challenging. As argued in an earlier R Street legislative analysis, “there is likely an inverse relation between the ambitious nature of AI proposals and the possibility that anything advances in Congress this session.” Sen. Warner put it best, telling one journalist, “If we try to overreach, we may come up with goose eggs.”

Acting on the temptation to “go big” with comprehensive bills will likely prove complicated and challenging in practice. There are many other AI-related measures pending now that are narrower in focus and could have a better chance at short-term success. Oversight of government uses of AI is one topic where broad consensus exists. The Federal AI Governance and Transparency Act (H.R. 7532), introduced by House Committee on Oversight and Accountability Chairman James Comer (R-Ky.) and Ranking Member Jamie Raskin (D-Md.), would limit its attention to how federal agencies should use AI systems. The Thune-Klobuchar bill also contains similar requirements for a government review of how agencies use AI systems. This targeted focus on government use of AI has a better chance of winning widespread support.

Another thing Congress could do immediately is take a hard look at state and local over-regulation of AI markets and consider limiting those efforts. Unfortunately, none of the major AI proposals currently under consideration addresses preemption of the growing patchwork of state and local regulations. With 583 state measures pending currently, the danger exists that a confusing patchwork of costly compliance burdens could undermine the development of a robust nationwide market in algorithmic innovations.

 


