Generative AI: Should We Innovate or Regulate? It’s Time for Choosing
Two key AI proposals seem to be gaining momentum. These bills—one federal, one state—are emblematic of the fundamental choice before us: Should we innovate, or regulate, when it comes to AI policy?
In his famous 1964 “A Time for Choosing” speech, Ronald Reagan cautioned against allowing “a little intellectual elite in a far-distant capital [to] plan our lives for us better than we can plan them ourselves.”
Decades ago, in the early phases of the commercial internet, Congress and the Clinton Administration followed Reagan’s advice and regulated the emerging technologies of the time with a light touch. As a result, the United States soared ahead of our peers on the world stage, becoming the unquestioned global leader in advanced technology for decades.
Today, we face a new set of challenges. Generative artificial intelligence (AI) is poised to cause disruption on a scale we haven’t seen since the internet itself became widely available in the 1990s and early 2000s. Yet despite our storied tradition of self-determination and limited government in the digital economy, we now risk following Europe’s lead by offloading these thorny and nuanced AI decisions to government bureaucrats.
Regulatory bodies around the world—from the European Commission to the United Kingdom’s Competition and Markets Authority—are increasingly targeting American companies with punitive regulations like the Digital Markets Act, the Digital Services Act, and the EU’s new Artificial Intelligence Act.
It is estimated that over 500 legislative bills focused on artificial intelligence were introduced across the states and in Congress within the past two years. That astounding number does not even include the numerous state executive orders, municipal or county-level legislation, or other administrative rulemaking designed to study, limit, or regulate the technology.
Innovate: Voluntary Standards, Regulatory Reform, and Non-Binding Tools Needed to Support the Industry
Two of the U.S. Senate’s leading voices on artificial intelligence policy, Sens. Maria Cantwell (D-WA) and Todd Young (R-IN), recently debuted their latest attempt at federal legislation on the topic: The Future of AI Innovation Act. While there are aspects of the bill that could be improved, the proposal on balance sticks to ALEC’s principles that self-governance and voluntary initiatives are the ideal tools to solve new challenges in technology policy.
The Cantwell-Young bill would continue ongoing research at U.S. institutions like the National Institute of Standards and Technology (NIST) to develop voluntary standards, best practices, and testbeds for AI development and deployment. This voluntary approach avoids burying American innovators in stacks of unnecessary compliance paperwork, and eschews calls from so-called “AI safety advocates” to deliberately slow-roll the pace of advanced AI development.
NIST began much of this work back in 2020, culminating in the landmark AI Risk Management Framework (RMF) that incorporated substantial feedback from industry, academia, and research groups. If enacted, the Future of AI Innovation Act would formally establish a new AI Safety Institute within NIST tasked with three specific missions:
- Conduct research, evaluation, and testing, and support voluntary standards development;
- Develop voluntary guidance and best practices for the use and development of AI;
- Engage with the private sector, international standards organizations, and multilateral organizations to promote AI innovation and competitiveness.
Candidly, there are some potential pitfalls with creating yet another federal sub-agency dedicated to AI regulation. One under-reported aspect of the AI debate on Capitol Hill is just how much existing authority and jurisdiction government agencies already enjoy over AI and automated systems. Many agencies are already exploring and testing their capacity to address today’s AI concerns, exemplified by the FCC’s recent action to target deepfake robocalls by invoking a 1990s-era law.
A new office could be particularly vulnerable to mission creep—especially with an overeager Biden Administration looking to expand government’s role in the development of emerging tech. It could be all too tempting for federal bureaucrats to capitulate under pressure from the many state and federal lawmakers who seek to turn the NIST AI RMF’s voluntary guidance into mandatory obligations that would stifle innovation.
However, compared to some other federal proposals on the table that would impose burdensome licensing requirements or strict sectoral regulation of “high-risk” AI tools, the Cantwell-Young option on paper prioritizes voluntary standards and cooperation with industry players to protect consumers. Congress should carefully craft the AI Safety Institute’s authorizing language to keep NIST laser-focused on its statutory missions and to explicitly prevent the agency from exceeding its mandate.
To mitigate unnecessary or duplicative regulations, the Cantwell-Young bill further directs the Government Accountability Office to study regulatory impediments to AI innovation and competitiveness. This is a welcome development that will help ensure the rules already on the books are serving their purpose of addressing actual cases of consumer harm, not raising barriers to entry in the marketplace. The report will also examine opportunities for AI to improve government services, an area where states like Texas are primed to lead.
Regulate: California Seeks to Strangle AI, Threatening Open-Source Models
Conversely, some lawmakers in the Golden State are fast-tracking legislation they say would prevent large AI models from posing any severe risk to public safety. To do this, the State of California would establish a mandatory government certification and licensing program through a new Frontier Model Division of California’s Department of Technology. Under this new regime, AI developers would be required to certify and attest that any covered foundation model does not have, or even come close to having, a “hazardous capability” before they could begin initial training of the model, let alone conduct a public beta test.
This means that even if there is only a theoretical chance that AI could be abused by bad actors to commit atrocities, the developer of the underlying AI model could be held liable and subject to harsh civil penalties, possibly even the court-ordered deletion of the model “in response to harm or an imminent risk or threat to public safety.” This could be especially problematic for developers of open-source AI models like Meta’s Llama 3, which by design share their developer tools and resources beyond the four walls of any one company for the benefit of the entire open-source AI community.
California’s proposed law seems to imply that industry is not taking safety seriously. However, pioneers developing advanced AI models at companies like OpenAI, Meta, Google, Microsoft, and Anthropic have devoted significant resources to integrating risk mitigation and trustworthiness across their platforms and products.
In the case of OpenAI’s forthcoming generative video tool, Sora, the company voluntarily pledged to conduct “red team” safety tests before deploying it widely to the public. OpenAI believes that “learning from real-world use is a critical component of creating and releasing increasingly safe AI systems over time,” but California’s proposed law would make that nearly impossible.
It’s Time for Choosing
The best thing our government can do to support American entrepreneurs is to simply get out of the way. This is a make-or-break moment for generative AI, as the technology’s initial novelty from ChatGPT’s world debut wears off. Businesses must now begin integrating AI and proving in the marketplace that real-world applications of AI can improve livelihoods and deliver clear value.
The last thing these startups and small businesses need to worry about is meddling from the “little intellectual elite” in Sacramento and Washington. Choosing to saddle this nascent industry with burdensome regulations for their own sake ignores the significant research already being done in the private sector to increase safety, transparency, and trust in AI.
Other nations, including the United Arab Emirates and the People’s Republic of China, are determined to dislodge us from pole position in the race for AI. Let’s instead choose innovation, embrace our competitive advantage, and unleash AI’s full potential.