Should The Federal Government Regulate Artificial Intelligence?
Artificial intelligence is in the public policy spotlight. In October 2023, the Biden Administration issued a Presidential Executive Order on AI directing federal agencies to cooperate in protecting the public from potential AI-related harms. President Biden said in his March 2024 State of the Union address that government enforcers will crack down on the use of AI to facilitate illegal price-fixing. Congress is in the preliminary stages of considering legislation that could pave the way for future regulation of AI.
The European Union went “one better” by adopting a far-reaching AI Act in March 2024. It bans AI systems that are “considered to pose an unacceptable risk for the health, safety, and fundamental rights of individuals.” The act also regulates general purpose AI models that can be applied to a wide variety of tasks, imposing obligations on them according to the level of risk they impose, or are believed to impose, on the public.
Should the federal government follow in Europe’s footsteps and adopt comprehensive AI regulation to forestall bad conduct? The short answer is no, not at this time.
Laws Already On The Books Disincentivize AI Abuses
First, keep in mind that AI, like all other technologies, is already fully subject to generally applicable U.S. criminal and civil laws.
Indeed, the 2023 AI Executive Order highlighted the application of these laws (which address civil rights, national security, antitrust, privacy, labor rights, health care, and foreign affairs) to challenge the full array of potential AI abuses.
This is more than rhetoric. The Justice Department, for example, is reported to be “escalating” its criminal antitrust probe into an alleged scheme to raise and fix rental housing prices using AI algorithmic software. And in January 2024, the Federal Trade Commission announced it was investigating the competitive effects of AI-related partnerships and investments.
These and similar government initiatives send a strong signal to AI developers to avoid harmful conduct.
Regulation Is The Foe Of Innovation And Economic Growth
Case-by-case enforcement of general laws allows specific business abuses to be targeted and addressed, without otherwise interfering in business planning and initiatives.
Regulation, by contrast, establishes a framework of rules governing private sector conduct. Too often, this turns into inflexible, one-size-fits-all approaches that ignore specific commercial circumstances and fail to respond in a timely manner to changes in technology and the business environment. And it is often the already-entrenched business interests that benefit, leveraging their influence to shape regulation in ways that preserve their advantages.
A statistical analysis published in 2017 in the Journal of Regulatory Economics found that more-regulated industries saw fewer new firms enter and slower employment growth across firms of all sizes. It also noted that “[l]arge firms may even successfully lobby government officials to increase regulations to raise their smaller rivals’ costs.”
Regulation also tends to reduce innovation, as found in a 2023 American Economic Review article. Innovation is a key driver of higher productivity and economic growth, as explained in a 2017 European Central Bank paper.
Those factors counsel strongly against rushing to regulate a new technology that is currently accelerating innovation in a wide range of applications and may require new governing approaches, just as the commercialization of the internet once did. Specific cases spotlight the benefits of avoiding too-much, too-soon regulation.
American Internet Freedom Versus The EU’s Stifling Approach
The U.S. decision not to overregulate the internet in its infancy is a great public policy success, one that policymakers examining AI should study closely. Economic analyst Mohamed Mouti explores the fruits of this decision in EconLog:
“In the mid-1990s, the Clinton administration made a wise choice. They declared the internet a ‘market-driven area,’ not regulated, with limited government involvement only to support and enforce a predictable, minimalist, consistent, and simple legal environment. This policy allowed a new generation of creative minds to explore this frontier for business and commerce. This approach led to the internet’s success, resulting in a surge of innovation. Today, the US is home to the most innovative tech firms, hosting vibrant internet-based companies and bringing countless benefits to consumers and small businesses.”
In contrast, the European Union, which favors the “precautionary principle” of avoiding risk through early adoption of regulation, has had a terrible innovation track record.
Take the case of the EU’s efforts to regulate data privacy through its 2018 General Data Protection Regulation. GDPR requires firms to guarantee user rights related to access, consent, erasure, and data portability. There is already substantial evidence that it imposes high compliance costs, entrenches incumbent companies, raises barriers to entry, and harms startups and smaller firms.
This history should give pause to anyone who believes that the AI Act will help position Europe to be an AI leader.
Is Now Really The Time?
Regulation is justified when its benefits can be shown to outweigh its costs. Of course, cost-benefit appraisals are imperfect and error-prone. In a famous 1969 article, the renowned economist Harold Demsetz warned against the “nirvana” approach of comparing an idealized version of a regulatory proposal (assuming it will work perfectly) to the actual outcomes of the current unregulated system.
The history of harmful regulatory imperfections underscores the wisdom of Demsetz’s warning. U.S. adoption of AI regulation would in all likelihood slow the rate of innovation in technology systems that appear poised to confer enormous benefits on society. Even small reductions in the AI growth rate can lead to huge long-term social welfare losses. Imagine what innovation-driven benefits might have been lost had the government decided to essentially control the internet 30 years ago.
What about AI-related harm? That is highly speculative at this point. Moreover, the federal government is closely monitoring the AI sector, ready to apply existing targeted legal sanctions at the first sign of a problem.
In sum, the AI landscape today features rapidly growing benefits and uncertain costs that, in all likelihood, can be well addressed under existing law if problems arise.
The time may (or may not) come when serious new and unanticipated AI-related problems arise that can best be handled through targeted regulatory solutions. But as of now, the case for regulating AI has not been made.