Letter to U.S. House AI Task Force: Support a Pro-Innovation Framework for Artificial Intelligence



On May 6, 2024, ALEC and 14 organizations sent an open letter to Members of Congress and the co-chairs of the U.S. House Task Force on Artificial Intelligence, encouraging lawmakers to support a pro-innovation framework for artificial intelligence (AI) policy.

Our letter urges lawmakers to avoid creating overly restrictive rules, regulations, and AI licensing regimes that would stifle innovation and burden small businesses. Congress and the states should instead rely on existing laws and agencies to address consumer harms; where genuine gaps exist, any new laws should be narrowly tailored and strike the proper balance between innovation and safety.

-----

Dear Chair Obernolte, Co-Chair Lieu, and members of the Task Force on Artificial Intelligence:

We, the undersigned organizations, are united in our belief that adopting flexible pro-innovation governance strategies for artificial intelligence (AI) is critical to unlocking its transformative potential.

AI holds immense promise to revolutionize countless fields, from health care and scientific discovery to economic growth and social progress. However, an over-reliance on restrictive regulatory and licensing regimes could stifle this nascent technology, hindering its ability to flourish and deliver its full benefits to society.

With that in mind, we urge policymakers to adopt the following principles as they work to create laws governing AI:

  1. Utilize existing authorities and institutions: Before creating any new regulations, take stock of and examine the ways in which existing structures are already set up to handle potential issues with AI. For example, many of the concerns related to AI fraud fall into the category of unfair and deceptive practices, which are already illegal under statutes enforced by the Federal Trade Commission. Rather than creating entirely new laws or government agencies aimed at AI fraud, we should first determine how fraud can be addressed under current statutes. Instead of broad new regulatory edicts, we should make targeted changes if and only if existing law is found to be insufficient.
  2. Draw from current expertise and investments: In addition to examining existing regulations, we should also explore the present expertise and investments that the federal government has made in the field of AI. The National Institute of Standards and Technology, for example, released its AI Risk Management Framework in January 2023 after collaboration with public and private sectors. This voluntary, consultative approach is a much better way of managing potential AI risks than a top-down ex-ante regulatory regime—especially one that creates a new agency or bureau for AI.
  3. Enable experimentation: A culture of experimentation is vital for exploring the boundaries of AI and identifying unforeseen challenges and opportunities. We need a regulatory framework that allows for responsible experimentation without stifling creativity. This includes embracing the idea that innovation should be treated as innocent until proven guilty, also known as the Innovator’s Presumption.
  4. Avoid ceding the AI space to illiberal countries: While we fully support the responsible development and deployment of AI, we should not blind ourselves to the fact that illiberal countries like China, Russia, and the United Arab Emirates are investing heavily in AI systems. It is crucial that the United States continues to be a leader in this space and that we avoid hamstringing ourselves in ways that allow other countries—which may not share our goals of responsible AI development—to win the global race for AI dominance. If we allow other countries to dominate this space, the consequences could be catastrophic for both the United States and the world.

The United States became the global leader in technology and innovation because we embraced a balanced approach to regulating technology that did not stifle innovation, and that same approach is why we currently lead the world in AI development. To maintain that leadership, we must avoid creating rules and regulations that will stifle innovation. Instead, we should rely on existing laws to ensure that AI is developed effectively and responsibly, paving the path for responsible AI on the global stage.

As you begin the important work of exploring this issue and producing policy recommendations for Congress, we are eager to work collaboratively with you to develop a framework that maintains U.S. leadership in the field, strikes the proper balance between innovation and safety, and unlocks the immense potential of AI.

Sincerely,

R Street Institute

The American Consumer Institute

American Legislative Exchange Council

Americans for Prosperity

Americans for Tax Reform

The Buckeye Institute

Committee for Justice

Competitive Enterprise Institute

Digital Liberty

Frontier Institute

Innovation Economy Alliance

James Madison Institute

Jeffrey Westling (American Action Forum)

Open Competition Center

Taxpayers Protection Alliance


