The President’s Inbox Recap: The Impact of AI on Warfare
The latest episode of The President’s Inbox is live! This week, Jim sat down with Andrew Reddie, an associate research professor of public policy at the University of California, Berkeley. They discussed how AI will transform warfare.
Here are five highlights from their conversation:
1.) The integration of artificial intelligence into the military is an evolution more than a revolution. Andrew noted that the U.S. military has used AI tools for decades. He put it this way: “What’s increasingly invoked today are conversations about how some of the latest and greatest technologies might impact decision support.” Many military decisions rest on data analysis. AI could provide “decision support” by fusing different data streams, particularly across different intelligence-gathering branches.
2.) AI and autonomous systems shouldn’t be conflated. Artificial intelligence has no single definition, but it is generally understood to describe a machine capable of performing tasks typically associated with humans, and it can take many forms, from chatbots to facial recognition technology. Autonomous systems, on the other hand, are systems in which “the human is entirely out of the conversation.”
3.) AI isn’t guaranteed to reach a better decision than a human decision-maker. Many people assume that because AI makes faster decisions, it also makes better ones. It may or may not. At the same time, the use of AI, and of autonomous systems in particular, raises questions about who is responsible for the decisions being made. Andrew put it this way: “I think one of the things that makes us feel comfortable about a human making decisions is that, ultimately, liability and responsibility lives somewhere that we can grasp on to.”
4.) Testing and evaluating AI before integrating it into military targeting and other operations is necessary for the United States, its allies, and its adversaries alike. Untested or poorly understood AI systems could cause immense problems. Andrew gave an example of accidental targeting by an adversary: “If you have an adversary who calls the hotline and says, ‘Hey, look, the system is not behaving as we would expect,’ and ultimately still does something to an ally or a partner, what does that mean for how the United States is going to respond in terms of Article 5 commitments if it’s a NATO country?”
5.) The U.S. government is working to establish norms for the use of AI in its own military and in other militaries around the world. Last year, the U.S. Department of State built on principles developed by the Department of Defense’s Joint AI Center and announced its Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy, an effort to build consensus on the use of AI in military operations. The declaration was opened for signature, and so far more than fifty countries have signed on. Andrew said that “the Political Declaration is really exciting because it moves the ball forward in a way that those broader global governance conversations haven’t been able to yet.”
If you’re looking to read more of Andrew’s work, check out the piece he co-wrote for Lawfare on the tools needed to mitigate the risks that come with integrating AI into the military. Noah Berman wrote a Backgrounder for CFR.org that answers the question “What is artificial intelligence?”