
Artificial Intelligence Systems and Humans in Military Decision-Making: Not Better or Worse but Better Together


In conversations about the military use of artificial intelligence (AI), I am frequently presented with the following question: might AI systems be better than humans at complying with international humanitarian law (IHL) in military decisions on the use of force?

There are differing perspectives on this matter, but they all share at least two features. First, they offer a binary depiction of the human and the machine as either “better” or “worse” at implementing IHL. Second, they assume that the use of an AI system must be evaluated based on the system’s ability to pass a human standard set by IHL. This premise underpins the many analyses of whether AI systems can adhere to IHL rules and principles on the conduct of hostilities, such as the principles of distinction, proportionality, and precaution.

This, I contend, illustrates how the initial question reinforces a tendency to anthropomorphize AI systems by taking the human as the point of reference. It also impels one to take either a pro-AI system or anti-AI system stance. Most importantly, it shows how the framing of legal queries dictates the manner in which we search for answers.

It is crucial to recognize how binary framing, coupled with our tendency to anthropomorphize AI systems, prevents us from grasping, let alone regulating, the socio-technical reality of contemporary armed conflicts. As I explore below, this is because humans are already dependent on and intertwined with AI systems across a wide range of military decision-making processes.

Shifting the Binary Discourse toward Human-AI System Interaction

The binary character of AI legal discourse tends to separate humans and systems from the outset. The discussion on lethal autonomous weapon systems (LAWS), together with the idea that humans must exert some form of “control” over the critical functions of weapon systems, exemplifies the point. While a precise definition of “meaningful human control” is subject to ongoing debate, the common idea presupposes that humans can preserve full agency over the technology they employ.

This presupposition builds on a strict separation between humans and technology, one that misrepresents a reality in which AI systems and humans are inextricably linked and constantly interact. In practice, humans interact with AI systems throughout the entire lifecycle of the technology, whether by researching, developing, programming, operating, or continuously monitoring and maintaining them.

Further, the tight distinction between humans and AI systems fosters a belief that the challenges associated with employing AI systems in military decisions, such as lack of predictability, reduced understandability, or bias, can be overcome through technological advancement.

Many of these challenges, however, do not pertain to the performance of the AI system itself but to the broader human-AI system interaction. They arise from a variety of external factors, such as the type of task, the characteristics of the operational environment, the length of deployment, and the human users’ capacity to engage with AI systems. The result is a socio-technical ecosystem that is hard, if not impossible, to account for using a discourse that maintains a rigid division between humans and AI systems.

The well-known challenges related to biases in the output of AI systems illustrate this well. A binary discourse depicts bias as a distinctly human trait that in some instances corrupts AI systems. Such challenges are considered “solvable” through technological improvement. Following this line of reasoning, AI systems may become bias-free and thus transcend human error, ultimately facilitating a neutral application of the law. Militaries invoke this logic when they cite the potential of AI systems to support faster and more accurate decisions, including but not limited to targeting decisions.

What this vision of AI systems is unable to account for is the fact that “technical solutions will not be sufficient to resolve bias.” The truth is that these systems can only be as good as the data on which they rely, the humans involved in their development, and those interpreting their outputs. Biases are a product of the complex socio-technical ecosystem in which we live. As long as these biases remain inherent to our societies, they will feature in AI systems too.

In short, the binary discourse is unable to grasp the present reality of humans acting within a complex assemblage of human and technological forms of existence. To overcome the challenges associated with the use of AI systems in military decision-making, we must move beyond the binary representation of humans and AI systems towards one that accounts for the complexity of the human-AI system interaction.

Being Cautious About Anthropomorphism

As discussions of the legal challenges related to the use of AI systems in military decision-making advance, it is imperative to take account of our tendency to anthropomorphize these systems. This tendency is reflected in the metaphors used to describe them, such as “training,” “learning,” “neurons,” “memory,” and “autonomy.” Among other things, these metaphors create the impression that an AI system has a mind of its own. While such language may be useful from a technical perspective in describing how AI systems are developed, its use in legal discussions can be misleading.

For instance, when it comes to legal discussions on LAWS, the anthropomorphizing of the notion of “autonomy” often leads to the attribution of human characteristics to the technology. Legal discourse frequently alludes to AI systems as possessing the ability to “make lethal force decisions” and “comply with” IHL, treating these systems as though they have sufficient independent agency to reason. The problem with these semantic choices is that they conflate human normative reasoning with the probabilistic reasoning of AI systems. They also increase the risk of ignoring the reality that, notwithstanding the complexity of autonomy in weapon systems, “these remain merely computer programs, written by human beings.”

Similarly, describing an AI system output as a “prediction” makes it easy to overlook the fact that such outputs diverge from the normative logic of prediction under IHL. Instead of being based on the (present) status or activity of the individual under scrutiny, as IHL would require in targeting decisions, an output generated through predictive analytics is based on data about the past behavior of various other individuals. While the AI system’s output does not classify the individual’s status under IHL as such, it will inform the military decision-maker’s IHL categorization.

Being aware of the logics and assumptions that attach to anthropomorphism in legal discourse is therefore critical when assessing the lawfulness and risks of relying on AI system outputs in certain military decisions.

Concluding Thoughts

I would invite others not only to move beyond binary discourse but also to exercise caution about anthropomorphizing AI systems in legal discussions regarding their use in military decisions. Both tendencies are deeply ingrained in present discussions, and both tend to misrepresent the reality of contemporary armed conflict.

Moving towards a reality-sensitive perspective—one that accounts for the human-AI system interaction throughout the technology life cycle, and is cautious about anthropomorphizing AI systems—is a sine qua non for addressing the challenges of the use of AI systems in military decision-making. Such a perspective requires us to pay attention to the distinctive capabilities and limitations of both humans and AI systems, as well as the challenges arising from their interaction.

Reflective of such an approach is the recently published expert consultation report by the Geneva Academy of International Humanitarian Law and Human Rights and the International Committee of the Red Cross on “Artificial Intelligence and Related Technologies in Military Decision-Making on the Use of Force in Armed Conflicts” (to which I contributed). It suggests that not all challenges associated with technical limits and human-AI system interaction can be overcome. We may, therefore, need to adopt the change of perspective suggested here in order to develop recommendations for, or limitations on, certain AI system applications in military decision-making. In the report’s words, the lawful use of AI systems in military decisions on the use of force may demand:

restricting the use of AI [decision support systems] to certain tasks or decisions and/or to certain contexts; placing specific constraints on the use of AI [decision support systems] with continuous learning functions, due to their more unpredictable nature; and slowing down the military decision-making process at certain points to allow humans to undertake the qualitative assessments required under IHL in the context of specific attacks.

Additionally, measures could be developed to enhance military users’ technical literacy so that they can critically engage with AI system outputs, including by training military users with different AI system interfaces.

These are just a few examples of how a change in perspective is critical to identifying effective measures to reduce humanitarian risks, address ethical concerns, facilitate compliance with IHL, and maximize successful human-machine interaction across a broad range of military decision-making processes. The list is not exhaustive. Rather, the aim is to raise awareness of, on the one hand, our tendency to anthropomorphize AI systems and, on the other, the significance of how legal queries are formulated for finding practical answers to the contemporary challenges of AI systems in military decision-making. As noted at the outset, the framing of our legal query dictates the way we search for answers.

More concretely, an important step towards reducing anthropomorphism in our present discussion would be to start questioning the idea that AI systems must pass a human standard set by IHL to be considered lawful. Another is recognizing that it may be inadequate to frame our legal discussions on the use of AI in military decision-making around the question of whether AI systems might be better or worse than humans at applying IHL rules and principles. The better question might be: How can human decision-makers, in their interaction with AI systems, best uphold the rules and principles of IHL?

***

Anna Rosalie Greipl is a Researcher at the Geneva Academy of International Humanitarian Law and Human Rights (Geneva Academy).
