Robotics

Explainer: how ‘AI killer robots’ are threatening global security


The threat posed by autonomous weapons driven by artificial intelligence (AI), and the need for international cooperation to mitigate the risk of “AI killer robots”, were re-emphasised at the recent Humanity at the Crossroads: Autonomous Weapons Systems and the Challenge of Regulation conference in Vienna, Austria.

Allowing AI control over weapons systems could mean targets being identified, struck and killed without human intervention. This raises serious legal and ethical questions.

Highlighting the gravity of the situation, Austria’s foreign minister Alexander Schallenberg said: “This is the Oppenheimer Moment of our generation.”

Current use of AI killer robots

Indeed, to what extent the genie is already out of the bottle is a question in itself. Drones and AI are already widely used by militaries around the world.

GlobalData defence analyst Wilson Jones tells Army Technology: “The use of drones in modern conflict by Russia and Ukraine, by the US in targeted strike campaigns in Afghanistan and Pakistan and, as recently revealed last month, as part of Israel’s Lavender programme, shows that AI’s ability to process information is already being used by world militaries to increase striking power.”

Investigations by The Bureau of Investigative Journalism into US drone warfare brought to light repeated airstrikes by the US military that killed civilians in Pakistan, Afghanistan, Somalia and Yemen. More recently, the IDF’s Lavender AI system has been used to identify tens of thousands of targets, with civilians killed as a result of the strikes.


Sources quoted in a report by +972 said that, at the start of the IDF’s assault on Gaza, the military permitted the deaths of 15 to 20 civilians as collateral damage for strikes aimed at low-ranking militants, with up to 100 allowed for higher-ranking officials. The system is said to have a 90% accuracy rate in identifying individuals affiliated with Hamas, meaning that 10% of those identified are not. Moreover, militants have reportedly been deliberately targeted in their homes, resulting in entire families being killed at once on the basis of the AI’s identifications and decisions.

A threat to global security

The use of AI in this way emphasises the need for regulation of the technology in weapons systems.

Dr Alexander Blanchard, senior researcher for the Governance of Artificial Intelligence programme at the Stockholm International Peace Research Institute (SIPRI), an independent think tank focusing on global security, explains to Army Technology: “The use of AI in weapon systems, especially when used for targeting, raises fundamental questions about us – humans – and our relationship to warfare, and, more particularly, our presumptions of how we may exercise violence in armed conflicts.”

“AI changes the way militaries select targets and apply force to them. These changes raise in turn a series of legal, ethical and operational questions. The biggest concern is humanitarian.”

“There are big fears amongst many that, depending on how autonomous systems are designed and used, they could expose civilians and other persons protected under international law to risk of greater harm. This is because AI systems, particularly when used in cluttered environments, may behave unpredictably, and may fail to accurately recognise a target and attack a civilian, or fail to recognise combatants who are hors de combat.”

Elaborating on this, Jones notes that AI complicates the question of how culpability is determined.

“Under existing laws of war there is the concept of command responsibility,” he says. “This means that an officer, general, or other leader is legally responsible for the actions of troops under their command. If troops commit war crimes, the officer bears responsibility even if they did not give the orders; the burden of proof falls on them to show they did everything possible to prevent war crimes.”

“With AI systems, this complicates everything. Is an IT technician culpable? A system designer? It’s unclear. If it’s unclear, then that creates a moral hazard if actors think their actions are not covered by existing statutes.”

Historical arms control conventions

Several major international agreements limit and regulate certain uses of weapons. These include bans on the use of chemical weapons, nuclear non-proliferation treaties and the Convention on Certain Conventional Weapons, which bans or restricts the use of specific types of weapons considered to cause unnecessary or unjustifiable suffering to combatants or to affect civilians indiscriminately.

“Nuclear arms control required decades of international cooperation and subsequent treaties to be enforceable,” explains Jones. “Even then, we continued to have atmospheric tests until the 1990s.”

“A major reason anti-proliferation worked was because of US-USSR cooperation in a bipolar world order. That doesn’t exist anymore, and the technology to make AI is already accessible to many more nations than atomic power ever was.”

“A binding treaty would have to sit everyone involved down at a table to agree to not use a tool that increases their military power. That isn’t likely to work because AI can improve military effectiveness at minimal financial and material costs.”

Current geopolitical stances

While the need for the responsible use of AI by militaries has been recognised by states at the UN, there is still some way to go.

Laura Petrone, principal analyst at GlobalData, tells Army Technology: “In the absence of a clear regulatory framework, these declarations remain largely aspirational. It doesn’t come as a surprise that some states want to retain their own sovereignty when it comes to deciding on matters of domestic defence and national security, especially in the context of the current geopolitical tensions.”

Petrone adds that, while the EU AI Act does lay out some requirements for AI systems, it does not address AI systems for military purposes.

“I think that despite this exclusion, the AI Act is an important attempt to establish a long overdue framework for AI applications, which could lead to a certain level of alignment of relevant standards in the future,” she comments. “This alignment will be critical to AI in the military domain as well.”





