Robotics

Killer robots not just stuff of sci-fi anymore

Opinion

Canada’s naval commander recently told CBC News that his staff have started studying how unmanned vessels could enhance the country’s maritime defence forces. “We haven’t figured out what percentage we want,” said Vice-Admiral Angus Topshee, referring to how autonomous units might complement Canada’s conventional warships.

Truth is, they’re quite late to the game.

Amid conflict raging in Europe and the Middle East, and China menacing Taiwan, a new arms race is underway. Global military spending shattered records last year, rising by 6.8 per cent to US$2.4 trillion. And a significant portion of this is going toward the development and acquisition of lethal autonomous weapons systems — commonly known as killer robots.

These systems differ from semi-autonomous weapons such as self-guided bombs, advanced air defence systems and standard military drones, which have existed for decades. With those weapons, a machine provides situational awareness, a human operator selects a target and the machine completes the attack.

But huge leaps in artificial intelligence and machine learning have sparked an evolution in warfare, pushing the world into a new era. One where machines may eventually be granted the agency to identify and kill targets on their own, based solely on programming and algorithmic decision-making.

A UN report from 2021 claims the first real-world use of a fully autonomous drone, produced by a Turkish weapons manufacturer, took place in Libya, where it was unleashed to hunt down rebel fighters in March 2020.

Since then, the war in Ukraine has doubled as a laboratory for new digitally networked weapons platforms. Desperate to gain an edge in their grinding war of attrition, both Ukrainian and Russian forces are now reportedly using AI-powered drones capable of killing without human oversight.

The U.S. already has more than 800 active military AI projects, most of them aimed at improving process and logistics efficiency. But last November, for the first time, an American navy vessel in the Persian Gulf successfully attacked a mock enemy target using live rockets, without any tactical direction from a human operator.

To compete with China, the Pentagon’s “Replicator” program aims to deploy thousands of autonomous weapons systems in multiple domains by the end of 2025. China under President Xi Jinping has for years implemented a doctrine of civil-military fusion, which seeks to align domestic private sector innovations in visual recognition, robotics and machine learning with Beijing’s military ambitions.

What’s more, all of this is occurring within a legal and regulatory vacuum. A decade of talks at the UN on the legal, ethical and humanitarian complexities presented by killer robots has gone nowhere.

Proponents, a group which includes most military powers, envision intelligent weapons making war more humane and even reducing armed conflict by strengthening deterrence. They say the objective use of force by machines — which don’t experience stress, fatigue or hate — will also reduce civilian casualties and collateral damage in war zones. When accidents or abuses occur, many agree that the officer closest to the robot unit in the military hierarchy should be held responsible.

Critics instead want a legally binding treaty that prohibits certain autonomous weapons and places strict controls on others. Led by the Campaign to Stop Killer Robots, which is supported by dozens of countries, they counter that outsourcing lethal force to computers risks making war more appealing. They argue the technology remains unproven and will always be prone to deadly errors. Then there is the possibility it falls into the hands of terrorist groups, or that despotic dictators use it to crush civilian uprisings.