Killer robots not just stuff of sci-fi anymore
Canada’s naval commander recently told CBC News that his staff have started studying how unmanned vessels could enhance the country’s maritime defence forces. “We haven’t figured out what percentage we want,” said Vice-Admiral Angus Topshee, referring to how autonomous units might complement Canada’s conventional warships.
Truth is, they’re quite late to the game.
Amid conflict raging in Europe and the Middle East, and China menacing Taiwan, a new arms race is underway. Global military spending shattered records last year, rising by 6.8 per cent to US$2.4 trillion. And a significant portion of this is going toward the development and acquisition of lethal autonomous weapons systems — commonly known as killer robots.
These systems differ from semi-autonomous weapons, such as self-guided bombs, advanced air defence systems and standard military drones, which have existed for decades. With those, a machine provides situational awareness, a human operator selects a target and the machine completes the attack.
But huge leaps in artificial intelligence and machine learning have sparked an evolution in warfare, pushing the world into a new era, one where machines may eventually be granted the agency to identify and kill targets on their own, based solely on programming and algorithmic decision-making.
A UN report from 2021 claims the first real-world use of a fully autonomous drone, produced by a Turkish weapons manufacturer, took place in Libya, where it was unleashed to hunt down rebel fighters in March 2020.
Since then, the war in Ukraine has doubled as a laboratory for new digitally networked weapons platforms. Desperate to gain an edge in their grinding war of attrition, Ukrainian and Russian forces are reportedly now both using AI-powered drones capable of killing without human oversight.
The U.S. already has more than 800 active military AI projects, most of them related to improving process and logistics efficiency. But last November, for the first time, an American navy vessel in the Persian Gulf successfully attacked a mock enemy target using live rockets, without any tactical direction from a human operator.
To compete with China, the Pentagon’s “Replicator” program aims to deploy thousands of autonomous weapons systems in multiple domains by the end of 2025. China under President Xi Jinping has for years implemented a doctrine of civil-military fusion, which seeks to align domestic private sector innovations in visual recognition, robotics and machine learning with Beijing’s military ambitions.
What’s more, all of this is occurring within a legal and regulatory vacuum. A decade of talks at the UN aimed at addressing the legal, ethical and humanitarian complexities presented by killer robots has gone nowhere.
Proponents, a group that includes most military powers, envision intelligent weapons making war more humane and even reducing armed conflict by strengthening deterrence. They say the objective use of force by machines — which don’t experience stress, fatigue or hate — will also reduce civilian casualties and collateral damage in war zones. And when accidents or abuses occur, many argue, the officer closest to the robot unit in the military hierarchy should be held responsible.
Critics instead want a legally binding treaty that prohibits certain autonomous weapons and places strict controls on others. Led by the Campaign to Stop Killer Robots, which is supported by dozens of countries, they counter that outsourcing lethal force to computers risks making war more appealing. They argue the technology remains unproven and will always be prone to deadly errors. Then there is the possibility it falls into the hands of terrorist groups, or that despotic dictators use it to crush civilian uprisings.
Human rights organizations plan to raise these issues at the next UN General Assembly in late October. But efforts to achieve a global consensus-based treaty on lethal autonomous weapons systems are being gravely undermined by national interests and the rapid pace of technological development.
An alternative, albeit imperfect, solution may be to forge a series of acceptable norms and understandings between countries, akin to a non-binding code of conduct. For example, the U.S. government’s political declaration on the responsible military use of AI calls for autonomous weapons to remain “within a responsible human chain of command and control.” Since being released in November 2023, it has been endorsed by 49 countries, including Canada.
But this cohort essentially encompasses America’s Western allies. It will be much harder to convince rival authoritarian regimes in Beijing, Moscow and elsewhere to limit their use of the technology for the collective good.
All told, killer robots — a fixture of 20th-century science fiction — look set to become a staple of 21st-century conflict.
Kyle Hiebert is a Winnipeg-based researcher and political risk analyst, and the former deputy editor of the Africa Conflict Monitor.