
Can US and China overcome mutual mistrust to agree rules on military use of artificial intelligence?


“The US is applying artificial intelligence to weapons systems as quickly and extensively as possible. This brings more risks to the world,” a senior officer with the People’s Liberation Army said during the security conference in Singapore, speaking on the condition of anonymity.

“And what are the consequences if the US uses artificial intelligence in nuclear weapon systems? This should attract the attention of the world.”

The PLA officer also outlined Beijing’s efforts to manage the risks posed by the technology through the United Nations, as well as through Beijing’s own proposals in the Global AI Governance Initiative launched last year.

The United States has also sought to take a lead through a political declaration on the responsible military use of AI and autonomy, which has been joined by more than 50 countries, though not China.

The technology has already been used on the battlefield in the conflicts in Gaza and Ukraine.

Zhao Tong, a senior fellow at the Carnegie Endowment for International Peace’s nuclear policy programme, said the US and China had to overcome a series of obstacles to address the issue, but “the fundamental obstacle is this increasingly competitive bilateral relationship”.

The two countries held their first talks on AI in early May in Geneva, where US officials raised concerns about China’s “misuse of AI” while Beijing rebuked Washington over its “restrictions and suppression”.

Zhao said Beijing was particularly hesitant to limit its development of military AI because of its potential uses in any future confrontation with Washington.

He added that the US-led declaration had "limited appeal" in China, in line with Beijing's broader objections to what it sees as Western constructs such as the rules-based international order.

In early May, US State Department arms control official Paul Dean said in an online briefing that the US had made a very “clear and strong” commitment that only humans, and never artificial intelligence, would make decisions on deploying nuclear weapons. He also called on China and Russia to make a similar statement.

The two sides so far are not known to have held any specific talks on the military uses of AI, although the broader risks from the technology came up in the talks in Geneva, which were not attended by military representatives.

Sam Bresnick, a research fellow at Georgetown's Centre for Security and Emerging Technology, said: "Though military AI is certainly an important topic, it's a new addition to an already robust suite of US-China security issues, some of which appear more pressing than others."

He said the barriers to an agreement on regulating the military use of AI include “the lack of bilateral trust” and “concerns about revealing information about their capabilities… or the desire not to limit the development and deployment of AI-enabled military systems just as the related technologies appear to be developing more quickly”.

Senior Colonel Zhu Qichao, the deputy director of the National Defence Science and Technology Strategic Research think tank at the National University of Defence Technology, recently accused the US of being "two-faced" about discussing AI.

He told the nationalist newspaper Global Times that it was only seeking discussions on the topic with China to learn more about its capabilities.

Admiral Rob Bauer, chairman of Nato's military committee, told a panel discussion at the Shangri-La forum: "I am deeply concerned about the unrestricted use of new technologies on the battlefield… And as technology is increasing our ability to destroy, our ability to regulate is rapidly decreasing."

After two world wars, a worldwide belief arose that great power struggle should never again be fought on the battlefield and that weapon systems needed to be regulated and controlled, he said.

"If the tectonic plates of power are shifting and the world is split up into several parallel systems with different sets of rules, can they coexist?" he added.
