The U.S. and China Need Ground Rules for AI Dangers

The threats are bigger, the stakes are higher, and the level of trust between the United States and China is lower today than it was in 2014, when experts from both countries first began discussing the risks posed by artificial intelligence (AI). At a time when about 9 in 10 U.S. adults consider China to be a “competitor” or “enemy,” calls for Washington and Beijing to cooperate on shared challenges routinely fall on deaf ears. But, as laboratories in both countries continue to unveil dramatic capabilities for AI systems, it is more important than ever that the United States and China take steps to mitigate existential threats posed by AI accidents.

As a technology, AI is profoundly fragile. Even with perfect information and ideal operating conditions, machine learning systems break easily and perform in ways contrary to their intended function. Since 2017, the Global Partnership on AI has logged “more than 1,200 reports of intelligent systems causing safety, fairness, or other real-world problems,” from autonomous car accidents to racially biased hiring decisions. When the stakes are low, the risk of an AI accident can be tolerable—such as being presented with an uninteresting Netflix recommendation or suboptimal driving route. But in a high-pressure, low-information military environment, both the probability and consequences of AI accidents are bound to increase.

Weapon systems put at high alert, for instance, could mistake a routine incident for an attack—and even automatically respond. Some of the Cold War’s most dangerous nuclear warning malfunctions were narrowly avoided because human judgment prevailed. For now, nuclear command and control systems in the United States and China still require that element of human decision-making—but, for instance, shipboard defense systems that might be involved in naval confrontations do not.

Neither side trusts the other on this issue. Over the past six months, I have spoken on a handful of occasions with retired Chinese military leaders about the risks involved with AI systems. They view the U.S. Defense Department’s AI ethics principles and broader approach to “responsible AI” as bad-faith efforts to skirt multilateral negotiations aimed at restricting the development of autonomous weapons. Meanwhile, U.S. observers don’t believe China is serious about those negotiations, given its extraordinarily narrow definition of lethal autonomous weapons systems. (China has called for a ban only on autonomous weapons that cannot be recalled once initiated and that kill with indiscriminate effect.) Both militaries are developing AI-based automated target recognition and fire control systems, and the latest substantive discussion among the United Nations Group of Governmental Experts focused on these issues is set to conclude in mid-2022.

In Washington, a major source of trepidation is the uncertainty surrounding the testing and evaluation of Chinese military AI systems. Relative to most countries, the United States applies extremely rigorous software testing and development processes. DoD Directive 3000.09, for example, requires autonomous weapon systems to be equipped with safeties and anti-tamper mechanisms. When pressed on the question, Chinese military leaders verbally insist that their AI systems must also pass rigorous inspection—but no public records document what this process looks like in practice.

To head off the most significant risks associated with AI’s technical or operational failure, the United States and China should agree on at least three measures: enforcing rigorous testing processes for their respective military AI systems, formalizing a written channel for crisis communication, and refusing to integrate AI with nuclear command and control systems. These steps are the bare minimum necessary to avert a potential catastrophe but are still likely insufficient to handle long-term stresses on the bilateral relationship.

First, Chinese leaders should understand that U.S. efforts to adopt stringent testing and evaluation protocols are focused on preventing technical failure. They are not meant to replace negotiations at the United Nations or in any other forum. For their part, leaders in the United States should clarify this fact and publicly commit to reaching an international code of conduct for weapons systems capable of online learning—real-time updates based on newly collected information. Given China’s ambitious AI development goals, its opaque AI testing regime is a source of serious concern. Leaders in the People’s Liberation Army should clarify what processes AI systems must complete to ensure that they are safe, trustworthy, and reliable, in accordance with China’s own next-generation AI governance principles.

Second, the United States and China must formalize a mechanism for crisis communication. Regularly scheduled calls between the heads of the U.S. and Chinese joint staffs are a source of stability in the U.S.-China relationship but do not include U.S. officials in the operational chain of command. A U.S.-China hotline should include direct channels between the White House and the Chinese leadership compound in Zhongnanhai, as well as between the Pentagon and the Chinese Central Military Commission. In both cases, Washington and Beijing should develop standard operating procedures for exchanging secure, written communications: Forcing a conversation between two individual interlocutors could deter commanding officers in China from picking up the phone. For example, when a U.S. EP-3 spy plane collided with a Chinese fighter jet off the coast of Hainan Island in 2001, then-Deputy Secretary of State Richard Armitage complained that U.S. phone calls went unanswered.

Finally, U.S. Defense Secretary Lloyd Austin and his Chinese counterpart, Xu Qiliang, should independently commit not to delegate nuclear launch authority to an AI system capable of online learning. Both the United States and China acknowledge that the other is likely to adopt surveillance and early warning systems based on AI. But under no circumstances should a decision to launch a nuclear weapon be made by a system that is “learning” about the rules of engagement in real time. In September 2019, my colleague and then-director of the U.S. Joint AI Center, Jack Shanahan, expressed discomfort at the idea of automating nuclear command and control systems. No Chinese political or military leader has so far issued a similar statement.

These measures—robust testing and evaluation, crisis communications, and a pledge against “launch on learning”—represent the three most basic steps the United States and China can take to mitigate catastrophic AI accidents. Of course, both countries should adopt a much more extensive suite of confidence-building measures, ideally built on government-to-government dialogues and mutual trust. But before they can even consider taking such baby steps, both countries must learn to crawl.
