The steam engine changed the world. Artificial intelligence could destroy it.

Industrialization meant the widespread adoption of steam power. Steam power is a general-purpose technology — it powered factory equipment, trains, and agricultural machinery. Economies that adopted steam power left behind — and conquered — those that did not.

Get Today in Opinion

Globe Opinion’s must-reads, delivered to you every Sunday-Friday.

AI is the next significant general-purpose technology. A 2018 report from the McKinsey Global Institute predicted that AI may deliver $13 trillion in additional global economic activity by 2030 and that the countries leading the development of AI will capture an outsized portion of these economic benefits.

AI also enhances military power. AI is increasingly applied in situations that require speed (such as short-range projectile defense) and in environments in which human control is logistically inconvenient or impossible (such as under water or in signal-jammed areas).

What’s more, countries that lead in the development of AI will be able to exert power by setting norms and standards. China is already exporting AI-enabled surveillance systems around the world. If Western countries can’t offer an alternative system that protects human rights, then many countries may follow China’s techno-authoritarian example.

History suggests that as a technology’s strategic importance grows, countries are more likely to exert control over that technology. The British government supplied funding for early steam engine development and provided other forms of support for the development of steam power, such as patent protection and tariffs on imported steam engines.

Similarly, in fiscal year 2021, the US government spent $10.8 billion on AI R&D, $9.3 billion of which came from the Department of Defense. Chinese public expenditure on AI is less transparent, but analysts estimate that it is roughly comparable. The United States has also tried to restrict Chinese access to specialized computer chips, which are crucial for the development and deployment of AI, while securing our own supply with the CHIPS and Science Act. Think tanks, advisory committees, and politicians constantly urge American leaders to keep up with China’s AI capabilities.

So far, the AI revolution fits the pattern of previous general-purpose technologies. But the historical analogy breaks down when we consider the risks posed by AI. This technology is much more powerful than the steam engine, and the risks it poses are far greater.

The first risk comes from accident, miscalculation, or malfunction. On Sept. 26, 1983, a satellite early warning system near Moscow reported that five American nuclear missiles were headed toward the Soviet Union. Fortunately, a Soviet lieutenant colonel, Stanislav Petrov, decided to wait for confirmation from other warning systems. Only Petrov’s good judgment prevented him from passing the warning up the chain of command. If he had, the Soviet Union may well have launched a retaliatory strike, provoking a full-scale nuclear war.

In the near future, countries may feel forced into relying entirely on AI decision-making because of the speed advantage it provides. The AI might make dramatic miscalculations that a human wouldn’t, resulting in accident or escalation. Even if the AI is behaving roughly as intended, the speed at which combat might be fought by autonomous systems could result in rapid escalatory cycles, similar to “flash crashes” caused by high-speed trading algorithms.

Even if they are not integrated into weapons systems, poorly designed AI could be extremely dangerous. The methods we use to develop AI today — essentially rewarding the AI for what we perceive to be correct outputs — often produce AI systems that do what we told them to do but not what we wanted them to do. For example, when researchers sought to teach a simulated robotic arm to stack Lego bricks, they rewarded it for getting the bottom face of a brick higher off of a surface — and it flipped bricks upside down rather than stacking them.

For many tasks a future AI system might be given, amassing resources (computational power, for example) and preventing itself from being turned off (by, say, concealing its intentions and actions from humans) might be useful. So, if we develop a powerful AI using the most common methods today, then it may not do what we built it to do, and it might hide its true goals until it perceives that it doesn’t have to — in other words, until it can overpower us. Such an AI system wouldn’t need a physical body to do that. It could recruit human allies or operate robots and other military equipment. The more powerful the AI system, the more worrying this hypothetical situation. And competition between countries may make accidents more likely, if the competitive pressures lead countries to devote more resources to developing powerful AI systems at the expense of making sure those systems are safe.

The second risk is that the contest for AI superiority might increase the chance of a conflict between the United States and China. For example, if it appears that one country is close to developing a particularly powerful AI, then another country (or coalition of countries) might launch a preventative attack. Or imagine what might happen if, for example, advances in marine sensing, partially enabled by AI, reduce the deterrence effect of submarine-launched nuclear missiles by making them detectable.

Third, it will be hard to prevent AI capabilities from spreading once developed. AI development is currently far more open than the development of the strategically important 20th century technologies such as nuclear weapons and radar. State-of-the-art findings are published online and presented at conferences. Even if AI research became more secretive, it could be stolen. While early developers and adopters may get some first-mover advantage, no technology — even top-secret military ones like the nuclear bomb — has ever been kept exclusive.

Rather than calling for an end to competition between nations, it is more practical to identify pragmatic steps that the United States could take to reduce the risks from AI competition and encourage China (and others) to do the same. Such steps do exist.

The United States should start with its own systems. Independent agencies should regularly assess the risk of accident, malfunction, theft, or sabotage from AI developed in the public sector. The private sector should be required to carry out similar assessments. We don’t know yet how to assess how risky AI systems are — more resources must be put toward this tricky technical problem. On the margin, these efforts will be at the expense of efforts to improve capabilities. But the investment in safety would improve US security, even if it delays AI development and deployment.

Next, the United States should encourage China (and others) to make its systems safe. The United States and the Soviet Union agreed on several nuclear arms control agreements throughout the Cold War. Similar steps are now needed for AI. The United States should propose a legally binding agreement that bans the use of autonomous control of nuclear weapons launches and explore “softer” arms control measures, including voluntary technical standards, to prevent accidental escalation from autonomous weapons.

Nuclear security summits convened by President Obama in 2010, 2012, 2014, and 2016 were attended by the United States, Russia, and China and led to significant progress in securing nuclear weapons and materials. Now the United States and China must cooperate on AI safety and security, for example by pursuing joint AI safety research projects and promoting transparency in AI safety and security research. In the future, the United States and China might jointly monitor for signs of computationally intensive projects, to detect unauthorized attempts to build powerful AI systems, as the International Atomic Energy Agency does with nuclear materials to prevent nuclear proliferation.

The world is on the verge of a transformation as dramatic as the Industrial Revolution. This transformation will pose immense risks. During the Cold War, the leaders of the United States and the Soviet Union understood that nuclear weapons tied together the fates of their two countries. Another such linkage is being created in technology company offices and defense labs across the globe.

Will Henshall is pursuing a master’s in public policy at Harvard’s Kennedy School of Government.

Read More

Will Henshall