The Military Dangers of AI Are Not Hallucinations
Originally posted at TomDispatch.
I give myself credit for being significantly ahead of my time. I first came
across artificial intelligence (AI) in 1968 when I was just 24 years old and,
from the beginning, I sensed its deep dangers. Imagine that.
Much as I’d like to brag about it, though, I was anything but alone.
I was, in fact, undoubtedly one of millions of people who saw the movie 2001:
A Space Odyssey, directed by Stanley Kubrick from a script written with
Arthur C. Clarke (inspired
by a short story, “The Sentinel,” that famed science-fiction writer
Clarke had produced in – yes! – 1948). AI then had an actual name,
HAL 9000 (but call “him” Hal).
And no, the first imagined AI in my world did not act well, which should
have been (but didn’t prove to be) a lesson for us all. Embedded in
a spaceship heading for Jupiter, he killed
four of the five astronauts on it and did his best to do in the last of them
before being shut down.
It should, of course, have been a warning to us all about a world we would
indeed enter in this century. Unfortunately, as with so many things that are
worrying on planet Earth, it seems that we couldn’t help ourselves.
HAL was destined to become a reality – or rather endlessly multiplying
realities – in this world of ours. In that context, TomDispatch
regular Michael Klare, who has been warning
for years about a “human” future in which “robot generals”
could end up running armed forces globally, considers wars to come, what it
might mean for AI to replace human intelligence in the world’s major militaries,
and just where that might lead us. I’m not sure that either Stanley
Kubrick or Arthur C. Clarke would be surprised. ~ Tom Engelhardt
Human Extinction as Collateral Damage
By Michael Klare
A world in which machines governed by artificial intelligence (AI) systematically
replace human beings in most business, industrial, and professional functions
is horrifying to imagine. After all, as prominent computer scientists have
been warning us, AI-governed systems are prone
to critical errors and inexplicable “hallucinations,” resulting in potentially
catastrophic outcomes. But there’s an even more dangerous scenario imaginable
from the proliferation of super-intelligent machines: the possibility that
those nonhuman entities could end up fighting one another, obliterating all
human life in the process.
The notion that super-intelligent computers might run amok and slaughter
humans has, of course, long been a staple of popular culture. In the prophetic
1983
film “WarGames,” a supercomputer known as WOPR (for War Operation Plan
Response and, not surprisingly, pronounced “whopper”) nearly provokes a catastrophic
nuclear war between the United States and the Soviet Union before being disabled
by a teenage hacker (played by Matthew Broderick). The “Terminator”
movie franchise, beginning with the original 1984 film, similarly envisioned
a self-aware supercomputer called “Skynet” that, like WOPR, is designed to
control U.S. nuclear weapons but chooses instead to wipe out humanity, viewing
us as a threat to its existence.
Though once confined to the realm of science fiction, the concept
of supercomputers killing humans has now become a distinct possibility in
the very real world of the near future. In addition to developing a wide variety
of “autonomous,” or robotic, combat devices, the major military powers are also rushing to create
automated battlefield decision-making systems, or what might be called “robot
generals.” In wars in the not-too-distant future, such AI-powered
systems could be deployed to deliver combat orders to American soldiers, dictating
where, when, and how they kill enemy troops or take fire from their opponents.
In some scenarios, robot decision-makers could even end up exercising control
over America’s atomic weapons, potentially allowing them to ignite a nuclear
war resulting in humanity’s demise.
Now, take a breath for a moment. The installation of an AI-powered command-and-control
(C2) system like this may seem a distant possibility. Nevertheless, the U.S.
Department of Defense is working hard to develop the required hardware and
software in a systematic, increasingly rapid fashion. In its budget submission
for 2023, for example, the Air Force requested
$231 million to develop the Advanced
Battle Management System (ABMS), a complex network of sensors and
AI-enabled computers designed to collect and interpret data on enemy operations
and provide pilots and ground forces with a menu of optimal attack options.
As the technology advances, the system will
be capable of sending “fire” instructions directly to “shooters,” largely
bypassing human control.
“A machine-to-machine data exchange tool that provides options for deterrence,
or for on-ramp [a military show-of-force] or early engagement,” was how Will
Roper, assistant secretary of the Air Force for acquisition, technology, and
logistics, described
the ABMS in a 2020 interview. Suggesting that “we do need to change
the name” as the system evolves, Roper added, “I think Skynet is out, as much
as I would love doing that as a sci-fi thing. I just don’t think we can go
there.”
And while he can’t go there, that’s just where the rest of us
may, indeed, be going.
Mind you, that’s only the start. In fact, the Air Force’s ABMS is intended
to constitute the nucleus of a larger constellation of sensors and computers
that will connect all U.S. combat forces, the Joint All-Domain Command-and-Control
System (JADC2, pronounced “Jad-C-two”). “JADC2 intends to enable commanders
to make better decisions by collecting data from numerous sensors, processing
the data using artificial intelligence algorithms to identify targets, then
recommending the optimal weapon… to engage the target,” the Congressional
Research Service reported
in 2022.
AI and the Nuclear Trigger
Initially, JADC2 will be designed to coordinate combat operations among “conventional”
or non-nuclear American forces. Eventually, however, it is expected to link
up with the Pentagon’s nuclear command-control-and-communications systems
(NC3), potentially giving computers significant control over the use of the
American nuclear arsenal. “JADC2 and NC3 are intertwined,” General John E.
Hyten, vice chairman of the Joint Chiefs of Staff, indicated
in a 2020 interview. As a result, he added in typical Pentagonese, “NC3 has
to inform JADC2 and JADC2 has to inform NC3.”
It doesn’t require great imagination to picture a time in the not-too-distant
future when a crisis of some sort – say a U.S.-China military clash
in the South China Sea or near Taiwan – prompts ever more intense fighting
between opposing air and naval forces. Imagine then the JADC2 ordering the
intense bombardment of enemy bases and command systems in China itself, triggering
reciprocal attacks on U.S. facilities and a lightning decision by JADC2 to
retaliate with tactical nuclear weapons, igniting a long-feared nuclear holocaust.
The possibility that nightmare scenarios of this sort could result
in the accidental or unintended onset of nuclear war has long troubled
analysts in the arms control community. But the growing
automation of military C2 systems has generated anxiety not just among them
but among senior national security officials as well.
As early as 2019, when I questioned Lieutenant General Jack Shanahan, then
director of the Pentagon’s Joint Artificial Intelligence Center, about such
a risky possibility, he responded,
“You will find no stronger proponent of integration of AI capabilities writ
large into the Department of Defense, but there is one area where I pause,
and it has to do with nuclear command and control.” This “is the ultimate
human decision that needs to be made” and so “we have to be very careful.”
Given the technology’s “immaturity,” he added, we need “a lot of time to test
and evaluate [before applying AI to NC3].”
In the years since, despite such warnings, the Pentagon has been racing ahead
with the development of automated C2 systems. In its budget submission for
2024, the Department of Defense requested
$1.4 billion for the JADC2 in order “to transform warfighting capability by
delivering information advantage at the speed of relevance across all domains
and partners.” Uh-oh! And then, it requested another $1.8 billion for other
kinds of military-related AI research.
Pentagon officials acknowledge that it will be some time before robot generals
will be commanding vast numbers of U.S. troops (and autonomous weapons) in
battle, but they have already launched several projects intended to test and
perfect just such linkages. One example is the Army’s Project
Convergence, involving a series of field exercises designed to validate
ABMS and JADC2 component systems. In a test held in August 2020 at the Yuma
Proving Ground in Arizona, for example, the Army used a variety of air- and
ground-based sensors to track simulated enemy forces and then process that
data using AI-enabled computers at Joint Base Lewis-McChord in Washington
state. Those computers, in turn, issued fire instructions to ground-based
artillery at Yuma. “This entire sequence was supposedly accomplished within
20 seconds,” the Congressional Research Service later reported.
Less is known about the Navy’s AI equivalent, “Project Overmatch,” as many
aspects of its programming have been kept secret. According to Admiral Michael
Gilday, chief of naval operations, Overmatch is intended
“to enable a Navy that swarms the sea, delivering synchronized lethal and
nonlethal effects from near-and-far, every axis, and every domain.” Little
else has been revealed about the project.
“Flash Wars” and Human Extinction
Despite all the secrecy surrounding these projects, you can think of ABMS,
JADC2, Convergence, and Overmatch as building blocks for a future Skynet-like
mega-network of supercomputers designed to command all U.S. forces, including
its nuclear ones, in armed combat. The more the Pentagon moves in that direction,
the closer we’ll come to a time when AI possesses life-or-death power
over all American soldiers along with opposing forces and any civilians caught
in the crossfire.
Such a prospect should be ample cause for concern. To start with, consider
the risk of errors and miscalculations by the algorithms at the heart of such
systems. As top computer scientists have warned us, those algorithms are capable
of remarkably inexplicable mistakes and, to use the AI term of the moment,
“hallucinations” – that is, seemingly reasonable results that are entirely
illusory. Under the circumstances, it’s not hard to imagine such
computers “hallucinating” an imminent enemy attack and launching a war that
might otherwise have been avoided.
And that’s not the worst of the dangers to consider. After all, there’s
the obvious likelihood that America’s adversaries will similarly equip their
forces with robot generals. In other words, future wars are likely to be fought
by one set of AI systems against another, both linked to nuclear weaponry,
with entirely unpredictable – but potentially catastrophic – results.
Not much is known (from public sources at least) about Russian and Chinese
efforts to automate their military command-and-control systems, but both countries
are thought to be developing networks comparable to the Pentagon’s JADC2.
As early as 2014, in fact, Russia inaugurated a National Defense Control Center
(NDCC) in Moscow, a centralized command post for assessing global threats
and initiating whatever military action is deemed necessary, whether of a
non-nuclear or nuclear nature. Like JADC2, the NDCC is designed
to collect information on enemy moves from multiple sources and provide senior
officers with guidance on possible responses.
China is said to be pursuing an even more elaborate, if similar, enterprise
under the rubric of “Multi-Domain Precision Warfare” (MDPW). According to
the Pentagon’s 2022 report on Chinese military developments, its military,
the People’s Liberation Army, is being
trained and equipped to use AI-enabled sensors and computer networks to
“rapidly identify key vulnerabilities in the U.S. operational system and then
combine joint forces across domains to launch precision strikes against those
vulnerabilities.”
Picture, then, a future war between the U.S. and Russia or China (or both)
in which the JADC2 commands all U.S. forces, while Russia’s NDCC and China’s
MDPW command those countries’ forces. Consider, as well, that all three systems
are likely to experience errors and hallucinations. How safe will humans be
when robot generals decide that it’s time to “win” the war by nuking their
enemies?
If this strikes you as an outlandish scenario, think again, at least according
to the leadership of the National Security Commission on Artificial Intelligence,
a congressionally mandated enterprise that was chaired by Eric Schmidt, former
head of Google, and Robert Work, former deputy secretary of defense. “While
the Commission believes that properly designed, tested, and utilized AI-enabled
and autonomous weapon systems will bring substantial military and even humanitarian
benefit, the unchecked global use of such systems potentially risks unintended
conflict escalation and crisis instability,” it affirmed
in its Final Report. Such dangers could arise, it stated, “because of challenging
and untested complexities of interaction between AI-enabled and autonomous
weapon systems on the battlefield” – when, that is, AI fights AI.
Though this may seem an extreme scenario, it’s entirely possible that
opposing AI systems could trigger a catastrophic “flash war” – the military
equivalent of a “flash crash” on Wall Street, when huge transactions by super-sophisticated
trading algorithms spark panic selling before human operators can restore
order. In the infamous “Flash Crash” of May 6, 2010, computer-driven trading
precipitated a nearly 10% fall in the stock market’s value. According
to Paul Scharre of the Center for a New American Security, who first studied
the phenomenon, “the military equivalent of such crises” on Wall Street would
arise when the automated command systems of opposing forces “become trapped
in a cascade of escalating engagements.” In such a situation, he noted, “autonomous
weapons could lead to accidental death and destruction at catastrophic scales
in an instant.”
At present, there are virtually no measures in place to prevent a future
catastrophe of this sort or even talks among the major powers to devise such
measures. Yet, as the National Security Commission on Artificial Intelligence
noted, such crisis-control measures are urgently needed, among them the integration
of “automated escalation tripwires” into such systems “that would prevent the
automated escalation of conflict.” Otherwise, some catastrophic version
of World War III seems all too possible. Given the dangerous immaturity of
such technology and the reluctance of Beijing, Moscow, and Washington to impose
any restraints on the weaponization of AI, the day when machines could choose
to annihilate us might arrive far sooner than we imagine, and the extinction
of humanity could be the collateral damage of such a future war.
Michael T. Klare, a TomDispatch
regular, is the five-college professor emeritus of peace and world
security studies at Hampshire College and a senior visiting fellow at
the Arms Control Association. He is the author of 15 books, the latest
of which is All
Hell Breaking Loose: The Pentagon’s Perspective on Climate Change.
He is a founder of the Committee
for a Sane U.S.-China Policy.
Copyright 2023 Michael T. Klare