A World of Killer Robots?

Originally posted at TomDispatch.

Yes, it’s true. After 20 years of war – actually, more like 30 years if you
count American involvement in the Russian version of that conflict in the 1980s
– the U.S. has finally waved goodbye to Afghanistan (at least for now). Its
last act in Kabul was the drone-slaughtering
of seven children and three adult civilians with a Hellfire missile. And that,
as Azmat Khan recently showed
in a striking report in the New York Times Magazine, was pretty much
par for the course in this country’s global war on terror that, for countless
civilians, has distinctly been a war of terror of the most horrific sort.

In those same years, this country led the way in the use of Hellfire-missile-armed
drones globally, while our president – any president you care to name – became
an assassin-in-chief,
something Donald Trump showed all too clearly when he used a drone to take
out
Iran’s second most powerful leader at Baghdad International Airport
in January 2020. And though Joe Biden has launched significantly fewer
drone strikes
so far than his predecessors, he’s still
been ordering them, too.

Worse yet, it’s sadly clear that, however sci-fi-like those drones once seemed,
they’re still piloted by actual human beings (even if from far, far away).
As such, they represent a relatively early stage in the process
of fully
automating
weapons systems on land, on sea, and in the air – and the
decision-making that goes with them – a development, as TomDispatch
regular
Rebecca Gordon reports today, that this country is all-too-enthusiastically
involved in.

Count on one thing as you read her latest piece and think about automating
a global killing machine: such mechanisms, created by humans, will prove no
less destructive to us than the previously piloted or driven versions of the
same. Now, consider the future of automated killing, up close and personal.
~ Tom


Keep Your LAWS Off My Planet: Lethal Autonomous Weapons Systems
and the Fight to Contain Them

By Rebecca Gordon

Here’s a scenario to consider: a military force has purchased a million cheap,
disposable flying drones, each the size of a deck of cards and each capable of
carrying three grams of explosives – enough to kill a single person or, in a
“shaped charge,” pierce a steel wall. They’ve been programmed to seek out
and “engage” (kill) certain human beings, based on specific “signature”
characteristics like carrying a weapon, say, or having a particular skin color.
They fit in a single shipping container and can be deployed remotely. Once launched,
they will fly and kill autonomously without any further human action.

Science fiction? Not really. It could happen tomorrow. The technology already
exists.

In fact, lethal autonomous weapons systems (LAWS) have a long history. During
the spring of 1972, I spent a few days occupying
the physics building
at Columbia University in New York City. With a
hundred other students, I slept on the floor, ate donated takeout food, and
listened to Allen Ginsberg when he showed up to honor us with some of his extemporaneous
poetry. I wrote leaflets then, commandeering a Xerox machine to print them out.

And why, of all campus buildings, did we choose the one housing the Physics
department? The answer: to convince five Columbia faculty physicists to sever
their connections with the Pentagon’s Jason Defense Advisory Group, a program
offering money and lab space to support basic scientific research that might
prove useful for U.S. war-making efforts. Our specific objection: the involvement
of Jason’s scientists in designing parts of what was then known as the “automated
battlefield” for deployment in Vietnam. That system would indeed prove
a forerunner of the lethal autonomous weapons systems that are poised to become
a potentially significant part of this country’s – and the world’s – armory.

Early (Semi-)Autonomous Weapons

Washington faced quite a few strategic problems in prosecuting its war in Indochina,
including the general corruption and unpopularity of the South Vietnamese regime
it was propping up. Its biggest military challenge, however, was probably North
Vietnam’s continual infiltration of personnel and supplies on what was called
the Ho Chi Minh Trail, which ran from north to south along the Cambodian and
Laotian borders. The Trail was, in fact, a network
of easily repaired dirt roads and footpaths, streams and rivers, lying under
a thick jungle canopy that made it almost impossible to detect movement from
the air.

The U.S. response, developed by Jason in 1966 and deployed the following year,
was an attempt to interdict that infiltration by creating
an automated battlefield composed of four parts, analogous to a human body’s
eyes, nerves, brain, and limbs. The eyes were a broad variety of sensors – acoustic,
seismic, even chemical (for sensing human urine) – most dropped by air into
the jungle. The nerve equivalents transmitted signals to the “brain.”
However, since the sensors had a maximum transmission range of only about 20
miles, the U.S. military had to constantly fly aircraft above the foliage to
catch any signal that might be tripped by passing North Vietnamese troops or
transports. The planes would then relay the news to the brain. (Originally intended
to be remote controlled, those aircraft performed so poorly that human pilots
were usually necessary.)

And that brain, a magnificent military installation secretly built in Thailand’s
Nakhon Phanom, housed two state-of-the-art IBM mainframe computers. A small
army of programmers wrote and rewrote the code to keep them ticking, as they
attempted to make sense of the stream of data transmitted by those planes. The
target coordinates they came up with were then transmitted to attack aircraft,
which were the limb equivalents. The group running that automated battlefield
was designated Task Force Alpha and the whole project went under the code name
Igloo White.

As it turned out, Igloo White was largely an expensive failure, costing about
a billion dollars a year for five years (almost $40 billion total in today’s
dollars). The time lag between a sensor
tripping and munitions dropping made the system ineffective. As a result, at
times Task Force Alpha simply carpet-bombed areas where a single sensor might
have gone off. The North Vietnamese quickly realized how those sensors worked
and developed methods of fooling them, from playing truck-ignition recordings
to planting buckets of urine.

Given the history of semi-automated weapons systems like drones and “smart
bombs” in the intervening years, you probably won’t be surprised to learn
that this first automated battlefield couldn’t discriminate between soldiers
and civilians. In this, it merely continued a trend that has existed since at
least the eighteenth century, in which wars routinely kill more civilians than
combatants.

None of these shortcomings kept Defense Department officials from regarding
the automated battlefield with awe. Andrew Cockburn described
this worshipful posture in his book Kill
Chain: The Rise of the High-Tech Assassins
, quoting Leonard Sullivan,
a high-ranking Pentagon official who visited Vietnam in 1968: “Just as
it is almost impossible to be an agnostic in the Cathedral of Notre Dame, so
it is difficult to keep from being swept up in the beauty and majesty of the
Task Force Alpha temple.”

Who or what, you well might wonder, was to be worshipped in such a temple?

Most aspects of that Vietnam-era “automated” battlefield actually
required human intervention. Human beings were planting the sensors, programming
the computers, piloting the airplanes, and releasing the bombs. In what sense,
then, was that battlefield “automated”? As a harbinger of what was
to come, the system had eliminated human intervention at a single crucial point
in the process: the decision to kill. On that automated battlefield, the computers
decided where and when to drop the bombs.

In 1969, Army Chief of Staff William Westmoreland expressed
his enthusiasm
for this removal of the messy human element from war-making.
Addressing a luncheon for the Association of the U.S. Army, a lobbying group,
he declared:

“On the battlefield of the future enemy forces will be located, tracked,
and targeted almost instantaneously through the use of data links, computer-assisted
intelligence evaluation, and automated fire control. With first round kill probabilities
approaching certainty, and with surveillance devices that can continually track
the enemy, the need for large forces to fix the opposition will be less important.”

What Westmoreland meant by “fix the opposition” was kill the enemy.
Another military euphemism in the twenty-first century is “engage.”
In either case, the meaning is the same: the role of lethal autonomous weapons
systems is to automatically find and kill human beings, without human intervention.

New LAWS for a New Age – Lethal Autonomous Weapons Systems

Every autumn, the British Broadcasting Corporation sponsors a series of four
lectures given by an expert in some important field of study. In 2021, the BBC
invited Stuart Russell, professor of computer science and founder of the Center
for Human-Compatible Artificial Intelligence at the University of California,
Berkeley, to deliver those “Reith Lectures.” His general subject was
the future of artificial intelligence (AI), and the second lecture was entitled
“The Future Role of AI in Warfare.” In it, he addressed the issue
of lethal autonomous weapons systems, or LAWS, which the United Nations defines
as “weapons that locate, select, and engage human targets without human
supervision.”

Russell’s main point, eloquently made, was that, although many people believe
lethal autonomous weapons
are a potential future nightmare, residing in the realm of science fiction,
“They are not. You can buy them today. They are advertised on the web.”

I’ve never seen any of the movies in the Terminator franchise, but apparently
military planners and their PR flacks assume most people derive their understanding
of such LAWS from this fictional dystopian world. Pentagon officials are frequently
at pains to explain why the weapons they are developing are not, in fact, real-life
equivalents of SkyNet – the worldwide communications network that, in those
films, becomes self-conscious and decides to eliminate humankind. Not to worry,
as a deputy secretary of defense told Russell, “We have listened carefully
to these arguments and my experts have assured me that there is no risk of accidentally
creating SkyNet.”

Russell’s point, however, was that a weapons system doesn’t need self-awareness
to act autonomously or to present a threat to innocent human beings. What it
does need is:

  • A mobile platform (anything that can move, from a tiny quadcopter to a fixed-wing
    aircraft)
  • Sensory capacity (the ability to detect visual or sound information)
  • The ability to make tactical decisions (the same kind of capacity already
    found in computer programs that play chess)
  • The ability to “engage,” i.e. kill (which can be as complicated
    as firing a missile or dropping a bomb, or as rudimentary as committing robot
    suicide by slamming into a target and exploding)

The reality is that such systems already exist. Indeed, a government-owned
weapons company in Turkey recently advertised its Kargu drone – a quadcopter
“the size of a dinner plate,” as Russell described it, which can carry
a kilogram of explosives and is capable of making “anti-personnel autonomous
hits” with “targets selected on images and face recognition.”
The company’s site has since been altered
to emphasize its adherence to a supposed “man-in-the-loop” principle.
However, the U.N. has reported that a fully autonomous Kargu-2 was, in fact,
deployed in Libya in 2020.

You can buy
your own quadcopter
right now on Amazon, although you’ll still have
to apply some DIY computer skills if you want to get it to operate autonomously.

The truth is that lethal autonomous weapons systems are less likely to look
like something from the Terminator movies than like swarms of tiny killer
bots. Computer miniaturization means that the technology already exists to create
effective LAWS. If your smart phone could fly, it could be an autonomous weapon.
Newer phones use facial recognition software to “decide” whether to
allow access. It’s not a leap to create flying weapons the size of phones, programmed
to “decide” to attack specific individuals, or individuals with specific
features. Indeed, it’s likely such weapons already exist.
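
To see how thin such a machine “decision” really is, here is a minimal, purely
illustrative Python sketch – not any phone maker’s actual code, with invented
names, numbers, and threshold – of what a face-recognition check typically
reduces to: the software turns a face into a list of numbers (an “embedding”)
and then simply compares two such lists against a fixed cutoff.

    import math

    # Purely illustrative: a face-recognition "decision" reduces to comparing
    # a numeric fingerprint (embedding) of the observed face with a stored one,
    # then checking the distance against a fixed cutoff.

    def cosine_distance(a, b):
        """Return 1 minus the cosine similarity of two equal-length vectors."""
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(x * x for x in b))
        return 1.0 - dot / (norm_a * norm_b)

    def decide_match(observed, enrolled, threshold=0.35):
        """The entire 'decision': one distance comparison against a threshold."""
        return cosine_distance(observed, enrolled) < threshold

    # Toy example with made-up three-number embeddings.
    enrolled_face = [0.12, 0.98, 0.33]
    observed_face = [0.10, 0.95, 0.36]
    print("match" if decide_match(observed_face, enrolled_face) else "no match")

Swap “unlock the phone” for “engage the target” and the underlying logic – and
its capacity for error – is unchanged.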

Can We Outlaw LAWS?

So, what’s wrong with LAWS, and is there any point in trying to outlaw them?
Some opponents argue that the problem is they eliminate human responsibility
for making lethal decisions. Such critics suggest that, unlike a human being
aiming and pulling the trigger of a rifle, a LAWS can choose and fire at its
own targets. Therein, they argue, lies the special danger of these systems,
which will inevitably make mistakes, as anyone whose iPhone has refused to recognize
his or her face will acknowledge.

In my view, the issue isn’t that autonomous systems remove human beings from
lethal decisions. To the extent that weapons of this sort make mistakes, human
beings will still bear moral responsibility for deploying such imperfect lethal
systems. LAWS are designed and deployed by human beings, who therefore remain
responsible for their effects. Like the semi-autonomous drones of the present
moment (often piloted from half a world away), lethal autonomous weapons systems
don’t remove human moral responsibility. They just increase the distance between
killer and target.

Furthermore, like already outlawed arms, including chemical and biological
weapons, these systems have the capacity to kill indiscriminately. While they
may not obviate human responsibility, once activated, they will certainly elude
human control, just like poison gas or a weaponized virus.

And as with chemical, biological, and nuclear weapons, their use could effectively
be prevented by international law and treaties. True, rogue actors, like the
Assad regime in Syria or the U.S. military in the Iraqi city of Fallujah, may
occasionally violate such strictures,
but for the most part, prohibitions on the use of certain kinds of potentially
devastating weaponry have held, in some cases for over a century.

Some American defense experts argue that, since adversaries will inevitably
develop LAWS, common sense requires this country to do the same, implying that
the best defense against a given weapons system is an identical one. That makes
as much sense as fighting fire with fire when, in most cases, using water is
much the better option.

The Convention on Certain Conventional Weapons

The area of international law that governs the treatment of human beings in
war is, for historical reasons, called international humanitarian law (IHL).
In 1995, the United States ratified an addition to IHL: the 1980 U.N.
Convention on Certain Conventional Weapons. (Its full title is much
longer, but its name is generally abbreviated as CCW.) It governs the use, for
example, of incendiary weapons like napalm, as well as biological and chemical
agents.

The signatories to CCW meet periodically to discuss what other weaponry might
fall under its jurisdiction and prohibitions, including LAWS. The most recent
conference took place in December 2021. Although transcripts
of the proceedings exist, only a draft final document – produced before the
conference opened – has been issued. This may be because no consensus was even
reached on how to define such systems, let alone on whether they should be prohibited.
The European Union, the U.N., at least 50 signatory nations, and (according
to polls) most of the world’s population believe that autonomous weapons systems
should be outlawed. The U.S., Israel, the United Kingdom, and Russia disagree,
along with a few other outliers.

Prior to such CCW meetings, a Group of Governmental Experts (GGE) convenes,
ostensibly to provide technical guidance for the decisions to be made by the
Convention’s “high contracting parties.” In 2021, the GGE was unable
to reach a consensus about whether such weaponry should be outlawed. The United
States held that even defining a lethal autonomous weapon was unnecessary (perhaps
because if they could be defined, they could be outlawed). The U.S. delegation
put it this way:

“The United States has explained our perspective that a working definition
should not be drafted with a view toward describing weapons that should be banned.
This would be – as some colleagues have already noted – very difficult to reach
consensus on, and counterproductive. Because there is nothing intrinsic
in autonomous capabilities that would make a weapon prohibited under IHL, we
are not convinced that prohibiting weapons based on degrees of autonomy, as
our French colleagues have suggested, is a useful approach.”

The U.S. delegation was similarly keen to eliminate any language that might
require “human control” of such weapons systems:

“[In] our view IHL does not establish a requirement for ‘human control’
as such… Introducing new and vague requirements like that of human control could,
we believe, confuse, rather than clarify, especially if these proposals are
inconsistent with long-standing, accepted practice in using many common weapons
systems with autonomous functions.”

In the same meeting, that delegation repeatedly insisted that lethal autonomous
weapons would actually be good for us, because they would surely prove better
than human beings at distinguishing between civilians and combatants.

Oh, and if you believe that protecting civilians is the reason the arms industry
is investing billions of dollars in developing autonomous weapons, I’ve got
a patch of land to sell you on Mars that’s going cheap.

The Campaign to Stop Killer Robots

The Group of Governmental Experts also has about 35 non-state members, including
non-governmental organizations and universities. The Campaign to Stop Killer
Robots, a coalition of 180 organizations, among them
Amnesty International, Human Rights Watch, and the World Council of Churches,
is one of these. Launched in 2013, this vibrant group provides important commentary
on the technical, legal, and ethical issues presented by LAWS and offers other
organizations and individuals a way to become involved in the fight to outlaw
such potentially devastating weapons systems.

The continued construction and deployment of killer robots is not inevitable.
Indeed, a majority of the world would like to see them prohibited, including
U.N. Secretary-General António Guterres. Let’s give him the last
word: “Machines with the power and discretion to take human lives
without human involvement are politically unacceptable, morally repugnant, and
should be prohibited by international law.”

I couldn’t agree more.


Rebecca Gordon, a TomDispatch regular, teaches at the University of San Francisco.
She is the author of Mainstreaming Torture and American Nuremberg: The U.S.
Officials Who Should Stand Trial for Post-9/11 War Crimes, and is now at work
on a new book on the history of torture in the United States.

Copyright 2022 Rebecca Gordon
