In a Robot War, Kill the Humans

Last week’s lopsided showdown between a human F-16 pilot and an artificially intelligent one (the robot won 5-0) was just the latest sign that we need to think harder about the changes smart machines are bringing to the battlefield. Among them: as relatively cheap robots play larger roles, the focus of warfare will shift to attacking and defending the humans who operate, maintain, and even build them.

Now and for the foreseeable future, military robots need humans. Robots are not (yet) capable of the complex thinking warfare demands; advances in speed and computational power do not automatically bring basic common sense. A robot cannot tell the difference between a farmer with a gun and a soldier.

So the military frequently focuses on the concept of human-machine teaming: the machine does what it does best, and the humans do the rest. 

In the short term, humans are needed to make decisions on the use of force. Autonomous systems can beat an F-16 jockey in a dogfight, but they cannot decide whether a target is worth striking. Current Department of Defense policy requires that autonomous weapons allow commanders and operators to exercise appropriate levels of human judgment over the use of force.

However, the longer term is less clear. As single robots scale into massive swarms, potentially true weapons of mass destruction, humans will lack the cognitive capacity to manage the complexity without the aid of computers. Still, proponents of autonomous weapons bans may yet succeed in creating policies, laws, and treaties mandating that humans remain in control of firing decisions.

Humans will also be needed to create, maintain, and manage the robot army. Human programmers write the algorithms and software that operate the robots. Tacticians, strategists, and policy wonks must work out the best ways to test, employ, control, and manage them. When a robot returns from combat, human maintainers will inspect and repair it. Any base or outpost where robots are stationed will also need staff to sustain and operate it.

All this means that killing the human operators and maintainers of robotic systems will often impose a much higher cost than disabling the robots themselves. If the battlefield consists of fighting robots, the only cost is treasure: the destruction of a robot means losing the time and money spent to build it. And because robotic warfare seems to favor the small and many over the big and expensive, the cost of losing a few robots may be quite low.

It is much harder to recruit, train, and equip the humans who support them. According to a recent RAND study, American drone pilots are already understaffed and over-stressed. Greater autonomy will reduce the need for and stress on human pilots, but humans are still needed. Killing the human half of a human-machine team would prevent the machine from being strategically effective, or from firing at all if current policy limits hold. Eliminating the maintainers and other support staff would likewise degrade the entire robotic fleet. Even if robots are decisive on the battlefield, they cannot maintain themselves off of it.

A robotic war also incentivizes attacks on the people and facilities of the defense industrial base. Sabotaging a robotics factory or disrupting its supply chain would have a much greater impact than destroying a few robots in the field. Adversaries may also launch non-violent information attacks, particularly against facilities in the American homeland. For example, corrupting the algorithms that allow unmanned systems to see, fly, and make decisions would create problems in every robot that relies on them.

The safer soldiers seem, the greater the psychological impact of their deaths. A major advantage of unmanned systems is the reduced risk to soldiers: remote pilots fly Predator drones from a safe and secure base hundreds of miles away. That sense of security matters not only to the soldier but to their mom and dad back in the United States. Shattering it could have much broader effects on public support for a war effort. During the U.S. intervention in Somalia, images of a dead American soldier being dragged through the streets of Mogadishu in 1993 quickly turned public and policymaker opinion against the conflict. Dragging a robot through the streets of Mogadishu is unlikely to have the same effect.

Perhaps the human pilot’s loss to an AI was a fluke, but even if it was not, humans are not leaving war anytime soon. The United States and other militaries should focus on the vulnerability of the humans behind the machines and the value of targeting them. They should conduct war games and simulations to understand which human roles are most critical to sustaining a robotic army, and analyze how best to protect the people in those roles.

All views expressed here are the author’s own and do not necessarily reflect those of any current or former employers, funders, or affiliates.
