In 1997, The Simpsons prophesied that in future wars, “most of the actual fighting will be done by small robots,” with soldiers needed only to “build and maintain those robots.” Though the cartoon’s track record with predictions is debatable, few would dispute that robots have played a critical role in combat over the past decade. Whether it is a Predator drone patrolling a no-fly zone or a PackBot defusing a bomb, robots have made their presence known on the battlefield.
However, some would question the accuracy of The Simpsons’ prediction about the role of soldiers in relation to the robots. In particular, what will the human-robot relationship be for completing the most basic combat task: killing the enemy? With current robots, a human operator must directly order a robot to engage and kill an enemy.
Meanwhile, the Terminator movies predicted that by 2029, robots would be able to track and kill autonomously. In combat, this capability would be useful: a large number of robots could have autonomously patrolled the streets of Fallujah or performed risky raids through the caves of Afghanistan, rather than soldiers being put in those situations.
The question quickly arises as to what challenges exist for developing these killer robots. At a very primitive level, an autonomous killing robot could consist of a machine gun mounted on a Roomba that rolls around and kills indiscriminately. But for the robot to actually be useful in combat, it must also be able to distinguish an enemy from a non-combatant and then make the decision to kill. These two steps provide the main challenges for deploying an army of killer robots.
The first step, target identification, can be accomplished through advances in biometrics and sensing. If the target is known, the robot can recognize the target through facial recognition, voice recognition, gait analysis, or even a cellphone signal. The task becomes more challenging when the target is not known, such as when a robot is scouting for enemies. If an enemy engages the robot first, the robot could simply return fire, using acoustic triangulation to identify the source of the shot. If the enemy does not fire, the robot would have to rely on the emerging field of intent prediction, in which a person with hostile intentions is identified through biometric signatures such as heart rate, pupillometry, and skin temperature.
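To make the acoustic triangulation idea concrete, here is a toy sketch of time-difference-of-arrival (TDOA) localization, the principle behind counter-sniper sensors. It is not any fielded system’s implementation; the microphone layout, grid search, and function name are all illustrative assumptions.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def locate_shot(mics, arrival_times, grid=200, span=100.0):
    """Estimate a 2D shot position from microphone arrival times.

    mics: list of (x, y) microphone positions in metres.
    arrival_times: absolute arrival time of the bang at each mic (seconds).
    A brute-force grid search over a span-by-span metre area, minimising
    the mismatch between predicted and observed time differences relative
    to the first microphone. Real systems solve this in closed form.
    """
    ref_t = arrival_times[0]
    ref_m = mics[0]
    best, best_err = None, float("inf")
    for i in range(grid + 1):
        for j in range(grid + 1):
            x = -span / 2 + span * i / grid
            y = -span / 2 + span * j / grid
            d_ref = math.hypot(x - ref_m[0], y - ref_m[1])
            err = 0.0
            for (mx, my), t in zip(mics[1:], arrival_times[1:]):
                d = math.hypot(x - mx, y - my)
                predicted_tdoa = (d - d_ref) / SPEED_OF_SOUND
                observed_tdoa = t - ref_t
                err += (predicted_tdoa - observed_tdoa) ** 2
            if err < best_err:
                best, best_err = (x, y), err
    return best

# Simulate a shot at (30, 20) heard by four corner microphones.
mics = [(0, 0), (50, 0), (0, 50), (50, 50)]
source = (30.0, 20.0)
times = [math.hypot(source[0] - mx, source[1] - my) / SPEED_OF_SOUND
         for mx, my in mics]
print(locate_shot(mics, times))  # close to (30.0, 20.0)
```

With four microphones and clean timing, the grid search recovers the source to within the grid resolution; real deployments must also cope with echoes, wind, and timing jitter.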
The next step, the decision to engage, has traditionally required human judgement for two reasons. First, given the fog of war, there is always uncertainty in identifying a target. Second, the shooter must weigh any collateral damage that engaging would cause. However, advances in computer vision should allow a robot to observe the environment and make these judgements algorithmically.
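The two human judgements above could, in principle, be reduced to a rule. The following is a deliberately oversimplified sketch, not a real targeting algorithm: the `Track` fields, thresholds, and the expected-harm formula are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Track:
    """A detected person, as a vision pipeline might report it (hypothetical fields)."""
    hostile_confidence: float  # classifier's P(person is a combatant)
    civilians_nearby: int      # estimated non-combatants within the weapon's effect radius

def may_engage(track: Track,
               min_confidence: float = 0.95,
               max_expected_harm: float = 0.05) -> bool:
    """Toy rule mirroring the two judgements in the text:
    identification uncertainty and collateral damage.
    Expected civilian harm is modelled as P(misidentification)
    scaled by the number of people who would be affected.
    """
    if track.hostile_confidence < min_confidence:
        return False  # fog of war: identity not certain enough
    expected_harm = (1.0 - track.hostile_confidence) * (1 + track.civilians_nearby)
    return expected_harm <= max_expected_harm

print(may_engage(Track(hostile_confidence=0.99, civilians_nearby=0)))  # True
print(may_engage(Track(hostile_confidence=0.99, civilians_nearby=8)))  # False
```

Even this trivial rule shows the crux of the problem: someone must choose the thresholds, and that choice encodes how much civilian risk is acceptable.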
Though these processes are feasible, the technology behind them is fairly immature and unreliable. As detection algorithms and computer vision improve, however, these challenges will be solved. Since technology will soon no longer be the limiting factor, two broader issues will limit the development of killer robots.
The first issue is how an enemy would counter these killer robots; the most logical counter would be to build “robot-killing” robots. Doing so would spark an autonomy arms race, in which militaries vie to create the robot with the fastest kill time. Arms races are typically expensive and undesirable. Moreover, as a robot’s kill time shrinks, so does the time available to verify a target, so the likelihood of collateral damage inherently increases.
The larger issue is the ethics of allowing a robot to decide to kill. In particular, who would be held accountable if society disagrees with a robot’s decision to engage? Suppose a robot destroys a building, killing multiple civilians, after accidentally misidentifying one of them as an enemy. Would the programmer who wrote the algorithm be held responsible? The person who assigned the robot the mission? The commander who technically owns the robot? This ethical conundrum forms the underlying logic behind the push within the international community to ban fully autonomous killing robots.
What about terrorist organizations that do not adhere to international conventions and are not worried about an arms race or collateral damage? Fortunately, terrorist organizations do not have access to the technology needed to build these killer robots, at least not robots that would work reliably. They would also see little benefit in a robot acting fully autonomously rather than being remotely operated.
Without a doubt, robots will continue to play a critical role on the battlefield. However, the core function of combat, killing the enemy, will continue to require human involvement. Though killer robots may be technically feasible, their ethical and societal implications will keep them off the battlefield, at least for now.