Police Departments Are Not Going to Give Up on Killer Robots

[Photo: Riot police blockade counterprotesters on Aug. 27, 2017, in Berkeley, California. AFP Contributor/Getty Images]

Over the past couple of weeks, many were shocked to learn that the police in San Francisco want to use lethally armed robots. But the real surprise is that the proposal took so long to arrive. After all, six years ago the police in Dallas used a lethally armed robot to kill a barricaded sniper who had fatally shot five officers. The Dallas police generally received praise for their decision, even as it drew concerns about a new police tactic deployed without any prior guidelines in place. Even if San Francisco ultimately decides to shelve its approval of lethally armed police robots, that won’t end the matter. But the SFPD killer robot controversy has accomplished something: It’s forcing us to confront uncomfortable questions about the future of policing and technology—and these questions need answers.

The controversial proposal in San Francisco to arm police robots with deadly force was raised, ironically enough, in the context of a statewide police accountability measure. A new California law, AB 481, went into effect in May and requires law enforcement agencies in the state to inventory the military-grade equipment they possess. That equipment can be used, if at all, only pursuant to policies that have been approved by local governments in public meetings. In other words, AB 481 imposes local democratic control over the police use of tools like mine-resistant ambush-protected vehicles, sound cannons, microwave weapons, battering rams—and robots.

The original draft policy considered by the San Francisco Board of Supervisors was silent about the appropriate use of robots, but Supervisor Aaron Peskin added a provision to ban robots armed with any amount of force. The police department responded with its own edits, which said that robots armed with deadly force could be used in situations where there was an “imminent” threat to the public or the police, and when other options were worse. It was this police-drafted policy that the San Francisco Board of Supervisors adopted on Nov. 29. After public outcry and national news coverage, the board voted to amend the policy. The police in San Francisco can use their remote-controlled robots for tasks like search and rescue, but not armed with deadly force. (The lethal force provision has been sent back to the board’s rules committee for further consideration.)

The police in nearby Oakland, California, had submitted their own proposal to use lethally armed robots for review under the state’s transparency law just months earlier. The city’s civilian police oversight council learned that the department’s existing robots could be fitted with a live “shotgun round” for use in emergency situations. After public backlash, the Oakland Police Department announced it would not pursue the adoption of lethally armed robots.

Even if San Francisco had adopted the originally proposed guidelines, that brief guidance would have raised more questions than it answered. Police would have been authorized to use robots with deadly force when faced with an “imminent” threat to life and when no better option was available. The standard of imminence is familiar in the deadly force context when it is police officers, not machines, who shoot and kill. But we justify those deadly decisions because we assume officers must make, as the Supreme Court has stated, “split-second judgments—in circumstances that are tense, uncertain, and rapidly evolving.” But robots introduce two important elements—time and distance—that make the now-or-never rationale much harder to justify.

And no standard, whether codified in law or in policy, will do much to dampen public fears about how lethal police robots might one day be used in ordinary policing. True, the kind of robots inventoried by the SFPD will remind people of small, slow, and clunky tanks more than anything else. But robots come in all types, including drones and four-legged robot dogs like the ones the Los Angeles Police Department may soon deploy.

Don’t forget, either, that the term robot can encompass many forms, not just those that resonate in the popular imagination. Take the gunshot detection sensors that cities around the country have installed to listen for the sound of gunfire—could they be equipped with lethal weapons, too? What about nonlethal but injurious weapons? Could an autonomous police car come equipped to subdue a suspect with pepper spray or bullets?

The fact that today’s police robots are remotely controlled, rather than autonomous or “smart,” is far from reassuring. While we should be concerned about the development of AI-enabled deadly robots, the ones controlled by people pose risks of their own. Most might agree on the extreme cases for lethal robots (such as an ongoing terrorist attack), but the real question is what other emergencies qualify. Absent bans or detailed regulations, the exact uses of robots, lethally armed or otherwise, will be left to the police themselves. San Francisco’s proposed imminence standard, for instance, leaves little room for outside guidance or prior notice. And even the mere presence of a lethally armed robot in a neighborhood or at a peaceful protest will rightfully be seen as a tactic of intimidation.

Context matters as well. We know that police violence is a problem in the United States. Hundreds of people are fatally shot by the police every year, and these official counts are likely a vast undercount of those killed by police violence. Those burdens are distributed unequally: Black, Hispanic, and Native American people are far more likely to experience that violence than others. No wonder, then, that anyone concerned about police violence would be alarmed by technologies that may increase safety for the police but create new means of deadly force to be used against civilians.

San Francisco has pressed the pause button on allowing its own police to use deadly robots, but in doing so it has drawn attention to a national issue. Through its temporary approval, this famously liberal city—the same one that banned its police from using facial recognition—has lent legitimacy to the idea of lethal police robots. Lethally armed robots are no longer taboo as a mainstream policing technology, even as some robotics companies like Boston Dynamics have pledged not to weaponize their robots, and as Axon has paused its development of a Taser-equipped drone. Some police departments may conclude that they don’t have to decide, as an official matter, whether to use lethal robots because they are located in states without a transparency law like California’s. These departments may simply arm their robots when they find an appropriate emergency. After all, police can improvise a deadly robot in a matter of minutes, as the Dallas police did in 2016. Other agencies around the country, many of which already possess robots of their own, may decide to depart from San Francisco’s current position and permit their police to arm their robots. They can start with the policy San Francisco’s police drafted.

Future Tense is a partnership of Slate, New America, and Arizona State University that examines emerging technologies, public policy, and society.


Elizabeth Joh