Tasked with deciding whether the United States military should be able to develop autonomous killer robots capable of using deadly force, a congressional advisory panel decided that the government should keep its options open.
Using logic similar to that of a parent trying to convince their toddler to try vegetables, members of the National Security Commission on Artificial Intelligence concluded that Congress should at least consider giving killer robots or artificial intelligence systems a chance, Reuters reports. Otherwise, the panel reasoned, the military might never know whether such systems actually perform so well that they kill fewer innocent people. The commission reached this conclusion despite growing international pressure to ban killer AI outright.
The commission’s decision doesn’t directly shape U.S. policy, but the group, led by former Google CEO Eric Schmidt, does have Congress’ ear, according to Reuters. And its decision aligns with actions the military has already taken, like laying out ethics principles that don’t call for much more than making sure that killer AI actually works.
Arguing that an effective AI might take actions that result in fewer casualties than human soldiers would, commission vice chairman and former deputy secretary of defense Robert Work said that “it is a moral imperative to at least pursue this hypothesis,” according to Reuters.
Of course, the U.S. military has already started developing autonomous tanks, which it strategically terms “lethality automated systems.” The militaries of China, Russia, and the U.K. have all developed robots, drones, or algorithms capable of taking a human life as well — suggesting that this so-called moral imperative to explore killer tech is really more of an imperative not to fall behind in an arms race.
“[The commission’s] focus on the need to compete with similar investments made by China and Russia… only serves to encourage arms races,” Mary Wareham, coordinator of the Campaign to Stop Killer Robots, told Reuters.
READ MORE: U.S. commission cites ‘moral imperative’ to explore AI weapons [Reuters]
More on killer robots: These Are the Pentagon’s New Ethics “Principles” for AI in Warfare