“Fire, forget and find?” That question may have just been answered.
A new United Nations report suggests that a long-anticipated and highly concerning debut in military technology was made last year during Libya’s civil war: the use of an AI weapon. The U.N. report described an aerial drone used by Libya’s government against militia forces as a “lethal autonomous weapons system” that was “programmed to attack targets without requiring data connectivity between the operator and the munition: in effect, a true ‘fire, forget and find’ capability.”
The U.N. report details an AI weapon attacking humans, a stunning warfare precedent that should inspire fervent debate.
While retreating, the militia “were subject to continual harassment” from the KARGU-2, a Turkish-made drone; they “were hunted down and remotely engaged,” the U.N. report said. An aerial drone harassing and hunting enemy combatants is not, by itself, significant. What is significant is that the KARGU-2 has an “autonomous” mode.
A rotary-wing attack drone designed for asymmetric warfare and anti-terrorist operations, the “KARGU can be effectively used against static or moving targets through its indigenous and real-time image processing capabilities and machine learning algorithms embedded on the platform,” according to the website of STM, its manufacturer.
Of course, real-time image processing and machine learning are forms of artificial intelligence. And the use of artificial intelligence in warfare raises pressing ethical and legal issues.
ANALYSIS: WAS A TRULY AUTONOMOUS WEAPON USED?
The U.N. report left some questions unanswered. Clearly, a drone was used in the attack. “What’s not clear is whether the drone was allowed to select its target autonomously and whether the drone, while acting autonomously, harmed anyone. The U.N. report heavily implies, but does not state, that it did,” Zachary Kallenborn, an expert on drone warfare at the University of Maryland, wrote in the Bulletin of the Atomic Scientists.
h/t Digital mix guy