General John Murray, head of Army Futures Command, told a webinar audience at the Center for Strategic & International Studies that humans may not be able to fight swarms of enemy drones, and that the rules governing human control over artificial intelligence might need to be relaxed.
“When you are defending against a drone swarm, a human may be required to make that first decision, but I am just not sure any human can keep up,” said Murray. “How much human involvement do you actually need when you are [making] nonlethal decisions from a human standpoint?”
This indicates a new interpretation of the Pentagon’s rules on the use of autonomous weapons. These require meaningful human control over any lethal system, though that may be in a supervisory role rather than direct control – termed ‘human-on-the-loop’ rather than ‘human-in-the-loop.’
Murray said that Pentagon leaders need to lead a discussion on how much human control of AI is needed to be safe but still effective, especially in the context of countering new threats such as drone swarms. Such swarms are likely to synchronize their attacks so the assault comes from all directions at once, with the aim of overwhelming air defenses. Military swarms of a few hundred drones have already been demonstrated; in the future we are likely to see swarms of thousands or more. One U.S. Navy project envisages having to counter up to a million drones at once.
The U.S. Army is spending a billion dollars on new air defense vehicles known as IM-SHORAD with cannon, two types of missile, jammers, and future options of laser and interceptor drones. Using the right weapon against the right target at the right time will be vital. Faced with large numbers of incoming threats, many of which may be decoys, human gunners are likely to be overtaxed. Murray said that the Army’s standard test involving flashcard identification requires an 80% pass rate. During the recent Project Convergence exercise, artificial intelligence software boosted this to 98% or 99%, according to Murray.
This is not the first time that Army Futures Command has suggested that humans on their own may be outclassed. In a briefing on the DARPA-Army program called SESU (System-of-Systems Enhanced Small Unit), which teamed infantry with a mix of drones and ground robots, scientists noted that the human operators kept wanting to interfere with the robots’ actions. Attempts to micromanage the machines degraded their performance.
“If you have to transmit an image of the target, let the human look at it, and wait for the human to hit the ‘fire’ button, that is an eternity at machine speed,” said one scientist, speaking on condition of anonymity. “If we slow the AI to human speed … we’re going to lose.”
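The arithmetic behind that remark can be sketched in a few lines. The following is an illustrative back-of-the-envelope calculation, not anything from the briefing itself; the latencies, swarm size, and time window are hypothetical numbers chosen only to show the shape of the problem.

```python
# Illustrative sketch: why per-target human authorization struggles
# against a large swarm. All numbers are hypothetical assumptions.

def targets_engaged(decision_latency_s: float, swarm_size: int,
                    time_to_impact_s: float) -> int:
    """Targets a single decision-maker can authorize before the swarm
    arrives, given a fixed per-target decision latency."""
    return min(swarm_size, int(time_to_impact_s / decision_latency_s))

HUMAN_LATENCY_S = 2.0     # assumed: review image, hit the 'fire' button
MACHINE_LATENCY_S = 0.05  # assumed: automated classify-and-engage cycle
SWARM_SIZE = 1000         # hypothetical swarm
WINDOW_S = 120.0          # seconds until the swarm reaches its target

print(targets_engaged(HUMAN_LATENCY_S, SWARM_SIZE, WINDOW_S))    # 60
print(targets_engaged(MACHINE_LATENCY_S, SWARM_SIZE, WINDOW_S))  # 1000
```

Under these assumed numbers, a human in the loop clears 60 of 1,000 incoming drones before they arrive, while an automated cycle clears all of them; the exact figures matter less than how quickly serial human decisions saturate.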
AI is in the ascendant. The 5-0 victory over a human pilot in a virtual dogfight last August is still being debated, but there is no doubting that machines have faster reflexes and the ability to keep track of several things at once, and are not troubled by the fatigue or fear that can lead to poor decisions in combat.
There are two responses to this. One is to try to control AI and keep it away from the battlefield, given that machines lack a human ethical sense. The Campaign to Stop Killer Robots has long argued the case against autonomous weapons, and the EU seems to agree. Last week the European Parliament set out its position: “The decision to select a target and take lethal action using an autonomous weapon system must always be made by a human exercising meaningful control and judgement, in line with the principles of proportionality and necessity.” In other words, autonomous weapons making their own decisions should be outlawed.
However, the U.S. appears to take a different line. A government-appointed panel reporting to Congress suggested this week that, as indicated by Murray’s comments, AI is likely to make fewer mistakes than humans and would be better at identifying targets.
“It is a moral imperative to at least pursue this hypothesis,” said panel vice-chairman Robert Work, former deputy secretary of defense. He argues that autonomous weapons would reduce the casualties caused by target misidentification.
Behind this, there is the military argument. If AI-controlled weapons can defeat those operated by humans, then whoever has the AIs will win, and failing to deploy them means accepting defeat.
Debate still swirls around this topic. The emergence of drone swarms and other weapons that cannot be defeated by humans alone will crystalize it. However, it is not clear whether the legal debate will be able to keep pace with the technology, given how long it has already dragged on. At this rate, large-scale AI-powered swarm weapons may be used in action before the debate is concluded. The big question is which nations will have them first.