Will the U.S. Military Let AI Automate Defensive Weapons?

Should a swarm of explosive-laden mini-drones close in for an attack, or a salvo of incoming hypersonic missiles approach at five times the speed of sound or faster, human decision-makers simply might not be able to respond quickly enough. Military commanders may never get the chance to counterattack or even determine the best course of defensive action.

Not only would there not be time for a human decision-maker to weigh the threat variables, but weapons operators themselves may be too overwhelmed to detect, track, engage, and fire upon simultaneous high-speed attacks even if they receive orders to do so.

There is another limitation as well. While automated systems and artificial intelligence-enabled sensors and weapons now have the technical ability to autonomously find, track, and shoot approaching enemy threats without human intervention, Pentagon doctrine and ethical guidelines require a human to be “in-the-loop” on decisions about the use of lethal force. But what about non-lethal force? What about automated, networked sensors and weapons operating autonomously for purely defensive purposes? Could automated defenses function as interceptors, area weapons, or countermeasures to destroy, disable, or simply knock out approaching enemy threats with minimal human direction?
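To make that distinction concrete, here is a minimal sketch of how such a policy gate might be expressed in software. The class names, effect categories, and the rule itself are hypothetical illustrations, not actual Defense Department policy, which is set out in documents such as DoD Directive 3000.09 and is far more nuanced.

```python
from dataclasses import dataclass
from enum import Enum, auto


class EffectType(Enum):
    """Broad categories of effect a counter-drone system might apply."""
    JAMMING = auto()      # non-kinetic electronic attack
    LASER = auto()        # directed energy
    INTERCEPTOR = auto()  # kinetic kill


@dataclass
class Engagement:
    effect: EffectType
    target_is_inbound: bool   # True if the track is an approaching threat
    potentially_lethal: bool  # True if the effect could harm people


def requires_human_approval(e: Engagement) -> bool:
    """Hypothetical rule: any potentially lethal effect, and any effect
    applied to a target that is not an inbound threat, is escalated to a
    human operator; purely defensive, non-lethal actions may proceed."""
    return e.potentially_lethal or not e.target_is_inbound


# Jamming an inbound drone can proceed autonomously under this toy rule...
print(requires_human_approval(Engagement(EffectType.JAMMING, True, False)))     # False
# ...while a kinetic interceptor launch is referred to the operator.
print(requires_human_approval(Engagement(EffectType.INTERCEPTOR, True, True)))  # True
```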

“Right now we don’t have the authority to have a human out of the loop,” Col. Marc E. Pelini, the division chief for capabilities and requirements within the Joint Counter-Unmanned Aircraft Systems Office, said during a teleconference, according to a Pentagon report. “Based on the existing Department of Defense policy, you have to have a human within the decision cycle at some point to authorize the engagement.”

However, is the combination of high-speed, AI-enabled computing and sensor-to-shooter connectivity, coupled with the speed and scope of emerging threats, beginning to change this equation? There may indeed be some tactical circumstances in which it is both ethical and extremely advantageous to deploy autonomous systems able to track and intercept approaching threats in seconds, if not milliseconds.

In the same Pentagon report, Pelini explained that there is an emerging discussion about the extent to which AI might enable “in-the-loop” or “out-of-the-loop” human decision-making, particularly in light of threats such as drone swarms.

“When you’re starting to see swarming activities of hundreds or potentially thousands [of UAS] in the future, obviously you want your system to operate as fast [as possible] to provide those weaponeering solutions to the operator, or operate within a set of parameters,” Pelini said. “But that’s really kind of defined right now in the policy realm.”
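Pelini’s phrase “operate within a set of parameters” points toward policy-bounded autonomy: humans set the limits in advance, and the system acts freely only inside them. Purely as a hypothetical sketch of what machine-checkable parameters could look like (every name and threshold below is invented for illustration):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class AutonomyParameters:
    """Hypothetical, human-set bounds within which a counter-swarm
    system could engage without per-shot approval."""
    swarm_size_threshold: int         # autonomy only when operators would be saturated
    allowed_effects: tuple[str, ...]  # e.g., non-kinetic effects only
    engagement_zone_km: float         # only inside the defended radius


def autonomy_permitted(p: AutonomyParameters, track_count: int,
                       effect: str, range_km: float) -> bool:
    """True only when the proposed action stays inside every bound."""
    return (track_count >= p.swarm_size_threshold
            and effect in p.allowed_effects
            and range_km <= p.engagement_zone_km)


# Two hundred inbound drones at 3 km: jamming may proceed autonomously,
# but a kinetic interceptor still requires a human decision.
params = AutonomyParameters(swarm_size_threshold=50,
                            allowed_effects=("jamming",),
                            engagement_zone_km=10.0)
print(autonomy_permitted(params, 200, "jamming", 3.0))      # True
print(autonomy_permitted(params, 200, "interceptor", 3.0))  # False
```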

In yet another context, should a small group of hypersonic missiles be heading toward a U.S. Navy surface warship, there simply may not be time for a ship commander to decide which layer of the ship’s defenses is best suited to counter or intercept the attack. After all, there are a variety of defenses to choose from: interceptor missiles such as the SM-3, lasers, and electronic warfare jamming. What if an AI-enabled collection of networked sensors, drawing upon aerial and surface nodes, were able to find and track the approaching missiles? Such a network could then instantly compare threat data, such as flight trajectory, speed, and impact likelihood, against a database of previously analyzed scenarios to discern the optimal course of defensive action.
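At its simplest, the comparison described above resembles a nearest-neighbor lookup: the features of an incoming track are matched against stored, pre-analyzed scenarios, each tagged with a recommended defense. The sketch below is a deliberately toy illustration of that idea; the scenario entries, feature choices, and defense labels are all invented, and a real system would normalize its features and weigh far more of them.

```python
import math

# Invented scenario database: (speed_mach, altitude_km, impact_probability)
# paired with the defense judged best for that threat profile.
SCENARIO_DB = [
    ((5.0, 30.0, 0.9), "SM-3 interceptor"),
    ((0.8, 0.1, 0.6), "electronic warfare jamming"),
    ((2.0, 5.0, 0.7), "laser"),
]


def recommend_defense(track: tuple[float, float, float]) -> str:
    """Return the defense paired with the stored scenario closest to the
    observed track (Euclidean distance over the feature vector)."""
    _, defense = min(SCENARIO_DB, key=lambda row: math.dist(track, row[0]))
    return defense


# A Mach 5.2 track at 25 km altitude with high impact likelihood
# maps to the hard-kill interceptor in this toy database.
print(recommend_defense((5.2, 25.0, 0.85)))  # SM-3 interceptor
```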

Not only could an AI-capable system potentially make that kind of determination in a matter of milliseconds, but it could also direct a weapons system, effector, or interceptor to engage without needing human approval. Could human decision-makers always be assured that the use of force was non-lethal and defensive in nature? Therein lies the thrust of the debate, which is fast taking on new urgency given the staggering speed and breadth of emerging enemy weapons. AI-enabled autonomy, carefully managed and pre-programmed by human decision-makers, may prove to be the only way to save lives and valuable assets from destruction in warfare. Attacks are simply too fast.

Kris Osborn is the defense editor for the National Interest. Osborn previously served at the Pentagon as a Highly Qualified Expert with the Office of the Assistant Secretary of the Army—Acquisition, Logistics & Technology. Osborn has also worked as an anchor and on-air military specialist at national TV networks. He has appeared as a guest military expert on Fox News, MSNBC, The Military Channel, and The History Channel. He also holds a master's degree in comparative literature from Columbia University.

Image: Reuters.
