Airborne AI: Not a Game Changer Yet

The Agile Condor system is a promising leap toward artificially intelligent combat drones, though many challenges remain before such systems are deployed.

MQ-9 Reaper. Credit: Flickr/Chris Hunkeler

General Atomics, the manufacturer of the U.S. military’s MQ-9 Reaper drone, announced on September 3 that it had conducted the first flight test of a system called Agile Condor: basically a powerful processing unit loaded into a self-contained pod. The promise of the system is that, in conjunction with a sophisticated machine learning algorithm, it could massively simplify the task of sorting through the overwhelming quantities of intelligence data that even a single unmanned aerial vehicle can produce, highlighting targets of interest for the human at the far end of the system and discarding the bulk of the data as irrelevant.
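The technical details of Agile Condor’s software have not been released, but the concept of onboard triage can be sketched in a few lines. The sketch below is purely illustrative: detect_objects stands in for whatever machine learning model the pod might run, and CONFIDENCE_THRESHOLD is an assumed tuning parameter, not a published specification.

```python
# Illustrative sketch only: the actual Agile Condor software is not public.
# detect_objects() is a hypothetical stand-in for an onboard ML model.
from dataclasses import dataclass
from typing import Iterable, List, Tuple

CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for flagging a "target of interest"

@dataclass
class Highlight:
    frame_id: int
    label: str         # e.g. "vehicle", "vessel"
    confidence: float  # model score between 0 and 1

def detect_objects(frame) -> List[Tuple[str, float]]:
    """Hypothetical onboard model; returns (label, confidence) pairs for one frame."""
    raise NotImplementedError  # the real model is not publicly documented

def prioritize(frames: Iterable) -> List[Highlight]:
    """Forward only high-confidence detections; discard the bulk of the raw data."""
    highlights: List[Highlight] = []
    for frame_id, frame in enumerate(frames):
        for label, confidence in detect_objects(frame):
            if confidence >= CONFIDENCE_THRESHOLD:
                highlights.append(Highlight(frame_id, label, confidence))
    return highlights  # only these would reach the human at the far end
```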

The implications, if such a system were to live up to its promise, would be significant. Modern sensor systems on aircraft take in so much data that processing it into actionable intelligence within operationally useful time limits has become a tremendous undertaking. And the mismatch between the data generated and the capacity to process it will only grow as wide-area surveillance systems like Gorgon Stare displace traditional narrow-field sensors. On time-sensitive missions, data that cannot be transmitted for processing on the ground or processed onboard loses most, if not all, of its intelligence value.
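To make the scale of that mismatch concrete, the back-of-envelope calculation below uses entirely hypothetical numbers; none of the figures are published specifications of Gorgon Stare or any other fielded system.

```python
# Back-of-envelope illustration with assumed numbers, not published specifications.
sensor_megapixels = 100   # assumed wide-area sensor resolution
frames_per_second = 2     # assumed frame rate
bits_per_pixel = 8        # assumed monochrome imagery
downlink_mbps = 50        # assumed usable datalink bandwidth, in megabits/s

raw_mbps = sensor_megapixels * 1e6 * bits_per_pixel * frames_per_second / 1e6
print(f"Raw sensor output: {raw_mbps:,.0f} Mbit/s")
print(f"Downlink capacity: {downlink_mbps:,.0f} Mbit/s")
print(f"Shortfall factor:  {raw_mbps / downlink_mbps:.0f}x")
# With these assumptions the sensor produces about 1,600 Mbit/s against a
# 50 Mbit/s link, a 32x shortfall; onboard processing or aggressive filtering
# must close that gap before the data can be acted on in useful time.
```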

The purpose of the Agile Condor system is to streamline the process by which raw data is turned into the basis for human decisions, and the purpose of hanging that system on the drone that is gathering the data is to make it as efficient as possible. The system itself is intentionally designed to accommodate future expansion, though, which opens the door to other mission possibilities. And seeing as Agile Condor is a single-pod system, an MQ-9 or similar unmanned combat aerial vehicle (UCAV) carrying one would not sacrifice all of its munitions capacity. A single UCAV carrying both an AI targeting system and weapons raises the specter of fully autonomous weaponry, with all the operational, legal and moral complexity that entails. That remains prospective, not least because legal and operational concerns mean that a human will retain ultimate control for the immediate future.

Furthermore, some skepticism about the effectiveness of automating the higher functions of such a system is warranted. The announced event appears to have been simply a flight test: a demonstration that the physical pod containing the Agile Condor system could be mounted safely on an MQ-9’s wing hardpoint without degrading the aircraft’s handling characteristics. That is a necessary step, but not a sign that the system is ready for deployment. Details about the actual processing hardware and, most crucially, the algorithms it could run remain scarce.

More critically, AI systems often perform better in test cases where not every variable is modeled. For example, in the recent and widely publicized case of an AI pilot beating a human pilot in a dogfight between simulated, identical F-16s, the result seemed to be a clear-cut and overwhelming victory for the machine. But the AI pilot attained several of its “kills” with its gun at point-blank range, which, as experienced observers pointed out, would likely have caused damage if not outright destruction to the AI’s own aircraft as it flew through the debris cloud of its disintegrating opponent. If the machine’s purpose is to destroy its target at any cost, that might not be a problem, but there are doubtless simpler and cheaper ways to accomplish that task.

More to the point, AI successes in discrete, bounded tasks are not necessarily a sign of their suitability for operational use in complex, real-world environments. Artificial intelligence, as a concept, suffers from being anthropomorphized: in predicting how a system will work or fail, we assume that an AI-driven system will make the same types of assumptions and errors that human minds do. But AI systems work on fundamentally different principles, and as their sophistication grows, it becomes harder and harder for human operators to anticipate the kinds of errors that might lead, for example, to the classification of an innocent individual as a threat.

A system such as Agile Condor is inherently most useful in a complex, busy environment like a city or a heavily populated littoral, but those environments pose their own challenges. There have been numerous examples of algorithms designed by engineers from one cultural or national background failing spectacularly at very basic tasks in a different environment. And the datasets that train the algorithms are themselves vulnerable to either mismanagement or manipulation.

In short, UAVs are increasing both their ability to take in data and their ability to process it; that represents rapid evolution of capability, but not a game-changing moment. It is, however, a good opportunity for policymakers to consider how to put safeguards, boundaries, and best practices in place for the use of wide-area surveillance systems before such a moment makes it impossible to do so.
