Push to expedite AI use in lethal autonomous weapons raises questions about reliability of new military tech
The U.S. military has a series of AI initiatives underway that are raising serious questions about using lethal autonomous weapons, especially those that have not been properly vetted.

One ambitious initiative, known as Replicator, aims to field thousands of relatively affordable, expendable AI-enabled autonomous vehicles to “galvanize progress in the too-slow shift of U.S. military innovation to leverage platforms that are small, smart, cheap and many.” They are expected to be fielded by 2026 in a bid to keep pace with China’s developments.

Deputy Defense Secretary Kathleen Hicks said: “We’ll counter the [People’s Liberation Army’s] mass with mass of our own, but ours will be harder to plan for, harder to hit and harder to beat.”

Although few details about the program or its funding have been released, Replicator is expected to force some tough decisions about which types of AI technology are reliable enough to use in weaponized systems. Some suspect the vagueness and lack of an official timeline are an attempt to keep rivals guessing, although the Pentagon may also still be working out the finer details of the program.

It is widely believed that America will have fully autonomous lethal weapons, including armed aerial drones that select and strike targets on their own, within the next few years; Pentagon officials and industry experts alike are convinced the technology is close. While officials have maintained that humans will always have control over these weapons, that role appears to be minimal at best.

Several countries are working on lethal weapons that can be deployed using drone swarms, and the U.S. Defense Advanced Research Projects Agency (DARPA) is reportedly working on a program known as the Autonomous Multi-Domain Adaptive Swarms-of-Swarms (AMASS).

“The DARPA AMASS program is exploring the use of swarms-of-swarms to conduct military operations in highly contested environments that pose great risk to our Soldiers, Sailors, Airmen and Marines. It is meant to inform future military programs of record and not to be a military program of record itself,” a DARPA spokesperson said.

Meanwhile, the Air Force is working on a “loyal wingman” program that will pair piloted aircraft with autonomous ones. This could see, for example, an F-16 pilot sending drones out to attack targets.

Can AI weapons be trusted to operate autonomously?

The first Pentagon AI chief, Lt. Gen. Jack Shanahan, has said that the only types of weapons he trusts at the moment to operate autonomously are fully defensive ones, like the Phalanx anti-missile systems some ships are equipped with. For him, systems failing to work as intended or killing friendly forces or civilians are bigger concerns than autonomous weapons making their own decisions.

Pentagon Chief Digital and AI Officer Craig Martell emphasized that someone will always be responsible for these systems, noting: “Regardless of the autonomy of the system, there will always be a responsible agent that understands the limitations of the system, has trained well with the system, has justified confidence of when and where it’s deployable — and will always take the responsibility.”

Unfortunately, the U.S. may struggle to attract the right talent to properly test and evaluate the AI it has in development. The Pentagon cannot compete on pay, with professionals who possess these skills commanding much higher salaries in the private sector. And with a recent National Academy of Sciences report on AI in the Air Force finding that testing standards are immature, some are concerned the U.S. might deploy autonomous weapons that have not been fully vetted, particularly if another country forces its hand.

A former chief of AI assurance for the Department of Defense’s Joint Artificial Intelligence Center, Jane Pinelis, said: “We are still operating under the assumption that we have time to do this as rigorously and as diligently as possible. I think if we’re less than ready and it’s time to take action, somebody is going to be forced to make a decision.”

Sources for this article include:

TheDebrief.org

DefenseNews.com

APNews.com

Cassie B.