AI’s military use is limited by maturity concerns, CENTCOM official says

Pentagon officials are actively experimenting with artificial intelligence but view the emerging tools as among the “less mature technologies” available in their arsenal today, the chief technology officer for U.S. Central Command — or CENTCOM — said during an online event hosted by the Middle East Institute on Tuesday.

Even though CENTCOM has taken a leading role across the combatant commands in testing AI and other innovative technologies, CTO Schuyler Moore cautioned that “while AI holds an incredible amount of promise, performance is still quite up and down.”

“For a lot of applications, we are just dipping a toe in,” Moore said about CENTCOM’s work. “We see the potential is incredible, [and] we will continue to engage [in] experimentation and invest in that capability for what it will offer us down the road. But the simple reality is that many AI applications are not very mature, and we have to be honest about that.”

She called the current challenge with actively adopting AI “a technical one,” noting that “a lot of models are not yet performing at the level that creates meaningful impact for folks.”

Moore said current uses of AI across CENTCOM, for example, could apply to straightforward workflow processes “where we feel confident” that the tool is mature enough to work effectively, or when “our ability to detect when it’s done incorrectly is quite high.”

Given the current concerns about AI’s maturity, Moore said that when it comes to military conflicts, officials “will more frequently than not put it to the side or use it in very, very select contexts where we feel very certain of the risks associated.”

But she said highlighting the technologies’ current limitations while still pushing for AI experimentation was important given the “immense potential for what they can do with the sheer volume of data and the importance of the work that we’re doing.”

Moore noted that military branches under CENTCOM’s command have launched innovation-specific initiatives — including the Army’s Task Force 39, the Navy’s Task Force 59 and the Air Force’s Task Force 99 — that experiment with emerging tools, but said that “we focus them on problems that they’re trying to solve, not particular technologies.”

She added that the task forces’ experimentation with AI has largely focused on “a combination of unmanned systems” and “software that can facilitate faster workflows,” noting that this problem-solving-focused work will help inform other combatant commands when it comes to streamlining their daily operations.

“You have to have realistic users in a realistic environment to improve this capability,” she added.

Moore said that longer-term uses of AI require “setting the mechanism that allows for consistent model performance,” such as outlining technical expectations for developers and maintaining active engagement with users. It also requires access to vast amounts of high-quality data — an issue that Moore noted earlier this year was hindering broader adoption of AI technologies across the department.

The Pentagon has carefully worked to establish parameters around its early use of AI technologies, including adopting a revised directive earlier this year on the use and development of autonomous weapons systems and issuing a series of ethical AI principles in 2020 governing its use of the tools.

Earlier this month, DOD also released a new data, analytics and AI adoption strategy that positioned quality of data as the base of its “AI hierarchy of needs.” The strategy’s release was followed by the department’s public issuance of a toolkit on the responsible uses of AI technologies.

Edward Graham