Ethical development is top of mind as the Pentagon’s Joint Artificial Intelligence Center works with the military to produce and scale artificial intelligence-based and autonomous systems for the battlefield, senior JAIC officials said during a press conference.
“We are constantly looking at and thinking about the policy, ethics and other implications of the work that we’re doing,” the center’s acting director, Nand Mulchandani, said. He and two others briefed reporters Thursday on JAIC applications already in action and spotlighted a range of announcements unfolding during the Defense Department’s Artificial Intelligence Symposium and Exposition.
While the department aims to bake democratic values into its uses of artificial intelligence, laws don’t yet govern all of the center’s work.
Still, Mulchandani said the JAIC is “operating with all the laws and things that have existed here at the [department] with AI.” One of those is DoD Directive 3000.09. Written in 2012, the directive lays out the department’s guidelines for producing and using autonomous and semi-autonomous functions in weapon systems, and requires an appropriate level of human control—though some question whether the policy is outdated.
“There’s no lethal autonomous weapon systems being deployed or fielded, etc.,” Mulchandani confirmed. “Now, having said that, we are doing research and work in autonomy.” And to Mulchandani, “there’s no question” that autonomous capabilities should be integrated into real-world operations, though 3000.09 would not govern those that aren’t related to weapons.
The center’s chief of test, evaluation and assessment, Dr. Jane Pinelis, added that the JAIC includes working groups that meet regularly and are part of a three-star executive steering group, or ESG, on AI at the Defense Department. Homing in on specific areas—such as ethics and test and evaluation—the cadres of key officials interact with one another and also work across the department, she said.
The JAIC’s test and evaluation group, for instance, not only works with the center on specific efforts such as recent sensor-related work connected to the Pentagon’s computer vision-focused Project Maven, but also with service operational test agencies, the Office of the Secretary of Defense for policy—and more.
“Those are all the important stakeholders when it comes to testing any kind of system in the DoD. And so, should we start approaching 3000.09, we have all the right people and all the right partnership in place who are going to raise their hand and say, ‘hey, we now have to worry about this,’” Pinelis said. “And we have all the people who are ready to red team any system that might need it.”
Mulchandani further reiterated that “every single service, every single combatant command, every single major [Defense] stakeholder team—whether it be policy or international relations, etc.—has a representative at the ESG,” also adding “so, that conversation is happening in that forum and obviously at the senior levels, all the time.”
Officials during the briefing argued that there are “few real AI-based systems” deployed on the battlefield, though as Mulchandani put it, many look like they are based on the tech because they are autonomous in some form.
“But whether they truly use the AI or not, I think is still questionable,” he said.
Expanding on work with autonomous capabilities on the ground, Army Col. Brad Boyd, JAIC’s director of joint warfighting, said officials are steering a project dubbed Smart Sensor, interacting closely with the Air Force and the Pentagon’s previously controversial Project Maven. They’re developing the Agile Condor pod capability, which Boyd said enables “essentially autonomous sensing, autonomous tracking of whatever you want in the battlefield, theoretically.”
“If you think of an [MQ-9 Reaper unmanned aerial vehicle] out there currently tethered to a ground station, Agile Condor pod will help us get that MQ-9 to continue to be able to operate if that tether is separated, so that it can … continue to look at the targets that it was looking at until the ground station is reestablished,” Boyd said. “So, that’s a critical capability and we’re in development with that.”
Beyond that work, officials shared how JAIC products now in prototype, testing or production advance automation, help U.S. Special Operations Command predict engine failures, and help U.S. Northern Command anticipate supply chain and logistics issues. The center’s work also supports the fight against fires in California and more.
The JAIC also recently awarded Deloitte a contract worth millions to build the Joint Common Foundation, an AI development environment to field the tech’s capabilities at scale across the Pentagon. Mulchandani said officials envision building the JCF on top of the Joint Enterprise Defense Infrastructure, or JEDI—the Pentagon’s long-delayed, recently re-awarded enterprisewide cloud computing capability—and other cloud environments.
“So we’re not envisioning this as a monolithic central system that is the only system that everyone will use,” Mulchandani said. “There’ll probably be multiple clouds across the [department], and I think a smaller number over time. But the JCF is built to be an environment that sits on top of [those platforms]. That’s the vision of it.”
During the symposium, Defense Secretary Mark Esper announced that the Pentagon is teaming up with defense organizations from more than 10 nations to launch new tools, resources and frameworks for data sharing and interoperability across the partnering militaries.
“We believe that this is in stark contrast to other countries like China or Russia that are actually doing work in AI around things like autonomous weapon systems that are being exported without policies and controls in place—which we think is incredibly dangerous because it proliferates this technology without adequate controls that we believe could be very, very harmful,” Mulchandani said. “We plan to be very transparent about our processes and pull everyone along with us on that front.”