Air Force uses artificial intelligence aboard military jet for very first time to ‘spot missiles’ during U-2 spy plane training mission - and names the AI co-pilot after R2D2 from Star Wars

The US Air Force used artificial intelligence (AI) aboard a military aircraft for the first time during a training flight in California on Tuesday.

The historic moment, which occurred at Beale Air Force Base, saw an artificial-intelligence algorithm take control of the radar sensors and tactical navigation systems of a U-2 Dragon Lady spy plane.

USAF said the AI system – named ARTUµ in tribute to the Star Wars droid R2D2 – was deliberately designed without a manual pilot override to ‘provoke thought and learning in the test environment.’

No weapons were carried during the sortie and the plane was still being steered by a pilot, but officials are celebrating the feat as a watershed moment in the Defense Department’s attempts to integrate AI into military aircraft.

‘We know that in order to fight and win in a future conflict with a peer adversary, we must have a decisive digital advantage,’ said Air Force Chief of Staff Gen. Charles Q. Brown, Jr. 

‘AI will play a critical role in achieving that edge, so I’m incredibly proud of what the team accomplished. We must accelerate change and that only happens when our airmen push the limits of what we thought was possible.’  

The historic moment, which occurred at Beale Air Force Base yesterday, saw an artificial-intelligence algorithm take control of the radar sensors and tactical navigation systems of a U-2 Dragon Lady spy plane

Maj. ‘Vudu’, a U-2 Dragon Lady pilot assigned to the 9th Reconnaissance Wing, gives a thumbs up as he prepares to taxi after returning from a training sortie at Beale Air Force Base

The pilot of the mission, referred to only by his call sign ‘Vudu’, explained to the Washington Post that the AI was given only very narrow tasks and that he was ‘still very much in control’.

However, he credited the intelligence system with ‘performing well’ on the tasks it was entrusted with, though he did not offer details of those specific tasks.

The demo reconnaissance mission simulated a missile strike at Beale and took place over two-and-a-half hours on Tuesday morning.

Assistant Air Force Secretary Will Roper said ARTUµ was trained against an opposing computer to spot incoming missiles and missile launchers.

ARTUµ searched for the enemy launchers while the pilot hunted for adversary aircraft. Both man and machine did so while sharing the U-2’s radar. 

The AI reportedly also had the ultimate authority over where to aim the spy plane’s sensors.

The Assistant Air Force Secretary explained the aim of the mission was to bring the Air Force a step closer to the idea of ‘man and machine teaming’, in which AI is responsible for limited technical tasks while human pilots remain in control of ‘life-or-death decisions’, such as weapons targeting.

‘This is really meant to shock the Air Force and the [Defense] Department as a whole into how seriously we need to treat AI teaming,’ Roper told the Post.

He added that the AI is not just a part of the system; rather, the USAF is ‘logging it in the pilot registry’.

The pilot of the mission, referred to only by his call sign ‘Vudu’, explained to the Washington Post that the AI was given only very narrow tasks and he was ‘still very much in control’

U-2 spy planes (above) were first developed for the CIA during the 1950s and were used throughout the Cold War to conduct surveillance from altitudes of up to 70,000 feet

The Air Force explained that ARTUµ is based on an open-source software algorithm, called µZero, that was later adapted to the plane’s computer systems at the U-2 Federal Laboratory. µZero is a world-leading computer program that can quickly master games such as Chess and Go, without prior knowledge of their rules.
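
The minimal Python sketch below is purely illustrative and has no connection to ARTUµ or to the real µZero code; the class and function names (LearnedModel, plan) are hypothetical. It shows the general idea described above: rather than being handed the rules, a µZero-style agent learns its own internal model of the environment and chooses actions by rolling that model forward.

```python
import random

# Purely illustrative sketch of a MuZero-style agent; not ARTUµ and not the real µZero.
# The agent is never given the environment's rules. Instead it relies on a learned
# model (represented here by placeholder methods) and plans by unrolling that model.

class LearnedModel:
    """Stand-in for µZero's learned representation, dynamics and prediction networks."""

    def initial_state(self, observation):
        # Encode a raw observation (board position, sensor frame, ...) into a hidden state.
        return observation

    def step(self, hidden_state, action):
        # Predict the next hidden state and an immediate reward for taking an action,
        # without consulting the environment's real rules.
        next_state = hash((hidden_state, action)) % 1000
        predicted_reward = random.random()
        return next_state, predicted_reward

    def value(self, hidden_state):
        # Predict how promising a hidden state is.
        return random.random()


def plan(model, observation, actions, depth=3):
    """Tiny lookahead search: score each action by unrolling the learned model."""

    def rollout(state, remaining):
        if remaining == 0:
            return model.value(state)
        return max(reward + rollout(next_state, remaining - 1)
                   for next_state, reward in (model.step(state, a) for a in actions))

    root = model.initial_state(observation)
    scores = {}
    for action in actions:
        next_state, reward = model.step(root, action)
        scores[action] = reward + rollout(next_state, depth - 1)
    return max(scores, key=scores.get)


if __name__ == "__main__":
    model = LearnedModel()
    print(plan(model, observation=42, actions=["steer-left", "steer-right", "scan"]))
```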

In an article published to Popular Mechanics, Roper wrote: ‘Completing over a million training runs prior, the flight was a small step for the computerized co-pilot, but it’s a giant leap for “computerkind” in future military operations.’

Roper said that, ‘though lacking those lively beeps and squeaks, ARTUµ surpassed its motion picture namesake in one distinctive feature: it was the mission commander, the final decision authority on the human-machine team. And given the high stakes of global AI, surpassing science fiction must become our military norm.’

In a press release, the Air Force also lauded Tuesday’s flight, calling it an overwhelming success which yielded ‘invaluable’ data. 

‘Blending expertise of a pilot with capabilities of machine learning, this historic flight directly answers the National Defense Strategy’s call to invest in autonomous systems,’ said Secretary of the Air Force Barbara Barrett. ‘Innovations in artificial intelligence will transform both the air and space domains.’

Roper concurred, pointing to the August victory of an AI pilot over an F-16 Top Gun pilot during a competition run by the Defense Advanced Research Projects Agency (DARPA).

The human pilot, a Weapons School instructor pilot known only by his call sign ‘Banger’, was annihilated by his artificially intelligent opponent in the showdown at the Johns Hopkins Applied Physics Laboratory in Laurel, Maryland, on August 20.

The AI system is named ARTUµ in tribute to the co-piloting Star Wars droid R2D2 (shown above)

Officials are celebrating the feat as a watershed moment in the Defense Department’s attempts to integrate AI into military aircraft (Gen. Mark Kelly, right, commander of Air Combat Command, and Command Chief Master Sgt. David Wade, ACC, receive a briefing from U-2 Federal Laboratory staff on December 4)

Roper believes the AI triumph is proof that an ‘autonomous future will eventually happen’, though he noted that today’s AI is too easily fooled by adversary tactics and that there is still much more work to be done.

‘As we complete our first generation of AI, we must also work on algorithmic stealth and countermeasures to defeat it,’ Roper wrote in Popular Mechanics. ‘Though likely as invisible to human pilots as radar beams and jammer strobes, they’ll need similar instincts for them—as well as how to fly with and against first-generation AI—as we invent the next.

‘Algorithmic warfare has begun,’ he added.

Former Google chief executive Eric Schmidt, who previously headed the Pentagon’s Defense Innovation Board, said he believes it’s unlikely the military will embrace fully autonomous weapons systems anytime soon.

‘If a human makes a mistake and kills civilians, it’s a tragedy. … If an autonomous system kills civilians, it’s more than a tragedy,’ Schmidt told the Post.

‘No general is going to take the liability of a system where they’re not really sure it’s going to do what it says. That problem may be fixed in the next several decades but not in the next year.’
