A Dogfight Renews Concerns About AI’s Lethal Potential


By Wired



August 26, 2020

U.S. Air Force fighter jets.

The Pentagon’s AlphaDogfight contest pitted artificial intelligence programs that direct an F-16 against an Air Force pilot.

Credit: Icon Sportswire

In July 2015, two founders of DeepMind, a division of Alphabet with a reputation for pushing the boundaries of artificial intelligence, were among the first to sign an open letter urging the world’s governments to ban work on lethal AI weapons. Notable signatories included Stephen Hawking, Elon Musk, and Jack Dorsey.

Last week, a technique popularized by DeepMind was adapted to control an autonomous F-16 fighter plane in a Pentagon-funded contest to show off the capabilities of AI systems. In the final stage of the event, a similar algorithm went head-to-head with a real F-16 pilot using a VR headset and simulator controls. The AI pilot won, 5-0.

The episode reveals DeepMind caught between two conflicting desires. The company doesn’t want its technology used to kill people. At the same time, publishing research and source code helps advance the field of AI and lets others build on its results, but it also allows others to adapt that code for their own purposes.

Others in AI are grappling with similar issues as more ethically questionable uses of the technology emerge, from facial recognition to deepfakes to autonomous weapons.

From Wired

View Full Article


No entries found
