The use of artificial intelligence in combat robots has sparked numerous debates, with scientists and entrepreneurs urging authorities worldwide to formulate a convention on the use of killer robots or ban them altogether.
Killer drones may have attacked people on their own for the first time, New Scientist wrote, citing a report from the United Nations Security Council’s Panel of Experts on Libya. The North African country has been mired in a civil war since the toppling of Muammar Gaddafi.
According to the documents, last year a Turkish-made drone, the STM Kargu-2, “hunted down” soldiers loyal to General Khalifa Haftar without being ordered to do so. The report did not say whether anyone was killed in the incident, but noted that if someone had been, it would mark the first known case of an artificial intelligence-based robot killing a person on its own.
According to the description on the manufacturer’s website, the Kargu-2 drone is designed for asymmetric warfare and anti-terrorist operations. It can be carried by a single person and operated in either autonomous or manual mode. The device uses machine learning to identify and attack targets, and carries an explosive charge for kamikaze-style strikes.
The news is likely to reignite the debate over autonomous killer robots, which has been going on for several years. Proponents of AI-based weapons say they will reduce the risk to civilians and will be critical in countering fast-moving threats.
Opponents of killer robots argue that the technology is so dangerous that it should be governed by an international convention; otherwise, they warn, the consequences could be catastrophic. Some individuals, including the late theoretical physicist Stephen Hawking and tech entrepreneur Elon Musk, have gone further, arguing that autonomous killer robots should be banned outright. The Tesla and SpaceX chief executive has even predicted a war involving AI and launched a project aimed at preventing it.
Opponents of AI-based robots also point out that such systems are “taught” using databases that are supposed to enable them to distinguish enemies from civilians. However, these databases often turn out to be biased. In 2018, an AI system used to recruit employees concluded that the best candidate for the job was a man called Jared who had played lacrosse in high school.
Last September, Microsoft President Brad Smith said the world needs a global convention on the use of autonomous weapons, emphasising that robots should “not be allowed to decide on their own to engage in combat and who to kill”. On Thursday he warned that George Orwell’s dystopian novel 1984 could become a reality in 2024 if authorities worldwide do not introduce regulations on AI.