A little-known aspect of the war in Ukraine is that both Russia and Ukraine have deployed weapons capable of being used fully autonomously: for Russia, Lancet drones; for Ukraine, Punisher drones. Both weapons can also be operated semi-autonomously, with a human in the loop, and it is not clear whether either country has used them fully autonomously. But the mere possibility indicates that, like it or not, we are drawing inexorably closer to the day when autonomous weapons systems (AWS) are regular participants in armed conflict.
It is quite timely, then, that I have just posted a short article on SSRN entitled "Beyond the Ban: Comparing the Ability of 'Killer Robots' and Human Soldiers to Comply with IHL." The article is co-authored with Lena Trabucco, a brilliant postdoctoral researcher at the Centre for Military Studies who is an expert in emerging weapons technologies. (She has a PhD in international law from the University of Copenhagen and a PhD in political science from Northwestern.)
Much of the legal debate over the use of autonomous weapons systems (AWS) focuses on whether AWS are capable of respecting basic principles of international humanitarian law (IHL). In one camp are the “techno-pessimists”: scholars who insist that AWS are not and never will be capable of complying with IHL. In the other camp are the “techno-optimists”: scholars who believe not only that AWS will eventually be able to comply with IHL, but also that the use of AWS can make armed conflict less violent.
There is, however, a puzzling lacuna at the heart of this debate. For all their differences, techno-pessimists and techno-optimists agree that the central question is whether AWS will ever be able to comply with IHL better than human soldiers. Yet the debate focuses almost exclusively on the technological limits of AWS; discussion of the cognitive errors that so often cause human soldiers to violate IHL is limited to anecdote, while systematic comparison of AWS and human soldiers in terms of their ability to comply with IHL is completely absent.
Using two scenarios involving the principle of distinction, this short article sketches the kind of research that is needed to determine whether AWS will ever be able to comply with IHL better than human soldiers. It suggests that, given human cognitive limits and machine technological potential, there is indeed reason to believe that in at least some combat situations AWS will eventually be able to outperform human soldiers in terms of IHL compliance.
As the abstract makes clear (and the article more so), although Lena and I reject as premature calls to "stop killer robots" through an international treaty and are unconvinced by the ethical argument that only humans should be allowed to take human life, we are by no means uncritically endorsing the development of AWS. We are instead defending a more pragmatic middle position, one that acknowledges the dangers of fully automated killing while insisting that debate over AWS must take into account the fact that soldiers often act irrationally, unpredictably, and violently on the battlefield because of cognitive limitations that are unique to humans.
Our article, it is important to note, is a prolegomenon of sorts to a much larger project — one that will conduct the kind of empirical research we call for in the article. For the larger project, Lena and I have joined forces with Ron Arkin, who is Regents Professor and Director of the Mobile Robot Laboratory at Georgia Tech. Ron is not only a leading roboticist; he has also written more extensively — and in our view more persuasively — than anyone else about the possibility of AWS eventually being able to reduce the amount of unnecessary violence on the battlefield.
As always, comments here or on Twitter are most welcome. Address all praise to me and all criticism to Lena.
PS: Our thanks to Chris Jenks for calling the use of Lancet and Punisher drones in Ukraine to our attention.