Lords AI weapons committee holds first evidence session

The potential benefits of using artificial intelligence (AI) in weapons systems and military operations should not be conflated with better international humanitarian law (IHL) compliance, Lords have been told.

Established on 31 January 2023, the House of Lords AI in Weapon Systems Committee is exploring the ethics of developing and deploying autonomous weapons systems (AWS), including how they can be used safely and reliably, their potential for conflict escalation, and their compliance with international laws.

Also known as lethal autonomous weapons systems (LAWS), these are weapons systems that can detect, select and engage targets with little or no human intervention.

In its first evidence session on 23 March 2023, Lords heard from expert witnesses about whether the use of AI in weapon systems would improve or worsen compliance with IHL.

Daragh Murray, a senior lecturer and IHSS Fellow at Queen Mary University of London School of Law, for example, noted there is “a possibility” that the use of AI here could improve compliance with IHL.

“It can take a lot more information into account, it doesn’t suffer from fatigue, adrenaline or revenge, so if it’s designed properly, I don’t see why it couldn’t be better in some instances,” he said.

“For me, the big stumbling block is that we tend to approach AI systems from a one-size-fits-all perspective where we expect it to do everything, but if we break it down in certain situations – maybe identifying an enemy tank or responding to an incoming rocket – an AI system might be much better.”

However, he was clear that any accountability for an AI-powered weapon system’s operation would have to lie with the humans who set the parameters of deployment.

Georgia Hinds, a legal adviser at the International Committee of the Red Cross (ICRC), said that while she understands the potential military benefits offered by AWS – such as increased operational speed – she would strongly caution against conflating these benefits with improved IHL compliance.

“Something like [improved operational] speed actually could pose a real risk for compliance with IHL,” she said. “If human operators don’t have the actual ability to monitor and to intervene in processes, if they’re accelerated beyond human cognition, it means that they wouldn’t be able to prevent an unlawful or an unnecessary attack – and that’s actually an IHL requirement.”

She added that arguments around AWS not being subject to rage, revenge, fatigue and the like lack the empirical evidence to back them up.

“Instead what we’re doing is engaging in hypotheticals, where we compare a bad decision by a human operator against a hypothetically good outcome that results from a machine process,” she said.

“I think there are many assumptions made in this argument, not least of which is that humans necessarily make bad decisions, [and] it ultimately ignores the fact that humans are vested with the responsibility for complying with IHL.”

Noam Lubell, a professor at Essex Law School, agreed with Hinds and questioned where the benefits of military AI would accrue.

“Better for whom? The military side and the humanitarian side might not always see the same thing as being better,” he said. “Speed was mentioned but accuracy, for example, is one where I think both sides of the equation – the military and the humanitarian – can make an argument that accuracy is a good thing.”

Precision weapons debate

Lubell noted a similar debate has played out over the past decade in relation to “precision weapons” such as drones, whose use was massively expanded under the Obama administration.

“You can see that on the one hand, there’s an argument being made: ‘There’ll be less collateral damage, so it’s better to use them’. But at the same time, one could also argue that has led to carrying out military strikes in situations where previously it would have been unlawful because there would be too much collateral damage,” he said.

“Now you carry out a strike because you feel you’ve got a precision weapon, and there is some collateral damage, albeit lawful, but had you not had that weapon, you wouldn’t have carried out the strike at all.”

Speaking with Computer Weekly about the ethics of military AI, Elke Schwarz, professor of political theory and author of Death Machines: The Ethics of Violent Technologies, made a similar point, noting that more than a decade of drone warfare has shown that greater ‘precision’ does not necessarily lead to fewer civilian casualties, as the convenience enabled by the technology actually lowers the threshold for resorting to force.

“We have these weapons that allow us great distance, and with distance comes risklessness for one party, but it doesn’t necessarily translate into less risk for others – only if you use them in a way that is very pinpointed, which never happens in warfare,” she said, adding the effects of this are clear: “Some lives have been spared and others not.”

On the precision arguments, Hinds noted that while AWS are often equated with being more accurate, in the ICRC’s view the opposite is true.

“The use of an autonomous weapon, by its definition, reduces precision because the user actually isn’t choosing a specific target – they’re launching a weapon that’s designed to be triggered based on a generalised target profile, or a category of object,” she said.

“I think the reference to precision here generally relates to the ability to better hone in on a target and maybe to use a smaller payload, but that isn’t tied specifically to the autonomous function of the weapons.”

Human accountability

Responding to a Lords question about whether it would ever be appropriate to “delegate” decision-making responsibility to a military AI system, Lubell said we are not talking about a Terminator-style scenario in which an AI sets its own tasks and goes about achieving them, and warned against anthropomorphising language.

“The systems that we’re talking about don’t decide, in that sense. We’re using human language for a tool – it executes a function but it doesn’t make a decision in that sense. I’m personally not comfortable with the idea that we’re even delegating anything to it,” he said.

“This is a tool just like any other tool, all weapons are tools, we’re using a tool…there are solutions to the accountability problem that are based on understanding that these are tools rather than agents.”

Murray said he would also be very hesitant to use the word ‘delegate’ in this context: “I think we have to remember that humans set the parameters for deployment. So I think the tool analogy is a really important one.”

Hinds added that IHL assessments, particularly those weighing proportionality against the anticipated military advantage, rely heavily on value judgements and context-specific considerations.

“When you recognise someone is surrendering, when you have to calculate proportionality, it’s not a numbers game. It’s about what is the military advantage anticipated,” she said.

“Algorithms are not good at evaluating context, they’re not good at rapidly changing circumstances, and they can be quite brittle. I think in those circumstances, I would really query how we’re saying that there would be a better outcome for IHL compliance, when you’re trying to codify qualitative assessments into quantitative code that doesn’t respond well to these elements.”

Ultimately, she said IHL is about “processes, not results”, and that “human judgement” can never be outsourced.

AI for general military operations

All witnesses agreed that looking narrowly at the role of AI in weapons systems would fail to fully account for the other ways in which AI could be deployed militarily and contribute to the use of lethal force, and said they were particularly concerned about the use of AI for intelligence and decision-making purposes.

“I wouldn’t limit it to weapons,” said Lubell. “Artificial intelligence can play a critical role in who or what ends up being targeted, even outside of a particular weapon.”

Lubell added he is just as concerned, if not more so, about the use of AI in the early intelligence analysis stages of military operations, and how it will affect decision-making.

Giving the example of AI in law enforcement, which has been shown to further entrench existing patterns of discrimination in the criminal justice system due to the use of historically biased policing data, Lubell said he is concerned about “those problems repeating themselves when we’re using AI in the earlier intelligence analysis stages [of military planning]”.

The Lords present at the session took this on board and said that they would expand the scope of their inquiry to look at the use of AI throughout the military, and not just in weapon systems specifically.

Sebastian Klovig Skelton