Automatizing Patterns of Conduct: Can Artificial Intelligence Help Commanders Better Comply with the Principle of Distinction?

[Andrea Farrés is a young international lawyer specialized in IHL, international security and human rights issues.]

With the fog of war getting thicker and thicker, commanders and politicians are naturally inclined to search for tools that can guide them in better complying with international humanitarian law (IHL) targeting principles, specifically the principle of distinction.

To distinguish a civilian from a combatant, or from a person taking direct part in hostilities, analysing patterns of conduct has become crucial. As Article 44(3) of Additional Protocol I (AP I) to the Geneva Conventions acknowledges, "there are situations in armed conflicts where, owing to the nature of the hostilities an armed combatant cannot so distinguish himself."

With the use of new technologies on the rise, artificial intelligence (AI) is presented as being good at pattern recognition, which could lead to the conclusion that, by using this tool, commanders would increase their accuracy in the selection of targets. However, to what extent can we automatize patterns of conduct to comply with IHL? Would it be advisable to leave the decision on who is a combatant entirely to algorithms? Bearing these questions in mind, this post argues for human-machine teaming when it comes to applying the principle of distinction.

It does so, first, by acknowledging the proliferation of AI on the battlefield, explaining the cases of the SKYNET and URSA programmes; second, by analysing how the principle of distinction is laid out in IHL and whether it provides guidance for automatizing patterns of conduct through AI; and finally, by discussing whether AI is fit for this purpose, highlighting some opportunities and challenges worth keeping in mind while developing this new technology.

Proliferation of AI on the Battlefield

Although fully autonomous weapons, popularly known as killer robots, are still far from existing, autonomy on the battlefield is a much more established reality than one may assume at first. Hundreds of autonomous weapon systems of a wide variety, including "stationary turrets, missile systems, and manned or unmanned aerial, terrestrial or marine vehicles," with different autonomy levels (for instance, some of them using AI and others not), are already deployed on the battlefield. Thus, a wide range of warfare tasks are assisted by technology.

For the purposes of this post, I focus only on the systems which incorporate AI, as those are the ones which can automatize pattern recognition. Although only a handful of such programmes are publicly accessible, SKYNET and the Urban Reconnaissance through Supervised Autonomy (URSA) programme, developed by the United States Department of Defense, serve as good examples of this technology.

SKYNET consists of an AI machine-learning algorithm allegedly used to analyse the cellular network metadata of millions of people in Pakistan to identify couriers carrying messages between Al-Qaeda members; through the creation of patterns related to the use of mobile phones, it rated the likelihood of a person being a terrorist. This technique of cell phone data tracking, first tried by the US NSA in Afghanistan, has also been used in other conflicts, such as in Yemen. However, a report based on leaked documentation argued that a flaw in how this algorithm is trained to analyse such data allegedly produced unsound results. When analysing how SKYNET works, its supporters argue that programmes like this do not jeopardize human control, because it is only one of multiple tools which help the commander reach a better assessment of who is a combatant. Besides, it enables the collection of massive amounts of information to identify leads for a targeted attack, something which would not be possible through human capabilities alone. However, its critics emphasize the lack of relevant data ("ground truths") which can be fed into the machine, which could eventually lead to unsound results. Moreover, they stress that the process of data interpretation is not a neutral task, and that biases play an important role in developing new technology. For instance, patterns of conduct which would increase a person's rating as a terrorist according to SKYNET include a low use of one's phone or switching it off, conducts which would be automatically interpreted as attempts to evade mass surveillance.
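
To make this critique concrete, here is a minimal sketch, assuming hypothetical metadata features, invented numbers and a generic off-the-shelf classifier rather than the actual (classified) SKYNET pipeline, of how a model trained on only a handful of labelled "couriers" ends up flagging anyone whose phone habits merely resemble evasion:

```python
# Minimal, hypothetical sketch (not the real SKYNET pipeline): why sparse
# "ground truths" can produce unsound results. With only a handful of labelled
# positives, a classifier latches onto whatever separates those few examples,
# e.g. low phone use, frequent SIM swaps or switching the phone off.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Assumed metadata features per person:
# [calls_per_day, night_call_fraction, sim_swaps_per_year, days_phone_off_per_month]
n_civilians = 10_000
civilians = np.column_stack([
    rng.poisson(12, n_civilians),        # ordinary call volume
    rng.uniform(0.0, 0.3, n_civilians),  # few night calls
    rng.poisson(1, n_civilians),         # rare SIM swaps
    rng.poisson(1, n_civilians),         # phone rarely off
])

# Only seven labelled "couriers" -- the sparse ground truth the critics point to.
couriers = np.column_stack([
    rng.poisson(2, 7),                   # low phone use
    rng.uniform(0.4, 0.8, 7),            # more night calls
    rng.poisson(6, 7),                   # frequent SIM swaps
    rng.poisson(8, 7),                   # phone often off
])

X = np.vstack([civilians, couriers])
y = np.array([0] * n_civilians + [1] * 7)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# A journalist or aid worker who keeps a low profile and changes SIMs while
# travelling matches the learned pattern and receives a high "terrorist" score:
# a false positive produced entirely by the thin ground truth.
aid_worker = np.array([[2, 0.5, 5, 7]])
print(model.predict_proba(aid_worker)[0, 1])
```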

The URSA programme aims to "enable improved techniques for rapidly discriminating hostile intent and filtering out threats in complex urban environments". In other words, the DoD is "developing a program of high-tech cameras mounted on drones and other robots that monitor cities, which enable identification and discrimination between civilians and terrorists through machine learning computers." In this second example too, one can observe the importance of identifying patterns of conduct, especially in urban contexts, to separate civilians from combatants. However, while in the SKYNET programme AI is aimed at a very specific task (rating the likelihood of someone being a terrorist by analysing a pattern of conduct in the use of their phone), the URSA programme has a much broader goal, which comes with bigger challenges and opportunities. In this case, the defenders of this technology likewise argue that it is one additional tool that assists commanders, without jeopardizing the requirement of meaningful human control. Moreover, URSA enhances human capabilities, as, through a complicated system of sensors and AI, it separates innocent from hostile behaviour, processing huge amounts of data otherwise inaccessible. However, thorny questions also arise: which behaviours are deemed hostile or innocent? Is cultural sensitivity being considered? How can one ensure that the programme is fed with accurate information? Who is monitoring the process of data interpretation? Unfortunately, as the development of the programme is confidential, these questions remain unanswered for the general public.

Can the Principle of Distinction Laid Out in IHL Guide the Elaboration of Patterns Through AI?

In relation to International Armed Conflicts (IAC), Article 50 of AP I to the Geneva Conventions describes the category of the civilian population, which cannot be the object of any attack, in a negative sense. To determine who belongs to the armed forces, Article 44(3) of AP I abolishes the precondition of "having a fixed distinctive sign recognizable at a distance" in "situations where, owing to the nature of the hostilities, an armed combatant cannot distinguish himself from the civilian population", limiting the requirement to the obligation of carrying his arms openly. This paragraph contains an exception to the fundamental requirement that combatants distinguish themselves, a provision highly contested among States at the Diplomatic Conference. Even though, in Non-International Armed Conflicts (NIAC), common Article 3 to the Geneva Conventions does not define the term "civilians", the ICRC 2009 Guidance clarified that the same definition as for IACs applies. Therefore, as the category of civilians is constructed in a negative sense, there is no definition which can be inserted into AI programmes articulating what constitutes a civilian pattern of behaviour.

In the same vein, IHL provides little guidance on which behaviour amounts to civilians directly participating in the hostilities (DPH). If a civilian is found to be DPH, the conduct he or she undertakes "suspends his or her protection against the dangers arising from military operations," although no definition of what direct participation is can be found in IHL. To assess whether a civilian is DPH, the ICRC clarified that the threshold of harm, direct causation and the belligerent nexus need to be proved. To analyse these three requirements, contextual information, "the big picture", needs to be taken into consideration: for instance, "the tactical and strategic implications of a potential harm; the status of other potentially threatened individuals; the direct causal implications of someone's actions; or the sociocultural and psychological situation in which that individual's intentions and actions qualify as military actions." This means that, as an example, feeding the algorithm with the ground truth that "driving a truck full of ammunition" is a conduct which amounts to DPH would be incorrect, as this would be true only if the truck is going to a shooting position, and not from a factory to a port far from a conflict zone. Such an assessment requires taking contextual information into consideration, a capability out of reach for AI programmes.
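
To illustrate the gap, the following minimal sketch, with hypothetical field names and purely invented logic rather than any fielded system, contrasts a context-free "ground truth" rule with a context-based assessment along the lines the ICRC requires:

```python
# Minimal, hypothetical illustration: a context-free "ground truth" such as
# "driving a truck full of ammunition = DPH" flags the observable conduct alone,
# whereas an ICRC-style assessment (threshold of harm, direct causation,
# belligerent nexus) cannot be completed without contextual information.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Observation:
    conduct: str                           # what a sensor actually sees
    destination: Optional[str]             # contextual information, often unknown
    linked_to_hostilities: Optional[bool]  # belligerent nexus, often unknown

def naive_dph_flag(obs: Observation) -> bool:
    # Context-free rule, as it might be encoded as a "ground truth".
    return obs.conduct == "driving ammunition truck"

def context_based_assessment(obs: Observation) -> str:
    # The contextual fields decide the outcome; if they are missing,
    # no determination can be made.
    if obs.destination is None or obs.linked_to_hostilities is None:
        return "cannot determine: contextual information required"
    if obs.destination == "shooting position" and obs.linked_to_hostilities:
        return "direct participation in hostilities"
    return "civilian activity: protected"

truck_to_front = Observation("driving ammunition truck", "shooting position", True)
truck_to_port = Observation("driving ammunition truck", "port far from conflict", False)
truck_unknown = Observation("driving ammunition truck", None, None)

for obs in (truck_to_front, truck_to_port, truck_unknown):
    # The naive rule flags all three identically; the context-based one does not.
    print(naive_dph_flag(obs), "|", context_based_assessment(obs))
```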

Using AI to Increase Compliance with IHL: Challenges and Opportunities for Human-Machine Teaming

The SKYNET and URSA programmes are among the few examples relatively open to the public which use AI to create patterns aimed at better compliance with the principle of distinction. Bearing these examples in mind, and how the principle of distinction is laid out by the Geneva Conventions, this post concludes by highlighting some of the challenges and opportunities that automatizing patterns of conduct could create.

When considering how to automatize a pattern of conduct through AI, the first challenge one encounters is that the Geneva Conventions do not provide any hint as to how a civilian behaves or what he or she looks like: civilians are described by opposition and in a negative sense, leaving us with no information for identifying them based on their behaviour. Relatedly, cultural sensitivity is a much-needed skill when coming up with patterns of conduct. For instance, in various parts of the world civilians carry weapons for self-defence purposes, a pattern of conduct a Western technologist could easily qualify as linked to the hostilities if he or she has no relevant training related to the areas in which the technology is planned to be deployed. In this regard, it is important for the developers of these new technologies to bear in mind the need to be culturally inclusive and to ensure that the reality on the ground is understood, so that the ground truths inserted into the algorithm enable the protection of civilians.
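
As a loose illustration, the following sketch, with invented labels and region names rather than any real pattern library, shows how a culturally blind "ground truth" table diverges from one informed by local expertise:

```python
# Purely hypothetical sketch of how a culturally blind label table skews the
# "ground truths" fed to an algorithm: the same observable conduct maps to very
# different indications of hostility depending on the region, which a table
# written without local expertise simply ignores.
CONDUCT = "openly carrying a rifle"

# As a technologist without regional training might encode it.
culturally_blind_labels = {CONDUCT: "hostile"}

# As regional experts might encode it: where civilians customarily carry
# weapons for self-defence, the conduct alone is not a reliable indicator.
culturally_informed_labels = {
    ("region where civilians customarily carry weapons", CONDUCT): "not indicative on its own",
    ("region where civilian weapon carrying is rare", CONDUCT): "possible indicator, context needed",
}

print(culturally_blind_labels[CONDUCT])
print(culturally_informed_labels[("region where civilians customarily carry weapons", CONDUCT)])
```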

Creating patterns of behaviour on what does or does not constitute a DPH act also entails the risk of leaving the "big picture" out of the assessment, an element deemed necessary by the ICRC when evaluating the threshold of harm, direct causation and the belligerent nexus. Also, IHL states that the mere presence of military personnel or civilians DPH among the civilian population does not deprive the population of its protection from attack, favouring the need for military commanders to issue context-based decisions. Furthermore, identifying persons hors de combat and combatants or civilians DPH who are surrendering also requires a contextual analysis and the ability to interpret human intentions. As previously stated, AI inherently lacks both capabilities, which according to IHL are necessary. Therefore, when dealing with human-machine teaming to analyse the patterns of conduct created by AI, the role of commanders is to study this data while ensuring meaningful human control by bringing in the contextual information and the interpretation of human intentions.

On another level, there are technological challenges related to AI which hinder an accurate automatization of patterns of conduct, such as the need for extensive databases with which to feed the algorithm, or automation bias, which refers to the natural inclination of humans to believe what the computer suggests, jeopardizing meaningful human control over targeting decisions. As the critics of SKYNET pointed out, machine learning needs massive amounts of data to come up with accurate patterns. However, there does not seem to be enough evidence on "how a terrorist behaves" to produce sound results. Nevertheless, this challenge could be turned into an interesting opportunity to gather experts from several fields to discuss which conducts amount to DPH, so that we could move from having a handful of examples to draw on to having sound generalizations which would be welcome at the legal level and would facilitate the development of patterns through AI.

In relation to the opportunities the use of AI offers, the potential reduction of confirmation bias is an interesting one. This bias is present not only when AI is used, as it is a typical characteristic of human thinking: when a person holds a certain belief (for example, that X behaviour is typical of a combatant), he or she will focus on all the clues that confirm that belief, omitting the opposing information that would challenge this perception. This is more likely to happen when little evidence is available, which can often be the case, as has been demonstrated by some studies related to wrongful drone strikes. In this regard, the development of AI could reduce confirmation bias if used as an additional tool for the commander in reaching a targeting decision, as it would enable him or her to work with more information than he or she would be able to process alone, and because the recommendation made by the algorithm would be the collective result of the work of lawyers, roboticists and technologists, suggesting a result not linked to the particular beliefs of the commander.

In conclusion, one should bear in mind that although AI can be better than humans at generating information, human capabilities to analyse it are still much more accurate than those of a "robot". Therefore, while developing human-machine teams, the existence of biases should be taken into account, as well as the limitations of current AI capabilities, such as the lack of contextual awareness or the little information available from which to draw reliable patterns.
