Killing Robots and Just War Theory

The justification of war as an objective phenomenon finds its expression in the traditional theory of just war. Theorists of this tradition argue that war can be ethical if it follows certain rules, which are divided into two parts. The first part, called Jus ad Bellum, sets out the moral principles under which a state can start a war while remaining morally in the right, so that the war itself can be justified. A consequence of Jus ad Bellum is that if all sides satisfied all of its moral principles, war could not happen at all; the very fact that a war has broken out is therefore evidence that at least one side has violated some of these principles. The side that acts in accordance with these principles is justified and, according to Jonathan Parry, “individuals commit no wrong by fighting in it” (Parry, 2020).

The second part of the just war principles is called Jus in Bello, and it describes the moral conditions and means under which the conduct of war can be justified. It focuses on specific rules that the fighting sides should follow during the war. Violations of these principles can be treated as war crimes and thus prosecuted. The principles are Discrimination, Proportionality, and Necessity.

Discrimination is the principle that gives combatants of one side moral permission to attack combatants of the other side, while prohibiting attacks on non-combatant targets. It rests on the assumption that non-combatants have given no consent to be involved in the war and have therefore done “nothing to lose their normal right not to be killed” (Parry, 2020).

In discussing just war theory, it is worth mentioning the reductivist approach, which rests on the assumption that “wars can be justified as a large-scale aggregation of individual justifications for killing in self-defense and in defense of others” (Parry, 2020). Reductivists focus on two aspects: the right against harm and the possibility of losing one’s right not to be killed or harmed, also known as the liability justification for killing. According to the liability justification, if the enemy side does not satisfy Jus ad Bellum, its combatants can be regarded as legitimate targets for killing or harm.

One sensitive aspect that Jonathan Parry discusses in his article is the legitimacy of the consent that combatants are taken to have given. Parry highlights the fact that combatants are usually very young people who are often forcibly recruited into military service. They typically come from underprivileged and less educated segments of society, lack wide access to information, and therefore cannot critically assess what it means to consent to risking their lives.

Another objection to the discrimination principle highlights that, although war is supposedly an action between states that does not involve civilians, this is not in fact true. According to the objection, civilians do contribute to wars both directly and indirectly: for example, by working near the front line, participating in supply chains, donating money, or supporting the political powers that direct the war. It therefore seems very problematic to draw a line within this gradient of involvement and to establish a stable immunity for non-combatants.

However, in order to protect civilians’ right not to be killed, Jonathan Parry holds that a target can be regarded as legitimate only if the person “directly poses an unjust threat” (Parry, 2020). Thus people who are not involved in a particular action of the armed conflict count as non-combatants, and their deaths cannot be legitimized.

The rapid development of military technologies exposes new challenges and risks: a possible armed conflict may no longer be regulated by the common just war principles, since they are not adapted to the new realities of warfare and to new types of weapons. Robert Sparrow, in his article ‘Killer Robots’, tackles the problem of deploying autonomous weapon systems and tries to review the basic moral aspects of these technologies.

Since the main role of the AI is to make decisions, there is a risk that “their actions will not be predictable” (Sparrow, 2007). The main focus of Sparrow’s paper is to identify the level of autonomy of new AI weapons and, accordingly, the level of Jus in Bello responsibility of all the parties involved in deploying and using these machines should a war crime happen. It is a requirement, “if war is going to be governed by morality at all — that someone should accept responsibility, or be capable of being held responsible” (Sparrow, 2007).

Sparrow’s paper does not give a definite answer to the question of whether war becomes more just when AI machines are involved. However, he highlights possible ambiguities in assessing the moral acceptability of wartime actions. If a war crime happens, can the machine be held responsible, given that responsibility implies punishment when a harmful mistake occurs? The basic argument is that machines are non-living agents that cannot experience suffering; they therefore cannot be punished and bear no responsibility for their actions.

As a matter of fact, Sparrow’s paper is based on an outdated understanding of AI technologies. For example, his attempt to treat the programmers who designed the algorithms as a locus of responsibility is no longer convincing. Modern AI systems do not rely on hard-coded decision logic; instead they use training data to build models that make decisions on the basis of previous experience. This effectively makes them black boxes whose logic is hidden by the complexity of the calculations and cannot be understood by people, so the programmers are detached from the decisions the machines make.
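To make the black-box point concrete, here is a minimal sketch (purely illustrative, not drawn from Sparrow’s paper): a small model is fitted to invented data, and its “decision logic” ends up stored as arrays of learned weights that no programmer explicitly wrote and that cannot be read off as human-understandable rules. The sensor readings and the “engage/hold” labels are hypothetical and exist only for the example.

```python
# Illustrative sketch: a learned model whose behaviour comes from training data,
# not from rules a programmer wrote by hand. All data here is invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 4))             # hypothetical sensor readings
y_train = (X_train[:, 0] + X_train[:, 2] > 0)   # hypothetical "engage/hold" labels

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)                     # behaviour is fitted, not coded

decision = model.predict(rng.normal(size=(1, 4)))  # a decision nobody hand-wrote
print(decision)

# The "logic" is just matrices of learned numbers; inspecting them does not
# reveal an explicit, human-readable rule behind any particular decision.
for layer_weights in model.coefs_:
    print(layer_weights.shape)
```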

Given this unpredictability, commanding officers also have no direct influence on the actions of these machines and therefore cannot be seen as responsible: “The more autonomous the systems are, the larger this risk looms” (Sparrow, 2007). Yet knowing that the machines can be unpredictable, and at the same time cannot be responsible for their actions, still places the responsibility on the people who decide to deploy them.

Can war become more just if we use AI machines? On one hand, if the machines were programmed to comply with Jus in Bello, it might be possible to avoid human factors such as anger or eagerness for revenge that often lead to war crimes. War could become more precise, the discrimination principle could be better satisfied, and large civilian losses could be avoided. On the other hand, the whole moral conception of just war is already ambiguous, and it may become even more so if the ambiguity of AI becomes part of the war scenario.

Reference list

Parry, J. (2020). War and Moral Consistency. In Hugh LaFollette (ed.), Ethics in Practice: An Anthology, (5th Edition), pp.692–703.

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), pp.62–77.
