[Helen Stamp is a PhD candidate in the Minderoo Tech & Policy Lab and researches concepts of control, responsibility, and accountability regarding the development and use of autonomous vehicles and autonomous weapons.]
The ongoing discussions within the international community regarding the challenges posed by emerging technologies, and the application of International Humanitarian Law (IHL) to the use of such technologies, are well known to those researching in this field. For some time, this area has been dominated by debate on the legality of developing and deploying Lethal Autonomous Weapons (LAWS) in situations of armed conflict; a debate pushed forward not only by concerns about the legality of these weapons but also by a fundamental questioning of the ethics of autonomous technology. Whilst this discussion is valid and necessary, it has almost completely sidelined debate regarding the military use of artificial intelligence more generally. Such debate is necessary given that the ongoing development and use of artificial intelligence is now prevalent in many facets of society, making the incorporation of artificial intelligence into military applications inevitable. It is also logical that artificial intelligence and its connection to IHL as a legal framework should be considered more broadly, given that this technology creates the very building blocks essential to the development of LAWS.
Hitoshi Nasu is very much aware of these limitations in the debate on artificial intelligence and IHL, and it was refreshing to read his chapter on ‘Artificial intelligence and the obligation to respect and to ensure respect for IHL’. In this chapter, Nasu explores the issues surrounding the military use of artificial intelligence generally without centring the discussion on LAWS (my conditioning to the current academic debate in this area was very much exposed when, on reading the title of the chapter, I assumed that most of it would be about LAWS!). Nasu adds an important additional factor to this discussion: the development and use of artificial intelligence for military purposes is not neatly confined to that particular use; indeed, quite the opposite, as artificial intelligence is usually first developed for commercial reasons and then adapted for military use where appropriate. This is an important shift in the dynamic, with Nasu noting that, previously, other new technologies were developed for military use and then commercialised where viable.
Nasu begins by observing the significant, long-term investment in the development of artificial intelligence for military application by a number of powerful States. This is important given that some academic debate in the area of emerging technologies for military use still seems predicated on the notion that such technology is akin to science fiction and entirely futuristic, rather than something that is already becoming an integrated part of the society we live in.
The creation of artificial intelligence in the commercial sector and its potential ‘dual use’ application in the military setting, as identified by Nasu, is very much the pivotal point of this chapter. Such artificial intelligence is now being commissioned, created and deployed in commercial confidence, without attracting any of the obligations under IHL, including those contained within Common Article 1, yet it will have the capability to be adapted for military application in a reasonably fluid way and at scale. With these factors in mind, consideration of how States can, both internally and externally, comply with the obligations under Common Article 1 in respect to the military use of artificial intelligence is of vital importance.
Nasu explains the main technologies currently incorporated into artificial intelligence systems and notes that ‘the algorithm-based probabilistic reasoning based on highly complicated statistical operations or geometric relationships that humans cannot visualise creates a “black box” problem – it is difficult for humans to predict the decision or output that AI produces, or understand its decision-making process’ (p. 134).
Whilst this remains correct to a certain extent, it would have been useful for Nasu also to explore the emerging literature, and the work of think tanks, on explainable AI and algorithmic accountability, which emphasises the human input and human decision making that usually goes into each stage of an artificial intelligence system’s lifecycle, from commissioning through development to deployment. This field is also considering how the attribution of human responsibility for decisions made throughout that lifecycle can be tracked and documented for compliance and accountability purposes, together with how ethical considerations can be embedded into such systems to minimise problems such as algorithmic bias and issues with the data sets used.
The context of the development and use of artificial intelligence also needs to be explored by considering a number of additional dynamics not always paired with conventional weaponry, including the role of transnational big tech companies in developing artificial intelligence, the associated politics, and the commercial sensitivities that are inherently part of this development. The concepts of control and responsibility have also fundamentally shifted as artificial intelligence is developed, raising questions as to how the different liability frameworks, including International Criminal Law, may be applied to address violations of IHL when these occur as the result of artificial intelligence incorporated into a weapon. This is an important context to take into account when considering the measures that States can take under Common Article 1 as the military use of artificial intelligence continues. For example, could greater accountability measures for the development of artificial intelligence for commercial application have flow-on benefits for regulation if it has a later military use? Could documentation of the processes and decisions made when creating and developing artificial intelligence systems be made available for review when such systems are later incorporated into a weapon?
Nasu also notes that ‘there will be a need to revisit the relevant rules of IHL in determining how those rules that have been developed to regulate the conduct of States and individuals might extend to the use of AI as it starts assuming the tasks that human beings traditionally performed on the battlefield’ (p. 135). Whilst the general consensus of the international community, at this time, is that IHL should continue to be the applicable legal framework for regulating LAWS, given the additional concerns and difficulties associated with regulating artificial intelligence generally, revisiting how IHL will apply to the military use of this technology will very much remain a work in progress.
In line with the overall theme of this book, Nasu also examines the practical measures that are currently available to regulate weapons and how these might apply to military applications of artificial intelligence.
Nasu explores the concept of due diligence incumbent upon States in complying with Common Article 1, and the limited range of measures currently available to exercise this diligence in respect to new technologies such as artificial intelligence. He notes that the application of weapons reviews under Article 36 of Additional Protocol I to artificial intelligence used for military purposes is limited: only a few States have systematic approaches to such reviews; some artificial intelligence technology may never be subject to review because it is developed independently of the military (as discussed earlier); and a particular artificial intelligence system used by the military may not qualify as a ‘weapon’ or ‘method of warfare’ and therefore not fall under Article 36 at all. Nasu also notes that Article 36 reviews consider the broad circumstances in which a weapon will be used and, as such, may miss some military uses of artificial intelligence, or fail to examine an artificial intelligence component when this is incorporated into a weapon.
Nasu also considers export controls applicable to weapons under regimes such as the Arms Trade Treaty and the Wassenaar Arrangement, and the obligation of States to consider, prior to export, whether a weapon could be used by another State for violations of IHL. Whilst both regimes are fairly limited in their application to the military use of artificial intelligence, Nasu notes that the Wassenaar Arrangement does extend to algorithms with certain functions.
In his conclusion, Nasu reiterates that the fundamental gap in complying with Common Article 1 in respect to artificial intelligence relates to technology which, whilst not originally developed to cause injury or death to a person, or damage to an object, is ‘capable of application that facilitates a serious violation of IHL’ (p. 141).
This chapter by Hitoshi Nasu is a very important contribution to the limited scholarship covering the regulation of artificial intelligence under IHL and International Criminal Law when this technology is used for military purposes. The development of such technology in the commercial sphere and its potential to then be incorporated into weapons and used for other military purposes, with relative ease and with little detection, presents a testing scenario for the obligation to respect and to ensure respect for IHL under Common Article 1.
The limitations of the practical measures currently available for states to use to comply with the obligations under Common Article 1 in respect to artificial intelligence, in the form of weapon reviews and export controls, are laid bare by Nasu and it is clear that there is much work that needs to be done in this area. In writing this chapter, Nasu, has taken the step of clearly identifying both the importance of this issue (amongst the noise of the debate regarding LAWS) and the need for the international community to actively consider how to ensure that respect for IHL is maintained as states increase their military use of artificial intelligence.