Defence releases report on ethical use of AI

16 February 2021

Findings from a 2019 workshop on the ethics of Artificial Intelligence (AI) for Defence have been released to support scientific and technical considerations in the potential development of Defence policy, doctrine, research and project management.

The technical report, entitled A Method for Ethical AI in Defence, summarises the discussions from the workshop and outlines a pragmatic ethical methodology to enhance communication between software engineers, integrators and operators during the development and operation of AI projects in Defence.

Chief Defence Scientist Professor Tanya Monro said AI technologies offer many benefits, such as saving lives by removing humans from high-threat environments and improving Australia's advantage by providing more in-depth and faster situational awareness.

“Upfront engagement on AI technologies, and consideration of ethical aspects needs to occur in parallel with technology development,” Professor Monro said.

The significant potential of AI technologies and autonomous systems is being explored through the Science, Technology and Research (STaR) Shots of the More, together: Defence Science and Technology Strategy 2030, and supports the updated National Security Science & Technology Priorities.

“Defence research incorporating AI and human-autonomy teaming continues to drive innovation, such as work on the Allied IMPACT (AIM) Command and Control (C2) System demonstrated at Autonomous Warrior 2018 and the establishment of the Trusted Autonomous Systems Defence CRC.”

Air Vice-Marshal Cath Roberts, Head of Air Force Capability, said "artificial intelligence and human-machine teaming will play a pivotal role for air and space power into the future."

“We need to ensure that ethical and legal issues are resolved at the same pace that the technology is developed. This paper is useful in suggesting consideration of ethical issues that may arise to ensure responsibility for AI systems within traceable systems of control.”

“Practical application of these tools into projects such as the Loyal Wingman will assist Defence to explore autonomy, AI, and teaming concepts in an iterative, learning and collaborative way,” said AVM Roberts.

Supported by Defence Science and Technology Group, Plan Jericho and the Trusted Autonomous Systems Defence Cooperative Research Centre (TASDCRC), the 2019 workshop was attended by over 100 representatives from Defence, other Australian government agencies, industry, academia, international organisations and media.

Attendees contributed evidence-based hypotheses to the discussions, with a view to developing a report that offers a starting point for principles, topics and methods relevant to AI and autonomous systems in Defence contexts, and that informs military leadership and ethics. Consultation with Defence stakeholders after the workshop consolidated the report's outputs.

The ethics of AI and autonomous systems is an ongoing priority and Defence is committed to developing, communicating, applying and evolving ethical AI frameworks. Australia is proactive in undertaking Article 36 legal reviews on new weapons, means and methods of warfare.

A Method for Ethical AI in Defence is available at www.dst.defence.gov.au/ethicalai

Media contacts

Issued by Ministerial and Executive Coordination and Communication,
Department of Defence,
Canberra, ACT

Email: media@defence.gov.au
