By Branka Marijan, Contributor
Fri., Oct. 9, 2020
If Canada intended not to make waves at the United Nations meetings on lethal autonomous weapons, otherwise known as killer robots, held just over a week ago, then mission accomplished.
One of the few countries not to make an individual or joint statement at the gathering in Geneva, Canada only took the floor once, to clarify that the chair had called on it by mistake.
Less than a year ago, however, Minister of Foreign Affairs François-Philippe Champagne had been specifically mandated to support international efforts to ban fully autonomous weapons.
In the mandate letter dated Dec. 13, 2019, the foreign minister was advised to “advance international efforts to ban the development and use of fully autonomous weapons systems.” This clear instruction, which pertains to weapons whose critical functions, namely the selection and engagement of targets, are performed independent of human control, was unexpected but warmly welcomed by supporters of a ban on such weapons.
Yet at the principal multilateral forum in the world in which this mandate could be fulfilled, Canada chose silence over engagement. Why?
For some time, artificial intelligence (AI) experts have been calling on the Canadian government to join international efforts to ban autonomous weapons.
In November 2017, leading Canadian AI researchers, including computer scientist Dr. Yoshua Bengio, sent an open letter to Prime Minister Justin Trudeau expressing their fear “that machines — not people — will determine who lives and dies.” The letter went on to say that “Canada’s AI community does not condone such uses of AI. We want to study, create and promote its beneficial uses.” The letter has never been answered.
But the mandate given to Champagne finally seemed to indicate that the Canadian government had been listening all along and was going to act. Since the UN began discussions on autonomous weapons in 2014, Canada has generally chosen to remain on the sidelines.
These discussions are not pie-in-the-sky and Canada’s continuing reluctance to state its position clearly could have serious consequences. Research that will lead to greater autonomy is ongoing. In the past year, the United States Navy tested a network of AI systems in which the human commander simply gave the order to fire. The targets were determined by the AI systems.
As the level of human control over weapons diminishes, the question of who is responsible for what the weapons could do becomes more compelling — and less easy to answer. And thus the question of how to maintain international standards of warfare becomes more acute.
The international discussions in Geneva will resume this November. Countries including Russia, which hope to benefit from autonomous weapons, will almost certainly attempt to derail any process that imposes restrictions.
Still, the march to autonomous weapons is not inevitable and is not under the sole control of major powers like Russia. But regulation of such weapons development will not happen if countries like Canada remain silent. Silence, in this case, is not without consequence.
Branka Marijan is a senior researcher at Project Ploughshares.