Ian Kerr holds the Canada Research Chair in Ethics, Law and Technology at the University of Ottawa, Faculty of Law. He is a founding member of the Centre for Law, Technology and Society and the Institute for Science, Society and Policy.
The presence of two great spirits emanates from a framed letter on my office wall. Dated November 29, 1947, and addressed to my grandfather, Louis Pearlman, the letter from the Emergency Committee of Atomic Scientists urges leaders in the community to help transform the newly created United Nations into an actual "federation of nations in which [we] might develop and use our creative capacities to serve [hu]mankind." Later that year, the authors of the letter would tour the U.S. to educate the public on the peaceful uses of nuclear energy. The letter is signed by Albert Einstein.
Most of us can only imagine what it would be like to lay the scientific groundwork for world-altering contrivances such as the atomic bomb or nuclear power. Especially after his famous letter to President Roosevelt in 1939, one imagines Einstein feeling an existential dread similar to that experienced by the pilot of the Enola Gay flying out of the shockwave on that infamous August day: total consternation at the recognition of our cataclysmic capacity for human slaughter.
Today we stand on the precipice of what world-renowned Australian artificial-intelligence expert Toby Walsh has called "the third revolution in warfare, after gunpowder and nuclear arms." We face the near-term possibility of autonomous weapons that would select, engage and kill military or police targets without any need for human intervention or oversight. If developed, they will permit armed conflict to be fought at a scale greater than ever, and faster than humans can comprehend.
The deadly consequence of this is that machines – not people – will determine who lives and dies. Unlike nuclear weapons, AI-guided weapons require no costly or hard-to-obtain raw materials. They are potentially ubiquitous and would be relatively cheap for all significant military (or police) powers to mass-produce.
Inspired by Einstein's letter to my grandfather, I recently launched a campaign alongside four internationally renowned Canadian AI researchers. Yoshua Bengio (University of Montreal), Geoffrey Hinton (University of Toronto), Rich Sutton (University of Alberta), Doina Precup (McGill University) and I have consolidated a deep consensus in Canada's AI research community.
In a letter delivered to the Prime Minister's Office last week, we exhort Justin Trudeau to join an international call to ban autonomous weapons that remove meaningful human control in the deployment of lethal force. More than 200 leading AI researchers have also signed the letter, which is now open for all Canadians to sign and have their say.
All who sign the letter are of the view that weaponizing AI is a very bad idea.
Weapons that remove meaningful human control from target-and-kill decisions sit on the wrong side of a clear moral line. We have therefore asked Canada to support the call to ban such weapons at the United Nations Convention on Certain Conventional Weapons conference, which convenes in Geneva on November 13.
We believe Canada should also commit to working with other states to conclude a new international agreement that achieves this objective. By doing so, our government can reclaim its position of moral leadership on the world stage as demonstrated previously by the Ottawa Treaty – the international ban on landmines initiated in 1996 by our then-minister of foreign affairs, Lloyd Axworthy.
The Canadian AI research community is not alone. As we sent the letter to our Prime Minister, Australia's AI researchers sent a similar letter to theirs. The Belgian AI research community will soon follow, as will experts in other countries.
Organizations such as the International Committee for Robot Arms Control, Human Rights Watch and Mines Action Canada have worked tirelessly on this issue for several years as part of a broader Campaign to Stop Killer Robots. The Future of Life Institute has co-ordinated similar previous letters, signed by leading scientists and the heads of more than 250 AI companies.
Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort, let alone an outright ban. But the Canadian AI research community is clear: We must not permit AI to target and kill without meaningful human control. Playing Russian roulette with the lives of others can never be justified. The decision on whether to ban autonomous weapons goes to the core of our humanity.
Canada has made its way onto the global stage in AI research, with applications developed here already underpinning improvements in infrastructure, transportation, education, health, business, the arts and the military. Recognizing AI's potential to become a driver of our economy, the Canadian government recently invested an unprecedented $125-million in AI research. However, as Yoshua Bengio, a co-author of the letter, likes to say: "Leading in AI also means acting responsibly about it."
Like Einstein's vision for the peaceful application of nuclear energy, Canada's creators of AI and those who study its ethical and legal ramifications want to develop AI as an engine of creation, not destruction.
You can sign our open letter to the Prime Minister here.