“A ban on killer robots is the ethical choice”, Ottawa Citizen, July 31, 2015 C9
This opinion editorial, published by the Ottawa Citizen, describes the recent “Open Letter from AI & Robotics Researchers”, which calls for a ban on offensive autonomous weapons, and explains why I am a signatory. In addition to concerns about a global AI arms race, I argue that banning killer robots is the ethical choice because delegating life-or-death decisions to machines crosses a fundamental moral line. I further argue that playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. In the end, the decision whether to ban killer robots is not only a fundamental issue of human rights; it goes to the core of our humanity.
Full text of the article:
Internet pioneer Stewart Brand famously said: “Once a new technology rolls over you, if you’re not part of the steamroller, you’re part of the road.”
This unsettling prospect is a powerful motivator, instilling in many the desire to build ever bigger and better steamrollers.
Because, obviously, whoever builds the biggest steamroller wins. Right?
This mentality, and the existential risks that emerging technologies pose, are precisely what more than 16,000 AI researchers, roboticists and others in related fields are now seeking to avoid. Like the many chemists and biologists who provided broad support for the prohibition of chemical and biological weapons, these AI researchers and roboticists don’t want to see anybody steamrolled by killer robots.
That’s right. Killer robots.
Killer robots are offensive autonomous weapons that can select and engage targets without any need for human intervention. In an open letter recently presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, experts describe the prospect of killer robots as “the third revolution in warfare, after gunpowder and nuclear arms.” The letter calls for “a ban on offensive autonomous weapons” that can be engaged without meaningful or effective human control.
The list of signatories calling for a ban on offensive killer robots is impressive. Anyone who consumes popular media surely knows by now that it includes the likes of Tesla and SpaceX CEO Elon Musk, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn, physicist Stephen Hawking, and numerous highly influential academics such as Noam Chomsky and Daniel Dennett.
Unsurprisingly, the popular press has ignored a number of notable female signatories worthy of explicit mention (hat tip to Mary Wareham): Higgins Professor of Natural Sciences Barbara Grosz of Harvard University, IBM Watson design leader Kathryn McElroy, Martha E. Pollack of the University of Michigan, Carme Torras of the Robotics Institute at CSIC-UPC in Barcelona, Francesca Rossi of Padova University and Harvard University, Sheila McIlraith of the University of Toronto, Allison Okamura of Stanford University, Lucy Suchman of Lancaster University, Bonnie Webber of Edinburgh University, Mary-Anne Williams of the University of Technology Sydney, and Heather Roff of the University of Denver, to name a few.
I too am a signatory. I am a Canadian participant in the global Campaign to Stop Killer Robots (coordinated by Human Rights Watch in collaboration with eight other national and international NGOs). I am also a member of the International Committee for Robot Arms Control (an NGO committed to the peaceful use of robotics in the service of humanity).
As a technological concept, the killer robot represents a stark shift in military policy: a willful, intentional and unprecedented removal of humans from the kill decision loop. Just set the robots loose and let them do our dirty work.
For this reason and others, the United Nations has convened a series of meetings under its Convention on Certain Conventional Weapons, hoping to better understand killer robots and their social implications.
To date, the debate has mostly focused on three issues: How far off are we from developing advanced autonomous weapons? Could such technologies be made to comport with international humanitarian law? Could a ban be effective if some nations do not comply?
On the first issue, the open letter reveals the stunning fact that many technologists believe the robot revolution is “feasible within years, not decades, and the stakes are high.”
Of course, this is largely speculative, and the actual timeline is surely longer once one adds the requirements of the second issue: that killer robots must comport with international humanitarian law. That is, machine systems operating without human intervention must be able to: successfully discriminate between combatants and non-combatants in the moment of conflict; morally assess every possible conflict in order to justify whether a particular use of force is proportional; and comprehend and assess military operations sufficiently well to be able to decide whether the use of force on a particular occasion is of military necessity.
To date, there is no obvious solution to these non-trivial technological challenges.
However, in my view, it is the stance taken on the third issue — whether it would be efficacious to ban killer robots in any event — that makes this open letter profound. This is what made me want to sign the letter.
Although engaged citizens sign petitions every day, it is not often that captains of industry, scientists and technologists call for prohibitions on innovation of any sort — let alone an outright ban. The ban is an important signifier. Even if the letter is partly self-serving insofar as it seeks to avoid “creating a major public backlash against AI that curtails its future societal benefits,” its recognition that starting a military AI arms race is a bad idea quietly reframes the policy question of whether to ban killer robots as one of morality rather than efficacy. This is crucial, as it provokes a fundamental reconceptualization of the many strategic arguments that have been made for and against autonomous weapons.
When one considers the matter from the standpoint of morality rather than efficacy, it is no longer good enough to say, as careful thinkers like Evan Ackerman have said, that “no letter, UN declaration, or even a formal ban ratified by multiple nations is going to prevent people from being able to build autonomous, weaponized robots.”
We know that. But that is not the point.
Delegating life-or-death decisions to machines crosses a fundamental moral line — no matter which side builds or uses them. Playing Russian roulette with the lives of others can never be justified merely on the basis of efficacy. This is not only a fundamental issue of human rights. The decision whether to ban or engage killer robots goes to the core of our humanity.
The Supreme Court of Canada has had occasion to consider the role of efficacy in determining whether to uphold a ban in other contexts. I concur with Justice Charles Gonthier, who astutely opined:
“(T)he actual effect of bans … is increasingly negligible given technological advances which make the bans difficult to enforce. With all due respect, it is wrong to simply throw up our hands in the face of such difficulties. These difficulties simply demonstrate that we live in a rapidly changing global community where regulation in the public interest has not always been able to keep pace with change. Current national and international regulation may be inadequate, but fundamental principles have not changed nor have the value and appropriateness of taking preventive measures in highly exceptional cases.”
Killer robots are a highly exceptional case.
Rather than asking whether we want to be part of the steamroller or part of the road, the open letter challenges our research communities to pave alternative pathways. As the letter states: “AI has great potential to benefit humanity in many ways, and … the goal of the field should be to do so.”
In my view, perhaps the chief virtue of the open letter is its implicit recognition that scientific wisdom posits limits. This is something Einstein learned the hard way, prompting his subsequent humanitarian efforts with the Emergency Committee of Atomic Scientists. Another important scientist, Carl Sagan, articulated this insight with stunning, poetic clarity:
“It might be a familiar progression, transpiring on many worlds – a planet, newly formed, placidly revolves around its star; life slowly forms; a kaleidoscopic procession of creatures evolves; intelligence emerges which, at least up to a point, confers enormous survival value; and then technology is invented. It dawns on them that there are such things as laws of Nature, that these laws can be revealed by experiment, and that knowledge of these laws can be made both to save and to take lives, on unprecedented scales. Science, they recognize, grants immense powers. In a flash, they create world-altering contrivances. Some planetary civilizations see their way through, place limits on what may and what must not be done, and safely pass through the time of perils. Others, not so lucky or so prudent, perish.”
Recognizing the ethical wisdom of setting limits and living up to the demands of morality is difficult enough. Figuring out the practical means necessary to entrench those limits will be even tougher. But it is our obligation to try.