
Ian R. Kerr [Archive]

ARCHIVED WEBSITE



  • Website maintained with the support of the Ian R. Kerr Memorial Fund at the Centre for Law, Technology and Society at the University of Ottawa
  • Blog
  • About
    • Biography
    • Press Kit
    • Contact
  • Teaching
    • Approach
    • Contracts
    • Laws of Robotics
    • Building Better Humans
  • Publications
    • Books
    • Book Chapters
    • Journal Articles
    • Editorials
  • Research Team
  • Stuff

Asleep at the switch? How killer robots become a force multiplier of military necessity

May 31, 2019 CLTS

This chapter was written in collaboration with one of my all-time favorite coauthors and friends, Katie Szilagyi.

Lethal autonomous weapons—machines that might one day target and kill people without human intervention or oversight—are gaining attention on the world stage. While their development, deployment, and perceived superiority over human soldiers are often presumed to be inevitable, in this chapter we challenge that prevalent view, arguing that the adoption of these technologies is not a fait accompli.

We begin by canvassing the state of the art in robotic warfare and the military advantages that autonomous weapons offer, aiming to scratch below the surface-level success of robotic warfare and to consider the drastic effects its implementation can have on international humanitarian law, adherence to humanitarian principles, and notions of technological neutrality. International humanitarian law governs the use of particular weapons and advancing technologies in order to ensure that the imperative of humanity modulates how war is waged. To protect civilians, military actions are restricted through compliance with humanitarian principles, including proportionality between collateral injuries and military advantage, discrimination between combatants and non-combatants, and military necessity for achieving concrete objectives.

This chapter suggests that serious and catastrophic consequences become foreseeable when robots are given full autonomy to pull the trigger in complicated, context-dependent situations, and that technological neutrality is not a safe presumption. We also argue that when a disruptive technology changes the nature of what is possible, there is a corresponding expansion in what can be perceived as “necessary,” allowing lethal autonomous robots to become a force multiplier of military necessity. Ultimately, we ask our readers to consider the consequences of a future with lethal autonomous robots, in which the power to deploy them lies in the hands of those who have not fully come to terms with their implications.

Download the full chapter.


Special thanks and much gratitude are owed to one of my favorite artists, Eric Joyner, for his permission to display a number of inspirational and thought-provoking works in the banner and background.

You can contact the Centre for Law, Technology and Society | Creative Commons Licence