
Ian R. Kerr [Archive]

ARCHIVED WEBSITE



  • Website maintained with the support of the Ian R. Kerr Memorial Fund at the Centre for Law, Technology and Society at the University of Ottawa
  • Blog
  • About
    • Biography
    • Press Kit
    • Contact
  • Teaching
    • Approach
    • Contracts
    • Laws of Robotics
    • Building Better Humans
  • Publications
    • Books
    • Book Chapters
    • Journal Articles
    • Editorials
  • Research Team
  • Stuff

Schrödinger’s Robot: Privacy in Uncertain States

June 3, 2019 CLTS
[Image: the-gift_sm.jpg]

As robots and AIs become ever-present in public and private life, this piece addresses an increasingly relevant question: can robots or AIs operating independently of human intervention or oversight diminish our privacy? Here, I consider two equal and opposite schools of thought on this question.

On the side of the robots, we see that machines are starting to outperform human experts in a growing array of narrow tasks, including driving, surgery, and medical diagnostics. This track record is fueling a growing optimism that robots and AIs will one day exceed humans more generally and more spectacularly; some think to the point where we will have to consider their moral and legal status.

On the side of privacy, I consider the exact opposite: that robots and AIs are, in a legal sense, nothing. The prevailing view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law.

In this paper, I argue that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Using the framework of epistemic privacy, we can begin to understand the kind of cognizance that gives rise to diminished privacy. Because machines can act on the basis of the beliefs they form in ways that affect people's life chances and opportunities, I argue that they demonstrate the kind of awareness that definitively implicates privacy. I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions. Today, an increasing number of machines possess the epistemic qualities that force us to rethink our understanding of privacy relationships between humans and robots and AIs.

Read the full article

← The Death of the AI Author
When AIs Outperform Doctors: Confronting the challenge of a tort-induced over-reliance on machine learning →

Special thanks and much gratitude are owed to one of my favorite artists, Eric Joyner, for his permission to display a number of inspirational and thought-provoking works in the banner & background.

You can contact the Centre for Law, Technology and Society | Creative Commons Licence