
Ian R. Kerr [Archive]

ARCHIVED WEBSITE


Website maintained with the support of the Ian R. Kerr Memorial Fund at the Centre for Law, Technology and Society at the University of Ottawa

When AIs Outperform Doctors: Confronting the challenge of a tort-induced over-reliance on machine learning

June 3, 2019 CLTS

I wrote this piece in collaboration with my long-time pal and We Robot co-founder Michael Froomkin and our genius colleague in machine learning, Joëlle Pineau. In it, we observe that someday, perhaps soon, diagnostics generated by machine learning (ML) will have demonstrably better success rates than those generated by human doctors. In that context, we ask what the dominance of ML diagnostics will mean for medical malpractice law, for the future of medical service provision, for the demand for certain kinds of doctors and, in the long run, for the quality of medical diagnostics itself.

In our view, once ML diagnosticians are shown to be superior, existing medical malpractice law will require superior ML-generated medical diagnostics as the standard of care in clinical settings. Further, unless implemented carefully, a physician’s duty to use ML systems in medical diagnostics could, paradoxically, undermine the very safety standard that malpractice law set out to achieve.

Although at first the combination of doctor and machine may be more effective than either alone, because humans and ML systems tend to make very different kinds of mistakes, in time, as ML systems improve, their effectiveness could create overwhelming legal and ethical pressure to delegate the diagnostic process to the machine. Ultimately, a similar dynamic might extend to treatment as well. If we reach the point where the bulk of diagnoses recorded in clinical databases are ML-generated, future decisions may no longer be easily audited or even understood by human doctors.

Given the well-documented fact that treatment strategies often prove less effective in clinical practice than in preliminary evaluation, the lack of transparency introduced by ML algorithms could lead to a decrease in the overall quality of care. My co-authors and I describe the salient technical aspects of this scenario, particularly as it relates to diagnosis, and canvass various possible technical and legal solutions that would allow us to avoid these unintended consequences of medical malpractice law. Ultimately, we suggest there is a strong case for altering existing medical liability rules to avoid a machine-only diagnostic regime. We conclude that the appropriate revision to the standard of care requires maintaining meaningful participation by physicians in the loop.

Read the full article


Special thanks and much gratitude are owed to one of my favorite artists, Eric Joyner, for his permission to display a number of inspirational and thought-provoking works in the banner & background.

Contact the Centre for Law, Technology and Society | Creative Commons Licence