As robots and AIs become ever more present in public and private life, this piece addresses an increasingly pressing question: can robots or AIs operating independently of human intervention or oversight diminish our privacy? Here, I consider two equal and opposite schools of thought on the issue.
On the side of the robots, machines are beginning to outperform human experts in a growing array of narrow tasks, including driving, surgery, and medical diagnostics. This track record is fueling optimism that robots and AIs will one day exceed humans more generally and more spectacularly; some believe it will reach the point where we will have to consider their moral and legal status.
On the side of privacy, I consider the exact opposite position: that robots and AIs are, in a legal sense, nothing. The prevailing view holds that because robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law.
In this paper, I argue that robots and AIs operating independently of human intervention can, and in some cases already do, diminish our privacy. Using the framework of epistemic privacy, we can begin to understand the kind of cognizance that gives rise to diminished privacy. Because machines can act on the basis of the beliefs they form in ways that affect people's life chances and opportunities, I argue that they demonstrate the kind of awareness that definitively implicates privacy. I conclude that legal theory and doctrine will have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions. An increasing number of machines already possess these epistemic qualities, forcing us to rethink the privacy relationships between humans and robots and AIs.