AI Entities as AI Agents: Artificial Intelligence Liability and the AI Respondeat Superior Analogy

Artificial Intelligence (AI) based entities are already causing damage, injuries, and fatalities in today's commercial world. As a result, the debate over the tort liability of AI-based machines, algorithms, agents, and robots is advancing rapidly, both within the scholarly world and outside of it. When it comes to AI accidents, different scholars and key figures in the AI industry advocate for different liability regimes. This ever-growing disagreement condemns this emergent technology, soon to be found in almost every home and street in the US and around the world, to a realm of regulatory uncertainty. That uncertainty obstructs our ability to fully enjoy the many benefits AI has to offer us as consumers and as a society.
This Article advocates for the adoption and application of a strict liability regime for current and future AI accidents. It does so by exploring the realm of legal analogies in the AI context and promoting the agency analogy and, subsequently, the respondeat superior doctrine. The Article explains and justifies why the agency analogy is better suited than the other analogies that have been suggested in the context of AI liability (e.g., products, animals, electronic persons, and even slaves). The intuitive application of the respondeat superior doctrine thus provides the AI industry with a much-needed underlying liability regime that will enable the technology to continue to evolve in the years to come, and its victims to receive remedies once accidents occur.