Article
52 Mitchell Hamline L. Rev. 1 (2025)

Generative AI as Courtroom Evidence: A Practical Guide

By
Neal Feigenson & Brian Carney

You are the lawyer in a case in which the crucial incident was captured by dozens of smartphone, surveillance, and other cameras. Imagine your forensic video expert putting all of those videos into a generative artificial intelligence (GenAI) model that quickly synchronizes the audio and video streams, links relevant documents, and outlines a case strategy, enabling you to understand exactly what happened in minutes instead of weeks and suggesting ways to prove it at trial. The expert could also employ GenAI to enhance those videos, making relevant facts clearer by rendering blurry images more legible and barely audible conversations more intelligible, or even by creating new camera angles that show views not found in the original images. Or imagine, in a complex commercial dispute, feeding masses of documents and other data into a GenAI model that produces timelines and other visualizations of the relevant events, as well as lists of internal contradictions in the evidence, which you could then use to prepare your arguments and illustrate your theory of the case in court. All of these tools and more will soon be available.

Much has been written in the last half-dozen or so years about the prospect of images, video, and audio created with GenAI being used in court. Most of the concern has focused on deepfakes, and for good reason. Easily created and as convincing as genuine video or audio recordings, deepfakes have already harmed individuals (such as celebrities and ordinary women whose faces have been superimposed onto pornographic videos) and society (by exacerbating the spread of political disinformation, among other things). Needless to say, the courtroom use of a technology that makes it easy to depict events that never occurred would threaten fair and accurate adjudication, and various technological and procedural methods for identifying deepfakes and excluding them from the courtroom have been proposed.

Knowingly introducing a deepfake would almost always require an authenticating witness to commit perjury, a lawyer to commit fraud on the court, or both. The prospect of adverse rulings from the bench, loss of face and credibility, and professional sanctions, if not internalized ethical principles, should therefore keep most lawyers from doing so. Rather, GenAI is likely to affect evidentiary practice in other ways, none of which need involve any intent to deceive. Most, in fact, are likely to be driven by the desire to get at the facts and present them more clearly and accurately.