Negligent AI Speech: Some Thoughts About Duty

Journal of Free Speech Law

15 Pages. Posted: 14 May 2023. Last revised: 13 Oct 2023.

Jane R. Bambauer

University of Florida Levin College of Law; University of Florida - College of Journalism & Communication; University of Arizona - James E. Rogers College of Law

Date Written: April 28, 2023

Abstract

Careless speech has always existed on a very large scale. When people talk, they often give bad advice or wrong information, and occasionally this leads the listener to act in a way that causes physical harm. The scale was made more visible by the public Internet as the musings and conversations of billions of participants became accessible and searchable to all. This dynamic produced a set of tort and free speech principles that we have debated and adjusted to over the last three decades.

AI speech systems bring a new dynamic. Unlike the disaggregated production of misinformation in the Internet era, much of the production will be centralized and supplied by a small number of deep pocket, attractive defendants (namely, OpenAI, Microsoft, Google, and other producers of sophisticated conversational AI programs). When should these companies be held liable for negligent speech produced by their programs? And how should the existence of these programs affect liability between other individuals?

This essay begins to work out the options that courts or legislatures will have. I will explore a few hypotheticals that are likely to arise frequently, and then plot out the analogies that courts may make to existing liability rules. The essay focuses on duty, that is, whether under traditional tort principles the AI producer owes a legal duty of care to the injured party at all. Historically, duty rules have accommodated and absorbed First Amendment principles when the alleged act of negligence is pure expression. I consider hypotheticals and likely judicial responses to them in three clusters: (A) cases where the AI gives misinformation leading the user to harm herself; (B) cases where the AI gives misinformation leading the user to harm a third party (via the user’s conduct); and (C) cases where the user does not use AI, and if they had, it would have supplied useful information to avert physical harm.

In the end, I conclude that duty rules, if not modified for the AI context, could wind up missing the mark for optimal deterrence. They can be too broad, too narrow, or both at the same time, depending on how courts decide to draw their analogies.

Keywords: AI, ChatGPT, negligence, duty, common law, private law

Suggested Citation

Yakowitz Bambauer, Jane R., Negligent AI Speech: Some Thoughts About Duty (April 28, 2023). Journal of Free Speech Law, Available at SSRN: https://ssrn.com/abstract=4432822 or http://dx.doi.org/10.2139/ssrn.4432822

Jane R. Yakowitz Bambauer (Contact Author)

University of Florida Levin College of Law

P.O. Box 117625
Gainesville, FL 32611-7625
United States

University of Florida - College of Journalism & Communication

United States

University of Arizona - James E. Rogers College of Law

P.O. Box 210176
Tucson, AZ 85721-0176
United States
