Automated Vehicles, Moral Hazards & the 'AV Problem'

5 Notre Dame J. Emerging Tech. 1 (2023)

University of Miami Legal Studies Research Paper No. 3902217

45 Pages · Posted: 11 Aug 2021 · Last revised: 30 Sep 2023

William H. Widen

University of Miami - School of Law

Date Written: August 9, 2021

Abstract

The autonomous vehicle (“AV”) industry faces the following ethical question: “How do we know when our AV technology is safe enough to deploy at scale?” The search for an answer to this question is the “AV Problem.” This essay examines that question through the lens of the Form S-4 filed with the Securities and Exchange Commission on July 15, 2021 in the going-public transaction for Aurora Innovation, Inc.

The filing reveals that successful implementation of Aurora’s business plan in the long term depends on the truth of the following proposition: A vehicle controlled by a machine driver is safer than a vehicle controlled by a human driver (the “Safety Proposition”).

In a material omission to which securities law liability may attach, the S-4 fails to state Aurora’s position on deployment: will Aurora delay deployment until it believes the Safety Proposition is true to a reasonable certainty, or will it deploy at scale earlier in the hope that increased current losses will be offset by anticipated future safety gains?

The Safety Proposition is a statement about physical probability that is either true or false. For success, AV companies need the public to believe the Safety Proposition, yet belief is not the same as truth. The difference between truth and belief creates tension in the S-4 because the filing both fosters belief in the Safety Proposition and makes clear that there is insufficient evidence to support its truth.

A moral hazard results when financial pressures push for early deployment of AV systems before evidence shows that the Safety Proposition is true to a reasonable certainty. The essay analyzes this problem by comparison with the famous trolley problem in ethics and by consideration of corporate governance techniques that an AV company might use to ensure the integrity of its deployment decision process. The AV industry works to promote belief in the Safety Proposition in the hope that the public will accept that AV technology has benefits, thus avoiding the need to confront the truth of the Safety Proposition directly. This hinders a meaningful public debate about the merits and timing of deploying AV technology, raising the question of whether there is a place for meaningful government regulation.

Keywords: Artificial intelligence, Autonomous vehicles, Business ethics, Consequentialism, Disclosure, Ethics, Machine ethics, Material omission, Moral agent, Moral authority, Moral hazard, Moral machine, Self-driving cars, SEC disclosure, Securities regulation, Trolley problem, Utilitarianism

JEL Classification: K22

Suggested Citation

Widen, William H., Automated Vehicles, Moral Hazards & the 'AV Problem' (August 9, 2021). 5 Notre Dame J. Emerging Tech. 1 (2023), University of Miami Legal Studies Research Paper No. 3902217, Available at SSRN: https://ssrn.com/abstract=3902217 or http://dx.doi.org/10.2139/ssrn.3902217

William H. Widen (Contact Author)

University of Miami - School of Law

P.O. Box 248087
Coral Gables, FL 33146
United States

Paper statistics

Downloads: 216
Abstract Views: 1,061
Rank: 256,055