Automating FDA Regulation

67 Pages | Posted: 15 Dec 2021 | Last revised: 25 Jul 2022

Mason Marks

Florida State University - College of Law; Harvard Law School; Yale University - Information Society Project; Leiden University - Centre for Law and Digital Technologies

Date Written: December 8, 2021

Abstract

In the twentieth century, the Food and Drug Administration (“FDA”) rose to prominence as a respected scientific agency. By the middle of the century, it transformed the U.S. medical marketplace from an unregulated haven for dangerous products and false claims to a respected exemplar of public health. More recently, the FDA’s objectivity has increasingly been questioned. Critics argue the agency has become overly political and too accommodating to industry while lowering its standards for safety and efficacy. The FDA’s accelerated pathways for product testing and approval are partly to blame. They require lower-quality evidence, such as surrogate endpoints, and shift the FDA’s focus from premarket clinical trials toward postmarket surveillance, requiring less evidence up front while promising enhanced scrutiny on the back end. To further streamline product testing and approval, the FDA is adopting outputs from computer models, enhanced by artificial intelligence (“AI”), as surrogates for direct evidence of safety and efficacy.

This Article analyzes how the FDA uses computer models and simulations to save resources, reduce costs, infer product safety and efficacy, and make regulatory decisions. To test medical products, the FDA assembles cohorts of virtual humans and conducts digital clinical trials. Using molecular modeling, it simulates how substances interact with cellular targets to predict adverse effects and determine how drugs should be regulated. Though legal scholars have commented on the role of AI as a medical product that is regulated by the FDA, they have largely overlooked the role of AI as a medical product regulator. Modeling and simulation could eventually reduce the exposure of volunteers to risks and help protect the public. However, these technologies lower safety and efficacy standards and may erode public trust in the FDA while undermining its transparency, accountability, objectivity, and legitimacy. Bias in computer models and simulations may prioritize efficiency and speed over other values such as maximizing safety, equity, and public health. By analyzing FDA guidance documents and industry and agency simulation standards, this Article offers recommendations for safer and more equitable automation of FDA regulation.

Note:
Funding: None to declare.

Declaration of Interests: None to declare.

Keywords: artificial intelligence, AI, machine learning, natural language processing, FDA, Food and Drug Administration, administrative law, computer modeling, simulation, modeling and simulation, molecular modeling, in silico trials, digital clinical trials, AI ethics, non-delegation, PHASE model, kratom

Suggested Citation

Marks, Mason, Automating FDA Regulation (December 8, 2021). 71 Duke Law Journal 1207 (2022). Available at SSRN: https://ssrn.com/abstract=3980973 or http://dx.doi.org/10.2139/ssrn.3980973

Mason Marks (Contact Author)

Florida State University - College of Law

425 W. Jefferson Street
Tallahassee, FL 32306
United States

Harvard Law School

1563 Massachusetts Avenue
Cambridge, MA 02138
United States

Yale University - Information Society Project

P.O. Box 208215
New Haven, CT 06520-8215
United States

Leiden University - Centre for Law and Digital Technologies

P.O. Box 9520
2300 RA Leiden
Netherlands

Paper statistics

Downloads: 313
Abstract Views: 1,518
Rank: 178,196