Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and the Acceptability of Risk

35 Pages Posted: 3 Oct 2022

Johann Laux

University of Oxford - Oxford Internet Institute

Sandra Wachter

University of Oxford - Oxford Internet Institute

Brent Mittelstadt

University of Oxford - Oxford Internet Institute

Date Written: September 26, 2022

Abstract

Governments, international organisations, corporations, and other institutions around the globe are drawing up frameworks for ‘trustworthy’ Artificial Intelligence (AI). This effort follows an explicit strategic premise: raise the trustworthiness of AI and people will trust it more, use it more, and thus unlock the technology’s economic and social potential. With its proposed AI Act, the European Union (EU) has put itself at the forefront of this regulatory development. Adopting a risk-based approach towards AI, the EU chose to understand the trustworthiness of AI in terms of the acceptability of its risks. This conflation of trustworthiness with acceptability of risk invites further reflection. Based on a narrative systematic literature review on institutional trust and the use of AI in the public sector, this paper argues that the EU adopted a simplistic conceptualisation of trust and is overselling its regulatory ambition. The AI Act is a proposal for technocratic risk regulation which, by itself, is unlikely to effectively signal trustworthiness or raise actual levels of trust among citizens. This paper makes four contributions. First, it reconstructs the conflation of ‘trustworthiness’ with the ‘acceptability of risks’ in the EU’s AI policy. Second, given the extreme heterogeneity of trust research, the paper develops a prescriptive set of variables for reviewing trust research in the context of AI. Third, it uses those variables to structure a narrative review of prior research on trust and trustworthiness in AI in the public sector. Fourth, the paper relates the findings of the review to the EU’s AI policy, concluding that the prospects for the AI Act to succeed in engineering citizens’ trust are uncertain. There remains a threat of misalignment between levels of actual trust and the trustworthiness of applied AI. The conflation of ‘trustworthiness’ with the ‘acceptability of risks’ in the AI Act is thus shown to be inadequate.

Keywords: Artificial Intelligence, AI Act, Trust, Regulation, Risk, Law, European Union, Policy, Literature Review

JEL Classification: K32, L52, O38

Suggested Citation

Laux, Johann and Wachter, Sandra and Mittelstadt, Brent, Trustworthy Artificial Intelligence and the European Union AI Act: On the Conflation of Trustworthiness and the Acceptability of Risk (September 26, 2022). Available at SSRN: https://ssrn.com/abstract=4230294 or http://dx.doi.org/10.2139/ssrn.4230294

Johann Laux (Contact Author)

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Sandra Wachter

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Brent Mittelstadt

University of Oxford - Oxford Internet Institute ( email )

1 St. Giles
University of Oxford
Oxford, Oxfordshire OX1 3JS
United Kingdom

Paper statistics

Downloads: 857
Abstract Views: 3,808
Rank: 52,124