AI loyalty: A New Paradigm for Aligning Stakeholder Interests

9 Pages · Posted: 20 Apr 2020 · Last revised: 7 Aug 2020

Anthony Aguirre

University of California, Santa Cruz

Gaia Dempsey

affiliation not provided to SSRN

Harry Surden

University of Colorado Law School

Peter Bart Reiner

Department of Psychiatry, University of British Columbia

Date Written: March 25, 2020

Abstract

When we consult with a doctor, lawyer, or financial advisor, we generally assume that they are acting in our best interests. But what should we assume when it is an artificial intelligence (AI) system that is acting on our behalf? Early examples of AI assistants like Alexa, Siri, Google, and Cortana already serve as a key interface between consumers and information on the web, and users routinely rely upon AI-driven systems like these to take automated actions or provide information. Superficially, such systems may appear to be acting according to user interests. However, many AI systems are designed with embedded conflicts of interest, acting in ways that subtly benefit their creators (or funders) at the expense of users. Unlike the relationship between an individual and a doctor, lawyer, or financial advisor, there is no requirement that AI systems act in ways that are consistent with users’ best interests. To address this problem, in this paper we introduce the concept of AI loyalty. AI systems are loyal to the degree that they are designed to minimize, and make transparent, conflicts of interest, and to act in ways that prioritize the interests of users. Properly designed, such systems could have considerable functional and competitive – not to mention ethical – advantages relative to those that do not. Loyal AI products hold an obvious appeal for the end-user and could serve to align the long-term interests of AI developers and customers. To this end, we suggest criteria for assessing whether an AI system is sufficiently transparent about conflicts of interest and acting in a manner that is loyal to the user, and argue that AI loyalty should be deliberately considered during the technological design process alongside other important values in AI ethics such as fairness, accountability, privacy, and equity. We discuss a range of mechanisms, from pure market forces to strong regulatory frameworks, that could support the incorporation of AI loyalty into a variety of future AI systems.

Keywords: AI assistant, fiduciary, loyalty, AI ethics

Suggested Citation

Aguirre, Anthony and Dempsey, Gaia and Surden, Harry and Reiner, Peter Bart, AI loyalty: A New Paradigm for Aligning Stakeholder Interests (March 25, 2020). U of Colorado Law Legal Studies Research Paper No. 20-18, Available at SSRN: https://ssrn.com/abstract=3560653 or http://dx.doi.org/10.2139/ssrn.3560653

Anthony Aguirre

University of California, Santa Cruz

1156 High St
Santa Cruz, CA 95064
United States

Gaia Dempsey

affiliation not provided to SSRN

Harry Surden

University of Colorado Law School

401 UCB
Boulder, CO 80309
United States

HOME PAGE: http://lawweb.colorado.edu/profiles/profile.jsp?id=316

Peter Bart Reiner (Contact Author)

Department of Psychiatry, University of British Columbia

2255 Wesbrook Mall
Vancouver, BC V6T 2A1
Canada
250.537.6560 (Phone)

HOME PAGE: http://peterbartreiner.com
