Abstract

With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework to provide a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.

Details

Title
Trust in artificial intelligence: From a Foundational Trust Framework to emerging research opportunities
Author
Lukyanenko, Roman 1; Maass, Wolfgang 2; Storey, Veda C. 3

1 University of Virginia, Charlottesville, USA (GRID:grid.27755.32) (ISNI:0000 0000 9136 933X)
2 Saarland University and German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany (GRID:grid.11749.3a) (ISNI:0000 0001 2167 7588)
3 Georgia State University, Atlanta, USA (GRID:grid.256304.6) (ISNI:0000 0004 1936 7400)
Pages
1993-2020
Publication year
2022
Publication date
Dec 2022
Publisher
Springer Nature B.V.
ISSN
1019-6781
e-ISSN
1422-8890
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2772194940
Copyright
© The Author(s), under exclusive licence to Institute of Applied Informatics at University of Leipzig 2022.