Abstract
With the rise of artificial intelligence (AI), the issue of trust in AI emerges as a paramount societal concern. Despite increased attention from researchers, the topic remains fragmented, lacking a common conceptual and theoretical foundation. To facilitate systematic research on this topic, we develop a Foundational Trust Framework that provides a conceptual, theoretical, and methodological foundation for trust research in general. The framework positions trust in general, and trust in AI specifically, as a problem of interaction among systems, and applies systems thinking and general systems theory to trust and trust in AI. The Foundational Trust Framework is then used to gain a deeper understanding of the nature of trust in AI. From this analysis, a research agenda emerges that proposes significant questions to facilitate further advances in empirical, theoretical, and design research on trust in AI.
Author affiliations
1 University of Virginia, Charlottesville, USA (GRID:grid.27755.32) (ISNI:0000 0000 9136 933X)
2 Saarland University and German Research Center for Artificial Intelligence (DFKI), Saarbruecken, Germany (GRID:grid.11749.3a) (ISNI:0000 0001 2167 7588)
3 Georgia State University, Atlanta, USA (GRID:grid.256304.6) (ISNI:0000 0004 1936 7400)