© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

AI-assisted decision-making that impacts individuals raises critical questions about transparency and fairness in artificial intelligence (AI). Much research has highlighted the reciprocal relationship between transparency/explanation and fairness in AI-assisted decision-making. Considering their simultaneous effects on user trust and perceived fairness therefore benefits the responsible use of socio-technical AI systems, but this has so far received little attention. In this paper, we investigate the effects of AI explanations and fairness on human-AI trust and perceived fairness, respectively, in specific AI-based decision-making scenarios. A user study simulating AI-assisted decision-making in two scenarios (health insurance and medical treatment) provided important insights. Because of the global pandemic and the associated restrictions, the user studies were conducted as online surveys. From the participants' trust perspective, fairness was found to affect user trust only at a low fairness level, which reduced user trust. Adding explanations, however, helped users increase their trust in AI-assisted decision-making. From the perspective of perceived fairness, we found that a low level of introduced fairness decreased users' perceptions of fairness, while a high level of introduced fairness increased them. The addition of explanations clearly increased the perception of fairness. Furthermore, we found that the application scenario influenced both trust and perceptions of fairness. The results show that the use of AI explanations and fairness statements in AI applications is complex: we need to consider not only the type of explanation and the degree of fairness introduced, but also the scenario in which AI-assisted decision-making is used.

Details

Title
Fairness and Explanation in AI-Informed Decision Making
Author
Angerschmid, Alessa 1; Zhou, Jianlong 2; Theuermann, Kevin 3; Chen, Fang 4; Holzinger, Andreas 5

1 Medical Informatics, Statistics and Documentation, Medical University Graz, 8036 Graz, Austria; [email protected] (A.A.); [email protected] (A.H.)
2 Human-Centered AI Lab, University of Natural Resources and Life Sciences, 1190 Vienna, Austria; Human-Centered AI Lab, University of Technology Sydney, Sydney, NSW 2007, Australia; [email protected]
3 Doctoral School of Computer Science, Graz University of Technology, 8010 Graz, Austria; [email protected]
4 Human-Centered AI Lab, University of Technology Sydney, Sydney, NSW 2007, Australia; [email protected]
5 Medical Informatics, Statistics and Documentation, Medical University Graz, 8036 Graz, Austria; Human-Centered AI Lab, University of Natural Resources and Life Sciences, 1190 Vienna, Austria; Doctoral School of Computer Science, Graz University of Technology, 8010 Graz, Austria; xAI Lab, Alberta Machine Intelligence Institute, University of Alberta, Edmonton, AB T5J 3B1, Canada; [email protected] (A.H.)
First page
556
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2504-4990
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2679758101