© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.

Abstract

In this paper, we present the potential of Explainable Artificial Intelligence (XAI) methods for decision support in medical image analysis scenarios. By applying three types of explainable methods to the same medical image data set, we aimed to improve the comprehensibility of the decisions provided by a Convolutional Neural Network (CNN). In vivo gastric images obtained by video capsule endoscopy (VCE) were the subject of visual explanations, with the goal of increasing health professionals’ trust in black-box predictions. We implemented two post hoc interpretable machine learning methods, Local Interpretable Model-Agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), and an alternative explanation approach, the Contextual Importance and Utility (CIU) method. The produced explanations were assessed by human evaluation: we conducted three user studies based on the explanations provided by LIME, SHAP and CIU. Users from different non-medical backgrounds completed a series of tests in a web-based survey and reported their experience and understanding of the given explanations. The three user groups (n = 20, 20, 20), each shown one form of explanation, were analyzed quantitatively. We found that, as hypothesized, the CIU explainable method performed better than both LIME and SHAP in supporting human decision-making and in being more transparent, and thus more understandable, to users. CIU also outperformed LIME and SHAP in the speed of generating explanations. Our findings suggest that there are notable differences in human decision-making between the various explanation support settings. Accordingly, we present three potential explainable methods that, with future improvements in implementation, can be generalized to different medical data sets and can provide effective decision support to medical experts.
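To illustrate the kind of post hoc explanation the abstract refers to, the following is a minimal, self-contained sketch of the perturbation idea behind LIME for images: mask random subsets of superpixels, query the black-box model on each perturbed image, and fit a linear surrogate whose coefficients rank superpixel importance. This is an assumption-laden toy (NumPy only, unweighted least squares, a stand-in `predict_fn` instead of the paper's CNN), not the authors' implementation or the `lime` library's API.

```python
import numpy as np

def lime_image_sketch(image, predict_fn, segments, num_samples=500, rng=None):
    """Toy LIME-style explanation (illustrative only).

    image      -- 2-D array of pixel intensities
    predict_fn -- black-box model returning a scalar score for an image
    segments   -- integer array, same shape as image, labeling superpixels
    Returns a dict mapping each superpixel id to an importance weight.
    """
    rng = np.random.default_rng(rng)
    seg_ids = np.unique(segments)
    k = len(seg_ids)
    # Binary design matrix: which superpixels are kept in each sample.
    Z = rng.integers(0, 2, size=(num_samples, k))
    Z[0] = 1  # include the unperturbed image as the first sample
    preds = np.empty(num_samples)
    for i, z in enumerate(Z):
        perturbed = image.copy()
        for j, keep in enumerate(z):
            if not keep:
                # "Hide" the superpixel by zeroing it out.
                perturbed[segments == seg_ids[j]] = 0
        preds[i] = predict_fn(perturbed)
    # Fit a linear surrogate (unweighted least squares for brevity;
    # real LIME weights samples by proximity to the original image).
    X = np.column_stack([np.ones(num_samples), Z])
    coef, *_ = np.linalg.lstsq(X, preds, rcond=None)
    return dict(zip(seg_ids, coef[1:]))
```

For a model whose score is the mean pixel intensity, the superpixel covering the bright region receives the large coefficient, matching the intuition that LIME highlights the regions driving the prediction.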

Details

Title
Explainable Artificial Intelligence for Human Decision Support System in the Medical Domain
Author
Samanta Knapič 1; Avleen Malhi 2; Rohit Saluja 3; Kary Främling 1

1 Department of Computer Science, Aalto University, Konemiehentie 2, 02150 Espoo, Finland; [email protected] (A.M.); [email protected] (R.S.); [email protected] (K.F.); Department of Computing Science, Umeå University, 90187 Umeå, Sweden
2 Department of Computer Science, Aalto University, Konemiehentie 2, 02150 Espoo, Finland; Department of Computing and Informatics, Bournemouth University, Poole BH12 5BB, UK
3 Department of Computer Science, Aalto University, Konemiehentie 2, 02150 Espoo, Finland; Department of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 11428 Stockholm, Sweden
First page
740
Publication year
2021
Publication date
2021
Publisher
MDPI AG
e-ISSN
2504-4990
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2576450933