
Abstract

Along with the rapid development of information technology, interactions between artificial intelligence and humans are becoming increasingly frequent. In this context, a phenomenon called “medical AI aversion” has emerged, in which the same behaviors elicit different responses depending on whether they come from medical AI or from humans. Medical AI aversion can be understood in terms of how people attribute mind capacities to different targets. It has been demonstrated that when medical professionals dehumanize patients, attributing fewer mental capacities to them and, to some extent, not perceiving or treating them as fully human, they are more likely to choose painful but effective treatment options. From the patient’s perspective, will a painful treatment option be unacceptable when the doctor is perceived as human yet disregards the patient’s own mental abilities? Conversely, could a painful treatment plan be accepted because the doctor is an artificial intelligence? Based on the above, the current study investigated these questions and the phenomenon of medical AI aversion in a medical context. Three experiments showed that: (1) when faced with the same treatment plan, patients were more accepting when it was provided by a human doctor; (2) the treatment provider and the nature of the treatment plan interacted to affect acceptance of the plan; and (3) perceived experience capacities mediated the relationship between treatment provider (AI vs. human) and treatment plan acceptance. Overall, this study attempted to explain the phenomenon of medical AI aversion from the perspective of mind perception theory, and the findings have applied implications for guiding the more rational use of medical AI and for persuading patients.

Details

Title
Acceptance of Medical Treatment Regimens Provided by AI vs. Human
Author
Wu, Jiahua 1; Xu, Liying 1; Yu, Feng 2; Peng, Kaiping 1

1 Department of Psychology, School of Social Sciences, Tsinghua University, Beijing 100084, China; [email protected] (J.W.); [email protected] (K.P.)
2 Department of Psychology, School of Philosophy, Wuhan University, Wuhan 430079, China
First page
110
Publication year
2022
Publication date
2022
Publisher
MDPI AG
e-ISSN
2076-3417
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2618218855
Copyright
© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.