Abstract

[...] the integration of AI in healthcare may introduce new complexities, such as concerns about accuracy and reliability and uncertainty in AI regulation and medical liability. [...] healthcare providers remain skeptical about the utility of AI despite massive societal interest and increasing adoption by health systems [3]. Risk aversion refers to an individual's tendency to avoid taking risks when making decisions. Because AI is relatively novel in clinical workflows and possibly unfamiliar to clinicians, there is an inherent distrust of its decision-support capabilities. [...] to the extent that healthcare organizations using an AI tool already have resources to address the concerns it flags, better deployment of personnel, such as case managers or social workers to whom clinicians can directly refer patients, may enhance AI adoption. [...] increased adoption of AI needs to be accompanied by enhanced regulations that clearly address liability issues, which is currently an important but underdeveloped area.

Details

Title
Aligning incentives: the importance of behavioral economic perspectives in AI adoption
Author
Li, Jing 1; Navathe, Amol S. 2; Zhang, Yiye 3

1 University of Washington, Seattle, USA (GRID:grid.34477.33) (ISNI:0000 0001 2298 6657)
2 University of Pennsylvania, Philadelphia, USA (GRID:grid.25879.31) (ISNI:0000 0004 1936 8972)
3 Weill Cornell Medicine, New York, USA (GRID:grid.471410.7) (ISNI:0000 0001 2179 7643)
Pages
18
Publication year
2025
Publication date
Dec 2025
Publisher
Nature Publishing Group
e-ISSN
3005-1959
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3225848049
Copyright
Copyright Nature Publishing Group Dec 2025