Abstract

Generative artificial intelligence (AI) chatbots such as ChatGPT have several potential clinical applications, but their use for clinical documentation remains underexplored. AI-generated clinical documentation presents an appealing solution to administrative burden but raises both new and longstanding ethical concerns that may be overlooked. This article reviews the potential use of generative AI chatbots for purposes such as note-writing, handoffs and prior authorisation letters, and the ethical considerations arising from their use in this context. AI-generated documentation may offer standardised and consistent documentation across encounters, but it may also embed biases that can spread across clinical teams relying on previous notes or handoffs, compromising clinical judgement, especially for vulnerable populations such as cognitively impaired or non-English-speaking patients. These tools may transform clinician–patient relationships by reducing administrative work and enhancing shared decision-making, but they may also compromise the emotional and moral elements of patient care. Moreover, the lack of algorithmic transparency may complicate the determination of responsibility when errors occur. To address these considerations, we propose notifying patients when the use of AI-generated clinical documentation meaningfully impacts their understanding of care, requiring clinician review of drafts, and clarifying areas of ambiguity to protect patient autonomy. Generative AI-specific legislation, error reporting databases and accountability measures for clinicians and AI developers can promote transparency. Equitable deployment requires training data that are representative of the populations served and that incorporate social determinants of health, alongside stakeholder engagement, cultural sensitivity in generated text and enhanced medical education.

Details

Title
Charting the ethical landscape of generative AI-augmented clinical documentation
Author
Sun, Qiwei Wilton 1; Miller, Jennifer 2; Hull, Sarah C 3

1 Yale School of Medicine, New Haven, CT, USA
2 Internal Medicine, Yale University School of Medicine, New Haven, CT, USA; Program for Biomedical Ethics, Yale School of Medicine, New Haven, CT, USA
3 Program for Biomedical Ethics, Yale School of Medicine, New Haven, CT, USA; Cardiology, Yale School of Medicine, New Haven, CT, USA
Publication title
First page
jme-2024-110656
Publication year
2025
Publication date
May 2025
Section
Current controversy
Publisher
BMJ Publishing Group Ltd
Place of publication
London
Country of publication
United Kingdom
ISSN
0306-6800
e-ISSN
1473-4257
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2025-05-27
Milestone dates
2024-12-13 (Received); 2025-05-16 (Accepted)
First posting date
27 May 2025
ProQuest document ID
3212715862
Document URL
https://www.proquest.com/scholarly-journals/charting-ethical-landscape-generative-ai/docview/3212715862/se-2?accountid=208611
Copyright
© Author(s) (or their employer(s)) 2025. No commercial re-use. See rights and permissions. Published by BMJ Group.
Last updated
2025-11-14
Database
ProQuest One Academic