
Abstract

Background: The integration of artificial intelligence (AI) in medicine, particularly through AI-based language models such as Chat Generative Pre-trained Transformer (ChatGPT), offers a promising avenue for enhancing patient education and healthcare delivery. This study aims to evaluate the quality of medical information provided by ChatGPT regarding common orthopedic and trauma surgical procedures, assess its limitations, and explore its potential as a supplementary source for patient education.

Methods: Simulated patient information for 20 orthopedic and trauma surgical procedures was generated with the GPT-3.5-Turbo version of ChatGPT. Standardized information forms served as the reference for evaluating ChatGPT's responses. The accuracy and quality of the provided information were assessed using a modified DISCERN (mDISCERN) instrument, and a global medical assessment was conducted to categorize the information's usefulness and reliability.
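
As a rough illustration of the evaluation workflow described in the Methods, the sketch below shows how patient information could be requested from the GPT-3.5-Turbo model and scored against a reference keyword list. The prompt wording, the example procedure and keywords, and the case-insensitive substring matching are assumptions for illustration only; the study's actual prompts and scoring criteria are not reproduced here.

```python
# Minimal sketch, assuming the OpenAI Python client (>= 1.0) and an
# OPENAI_API_KEY in the environment; prompt text, procedure, and keyword
# list are hypothetical placeholders, not the study's materials.
from openai import OpenAI

client = OpenAI()

def generate_patient_info(procedure: str) -> str:
    """Request patient-education text for a procedure from GPT-3.5-Turbo."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Please explain the {procedure}, including its risks and "
                "aftercare, to a patient preparing for informed consent."
            ),
        }],
    )
    return response.choices[0].message.content

def keyword_mention_rate(text: str, keywords: list[str]) -> float:
    """Fraction of reference keywords mentioned in the generated text."""
    lowered = text.lower()
    hits = sum(kw.lower() in lowered for kw in keywords)
    return hits / len(keywords)

# Hypothetical example for one of the 20 procedures:
info = generate_patient_info("total hip arthroplasty")
rate = keyword_mention_rate(info, [
    "infection", "thrombosis", "dislocation", "nerve injury", "blood loss",
])
print(f"Keyword mention rate: {rate:.1%}")
```

Simple substring matching is the most basic possible coverage metric; a faithful replication would also need synonym handling and manual physician review, as described in the study.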

Results: ChatGPT mentioned an average of 47% of the relevant keywords across procedures, with per-procedure mention rates ranging from 30.5% to 68.6%. The average mDISCERN score was 2.4 out of 5, indicating moderate to low information quality. None of the ChatGPT-generated fact sheets were rated "very useful"; 45% were deemed "somewhat useful," 35% "not useful," and 20% "dangerous." Higher mDISCERN scores correlated positively with better physician ratings, suggesting that information quality directly affects perceived utility.
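
To show how such summary figures relate to per-procedure scores, the snippet below computes a mean mDISCERN score and a rank correlation between mDISCERN scores and physician ratings on placeholder data. The data values and the choice of Spearman's rank correlation are assumptions; the abstract does not report which correlation measure the authors used.

```python
# Placeholder data only; not the study's results. Spearman's rank correlation
# is an assumed choice, suitable for ordinal scores and ratings.
from statistics import mean
from scipy.stats import spearmanr

# (mDISCERN score 1-5, physician rating: 0 = dangerous, 1 = not useful,
#  2 = somewhat useful, 3 = very useful) for a handful of procedures
scores = [(3, 2), (2, 1), (2, 0), (4, 2), (1, 0), (3, 2), (2, 1), (3, 2)]

mdiscern = [m for m, _ in scores]
ratings = [r for _, r in scores]

print(f"Mean mDISCERN score: {mean(mdiscern):.1f} / 5")

rho, p_value = spearmanr(mdiscern, ratings)
print(f"Spearman rho between mDISCERN and rating: {rho:.2f} (p = {p_value:.3f})")
```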

Conclusion: While AI-based language models like ChatGPT hold significant promise for medical education and patient care, the current quality of information provided in the field of orthopedics and trauma surgery is suboptimal. Further development and refinement of AI sources and algorithms are necessary to improve the accuracy and reliability of medical information. This study underscores the need for ongoing research and development in AI applications in healthcare, emphasizing the critical role of accurate, high-quality information in patient education and informed consent processes.

Details

Title
Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery?
Publication title
Cureus; Palo Alto
Volume
16
Issue
5
Publication year
2024
Publication date
2024
Publisher
Springer Nature B.V.
Source
PubMed Central
Place of publication
Palo Alto
Country of publication
Netherlands
University/institution
U.S. National Institutes of Health/National Library of Medicine
e-ISSN
2168-8184
Source type
Scholarly Journal
Language of publication
English
Document type
Journal Article
Publication history
Online publication date
2024-05-15
Milestone dates
2024-05-15 (Accepted)
First posting date
15 May 2024
ProQuest document ID
3073831641
Document URL
https://www.proquest.com/scholarly-journals/does-information-quality-chatgpt-meet/docview/3073831641/se-2?accountid=208611
Copyright
Copyright © 2024, Kasapovic et al. This work is published under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.
Last updated
2024-07-01
Database
ProQuest One Academic