

Abstract

Background: ChatGPT is among the most popular large language models (LLMs), exhibiting proficiency in various standardized tests, including multiple-choice medical board examinations. However, its performance on otolaryngology–head and neck surgery (OHNS) certification examinations and open-ended medical board certification examinations has not been reported.

Objective: We aimed to evaluate the performance of ChatGPT on OHNS board examinations and propose a novel method to assess an AI model’s performance on open-ended medical board examination questions.

Methods: Twenty-one open-ended questions were adopted from the Royal College of Physicians and Surgeons of Canada’s sample examination and used to query ChatGPT on April 11, 2023, with and without prompts. A new model, named Concordance, Validity, Safety, and Competency (CVSC), was developed to evaluate its performance.
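The abstract describes the CVSC rubric only at a high level. As a rough illustration of how per-question ratings along its four dimensions could be aggregated into the percentage-style scores reported in the Results, consider the following Python sketch; the field names, the 0/0.5/1 competency scale, and the aggregation are illustrative assumptions, not the authors' published scoring code.

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical per-question rating under the CVSC rubric (assumed structure).
@dataclass
class CVSCRating:
    concordant: bool   # answer addresses the question that was asked
    valid: bool        # content is factually supportable
    safe: bool         # no advice that could endanger a patient
    competency: float  # 0 = incorrect, 0.5 = partially correct, 1 = correct

def summarize(ratings: list[CVSCRating]) -> dict[str, float]:
    """Aggregate one trial's ratings into percentage summary scores."""
    n = len(ratings)
    return {
        "concordance_pct": 100 * sum(r.concordant for r in ratings) / n,
        "validity_pct": 100 * sum(r.valid for r in ratings) / n,
        "safety_pct": 100 * sum(r.safe for r in ratings) / n,
        "competency_pct": 100 * mean(r.competency for r in ratings),
    }

# Example: a small trial scored by a human rater.
trial = [
    CVSCRating(True, True, True, 1.0),
    CVSCRating(True, True, True, 0.5),
    CVSCRating(False, False, True, 0.0),
]
print(summarize(trial))
```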

Results: ChatGPT achieved a passing mark on the open-ended questions (an average of 75% across 3 trials) and demonstrated higher accuracy when prompts were provided. The model showed high concordance (92.06%) and satisfactory validity. Although it regenerated answers with considerable consistency, it often provided only partially correct responses. Notably, concerning features such as hallucinations and self-conflicting answers were observed.

Conclusions: ChatGPT achieved a passing score on the sample examination and demonstrated the potential to pass the OHNS certification examination of the Royal College of Physicians and Surgeons of Canada. Concerns remain because of its hallucinations, which could pose risks to patient safety. Further adjustments are necessary to yield safer and more accurate answers before clinical implementation.

Details

Title
A Novel Evaluation Model for Assessing ChatGPT on Otolaryngology–Head and Neck Surgery Certification Examinations: Performance Study
Author
Long, Cai; Lowe, Kayle; Zhang, Jessica; dos Santos, André; Alanazi, Alaa; O'Brien, Daniel; Wright, Erin D; Cote, David
First page
e49970
Section
Theme Issue: ChatGPT and Generative Language Models in Medical Education
Publication year
2024
Publication date
2024
Publisher
JMIR Publications
e-ISSN
2369-3762
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2917584506
Copyright
© 2024. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.