Abstract

Introduction: Varicella, hand, foot, and mouth disease (HFMD), and measles are among the common causes of fever with rash in the pediatric age group. ChatGPT (OpenAI, San Francisco, CA, USA) and Gemini (Google LLC, Mountain View, CA, USA) are large language models (LLMs) that parents may use to understand their child’s condition. Therefore, given the growing popularity of artificial intelligence (AI) and LLMs and their ability to disseminate health information, assessing the quality and accuracy of their outputs is essential.

Materials and methods: A cross-sectional study was conducted on AI-generated responses for common causes of fever with rash in the pediatric age group, namely varicella, HFMD, and measles. ChatGPT and Gemini were used to generate patient education brochures. The responses were evaluated using the Flesch-Kincaid Calculator (Good Calculators: https://goodcalculators.com/), the QuillBot plagiarism tool (QuillBot, Chicago, IL, USA), and the modified DISCERN score. Statistical analysis was performed using R version 4.3.2 (R Foundation for Statistical Computing, Vienna, Austria, https://www.R-project.org/), and unpaired t-tests were used to compare the scores. A p-value of less than 0.05 was considered statistically significant.
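To illustrate the readability scoring and comparison steps described above, the following R sketch (R being the study's stated analysis environment) applies the standard Flesch Reading Ease and Flesch-Kincaid grade formulas and runs an unpaired t-test; the word, sentence, syllable, and grade values are hypothetical placeholders, not study data or the authors' actual scripts.

# Minimal sketch in R; all counts and scores below are hypothetical placeholders.

flesch_reading_ease <- function(words, sentences, syllables) {
  # Standard Flesch Reading Ease formula
  206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)
}

flesch_kincaid_grade <- function(words, sentences, syllables) {
  # Standard Flesch-Kincaid grade-level formula
  0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
}

# Example: a brochure with 420 words, 30 sentences, and 630 syllables
flesch_reading_ease(420, 30, 630)   # ease score
flesch_kincaid_grade(420, 30, 630)  # grade level

# Unpaired t-test comparing illustrative grade levels for the three conditions
chatgpt_grade <- c(8.2, 7.9, 8.5)
gemini_grade  <- c(7.8, 8.1, 8.0)
t.test(chatgpt_grade, gemini_grade, var.equal = TRUE)  # p < 0.05 taken as significant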

Results: ChatGPT generated a higher word count than Gemini (p=0.047). Differences between the two AI tools in sentence count, average words per sentence, average syllables per word, ease score, and grade level were statistically insignificant (p>0.05). The mean reliability score was 3/5 for Gemini versus 2.67/5 for ChatGPT, but the difference was statistically insignificant (p=0.725).

Conclusions: This study highlights that ChatGPT generates a higher word count than Gemini, a statistically significant finding (p=0.047). Additionally, there was no significant difference in the average ease score or grade level between the two tools for the common pediatric exanthematous conditions studied: varicella, HFMD, and measles. Future research should focus on improving AI-generated health content by incorporating real-time validation mechanisms, expert reviews, and structured patient feedback.

Details

Title
ChatGPT and Gemini for Patient Education: A Comparative Analysis of Common Pediatric Exanthematous Conditions
Author
Amrutha Reshi 1; Nikhil Arora 2; Tanupriya Singh 3

1 Psychiatry, Bhagawan Sri Balagangadharanatha Swamiji (BGS) Global Institute of Medical Sciences, Bengaluru, IND
2 Pediatrics, Government Medical College, Patiala, IND; Pediatrics, Guru Gobind Singh Medical College and Hospital, Faridkot, IND
3 Pediatric Medicine, The Doctor's Hub Polyclinic, Dubai, ARE
University/institution
U.S. National Institutes of Health/National Library of Medicine
Publication year
2025
Publication date
2025
Publisher
Springer Nature B.V.
e-ISSN
2168-8184
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
3214251965
Copyright
Copyright © 2025, Reshi et al. This is an open access article distributed under the terms of the Creative Commons Attribution License CC-BY 4.0, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.