Abstract

Aims

This study aimed to evaluate the performance of publicly available large language models (LLMs), ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI, in responding to research-related questions at the undergraduate nursing level. The evaluation was conducted across different platforms and prompt structures. The research questions were categorized according to Bloom’s taxonomy to compare the quality of AI-generated responses across cognitive levels. Additionally, the study explored the perspectives of the research team members on using AI tools in teaching foundational research concepts to undergraduate nursing students.

Background

Large language models (LLMs) could help nursing students learn foundational research concepts, but their performance in answering research-related questions has not been explored.

Design

An exploratory case study was conducted to evaluate the performance of ChatGPT-4o, ChatGPT-4o Mini and Perplexity AI in answering 41 research-related questions.

Methods

Three different prompts (Prompt-1: Unstructured with no context; Prompt-2: Structured from the professor’s perspective; Prompt-3: Structured from the student’s perspective) were tested. A valid, author-developed, 5-point Likert-type scale was used to assess all AI-generated responses across six domains: Accuracy, Relevance, Clarity & Structure, Examples Provided, Critical Thinking and Referencing.

Results

All three AI models generated higher-quality responses with structured prompts than with unstructured prompts and performed well across the different levels of Bloom’s taxonomy. ChatGPT-4o and ChatGPT-4o Mini answered research-related questions better than Perplexity AI.

Conclusion

AI models hold promise as supplementary tools for enhancing undergraduate nursing students’ understanding of foundational research concepts. Further studies are warranted to evaluate their impact on specific research-related learning outcomes within nursing education.


© 2025 Elsevier Ltd