Background:
Artificial intelligence (AI) is increasingly integrated into nursing education, yet limited research exists on students' perceptions of AI use in nursing research courses. This study explored undergraduate nursing students' experiences using AI tools (ChatGPT and U-M GPT) in research courses.
Method:
This qualitative descriptive study used Braun and Clarke's six-step thematic analysis. Data were collected through anonymous written reflections from students enrolled in nursing research courses at two universities.
Results:
Four key themes emerged: (1) mixed initial and shifting final impressions; (2) AI strengths and limitations; (3) ethical concerns; and (4) use of AI in health care. Participants noted the potential of AI to enhance efficiency and provide feedback but expressed concerns about reliability, ethical use, and overreliance.
Conclusion:
Structured exposure to AI fostered positive perceptions of AI as a learning tool. Findings highlight the need for ethical frameworks and guidelines to support responsible integration, with future research exploring AI literacy and its implications for nursing education.
Artificial intelligence (AI) has rapidly become an integral component of various sectors, including health care and education. AI refers to the development of computer systems capable of performing tasks that typically require human intelligence, such as reasoning, problem solving, learning, and decision making (Montejo et al., 2024). One of the most well-known advancements in AI is ChatGPT, a natural language processing model developed by OpenAI, designed to generate human-like conversations and answers to complex queries (Rawas, 2024). In recent years, AI-powered tools have been increasingly integrated into academic settings to enhance learning experiences, promote personalized instruction, and assist with administrative tasks (Gunawan et al., 2024). These models represent the evolving capabilities of AI to understand, process, and create text, which has found applications in many educational settings (Ouyang et al., 2022).
AI tools have been used to streamline various tasks across disciplines, from automating grading systems to providing personalized learning experiences for students (Gunawan et al., 2024). These tools can help educators offer more tailored support to students, addressing individual learning needs and improving the efficiency of course management. These technologies have demonstrated significant potential to support personalized learning environments, provide instant feedback, and assist educators in managing large classes (El Azhari et al., 2023). For instance, when faculty are unavailable, chatbots can serve as valuable resources for addressing students' repetitive questions, thereby alleviating instructional burdens (El Azhari et al., 2023). The exponential rise in AI technologies like ChatGPT has ignited widespread discussions about their applications and the challenges they pose in higher education (Sun & Hoelscher, 2023).
As the use of AI technologies expands, it becomes increasingly important to assess their utility in specific educational contexts, including nursing education. In nursing education, a scoping review by Buchanan et al. (2021) highlighted the growing adoption of AI technologies, emphasizing the need to develop new technological competencies. However, the use of AI in nursing education remains largely underexplored, especially in the context of developing research competencies. Therefore, to foster responsible use of AI in educational contexts, educators must design assignments that encourage critical thinking, problem solving, and self-reflection to teach students how to critically evaluate information and make informed decisions when using AI (Sun & Hoelscher, 2023).
This study explored undergraduate nursing students' perceptions and experiences related to the integration of generative AI tools—specifically ChatGPT and U-M GPT (2024)—into a nursing research course assignment, to better understand how these tools can support the teaching and learning of research methodologies at the undergraduate level. The study was guided by the following research question: What are undergraduate nursing students' perceptions and experiences regarding the use of generative AI tools, specifically ChatGPT and U-M GPT, in completing a nursing research course assignment?
Method
Study Design
This study employed a qualitative descriptive research design to explore student perceptions and experiences of AI integration in an undergraduate nursing research course. This inductive approach is characterized by low-inference interpretation, allowing researchers to stay close to the data and better understand the phenomenon through a comprehensive summary of participant descriptions and subjective meanings (Bradshaw et al., 2017; Kim et al., 2017; Sullivan-Bolyai & Bova, 2021). Thematic analysis was used to analyze the data, following Braun and Clarke's (2006) six-step approach, ensuring a systematic exploration of key themes.
Participants and Setting
The research was conducted at two institutions: the University of Michigan-Flint (UMF) and Grand Valley State University (GVSU). Participants included undergraduate nursing students enrolled in an asynchronous online introductory nursing research course at each institution. Students from UMF participated during the 2024 Winter semester, and students from GVSU participated during the 2024 Spring/Summer semester. All students who completed the course assignment relevant to this study were invited to participate. Participation was voluntary, and no personal identifying information was collected to maintain participant anonymity. Although all students were required to complete the assignment as a part of their course content, they could opt out of participating in the study.
Educational Intervention
The intervention involved a structured course assignment that focused on the appropriate use of AI tools, specifically ChatGPT or U-M GPT. ChatGPT is a large language model developed by OpenAI, which uses deep learning techniques to generate human-like text based on input prompts (OpenAI, 2024). U-M GPT is a customized AI tool, equivalent to ChatGPT, specifically designed for use within the University of Michigan's academic environment, providing tailored responses and resources aligned with university guidelines. All UMF students have free access to U-M GPT (U-M GPT, 2024). Students at GVSU used ChatGPT exclusively, while students at UMF had the option to use ChatGPT or U-M GPT.
The assignment consisted of several steps, beginning with the creation of a PICOT (Population of concern, Intervention or issue, Comparison, Outcome, and Time frame) question, with students independently formulating an initial clinical question. Students then engaged with ChatGPT or U-M GPT to evaluate the structure and clarity of their preliminary PICOT question. To support this process, students were provided with a list of sample prompts to guide their interaction with the AI (Table 1). Next, students reviewed the AI-generated recommendations and revised their questions as they deemed appropriate, drawing on course learning to critically evaluate the suggestions and finalize their PICOT question. Finally, students submitted a reflection on their experience using the AI tools. This reflection included their initial perceptions of using AI, an evaluation of the utility of the AI-generated suggestions, and an overall assessment of the effectiveness of AI as a tool to support clinical practice inquiry. Students were not instructed to use AI in writing their reflections; rather, they were asked to share their personal opinions about the assignment and the use of AI, noting there were no right or wrong answers.
| Prompt Type | Prompt |
|---|---|
| Initial PICOT question submission | I have developed a PICOT question for my nursing research class. Can you review it and provide feedback? Here's my question: [student inserts the PICOT question]. |
| Specific aspects inquiry | Can you help me refine the population part of my PICOT question? It currently reads: [student inserts the population part of the question]. How can I make this more specific or appropriate? |
| Intervention clarification | In my PICOT question, I'm not sure if the intervention I've chosen is clear enough. Here's what I have: [student inserts the Intervention part of the question]. Can you suggest ways to make this clearer? |
| Comparison component evaluation | I am struggling with the comparison component of my PICOT question. It currently reads: [student inserts the comparison part of the question]. Do you think this is an effective comparison? How can I improve it? |
| Outcome relevance check | For the outcome part of my PICOT question, I wrote: [Student inserts the outcome part of the question]. Is this outcome relevant and measurable for my research question? |
| Time frame adjustment | I'm not sure if the time frame in my PICOT question is appropriate. Here's what I have: [Student inserts the time frame part of the question]. Can you suggest a more suitable time frame for this study? |
| Overall coherence and feasibility | Here's my complete PICOT question: [Student inserts the full PICOT question]. Can you assess the overall coherence and feasibility of the question? What improvements would you recommend? |
| Comparative analysis | I have two versions of a PICOT question, and I'm not sure which is better. Can you compare them and suggest which one is more effective? Here are the two questions: [Student inserts both PICOT questions]. |
| Feedback implementation | Based on your previous feedback, I've revised my PICOT question to this: [Student inserts revised PICOT question]. Have I improved it effectively? What else can be done? |
Data Collection
Qualitative data were collected from students' written reflections submitted through an anonymous Qualtrics® link provided within the assignment. Students were asked to reflect on their initial thoughts on using AI in clinical practice inquiry, the recommendations provided by U-M GPT or ChatGPT and how those recommendations influenced their final PICOT question, their perceptions of using AI for this purpose, and their final PICOT question. These reflections captured students' perceptions of and interactions with the AI tools.
Data Analysis
Thematic analysis was used to analyze the qualitative data, following Braun and Clarke's six-step approach. The steps included:
1. Familiarization with the data. Both researchers independently read and re-read the student comments to immerse themselves in the data. Initial ideas and impressions were noted during this process (May to June 2024).
2. Systematic data coding. Using NVivo and MAXQDA software, the researchers systematically and independently coded the data, identifying features of the data relevant to the research question (July 2024).
3. Generating initial themes. Codes were grouped into potential themes, which were then discussed and refined collaboratively (August 2024).
4. Developing and reviewing themes. The researchers collaboratively reviewed the initial themes, ensuring they were representative of the coded extracts and the overall data set. This iterative process involved refining themes and ensuring coherence between the themes and the research question (September 2024).
5. Defining and naming themes. Final themes were clearly defined and named to reflect the essence of the data (November 2024).
6. Producing the report. A detailed analysis was written, supported by compelling examples from the data, relating the findings to the research question and relevant literature (December 2024).
The data from UMF were analyzed first, followed by the analysis of data from GVSU. This comparative approach confirmed that consistent themes emerged across both sites, enhancing the robustness of the findings.
Trustworthiness
To ensure the credibility and trustworthiness of the study, several strategies were employed. Two researchers independently analyzed the data, employing triangulation to minimize individual bias and enhance the reliability of findings. A systematic audit trail was maintained using NVivo and MAXQDA software to document coding decisions and theme development. Additionally, cross-site validation was conducted by comparing themes identified from UMF data with those from GVSU, ensuring consistency across diverse educational settings.
Ethical Considerations
The study received ethical approval from the University of Michigan-Flint and Grand Valley State University Institutional Review Boards prior to commencement. Participants were provided with informed consent information outlining the purpose of the study, procedures for ensuring confidentiality, and the voluntary nature of their participation. Anonymity was preserved by collecting responses through an anonymous Qualtrics link, and no identifying information was included in the analysis.
Results
The student sample consisted of 42 participants (95% of the class) from a UMF undergraduate nursing research course and 35 participants (85.4% of the class) enrolled in undergraduate nursing research courses at GVSU. Student demographics were not collected to protect the anonymity of students. The student population at UMF comprises undergraduate students in the RN-to-baccalaureate nursing (BSN), traditional, and accelerated second-degree programs. The student population at GVSU comprises traditional BSN and RN-to-BSN students.
Three participant reflections were discarded due to lack of adequate information in the responses; therefore, a total of 74 participant responses were included in the analysis. Analysis of the qualitative data revealed four overarching themes: (1) mixed initial and shifting final impressions; (2) AI strengths and limitations; (3) ethical concerns; and (4) use of AI in health care. Illustrative participant responses, selected phrases, and corresponding themes are presented in Table A (available in the online version of this article).
| Text | Selected Phrase | Theme |
|---|---|---|
| I was not enthusiastic about the requirement to use AI for an assignment, as I expected it to be largely unhelpful, but it turned out to improve both the process of doing the assignment and my understanding of the concept being taught. I was pleasantly surprised by how well it helped me rephrase my question to be more applicable, while still coming out relatively concise. I will probably continue to use AI as a sounding board for clarifying my thoughts in future assignments (as long as it is allowed). | I was not enthusiastic about the requirement to use AI for an assignment, as I expected it to be largely unhelpful, but it turned out to improve both the process of doing the assignment and my understanding of the concept being taught. I was pleasantly surprised by how well it helped me rephrase my question to be more applicable, while still coming out relatively concise. | Mixed initial and shifting final impressions |
| My initial thoughts on using AI in clinical practice inquiry were excitement, relief, and curiosity. I had the initial feeling of excitement because I use ChatGPT to help myself plan and organize in my day-to-day life. I was relieved because writing is not my strong suit, and this new form of technology allowed me to strengthen my weakness without having the need for multiple people to look over my work. I was curious as to how much one would be able to incorporate AI answers and how citation would be formatted. The recommendations provided by UM-GPT were to give a more defined intervention and outcome. I did revise my PICOT question to have a specific routine for better comparison and a more precisely stated outcome to observe. | My initial thoughts on using AI in clinical practice inquiry were excitement, relief, and curiosity. | Mixed initial and shifting final impressions |
| I liked how ChatGPT gave me a breakdown of my original PICOT question before I asked it the second question. It provided me with different topics that I could research or look for while better understanding the ways physical activity could help nurse burnout. When I asked it to break down and evaluate, it gave me categories of ways to ensure that my question was clear, such as using clear and concise language and no jargon. It also provided me with categories on the structure of my question specific to the PICOT I asked. Looking at the strengths, I believe that this AI gives very detailed responses that you might not otherwise consider and gives a different perspective. Something that might be an issue is if ChatGPT is wrong about a topic, it might not be noticeable, and we could be fed false information. | I believe that this AI gives very detailed responses that you might not otherwise consider and gives a different perspective. Something that might be an issue is if ChatGPT is wrong about a topic, it might not be noticeable, and we could be fed false information. | AI strengths and limitations |
| AI is a great tool that not only simplifies formulating a PICOT question but also aids in making a clear and effective research question, which is foundational in research. Prior to using AI, I was very confident with the PICOT question I created. However, after using UM-GPT, I was able to narrow down and formulate a much more effective PICOT question. I got responses that actually made my question much more clear. I noticed when using UM-GPT I did have to structure my question or have to ask more questions to clarify my point of view and put more context to what I wanted to ask about the PICOT question. AI helped me formulate my PICOT question by making it more precise, specifically in the population, intervention, and outcome specification part. For example, instead of using "infection rates," it helped me narrow down to "catheter-related bloodstream infections." When using AI, you must be careful; since it is computerized, it can halt your ability to think and use your creativity in formulating a research question. | I got responses that actually made my question much more clear. AI helped me formulate my PICOT question by making it more precise, specifically in the population, intervention, and outcome specification part. When using AI, you must be careful; since it is computerized, it can halt your ability to think and use your creativity in formulating a research question. | AI strengths and limitations |
| My initial thoughts on using AI in clinical practice were virtually nonexistent because I know AI can make errors, and I would prefer to make my own conclusion versus using AI. Using AI also feels like "cheating," so I typically don't use it for academic purposes. ChatGPT recommended changing "anxiety" to "reducing anxiety" and also recommended adding a setting. I think using AI for this purpose is good for refining a final PICOT. | Using AI also feels like "cheating," so I typically don't use it for academic purposes. | Ethical concerns |
| You can quickly receive answers if you pose the correct type of question to the bot. If you did not, you may get very skewed answers to the question. I feel the downside of this is it can lead to bias or ethical concerns. It is not your original work, so by not doing it yourself without checking the information could cause issues if you rely solely on AI. | I feel the downside of this is it can lead to bias or ethical concerns. It is not your original work, so by not doing it yourself without checking the information could cause issues if you rely solely on AI. | Ethical concerns |
| I feel as though using any form of AI can be a huge help in our health care society. It quickly and effectively answers questions that may take hours for a human to research. I know that there are still many limitations to how much of AI our society trusts, so I feel like this may be a hurdle to overcome for many providers. As with many things in life, I do believe there is a right time and a wrong time for everything, but with the proper education and expectations I really think AI will benefit health care, both the patients and the providers. When I first came up with my PICOT question, I thought that I had done a pretty excellent job; I spent a lot of time coming up with the correct wording and ensuring that what I was asking was clear. As soon as I put it into ChatGPT to analyze, I got a "looks good but here's what can make it better." It was interesting to see how the pieces that I thought made it perfect needed to be tweaked just a bit to make it more thorough. I used each piece of information to formulate my revised question and will happily use ChatGPT for this kind of situation in the future. | I feel as though using any form of AI can be a huge help in our health care society. It quickly and effectively answers questions that may take hours for a human to research. I know that there are still many limitations to how much of AI our society trusts, so I feel like this may be a hurdle to overcome for many providers. As with many things in life, I do believe there is a right time and a wrong time for everything, but with the proper education and expectations, I really think AI will benefit health care, both the patients and the providers. | Use of AI in health care |
| I have always been a fan of technology and looking up what's new and the latest and greatest applications and gadgets. I find using AI, even in clinical practice inquiry, exciting. Although in general it is a great tool, it should not replace critical thinking and clinical judgment in the nursing profession and cannot replace the use of the human brain's functional capabilities. Building critical thinking and clinical judgment is not automated but takes exposure, experience, teaching, and continuous learning and building onto current knowledge. AI can add positive aspects to health care by enhancing patient care in aiding in things such as making a diagnosis and creating a treatment plan, improve accuracy, and assist health care workers with time management and completion of certain tasks. On the other hand, AI lacks emotion and feeling, cannot learn from mistakes, is not error proof, cannot form original ideas, and does not take ethics and morals into consideration. It can only use information that has been input. Another concern with storing information in AI systems is the security of that information and the possibility of it being breached. All the things that AI is and is not capable of are required in clinical practice. | AI can add positive aspects to health care by enhancing patient care in aiding in things such as making a diagnosis and creating a treatment plan, improving accuracy, and assisting health care workers with time management and completion of certain tasks. On the other hand, AI lacks emotion and feeling, cannot learn from mistakes, is not error proof, cannot form original ideas, and does not take ethics and morals into consideration. It can only use information that has been inputted. Another concern with storing information in AI systems is the security of that information and the possibility of it being breached. All the things that AI is and is not capable of are required in clinical practice. | Use of AI in health care |
Mixed Initial and Shifting Final Impressions
Students expressed mixed initial impressions about AI, ranging from enthusiasm to skepticism. Participants with initially positive perceptions believed AI could be a beneficial tool that enhances the efficiency of everyday tasks, such as composing emails. These participants also reported that AI might accelerate research processes and yield better results during clinical practice inquiries by compiling multiple sources of information into one location, and that it had the potential to aid academic writing by providing feedback to enhance clarity and address weaknesses in written work without relying on others to proofread. In health care, participants thought AI had the potential to advance medicine by changing diagnostic and treatment processes and reducing the time required to analyze patient data, ultimately saving lives.
Negative initial perceptions included concerns about overreliance, inaccuracies, and potential ethical challenges. One source of concern was AI's ability to produce responses within seconds, appearing confident even when providing inaccurate information, which could lead to overreliance on an unreliable system. As a result, some participants preferred to seek information from multiple sources and draw their own conclusions rather than depend on AI for information gathering. Other potential risks participants considered included AI replacing human jobs across various fields and detracting from valuable human interactions. Past experiences in courses also contributed to feelings of discomfort surrounding AI, as participants feared AI use could be perceived as plagiarism. One participant explained, "Initially, I felt quite hesitant. I was under the impression that any hint of using AI could be considered plagiarism and would not be acceptable."
By the conclusion of the assignment, participants generally shifted to a more positive view of AI, particularly in relation to refining academic work and aiding clinical research. Related to the use of AI in coursework, one participant stated:
After doing this assignment, I can see how beneficial using AI in this way can be. It doesn't necessarily replace critical thinking, but it did help to refine and improve my work. I look forward to applying this method to other assignments in the future.
Another student noted:
After letting U-M GPT refine my PICOT question, I changed my opinion on AI technology. U-M GPT not only analyzed my question but also gave professional suggestions. For example, I wrote the outcome as "control of acute incisional pain." U-M GPT pointed out that my outcome was not specific and gave four examples of pain measurement, such as patient self-reporting and observed pain behaviors.
Interestingly, after completing the assignment, one participant changed from a positive to a negative view regarding AI. This student said:
Initially, I thought incorporating AI into this process could be a good idea, as it could provide a structured and systematic approach to developing a well-defined research question. However, the AI's answer was far more detailed and specific than I anticipated, making it somewhat challenging to apply in a broader context. While I appreciated the depth of the response, it narrowed the scope of my question more than necessary, potentially limiting its applicability.
AI Strengths and Limitations
Participants valued AI for its speed, ability to refine research questions, and editorial support, noting that the speed of responses and helpful feedback enabled them to work more effectively. They described the experience as collaborative, likening it to working with another person to brainstorm ideas and explore previously unconsidered directions. For example, one participant remarked, “It is incredible how fast it can generate a response and compound ideas from the previous messages to provide relevant answers. It felt like I was texting a real person.” This appreciation also extended to AI's understanding of medical terminology and its clear recommendations, which enhanced the clarity of the students' PICOT questions. As stated by one participant:
The recommendations provide almost a peer type of review. If I did not have the AI, I would have asked one of my coworkers who is also in school to look over my question to see what I could have changed.
Participants also noted improvements in their PICOT questions after engaging with AI. These improvements included enhanced conciseness, structure, and specificity, which facilitated easier evidence retrieval. AI also helped identify missing or vague components, such as outcomes or populations, resulting in more specific and measurable clinical questions. Participants valued the editorial support, which strengthened the grammar and wording of their questions. One participant reflected:
When I first came up with my PICOT question, I thought that I had done a pretty excellent job. As soon as I put it into ChatGPT to analyze, I got a "looks good but here's what can make it better." It was interesting to see how the pieces I thought were perfect needed slight tweaks to be more thorough.
Examples of PICOT question development are included in Table 2.
| Initial PICOT Question | Final PICOT Question |
|---|---|
| In children who have ADHD, does prescribing medication only to them or does prescribing medication along with behavioral therapy have a better outcome for the child and family? | In children diagnosed with ADHD, does prescribing medication in conjunction with behavioral therapy compared to prescribing medication only produce better outcomes in terms of child behavior in school and family functioning over a period of 6 months? |
| In veterans with PTSD, does the use of psychedelics compared to drug therapy reduce PTSD symptoms over the course of a year? | In veterans with PTSD, does the use of psychedelics compared with benzodiazepines reduce the frequency of hypervigilance events over the course of a year? |
| In adults with chronic pain (P), does performing mindfulness meditation (I) daily decrease their level of pain (O) after 3 months (T)? The comparison (C) was meant to be implied as the patient population not performing mindfulness meditation. | In adults with chronic pain, does performing guided mindfulness meditation for a minimum of 5 minutes daily decrease their level of pain compared with baseline after 3 months? |
| In school-age children, what is the effect of a school-based physical activity program on the reduction in the incidence of childhood obesity compared with no intervention? | In children ages 6 to 10 years, what is the effect of a school-based physical activity program on reducing the incidence of childhood obesity compared with no intervention? |
| In adults with frontal lobe seizures, how does adequate sleep (6 to 8 hours) compared with inadequate sleep (less than 6 hours) affect the frequency of seizures over a period of 6 months? | In adults diagnosed with frontal lobe seizures, how does maintaining a regular sleep schedule of 6 to 8 uninterrupted hours at night compared with an inconsistent sleep schedule with total sleep of less than 6 hours per night affect the frequency of physician-confirmed and self-reported seizures over a 6-month period? |
| (P) Among operating room nurses, (I) does wearing lead aprons (C) compared with standing behind a lead door (O) decrease radiation exposure? | (P) Among operating room nurses, (I) does wearing lead aprons (C) compared with standing behind a lead door (O) result in decreased radiation exposure (measured by personal dosimeters immediately postprocedure) (T) during a single procedure? |
Despite the strengths, participants expressed concerns about the accuracy of the information provided by AI and noted that verifying AI-generated content required additional effort, as certain sources were known to be unreliable. One participant commented, “The strengths of using AI are simply that it offers you another source of information, but I think you still need to evaluate the info that AI gives you.” This concern was further linked to AI's reliance on the data it was trained on, as noted by another participant: “AI can only be as strong as those who help influence the data being collected. The information we give AI programs is the only way they can learn.”
Outdated information in AI-generated responses was also a point of concern, particularly in relation to clinical guidelines. For instance, one participant noted, “In my exact example, the AI did not know what the current American Diabetes Association A1c guidelines were.” Additionally, participants highlighted AI's lack of human nuance. As one participant summarized, “All in all, AI can be used to create factual questions, but it's important to realize because AIs are not human, their questions may lack certain details that only humans can form.”
Ethical Concerns
Ethical concerns and considerations regarding the use of AI predominantly centered on fears of plagiarism, the perception that AI use amounts to cheating, and the possibility that overuse of AI could hinder students' education. One student stated, “Using AI also feels like ‘cheating,’ so I typically don't use it for academic purposes.” Another student said, “I feel that so far in my academic career, AI has been demonized.” Other concerns regarding the use of AI included the tool's lack of human characteristics, such as empathy, intuition, and professional judgment, which could lead to unique ethical dilemmas both in education and health care. One student stated:
Who is there to decide that the information being provided by AI is relevant, truthful, and taken into best consideration? There are no guidances on the honesty of ChatGPT and we are simply trusting a computer to take into account all of the various aspects that we as humans do.
To address these concerns, students felt that the use of AI in these settings should be limited to brainstorming ideas or editorial improvements without directly using the information produced by AI.
Use of AI in Health Care
Participants articulated a balanced perspective on the strengths and limitations of AI in health care. Concerns primarily revolved around the reliability of information generated by AI, along with significant risks related to data security and the protection of sensitive patient information. Additional apprehensions included the potential for misdiagnoses, undetected biases that could exacerbate existing health care disparities, and the possibility that overreliance on AI might undermine the clinical judgment of health care providers.
Despite these challenges, participants identified several strengths of AI in enhancing health care delivery. These included AI's capacity to improve efficiency by aiding in diagnostic accuracy, synthesizing disease-related information without requiring consultation with other professionals, creating tailored treatment plans, and facilitating effective time management for completing specific tasks. One participant encapsulated this optimistic outlook, stating:
I feel as though using any more of AI can be a huge help in our health care society. It quickly and effectively answers questions that may take hours for a human to research. I know that there are still many limitations to how much of AI our society trusts, so I feel like this may be a hurdle to overcome for many providers. As with many things in life, I do believe there is a right time and a wrong time for everything, but with the proper education and expectations, I really think AI will benefit health care, both the patients and the providers.
However, participants also stressed that AI should function as a supportive tool rather than a substitute for critical thinking. One participant, summarizing these sentiments, stated:
It cannot take the place of people; rather, it is an addition to human intelligence, judgment, and experience. It provides an in-depth and critical approach to medical information. We need to continue to look at how artificial intelligence can advance health care as we continue to enter the digital era.
Discussion
This study explored nursing students' perceptions and experiences with the integration of AI in undergraduate nursing research courses. The findings provide insight into students' evolving impressions, perceived benefits and limitations, and ethical considerations associated with AI use in academic and health care settings. While students recognized the potential benefits of AI, their concerns underscore the necessity for nuanced integration strategies in nursing education.
Participants entered the study with a spectrum of initial perceptions, ranging from optimism to skepticism. Positive impressions were grounded in AI's perceived capacity to enhance efficiency and improve academic and clinical practices, such as expediting research processes and refining PICOT question development. Conversely, negative impressions were tied to apprehensions about AI's reliability and its potential to undermine human interaction and originality in academic work. These divergent views evolved as students engaged with AI, with many reporting increased comfort and appreciation for its capabilities despite lingering concerns.
The integration of AI was found to facilitate efficiency and provide valuable feedback, enabling students to refine their academic inquiries and improve clarity in written work. AI's ability to simulate collaborative brainstorming was particularly noted, as students described interactions akin to conversing with an informed colleague. These findings align with existing literature, which indicates that students commonly use AI as personal tutors to enhance their learning (Ansari et al., 2024). The improvement in PICOT question formulation and the identification of vague components reinforces AI's potential to support evidence-based practice in nursing.
While participants acknowledged the strengths of AI, they also highlighted significant limitations and ethical concerns. Challenges included the need for users to develop proficiency in leveraging AI effectively and the potential for misinformation if AI-generated content is accepted uncritically. Ethical concerns centered on the risk of overreliance, issues of academic integrity, and the implications of AI in clinical decision making where inaccuracies could have life-threatening consequences. These findings align with broader discussions in nursing education about integrating technology responsibly to uphold professional values.
Overall, participants reported positive experiences with the assignment, highlighting the utility of the provided example prompts. While both student groups expressed initial skepticism, students at UMF exhibited slightly lower levels of apprehension. This difference may be attributed to the accessibility of U-M GPT, which required only a university login, in contrast to the additional steps needed to access ChatGPT. The university-developed nature of U-M GPT may also have contributed to its perceived legitimacy and ease of integration into the assignment.
Implications for Nursing Education and Practice
The findings of this study are supported by research assessing the use of AI in higher education. Rawas (2024) highlighted seven key opportunities associated with using ChatGPT in higher education, including personalized and interactive learning, automated grading, intelligent tutoring, content creation, language learning, and accessibility. Similarly, Ouyang et al. (2022) underscored AI's ability to recommend resources and enhance teaching strategies, further demonstrating its value in academic settings. However, while these advancements offer numerous benefits, they also introduce concerns about academic integrity, such as the potential misuse of AI-generated content and overreliance on automated tools.
Rawas (2024) noted that the ethical challenges surrounding AI use, including issues of informed consent and potential effects on teaching methods, necessitate careful evaluation to mitigate risks and uphold educational integrity. Furthermore, concerns related to privacy, data protection, bias, transparency, and accountability must be addressed to ensure ethical implementation and adoption of AI in academia (Montejo et al., 2024). Abuzaid et al. (2022) further noted a gap in understanding of AI principles across the nursing profession, advocating for targeted training to enable seamless and safe integration of AI into clinical practice. Therefore, the responsible application of AI in nursing education requires addressing ethical considerations specific to the profession, such as minimizing biases and ensuring privacy.
The results of this study underscore the importance of preparing nursing students to use AI as a complementary tool rather than a substitute for critical thinking and ethical decision making. Faculty should incorporate structured guidance on ethical AI use, highlighting its strengths while addressing its limitations. Scholars have long advocated for curricular reforms in nursing education to better prepare students for safe and efficient practice in an AI-integrated health care environment (Buchanan et al., 2021). Faculty must strive to balance leveraging AI's capabilities with preserving human interaction and encouraging independent learning, as these remain integral to higher education's pedagogical goals (Gunawan et al., 2024). Establishing clear policies and fostering an open dialogue about the role of AI can alleviate student anxiety and promote its effective application in nursing education and practice.
Limitations
This study had a few limitations. Differences in instructional modalities may have influenced student engagement and reflections, affecting comparability. Additionally, student demographics were not collected due to institutional review board restrictions at one of the institutions, which limits insights into potential contextual factors affecting perceptions. Finally, relying solely on written reflections as the data source may have constrained the depth of responses. Future research should include diverse data collection methods and broader participant information to provide a more comprehensive understanding of student experiences.
Conclusion
The integration of AI in nursing education presents a double-edged opportunity, offering significant benefits while posing notable challenges. This study highlights the transformative potential of AI in enhancing academic efficiency and clinical research while underscoring the critical need for ethical oversight and user education. By fostering a balanced understanding of AI's capabilities and limitations, nursing educators can empower students to harness its benefits responsibly. Future research should explore longitudinal effects of AI integration on nursing competencies, ethical decision making, and patient care outcomes to inform best practices and policy development. Through thoughtful implementation, AI can become a valuable asset in advancing nursing education and practice.
Abuzaid, M. M., Elshami, W., & Fadden, S. M. (2022). Integration of artificial intelligence into nursing practice. Health and Technology, 12(6), 1109–1115. 10.1007/s12553-022-00697-0 PMID: 36117522
Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2024). Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Education and Information Technologies, 29(9), 11281–11321. 10.1007/s10639-023-12223-4
Bradshaw, C., Atkinson, S., & Doody, O. (2017). Employing a qualitative description approach in health care research. Global Qualitative Nursing Research, 4, 2333393617742282. 10.1177/2333393617742282 PMID: 29204457
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology. Qualitative Research in Psychology, 3(2), 77–101. 10.1191/1478088706qp063oa
Buchanan, C., Howitt, M. L., Wilson, R., Booth, R. G., Risling, T., & Bamford, M. (2021). Predicted influences of artificial intelligence on nursing education: Scoping review. JMIR Nursing, 4(1), e23933. 10.2196/23933 PMID: 34345794
El Azhari, K., Hilal, I., Daoudi, N., & Ajhoun, R. (2023). SMART chatbots in the e-learning domain: A systematic literature review. International Journal of Interactive Mobile Technologies, 17(15), 4–37. 10.3991/ijim.v17i15.40315
Gunawan, J., Aungsuroch, Y., & Montayre, J. (2024). ChatGPT integration within nursing education and its implications for nursing students: A systematic review and text network analysis. Nurse Education Today, 141, 106323. 10.1016/j.nedt.2024.106323 PMID: 39068726
Kim, H., Sefcik, J. S., & Bradway, C. (2017). Characteristics of qualitative descriptive studies: A systematic review. Research in Nursing & Health, 40(1), 23–42. 10.1002/nur.21768 PMID: 27686751
Montejo, L., Fenton, A., & Davis, G. (2024). Artificial intelligence (AI) applications in healthcare and considerations for nursing education. Nurse Education in Practice, 80, 104158. 10.1016/j.nepr.2024.104158 PMID: 39388757
OpenAI. (2024, October 3). ChatGPT (GPT-4o version) [Large language model]. https://chat.openai.com
Ouyang, F., Zheng, L., & Jiao, P. (2022). Artificial intelligence in online higher education: A systematic review of empirical research from 2011 to 2020. Education and Information Technologies, 27(6), 7893–7925. 10.1007/s10639-022-10925-9
Rawas, S. (2024). ChatGPT: Empowering lifelong learning in the digital age of higher education. Education and Information Technologies, 29(6), 6895–6908. 10.1007/s10639-023-12114-8
Sullivan-Bolyai, S. L., & Bova, C. A. (2021). Qualitative description: A “how-to” guide. University of Massachusetts Medical School, Graduate School of Nursing. Accessed April 1, 2025. https://repository.escholarship.umassmed.edu/bitstream/handle/20.500.14038/46447/auto_convert.pdf?sequence=3&isAllowed=y
Sun, G. H., & Hoelscher, S. H. (2023). The ChatGPT storm and what faculty can do. Nurse Educator, 48(3), 119–124. 10.1097/NNE.0000000000001390 PMID: 37043716
U-M GPT. (2024). U-M GPT (GPT-4o Omni version) [Large language model]. University of Michigan. https://genai.umich.edu/
From School of Nursing, University of Michigan-Flint, Flint (BWD), and Kirkhof College of Nursing, Grand Valley State University, Grand Rapids (LAD), Michigan.
Disclosure: The authors have disclosed no potential conflicts of interest, financial or otherwise.
Copyright 2025, SLACK Incorporated
