We have come to a crossroads in nurse education. Generative AI (genAI) is here to stay, and as academics there is little we can do about it. We are going to use it. Our colleagues are going to use it. But, most importantly, our students are going to use it, regardless of what we say or do. Yet, as guardians of the profession, we find ourselves expected both to support its use and, simultaneously, to demonise it.
Currently, students are using genAI in all sorts of ways. They are using it to help with their studies: making learning materials easier to understand and helping them grasp those elusive concepts they cannot master from reading or listening to online lectures alone. These are the concepts they once would have grasped by attending tutorials and sitting down to discuss the topic with their peers and, more importantly, with the academic, the font of knowledge and wisdom in a course.
Generative AI is also being used by students to edit and polish their assignments: correcting syntax and grammar, and taking the unrefined words of their thoughts and turning them into a masterpiece of language and prose in which all the punctuation is correct and the logic of the argument is smooth and clearly understood. Gone are the days of the academic struggling to read an assignment because the argument jumps all over the place, because the grammar and syntax are so poor the argument cannot be made out, or because punctuation is splattered onto the page the way paint is splattered onto a canvas to give the appearance of stars at night.
More concerningly, genAI is being used by students to create their assignments. Students devise prompts, or simply upload the assignment question to a genAI tool, and out pops a formulated answer for them. Students are no longer reading the required texts; interaction with the online learning materials and attendance at lectures and tutorials to learn the intricacies of what needs to be learnt are in decline, and engagement is at minimal levels.
Now, some discerning students, who do engage with the content, will review, revise and edit the genAI output, ensuring that it meets the requirements of the assessment or at least gets them most of the way there. Most, alas, will not be that discerning. They will take the words written, the graph created or the picture developed by the genAI tool as genuine and correct; they will not review, revise or edit it, they will simply submit it for marking.
We as academics then get the pleasure of marking this work. The discerning students will shine and will have produced an excellent piece of work, though it may be impossible to determine whether it is truly the student's work or the product of a genAI tool. The work of the non-discerning students we will groan and moan about, because our job dictates that we mark whatever is submitted, and a genAI tool used incorrectly will produce trash. The arguments will be superficial, the references hallucinated, and although the work may look polished, reading it is disheartening.
As the guardians of our profession, we need to unite: to stand up to the onslaught of technology and, on equal terms, to embrace it where there is a clear benefit, and, where there is potential harm, to take the time to remove and discourage that harmful use. There are several ways this needs to happen.
First, we need to be clear in our definition of what we are discussing. Returning from a couple of conferences where the focus of discussion was AI, it was clear that we were all using different definitions. Let us start with Artificial Intelligence (AI), a broad umbrella term used to describe the field in general. Then we have predictive AI (predAI), the form we use most often. It predicts what is going to happen next: the predictive text seen in messaging apps or in route-planning software. It learns from its user, predicting a likely response based on how that user has previously combined words; it does not create anything new, because the information is already there. Finally, we have genAI, the type of AI that is causing all the issues in academia. No one clearly understands how these Large Language Models (LLMs) work. What is known is that they take an inputted prompt and create an output based on it. Sometimes they get it right, and sometimes they get it wrong.
It is the genAI tools that are the concern, because students are using them indiscriminately for assignments: not using prompts correctly, not reviewing or revising the output appropriately, or not understanding the fundamentals of the profession well enough to critically review the output and judge its accuracy. This brings us to the second thing we as academics need to do to remain the guardians of our profession: we need to understand and know the capabilities of the common genAI tools. We do not need to be experts, but we cannot rely on others either. If we do not learn, grow and develop this understanding, our students will constantly outshine us and the knowledge we have to impart will become useless. Our roles will no longer matter because, according to students, they can get everything they need by putting in the correct prompt and getting the correct answer.
This leads to the third thing we need to do as guardians of our profession. We need to ensure that, by the time they are ready to graduate, students have acquired the knowledge to practise safely as nurses. How do we do this? We do it by engaging with students, encouraging them to attend class, and showing them the importance of interacting with their peers and academics; by showing them that producing a piece of work for an assignment does not mean they have acquired the knowledge needed to be a nurse. When we formally test knowledge acquisition, we should do so in an authentic way, a way that allows the student as an individual to shine, not a genAI tool whose workings no one understands. This demonstration of knowledge acquisition allows us to confidently graduate students to become members of the most trusted profession in the world.
Finally, we need to deter students from using genAI tools in ways that are not beneficial and that do not allow them, as individuals, to shine with the knowledge they have learnt. We need to detect academic misconduct arising from inappropriate genAI use. No matter how long it takes, we need to investigate these cases, report them to compliance units and punish the wrongdoers. If we do not, we are not serving our profession, we are not acting as its guardians, and we are devaluing both the degree we teach and the profession itself.
We are the guardians of our profession, and we are here to ensure that the care patients receive is safe. GenAI is a weapon that can derail that and destroy the profession. As guardians we need to rise up, fight the dark side of genAI and defend the light side, to ensure that nursing remains a force for good in healthcare, because the knowledge acquired by its students is genuine, created by real people and not fabricated by uncontrolled machines.
Editorial note: Editorials in NEP are not peer reviewed and are published at the discretion of the Editor in Chief. Rejoinders are welcome provided they are constructive and polite.
Funding source: None declared.
Ethical approval: Not applicable.
Declaration of Competing Interest: None declared.
Acknowledgements: None declared.
© 2025 Elsevier Ltd.