Aim
The aim of this study was to explore the use of generative artificial intelligence (GenAI) in geriatric nursing classes for the design of older adult activities, educating students to pose clear questions and to provide and identify potentially suitable daily activities for older adults.
Background
Researchers in various educational fields are increasingly employing GenAI tools such as ChatGPT for curriculum development and research. Question generation is an essential skill for all students to learn in order to acquire knowledge. However, there is limited experimental evidence on teaching students to use GenAI correctly to assist question generation, and empirical data on improving students' capacity for solving complex problems remain scarce.
Design
A mixed-method study design with both quantitative and qualitative analysis.
Methods
This study investigated the effectiveness of a GenAI-guided prompt-based learning approach implemented in a geriatric nursing class for first-year undergraduate students, involving a cohort of 56 participants.
Results
Experimental results indicated that the GenAI-guided prompt-based learning approach significantly enhanced students' critical thinking, metacognition and problem-solving tendencies, as well as their performance in generating questions via prompts. Moreover, participants who engaged in the GenAI-guided prompt-based learning approach found the tasks easier to complete and required less cognitive effort.
Conclusions
Nursing students using the GenAI-guided prompt-based learning approach outperformed the control group in the cognitive network analysis dimensions of clarity, relevance, complexity, precision and engagement. Thus, integrating GenAI prompts into course activities can effectively improve student learning outcomes, reduce metacognitive load and assist in solving learning problems.
In the era of globalization, cultivation of students’ critical thinking, metacognition and problem-solving skills is crucial for their holistic development ( Exintaris et al., 2023; Marthaliakirana et al., 2022). These skills help students adapt to a rapidly changing world and, in an age inundated with information, enable them to identify false information, thus preventing misguided decisions and enhancing their decision-making capabilities ( Merkebu et al., 2023). By improving metacognitive abilities, we can also foster students' innovative and creative thinking, thereby enhancing their problem-solving capabilities ( Wang et al., 2023a). This, in turn, boosts social interaction and collaboration skills. Consequently, promotion of these capabilities not only has a direct impact on students’ academic achievements, but also has a significant impact on their professional growth and personal development as members of society ( Orakci, 2023). Therefore, teaching health students about designing activities and promoting health for the elderly is important for maintaining quality of life in old age ( Tandoyo et al., 2023). Integrating activities for older people into health education programs allows students to more effectively support an aging demographic, while also promoting societal benefits by creating a space where older adults can enjoy lives that are active, fulfilling and respected.
With the advent of the artificial intelligence era, the real-time rapid generation capabilities exemplified by Generative AI (GenAI), such as ChatGPT, which can produce text or charts, have attracted the attention of educational researchers from various fields ( Essel et al., 2024; Lai and Tu, 2024). Jukiewicz (2024) noted that ChatGPT effectively improved students' performance in a Python programming course. Scholars using ChatGPT as a learning aid have found that it can enhance individual self-efficacy and critical thinking ( Chang et al., 2024). During the problem-solving process, ChatGPT supports students in the generation phase, making tasks more manageable and enhancing students' confidence in finalizing tasks ( Urban et al., 2024). Human-AI collaboration theory ( Järvelä et al., 2023; Oravec, 2023) posits that using AI places particularly high demands on metacognitive skills; in previous studies, participants using ChatGPT exhibited superior learning outcomes compared with control groups. This is because problem solving with ChatGPT requires adequate metacognition, as learners must not only work based on their own ideas, but must also engage intensively in restating and clarifying their thoughts, rectifying erroneous information and adjusting ChatGPT commands to generate suitable solutions, thereby ensuring the quality and originality of the generated content ( Exintaris et al., 2023).
Guaman-Quintanilla et al. (2023) defined design thinking as a problem-solving method that underscores understanding the students, challenging their assumptions, redefining problems and creating innovative solutions through a student-centric progression. Design thinking includes the five key stages of empathize, define, ideate, prototype and test ( Kim et al., 2023). This methodology is particularly valuable for tackling complex problems that are ill-defined or unknown, by drawing on logic, imagination, intuition and systemic reasoning to explore possibilities and to create desired outcomes that benefit the end learner. Research into specific applications of ChatGPT indicates that it can raise students' interest in design education ( Meron and Araci, 2023) and support their critical thinking, metacognition and problem-solving tendencies ( Huang et al., 2024). During the ChatGPT prompt process, stages such as empathizing, defining and ideating provide assistance and a clear structure (i.e., problem representation and provision of relevant information), while problem-solving phases such as prototyping aid in addressing challenges, testing tasks and completing learning tasks, thereby reducing task difficulty ( Scott, 2023). The support provided by ChatGPT during the generative phases can reduce the mental effort required by students to solve tasks, thus lowering the cognitive demands ( Urban et al., 2024). Thus, this study applied ChatGPT in geriatric nursing classes on older adult activities to assist students in formulating clear questions, providing potentially useful information and generating a variety of daily activities suitable for older adults.
Although various studies have explored the application of ChatGPT to assist with language arts ( Li et al., 2023), reading ( Kushki and Nassaji, 2024), mathematics ( Hwang and Utami, 2024) and argumentation ( Wang et al., 2024), the prompts or instructions for problem solving in educational environments are often unclear, rendering ChatGPT unable to effectively aid learners in resolving learning issues or tasks. Moreover, past research has not yet examined students' questioning abilities with ChatGPT, nor whether content generated with clear prompts is more detailed, original, or unique than content obtained through other forms of questioning with ChatGPT ( Kurban and Şahin, 2024). As noted by Andrews-Todd et al. (2022), annotation, quality, refinement and originality are key metrics for assessing performance on problem-solving tasks. In classroom settings, vague or open-ended prompts or instructions pose challenges for using ChatGPT effectively ( Velásquez-Henao et al., 2023). Without correct instruction, students' use of GenAI technologies such as ChatGPT may hinder their completion of learning tasks. Consequently, teaching students how to pose questions and use meaningful prompts with ChatGPT can facilitate a deeper understanding of AI in educational innovation and can improve learners' problem-solving abilities ( Lee et al., 2023). Educational researchers are highly interested in the performance of students' questioning abilities with GenAI, yet empirical scientific studies in this area are scant. Given this context, this study aimed to reveal the impact of using well-defined prompts for GenAI on students' learning outcomes, providing insights and evidence that can serve as an academic resource for educational researchers and for broader applications of AI technology in education.
To achieve this goal, we implemented a GenAI-guided prompt-based learning approach to guide students through a learning task of designing activities for older adults and explored the differences in learning outcomes between students guided by prompts and those who did not receive prompts during the problem-solving process.
Since design thinking is an effective learning strategy that fosters students' ideation, creation and deep engagement with learning, this study integrated the design thinking processes of empathize, define, ideate, prototype and test ( Guaman-Quintanilla et al., 2023; Kim et al., 2023). We proposed a mixed-method study to validate the efficacy of the GenAI-guided prompt-based learning approach for enhancing students' questioning abilities during the design thinking process. The study’s approach aimed to cultivate students' critical thinking, metacognition and problem-solving tendencies. The following research questions were posed:
- (1) Are there differences in the critical thinking, metacognition and problem-solving tendencies of students who engaged in the GenAI-guided prompt-based learning approach and those who used the conventional technology-based learning method?
- (2) Is there a difference in the question-generating ability via prompt performance of students engaging in the GenAI-guided prompt-based learning approach and those using the conventional technology-based learning methods?
2.1 Question generating in constructivist learning theory
Constructivist learning theory holds that students' knowledge is absorbed and actively constructed through interaction between the student and the environment ( Saunders, 1992). Thus, integrating higher-order thinking (e.g., question generating, critical thinking tendency, metacognition tendency and problem solving) into the constructivist educational framework represents a powerful pedagogical approach ( Marthaliakirana et al., 2022). In constructivist teaching, questioning is seen as a crucial cognitive learning process whereby students learn to pose questions, exploring and understanding through their inquiries to construct their own knowledge, thus facilitating deep learning and the development of critical thinking. When teachers design learning activities that apply principles of question generation, they encourage students to pose questions, explore and engage in problem-solving tasks, helping them to link new and old knowledge and enhance their ability to think deeply and critically through complex problems ( Ottenhof et al., 2022; Wijanarko et al., 2021). Through the questioning process, students can become absorbed in deeper reflection, recognize knowledge and reflect on learning progressions and strategies, thus promoting the development of their metacognitive skills ( Durukan and Kızkapan, 2022). Through such questioning exercises, the learners' question generating ability and constructivist teaching mutually reinforce each other, promoting students' dynamic exploration of learning, deep empathy and knowledge construction ( Shinogaya, 2021). The GenAI environment can meet students' need to safely express their doubts, explore and become absorbed in question generation ( Hwang and Chen, 2023).
As Durukan and Kızkapan (2022) found, through questioning, teachers can create progressively stimulating learning experiences that endorse learners' critical thinking, creativity and problem-solving abilities, thereby supporting more comprehensive development of students' learning and metacognitive skills.
Question generating ability refers to a learner's skill of posing meaningful, relevant and insightful questions during the learning process ( Kurdi et al., 2020). The ability to ask effective questions is considered crucial for deepening understanding, fostering critical thinking and driving independent inquiry ( Tarigan and Hasibuan, 2024). Scholars emphasize that it involves recognizing knowledge gaps, formulating questions that can lead to further insights and engaging actively with educational material ( Merliza et al., 2020). This skill is often emphasized in educational frameworks that promote active learning and student-centered teaching approaches. The question generating ability is also useful for various purposes, such as education, research and AI development, where generating relevant and insightful questions can lead to deeper understanding and interaction ( Song and Glazewski, 2023).
Scholars have stated that having better question generating ability can encourage students' critical thinking and analysis, assess their comprehension of a topic, guide research or learning directions and enhance engagement in discussions or educational content ( Aflalo, 2021; Shinogaya, 2021). Using a GenAI prompt to acquire knowledge or comments naturally requires the person to not only recognize the content at hand but also to formulate questions that are meaningful, relevant and challenging based on that content ( Ilgaz and Çelik, 2023). Education is increasingly shifting towards student-centered and personalized approaches to improving students' learning outcomes, with the everyday use of GenAI exemplifying this evolution. This shift highlights the importance of acknowledging individual variances in classroom activities, thereby prompting the incorporation of technologies such as GenAI into education practices ( Wang et al., 2023b). Effective questioning with GenAI (e.g., ChatGPT), which plays the role of an expert by providing instant responses and advice, has demonstrated exceptional performance in solving complex problems. Integrating question generating via prompts into a constructivist framework transforms the classroom into a dynamic environment where students are active participants in their own learning and engage in reflection on their learning. Chang et al. (2024) used questioning strategies to enhance learners' understanding of concepts in nursing education. Thus, this approach not only prepares students for academic success, but also equips them with questioning strategies, comprehensive knowledge, skills and attitudes. It fosters a tendency for critical thinking and problem solving, effectively preparing them for future careers.
The changing role of today's educators necessitates that educational researchers not only address new pedagogical challenges and methodologies, but also become facilitators, guiding learners to think, practice and apply knowledge in ways that suit their individual learning needs, while also using AI technology to promote efficient learning across interdisciplinary training fields ( Jeon and Lee, 2023). This transformation not only enhances the quality of education but also prepares learners for lifelong success and independent learning. Consequently, this study integrated GenAI questioning strategies through interactive questioning tactics, where GenAI acted as a responsive guiding agent to student inquiries, enhancing enjoyment and participation in the learning process and enabling students to personalize their learning at their own pace, thereby improving their learning efficiency.
2.2 The role of metacognition tendency
Metacognition refers to one's awareness and understanding of one's own thought processes ( Salam et al., 2020). The term "metacognition tendency," while not a standardized psychological term, can be interpreted as the inclination or habit of engaging in metacognitive processes ( Thomas et al., 2022). A high-level metacognitive tendency might manifest as consistently analyzing how one learns best, questioning the effectiveness of diverse strategies for problem solving, or being aware of and modifying one's cognitive biases and limitations ( Nosratinia et al., 2015). Researchers have also found that students with high-level metacognitive tendencies are often better at adapting their learning strategies to new conditions, regulating their emotional responses and making decisions because they are more aware of their own thought patterns and behaviors ( Merkebu et al., 2023).
Technology plays a significant role in discovering students' metacognitive tendencies. Researchers have applied various information technologies to endorse and evaluate students' metacognitive abilities. For instance, Khiat and Vogel (2022) integrated a metacognitive tool within a Learning Management System (LMS), which recorded the students' self-regulated learning processes and strategies, facilitating students’ performance, motivation and reflection in learning. Wang et al. (2023b) proposed an intelligent tutoring system that allowed for the recommendation of various learning modules or activities. This system adjusted to the student's pace and enhanced their educational experience. Additionally, educational technology platforms and applications have been designed with features to promote students' reflection and self-assessment ( Alamri et al., 2021). After completing learning tasks, immediate feedback helps students adjust their learning strategies promptly. Educational technology tools can collect and analyze data on student interactions and behavior tracking on learning platforms, providing insights into students' learning behaviors and metacognitive abilities, assisting teachers in identifying students who struggle or need help ( Alnajjar et al., 2023).
The aforementioned technological applications, such as the GenAI application, ChatGPT, not only enhance students' learning outcomes but also provide teachers with effective tools to support their individualized learning needs. When dealing with ambiguous tasks, individuals employ divergent thinking and metacognitive skills while setting unique goals.
3 Method
This study integrated the GenAI-guided prompt-based learning approach with constructivist learning theory, enabling students to practice questioning and explore learning by using a learning sheet in the GenAI-based environment. To validate the effectiveness of this experimental approach, we conducted a mixed-method study in a university-level geriatric nursing class focused on planning daily activities for older adults. All 56 students volunteered to participate in the investigation and were divided into an experimental group and a control group, each comprising 28 participants. None of the students had used ChatGPT before, and all were taught by the same instructor, who had 5 years of digital education experience. The experimental group learned using the GenAI-guided prompt-based learning approach, while the control group learned with the conventional technology-based learning approach without prompts.
3.1 Generative AI-guided prompt-based learning approach based on constructivist learning theory
In the experimental group, students were guided to use a Large Language Model (LLM) via a GenAI-guided prompt-based learning approach consisting of the Clarity, Context, Specificity and Attention balance stages to complete learning tasks, as shown in Table 1. Through the question generating process with clarity, context, specificity and attention balance, students integrated new information with existing knowledge and experiences to construct knowledge. Meanwhile, the control group students simply used ChatGPT without prompt guidelines and were free to complete the class activity task, as suggested by Han and Battaglia (2024).
In the present study, the course activities began with an introduction by the teacher, followed by instructing the students to complete the scenario assignments for the study, as follows: "Please try to design a daily health promotion and long-term care service activity for older adults." At the Clarity stage, students were required to pose clear prompts based on the geriatric nursing class assignment. They should follow the examples and practice providing prompts as shown in Fig. 1. For example, the student might pose the following prompt: “As the director of a long-term care facility, please organize a daily schedule of activities for the older residents, including detailed information about specific routine activities.”
At the Context stage, students built on the foundations established during the Clarity stage by narrowing down their responses to this specific context. They continued to practice formulating prompts based on the Context example. Students should follow these examples and practice creating detailed prompts as illustrated in Fig. 2. For example, the student might pose the following prompt: “Please provide a detailed overview of the types, frequency and duration of the daily scheduled activities.”
At the Specificity stage, students were required to understand the mechanisms of action in long-term care and to apply their insights by using prompts based on the results from the Context stage. They should follow the provided examples and practice creating specific prompts as illustrated in Fig. 3. For instance, students might pose the following prompt: “Please specify the steps involved in devising, implementing and monitoring a daily activity program to ensure optimal participation, well-being and quality of life for older residents.”
At the Attention balance stage, students were required to provide a comprehensive overview and detailed information through prompts based on the Specificity stage. They should follow the examples and practice crafting prompts as shown in Fig. 4. For example, students might be asked to pose a prompt such as, “Provide a comprehensive overview of the daily activity scheduling process specifically tailored for older individuals receiving long-term care and present this in tabular form.”
Finally, if the students were satisfied with the final results, they should upload their assignments to the school’s e-learning platform. The class teacher could then evaluate their performance on the prompts process. The example shows a total score of 23, as illustrated in Fig. 5.
3.2 Participants
This study was conducted from November 2022 to January 2023. It involved 56 first-year university students enrolled in an older adult activity design course, none of whom had prior experience of using ChatGPT. Twenty-eight students formed the experimental group, which employed the GenAI-guided prompt-based learning approach, while the remaining 28 students constituted the control group, which used a conventional technology-based learning method without prompts. Both groups received identical instructional content in their older adult program courses. Table 2 displays the demographic characteristics of the study participants. The participants were aged 20–21, with an average age of 21 years, and most were female (73.2 %). More than half (69.6 %) reported having over 11 years of computer use experience. The primary online activities reported were gaming (37.5 %) and social networking (32.1 %). Notably, no statistically significant differences were observed between the two groups in terms of age, gender, computer use experience, or online activities.
3.3 Experimental procedure
This study was conducted over a 3-week period, as illustrated in Fig. 6. Initially, the classroom teacher introduced the teaching objectives and learning tasks for the geriatric nursing class, while concurrently conducting a pretest questionnaire survey to assess students' baseline abilities in critical thinking, metacognition and problem-solving tendencies. In the second week of the course, the experimental group engaged in learning using the GenAI-guided prompt-based learning approach combined with a learning sheet for developing a geriatric nursing plan, while the control group used a learning sheet combined with the conventional technology-based learning method, which involved using ChatGPT without prompt-based learning and directly searching for tasks using a computer. At the end of the third week of the course, both groups of students were asked to complete a posttest questionnaire survey on critical thinking tendency, metacognition tendency and problem-solving tendency.
3.4 Instruments
This study was designed to assess the effectiveness of the GenAI-guided prompt-based learning approach in terms of older adult educational activity design and developing students' critical thinking, metacognition and problem-solving tendencies, as well as their performance of question generation via prompts. Instructors used the ChatGPT tool during the course introduction to assist students in achieving the educational objectives and completing the tasks.
To investigate students' critical thinking tendency, we used the questionnaire modified by Chang et al. (2020) based on the measure proposed by Chai et al. (2015). It consisted of six items, including statements such as "During the learning process, I evaluate different opinions to see which one is more reasonable" and "During the learning process, I can discern which pieces of information are credible." This questionnaire employed a 5-point Likert scale, where higher scores indicated a stronger critical thinking tendency. The overall Cronbach's alpha value was .84, indicating good reliability.
To investigate students' metacognition tendency, this study used the metacognition tendency questionnaire proposed by Lai and Hwang (2014). It consisted of five items, with statements such as "I ask myself how well I accomplished my goals once I'm finished" and "I ask myself if I learned as much as I could have once I finish a task." This questionnaire also used a 5-point Likert scale, where higher scores indicated a higher metacognition tendency. The overall Cronbach's alpha value was .83, indicating good reliability.
To investigate students' problem-solving tendency, we employed the problem-solving tendency scale developed by Lai and Hwang (2014). The scale consisted of six items, including statements such as "I believe that I have the ability to solve the problems I encounter" and "I believe that I can solve problems on my own." The questionnaire used a 5-point Likert scale, with higher scores indicating a stronger problem-solving tendency. The overall Cronbach's alpha value was .78, indicating good reliability.
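For readers who wish to reproduce reliability coefficients like those above, the standard Cronbach's alpha formula can be computed in a few lines. The following is a minimal sketch; the response matrix is a hypothetical placeholder, not the study's actual questionnaire data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                         # number of items on the scale
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses (6 items, mirroring the critical thinking scale)
scores = np.array([
    [4, 5, 4, 4, 5, 4],
    [3, 4, 3, 4, 4, 3],
    [5, 5, 4, 5, 5, 5],
    [2, 3, 3, 2, 3, 2],
    [4, 4, 5, 4, 4, 4],
])
print(round(cronbach_alpha(scores), 2))
```

Because the illustrative responses are strongly inter-correlated, this toy matrix yields a high alpha; real questionnaire data would of course produce the reported values.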
To appraise the students' question-generating outcomes via prompt performance, Table 3 lists the expert reliability and validity of the question-generating evaluation rubric provided by two experts in the field. The expert validity index (EVI) reached 0.82 and the Cronbach's alpha value of the rubric was .83. The rubric is a structured 5-point scoring scale covering five key dimensions: clarity, relevance, complexity, precision and engagement. Each dimension is assigned a value between 1 and 5, with 5 indicating the highest level, reflecting superior performance in question generation tasks. The total possible score of 25 points serves to thoroughly evaluate the students' question generation abilities. An example is shown in Fig. 5. This method facilitates a detailed evaluation of the overall quality and effectiveness of the prompts produced.
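To make the rubric arithmetic concrete, the sketch below validates one 1–5 score per dimension and returns the total out of 25. The individual dimension scores are hypothetical, chosen only to show one way a total of 23 (as in the Fig. 5 example) could arise.

```python
# Hypothetical rubric scorer; dimension names follow the paper's rubric.
DIMENSIONS = ("clarity", "relevance", "complexity", "precision", "engagement")

def total_rubric_score(scores: dict) -> int:
    """Validate one 1-5 score per dimension and return the total (max 25)."""
    for dim in DIMENSIONS:
        value = scores[dim]
        if not 1 <= value <= 5:
            raise ValueError(f"{dim} score must be between 1 and 5, got {value}")
    return sum(scores[dim] for dim in DIMENSIONS)

# One illustrative combination reaching the example total of 23:
example = {"clarity": 4, "relevance": 5, "complexity": 5, "precision": 4, "engagement": 5}
print(total_rubric_score(example))  # 23
```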
3.5 Ethical considerations
The study was approved by the university's Ethics Committee. Students voluntarily chose to participate and informed consent was obtained from each participant, allowing them to withdraw from the study at any time without limitations. The study adhered rigorously to the ethical principles specified in the Declaration of Helsinki and its subsequent amendments.
3.6 Data analysis
This research examined critical thinking tendency, metacognition tendency, problem-solving tendency and question generating via prompts performance, using IBM SPSS Statistics (version 22) for Mann-Whitney U test analysis, a nonparametric statistical test chosen because the sample size was not large, as suggested by Orcan (2020).
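For reference, a Mann-Whitney U test of this kind can also be run in a few lines with SciPy. This is a sketch with hypothetical score arrays, not the study's raw data.

```python
# Nonparametric two-group comparison, as used for the tendency measures.
from scipy.stats import mannwhitneyu

experimental = [4.5, 4.2, 4.8, 4.0, 4.6, 4.3]  # hypothetical posttest scores
control      = [3.8, 3.5, 4.0, 3.2, 3.6, 3.9]

u_stat, p_value = mannwhitneyu(experimental, control, alternative="two-sided")
print(f"U = {u_stat}, p = {p_value:.3f}")
```

With clearly higher experimental scores, the test returns a large U statistic and a small p value, mirroring the pattern reported in Table 5.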
The qualitative data were obtained from the geriatric nursing plans developed by the students. Each student plan was coded following the five coding items, that is, physical well-being, cognitive stimulation, social interaction, relaxation and recreation and routine and preparation, as presented in Table 4, as suggested by Dingle et al. (2021), Romero-Ayuso et al. (2021) and Zheng et al. (2022). For example, Fig. 7 shows a plan developed by a student during the learning process and the coded results. Epistemic network analysis (ENA) was then used to calculate the relationships between codes by examining their co-occurrence across various variables within discussions. This analysis produces a weighted network of these co-occurrences, accompanied by visual representations for each analysis unit in the dataset ( Shaffer and Ruis, 2017; Zhang et al., 2022). Additionally, the study employed the ENA Web Tool (version 1.7.0) to analyze both groups' performance of question generation via prompts ( Marquart et al., 2018).
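The core counting step behind ENA's weighted networks is tallying how often pairs of codes co-occur within each unit of analysis. A simplified sketch of that step, using the study's five coding items with hypothetical coded plans, might look like this:

```python
# Simplified co-occurrence counting, the building block of an ENA network.
from itertools import combinations
from collections import Counter

# Each student plan is represented by the set of codes it was tagged with
# (hypothetical examples using the paper's five coding items).
coded_plans = [
    {"physical well-being", "social interaction"},
    {"physical well-being", "cognitive stimulation", "social interaction"},
    {"relaxation and recreation", "routine and preparation"},
]

cooccurrence = Counter()
for plan in coded_plans:
    for pair in combinations(sorted(plan), 2):  # every unordered code pair in a plan
        cooccurrence[pair] += 1

print(cooccurrence[("physical well-being", "social interaction")])  # 2
```

The full ENA procedure then normalizes these counts and projects them into a low-dimensional space for visualization; this sketch covers only the co-occurrence tally.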
4 Results
Our study applied a GenAI-guided prompt-based learning approach and question generating strategies to an older adult activity design lesson. The outcomes demonstrated that this approach efficiently advanced the students' critical thinking, metacognition and problem-solving tendencies as well as their question generating via prompts performance. Additionally, it successfully developed their clarity, relevance, complexity, precision and engagement in question-generating performance during the prompt process.
4.1 Quantitative results of the students' critical thinking, metacognition and problem-solving tendencies and their question generating via prompts performance
The data shown in Table 5 reveal that the Mann-Whitney U test identified a substantial difference between the experimental group (mean ranks = 33.21–36.00) and the control group (mean ranks = 21.00–23.79) in critical thinking, metacognition and problem-solving tendencies and in question generating via prompts performance. The experimental group scored higher on these learning outcomes, as evidenced by U = 182–260 and z = −4.31 to −2.44 (p < 0.001 to p < 0.05). The Wilcoxon W values ranged from 588 to 666. These findings suggest that the GenAI-guided prompt-based learning strategy, which included question-generating techniques and evaluative components, was more effective than the conventional technology-based learning method used by the control group in terms of improving nursing students' capabilities of critical thinking, metacognition, problem solving and question generation. In particular, the experimental group that engaged with the GenAI-guided prompt-based method showed enhanced learning outcomes compared with the control group.
4.2 Qualitative results of the students' question generating via prompts performance
In this study, we analyzed students' question generating via prompts performance using the Epistemic Network Analysis (ENA) Web Tool, Version 1.7.0. The qualitative analysis was structured around five dimensions within the ENA model: clarity, relevance, complexity, precision and engagement. Conversations were categorized based on all data lines associated with each of these five dimensions in a single instance. Along the X axis, a two-sample t test assuming unequal variances indicated that the experimental group (mean = 0.15, SD = 0.28, N = 9) was statistically significantly different at the alpha = 0.05 level from the control group (mean = −0.15, SD = 0.28, N = 9; t(16.00) = 2.28, p = 0.04, Cohen's d = 1.08) ( Cohen, 1988). Along the Y axis, another two-sample t test assuming unequal variances revealed that the experimental group (mean = 0.00, SD = 0.44, N = 9) was not statistically significantly different at the alpha = 0.05 level from the control group (mean = 0.00, SD = 0.44, N = 9; t(16.00) = 0.00, p = 1.00, Cohen's d = 0).
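The Welch's t tests and effect sizes reported for the ENA axis positions follow a standard recipe. Below is a sketch using hypothetical axis coordinates (chosen so the group means match the reported 0.15 and −0.15; the spreads, and hence the resulting t and d, differ from the published values).

```python
# Welch's t test and Cohen's d for two groups of ENA X-axis coordinates.
import numpy as np
from scipy.stats import ttest_ind

experimental_x = np.array([0.40, 0.10, 0.25, -0.05, 0.30, 0.05, 0.20, -0.10, 0.20])
control_x      = np.array([-0.40, -0.10, -0.25, 0.05, -0.30, -0.05, -0.20, 0.10, -0.20])

# equal_var=False gives Welch's t test (unequal variances assumed)
t_stat, p_value = ttest_ind(experimental_x, control_x, equal_var=False)

# Cohen's d using the pooled standard deviation of the two groups
pooled_sd = np.sqrt((experimental_x.var(ddof=1) + control_x.var(ddof=1)) / 2)
cohens_d = (experimental_x.mean() - control_x.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```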
Fig. 8 presents the experimental group’s cognitive network analysis image, with five axes representing different attributes: clarity, complexity, engagement, precision and relevance. Each axis emanates from the center of the chart and is marked with at least one node, indicating the level of the attribute being assessed. The nodes are connected with lines, creating a pentagonal shape that visualizes the profile of a given subject in relation to these attributes. In addition, the attribute of clarity is connected to complexity with a thick red line, suggesting a highlighted relationship between these two attributes; this may indicate a comparative analysis or a paired evaluation in which clarity and complexity are of particular interest. The other connections are in a lighter shade, possibly indicating a secondary focus or a standard observation without special emphasis. The experimental group’s nodes for complexity, engagement, precision and relevance are located further from the center than the node for clarity, which suggests that the subject being evaluated scores higher on these attributes. According to Schaffer et al. (2016), the position of a node merely indicates how frequently it connects to nodes in the corresponding region of the space (e.g., a node placed higher in the space than another has more of its connections in the higher part of the space).
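The connection thicknesses in such ENA diagrams derive from how often pairs of coded dimensions co-occur within conversation segments. A simplified sketch of this counting step (the coded lines below are invented for illustration; the actual ENA Web Tool additionally normalizes these counts and projects them into a low-dimensional space):

```python
from collections import Counter
from itertools import combinations

# Hypothetical coded utterances: each set lists the dimensions
# observed together in one data line.
lines = [
    {"clarity", "complexity"},
    {"clarity", "complexity", "engagement"},
    {"relevance", "precision"},
]

# Count pairwise co-occurrences; edge thickness in the ENA network
# reflects (normalized versions of) these counts.
edges = Counter()
for line in lines:
    for a, b in combinations(sorted(line), 2):
        edges[(a, b)] += 1

print(edges[("clarity", "complexity")])  # 2
```

In this toy data, clarity and complexity co-occur most often, which is the kind of pattern the thick clarity–complexity line in Fig. 8 represents.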
In summary, the cognitive network analysis suggests a profile in which complexity, engagement, precision and relevance are rated higher, while clarity is less pronounced. This result might indicate that the use of the GenAI-guided prompt-based learning approach in this geriatric nursing class activity was particularly useful for improving these competencies, offering a snapshot of strengths and areas for improvement.
Fig. 9 presents the control group’s cognitive network analysis. The ENA diagram is a graphical method of displaying multivariate data as a two-dimensional chart in which three or more quantitative variables are represented on axes starting from the same point. In the control group’s results, five axes extend from the center, each representing a different attribute for evaluation: clarity, complexity, engagement, relevance and precision. The attributes are connected by blue lines to form a shape that visualizes the level or score of each attribute. The plot points for clarity and precision are closer to the center, indicating lower scores for these attributes in comparison with complexity, engagement and relevance, which extend further out, suggesting higher scores. The two groups have the same network positioning in Fig. 8 and Fig. 9, a useful feature of ENA for comparison. In the context of statistical analysis, this cognitive network analysis illustrates the control group’s profile: while the subject being evaluated is complex, engaging and relevant, it lacks some degree of clarity and precision. One might therefore infer that there are opportunities to enhance clarity and precision to achieve a more balanced profile across all evaluated attributes in the control group. This visual representation is particularly useful for quickly conveying to stakeholders where the strengths and weaknesses of the subject being assessed lie.
Fig. 10 illustrates the initial version of a geriatric nursing plan developed by a student in the experimental group, while Fig. 11 shows the improved plan after the student interacted with the generative AI. It can be seen that the improvements include detailed descriptions of the planned schedule and clear objectives for each item.
5 Discussion
5.1 Contributions of the present study
Recent studies have shown the advantages of using GenAI prompt education to improve students' academic achievements, critical thinking skills and problem-solving abilities, which are considered essential competencies in this century ( Urban et al., 2024). Various strategies in different fields have been successfully implemented to cultivate students' learning performance, including review methods ( Kurban and Şahin, 2024), flipped classrooms via prompt engineering learning activities ( Wang et al., 2023b) and nursing and health education ( Chang et al., 2024). However, despite these diverse approaches, there remains a notable gap in customizing teaching specifically for students and consistently integrating GenAI-guided prompt-based learning within the framework of designing activities for older people in light of the rise of GenAI.
This study introduced a GenAI-guided prompt-based learning approach into a geriatric nursing class to generate daily activity schedules. It combined question generation within a constructivist learning theory, emphasizing the progress of students' critical thinking, metacognition and problem-solving tendencies, as well as their performance in question generation via prompts, in the context of older adult activity design education. The experimental results indicated that learning based on the GenAI-guided prompt-based approach and question generation within the constructivist learning theory significantly enhanced students' critical thinking and improved their metacognition and problem-solving tendencies. The GenAI-guided prompt-based learning approach not only strengthened these tendencies but also reshaped students’ understanding of older adult activity design. As suggested by Urban et al. (2024), compared with the control group, learning based on the prompt-based approach engages learners through its interactive and rapid content generation, reduces cognitive effort and provides targeted prompts that deepen training in question-asking skills. This enables students to complete learning tasks more efficiently and enhances their problem-solving capabilities.
5.2 Discussion of the findings in response to the research questions
The findings of this study demonstrate that the experimental group, which used the GenAI-guided prompt-based learning approach, exhibited significantly higher capabilities in question generation compared with the control group. This underscores the importance of the interactivity, immediacy and precision of content generated based on GenAI prompts in learning and task completion. Specifically, in the process of older adult activity design, students can use GenAI prompts to precisely generate and adjust daily schedules for elderly activities, achieving satisfactory outcomes. This process aligns with the findings of Exintaris et al. (2023) and Marthaliakirana et al. (2022), which suggested that GenAI prompts enhance question clarity, context, specificity and attention balance, fundamentally altering the innovative strategies in the knowledge exploration process. Through the GenAI-guided prompt-based learning approach, students practice precise and critical questioning techniques, moving away from aimless exploratory learning.
Furthermore, the ability to generate questions plays a crucial role in cultivating students' core competencies for the 21st century ( Nurramadhani and Permana, 2020). Through GenAI-based education, students actively engaged in the older adult activity design tasks produced by the GenAI-guided prompt-based learning approach, practicing in-depth prompts to complete learning tasks and objectives. Qualitative analysis of the visual network structures from the epistemic network analysis also indicated that the experimental group outperformed the control group in clarity, relevance, complexity, precision and engagement, showing marked improvements. While the experimental group's question-generating performance displayed significant structural links in clarity, engagement and complexity, the links in precision and engagement were weaker. In contrast, the control group's question-generating performance had clear structural links in clarity, relevance and complexity, but lacked links in precision and engagement, indicating that even though they used the conventional technology-based learning method for learning tasks, the lack of guided instruction resulted in less engagement. The cognitive network analysis of the experimental group aligns with the statements of Yilmaz and Yilmaz (2023) that learning based on the GenAI-guided prompt-based approach triggers students' self-efficacy in learning, effectively enhancing their question-generating ability and strengthening their critical thinking tendencies, metacognition and problem solving. By designing prompts that guide questioning, students' confidence in using GenAI and deep prompts can be bolstered. This study confirms the advantages of incorporating the GenAI-guided prompt-based learning approach into curricula in the era of artificial intelligence, promoting active student participation and enhancing learning results.
Overall, this study combined the GenAI-guided prompt-based learning approach with curricular activities; the quantitative and qualitative analyses showed that it not only inspired students' brainstorming but also boosted their knowledge acquisition and participation, thereby improving their critical thinking, metacognition and problem-solving tendencies. Qualitative cognitive network analysis also supported this view, demonstrating that, compared with the control group, the learning model based on the GenAI-guided prompt-based learning approach created a more motivating learning environment, with better performance in question generation via prompts.
5.3 Limitations of the present study
This research has some limitations. First, the small sample size precludes generalization of the results to other student populations. Future studies could involve students from different academic disciplines and cultural backgrounds using randomized controlled trials to increase methodological rigor. Another possibility is to integrate teaching theories and conduct qualitative interviews to explore learning behaviors and analyze the impact of GenAI on students' fundamental competencies and learning outcomes, which would make important academic contributions. Future research could also analyze the different roles of GenAI (teacher/tutor, student/tutee, learning peer/partner, domain expert, administrator and learning tool) to verify its effectiveness in innovative educational research, apply the various roles across different domains and solidify the foundation of professional training.
6 Conclusions
This study effectively integrated the GenAI-guided prompt-based learning approach with course activities, applying GenAI prompts to a specialized course on older adult activity design. According to the evidence of this mixed-method study, this approach enhanced students' critical thinking tendencies, improved their metacognition and boosted their problem-solving capabilities, particularly their performance in question generation via prompts. This provides a novel, practical and engaging method for teaching students in specialized fields. Additionally, qualitative cognitive network analysis through ENA demonstrated that the GenAI-guided prompt-based learning approach enhanced the clarity, relevance, complexity, precision and engagement of student-generated tasks.
By employing GenAI as an expert to provide a professional content-generation learning environment, educational researchers can continue to plan and design learning activities or strategies to innovate teaching and enhance student learning outcomes. This study provides substantial empirical evidence supporting the use of GenAI technology to improve the learning of professional students. Future research could continue to employ deep prompt methods, laying a foundation for cross-disciplinary educational research applications.
CRediT authorship contribution statement
Gwo-Jen Hwang: Supervision, Project administration. Pei-Yu Cheng: Writing – review & editing, Writing – original draft, Visualization, Validation, Methodology, Formal analysis, Data curation. Ching-Yi Chang: Writing – review & editing, Writing – original draft, Resources, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation.
Declaration of Competing Interest
The authors declare that there is no conflict of interest in this study. The authors further declare that this manuscript is original, has not been published before, and is not currently being considered for publication elsewhere. There has been no significant financial support for this work that could have influenced its outcome.
Acknowledgements
This study is supported in part by the National Science and Technology Council of Taiwan under contract numbers NSTC 112-2410-H-011-012-MY3, NSTC 112-2410-H-031-035-MY3, NSTC 113-2410-H-038-026-MY2, and NSTC 112-2622-H-038-002. The study is also supported by the “Empower Vocational Education Research Center” of National Taiwan University of Science and Technology (NTUST) from the Featured Areas Research Center Program within the framework of the Higher Education Sprout Project by the Ministry of Education (MOE) in Taiwan.
| Prompt process | Definition | Example |
| Clarity | The prompt is very clear. It directly asks for a specific piece of information regarding a particular issue. | As the director of a long-term care facility, please organize … |
| Context | The prompt provides enough context for the model to “understand” what is being asked. It specifies the daily activity scheduling for older adults in long-term care, which helps in narrowing down the response to this specific context. | Please provide a … |
| Specificity | The prompt is highly specific. It focuses solely on the mechanism of action of long-term care. There is no ambiguity about what information is being sought. | Please specify the steps involved in devising, implementing and monitoring the … |
| Attention balance | The balance here is well-maintained. The prompt asks for detailed information but only about one aspect (the daily activity scheduling), which is neither too broad nor too narrow. | Provide a comprehensive overview of the … |
| Variable, mean ± SD / frequency (%) | Experimental group (n = 28) | Control group (n = 28) |
| Age (years) | 20.85 ± 0.85 | 21.32 ± 0.24 |
| Gender | ||
| | 20 (71 %) | 21 (75 %) |
| | 8 (29 %) | 7 (25 %) |
| Computer use experience | ||
| | 2 (7 %) | 3 (11 %) |
| | 6 (21 %) | 6 (21 %) |
| | 20 (71 %) | 19 (68 %) |
| Current living arrangement | ||
| | 25 (89 %) | 22 (79 %) |
| | 3 (11 %) | 6 (21 %) |
| Primary online activities | ||
| | 6 (21 %) | 5 (18 %) |
| | 9 (32 %) | 12 (43 %) |
| | 4 (14 %) | 2 (7 %) |
| | 9 (32 %) | 9 (32 %) |
| Criteria | 1 Point | 2 Points | 3 Points | 4 Points | 5 Points |
| Clarity | Unclear and difficult to comprehend. | Somewhat confusing or poorly structured. | Understandable but could be clearer or more concise. | Clear with minimal ambiguity. | Exceptionally clear, concise and easy to understand. |
| Relevance | Irrelevant to the topic. | Tangentially related but misses key aspects. | Some relevance but includes slightly off-topic elements. | Relevant with a clear connection to the topic. | Highly relevant and contributes significantly. |
| Complexity | Trivial or too basic, adding little value. | Overly simple and does not encourage much thought. | Addresses basic aspects but lacks depth. | Encourages considerable thought and investigation. | Prompts deep thinking beyond the surface level. |
| Precision | Extremely vague and unfocused, resulting in irrelevant answers. | Broad and unfocused, leading to scattered responses. | Somewhat broad or vague; needs clarification. | Specific enough to guide answers clearly. | Highly specific, focusing sharply on a particular aspect. |
| Engagement | Uninteresting and unlikely to motivate or engage. | Limited appeal, might not engage effectively. | Moderately engaging but may not captivate all. | Interesting and likely to prompt engagement. | Highly engaging, stimulates interest and participation. |
| Coding item | Description |
| Clarity | Design of detailed routine and preparation items in the plan |
| Relevance | Design of cognitive stimulation activities related to elderly care objectives |
| Complexity | Design of plentiful social interactions |
| Precision | Design of precise activity for physical well-being |
| Engagement | Design of activities for relaxation and recreation |
| Variable | Groups | N | Mean rank | Sum of ranks | Mann-Whitney U | Wilcoxon W | Z value |
| Critical thinking tendency | Experimental group | 28 | 36.00 | 1008 | 182 | 588 | −4.31 *** |
| | Control group | 28 | 21.00 | 588 | | | |
| Metacognition tendency | Experimental group | 28 | 35.54 | 995 | 195 | 601 | −3.62 *** |
| | Control group | 28 | 21.46 | 601 | | | |
| Problem-solving tendency | Experimental group | 28 | 33.21 | 930 | 260 | 666 | −2.44 * |
| | Control group | 28 | 23.79 | 666 | | | |
| Question generating via prompt performance | Experimental group | 28 | 35.50 | 994 | 196 | 602 | −3.71 *** |
| | Control group | 28 | 21.50 | 602 | | | |
* p < 0.05; *** p < 0.001.
©2025. Elsevier Ltd