Aim
To evaluate the effectiveness of a gamified educational workshop on nurses’ evidence-based practice competence, self-efficacy, attitudes, satisfaction and practice report completion.
Background
Evidence-based practice is essential for high-quality healthcare; however, teaching and sustaining its use remains challenging. Gamification may enhance motivation, engagement and learning outcomes in nursing education.
Design
Randomized controlled trial. ClinicalTrials.gov Identifier: NCT06531187; retrospectively registered July 28, 2024; first participant enrolled April 9, 2020.
Methods
A total of 102 nurses from a medical center in Taiwan were randomly assigned to either a gamified workshop group (n = 54) or a control group (n = 48), which received case-based small-group discussions. Both groups attended a 40-minute lecture before the intervention. The gamified workshop, guided by the Octalysis framework, incorporated points, badges, leaderboards and time-limited team challenges. Outcomes were measured at baseline, immediately post-intervention and six months later. Generalized estimating equations were used to evaluate the effects of the intervention.
Results
Both groups improved in competence, self-efficacy and attitudes immediately post-intervention. At six months, the gamified group maintained greater improvements in overall competence (B = 1.37, p = 0.023), especially in the “acquire” (d = 0.50) and “appraise” (d = 0.45) domains. The gamified group also reported higher satisfaction (p < 0.001) and completed practice reports faster (p = 0.043), although report passing rates were similar between groups.
Conclusions
Gamified education enhanced and sustained nurses’ evidence-based competence. This approach may provide an effective strategy for promoting the timely and confident application of evidence in clinical settings.
1 Introduction
Evidence-based practice is a core nursing competence that integrates the best research evidence, clinical expertise and patient preferences to optimize patient outcomes (Clarke et al., 2021). While various educational strategies—including workshops, journal clubs, problem-based learning and flipped classrooms—have been widely adopted to strengthen evidence-based practice competence (Chandran et al., 2023; Howard et al., 2022; Hsieh and Chen, 2020), a persistent gap remains between evidence-based practice education and its application in clinical settings. Many nurses lack confidence in searching for and appraising evidence, which leads them to rely on traditional experience-based practices rather than integrating evidence-based practice into decision-making (Giesen et al., 2024). Addressing these challenges requires innovative teaching methods that enhance competence in evidence-based practice and build confidence for sustained implementation.
Barriers to the adoption of evidence-based practice are multifaceted. These include limited evidence-based practice competence, low self-efficacy, passive attitudes toward evidence-based practice (Koota et al., 2021; Lai et al., 2022; Landsverk et al., 2023), unfamiliarity with research terminology, reluctance to conduct literature reviews and anxiety about applying research findings (Hines et al., 2022). Organizational barriers, such as time constraints, heavy workloads and a lack of managerial support, further hinder the translation of evidence-based practice knowledge into practice (Camargo et al., 2018; Dagne and Beshah, 2021). Addressing these challenges requires learning strategies that can simultaneously strengthen competence, foster active engagement and enhance confidence in applying evidence-based practices in real-world contexts.
Several facilitators have shown promise in promoting the adoption of evidence-based practices. Institutional support, mentorship programs and structured post-training reinforcement are correlated with improved evidence-based practice implementation in nursing (Melnyk et al., 2021). Additionally, interactive learning methods, including simulation-based education and team-based learning, are effective in enhancing competence in evidence-based practice (Horntvedt et al., 2018). These findings suggest that successful evidence-based practice education depends not only on imparting knowledge but also on creating engaging, motivating and supportive learning environments that reduce anxiety, encourage critical thinking and promote the sustained application of evidence-based practice in clinical settings.
Gamification—the application of game-like elements in non-game settings—has gained recognition as a strategy for increasing engagement, motivation and knowledge retention (Deterding et al., 2011). By incorporating elements such as badges, points, leaderboards and time-limited challenges, gamification increases engagement, motivation and perseverance while reducing passivity and anxiety (Riar et al., 2022). Evidence in healthcare education shows that gamification enhances knowledge acquisition, learning motivation and problem-solving skills (Elzeky et al., 2022; García-López et al., 2023; Qiao et al., 2023) and directly addresses key barriers to evidence-based practice, such as low motivation, limited self-efficacy and reluctance to engage with complex evidence-based tasks. Although mixed results have been reported depending on training contexts, systematic reviews highlight overall positive effects on academic motivation and confidence, alongside a need for rigorous randomized controlled trials (Seo et al., 2021).
The Gamification Octalysis Framework, developed by Yu-kai Chou (2019), provides a structured approach to designing gamified interventions through eight core motivational drivers: epic meaning (instilling purpose in learning), accomplishment (rewarding progress), empowerment (fostering innovation), ownership (enhancing commitment), social influence (promoting teamwork), scarcity (creating time-sensitive challenges), unpredictability (sustaining engagement through unexpected rewards) and avoidance (preserving accumulated points). These elements foster ownership of learning, stimulate curiosity and strengthen resilience in the face of challenging tasks—all of which are crucial for overcoming barriers to the adoption of evidence-based practices. By linking game mechanics with evidence-based practice steps, such as question formulation, evidence searching, appraisal and application, gamification offers a pedagogically innovative approach that extends beyond knowledge delivery to enhance active participation, motivation and long-term retention.
Despite its potential, limited research has examined gamified evidence-based practice education for nurses. To address this gap, this randomized controlled trial evaluated whether a gamified educational approach enhanced nurses’ evidence-based practice competence compared with conventional training. The primary outcome was evidence-based practice competence at six months post-intervention, while the secondary outcomes included self-efficacy, attitudes, satisfaction and practice report completion. We hypothesized that nurses who participated in gamified learning would exhibit significantly greater improvements in evidence-based practice competence and self-efficacy than those who received conventional training.
2 Materials and methods
2.1 Design
This two-arm, parallel, randomized controlled trial adhered to the Consolidated Standards of Reporting Trials (CONSORT) guidelines (Moher et al., 2010). Participant enrollment began on 9 April 2020 and the final follow-up was completed in January 2023. The trial was retrospectively registered at ClinicalTrials.gov (registration ID: NCT06531187) on July 28, 2024. To ensure methodological rigor and transparency, we disclose this retrospective registration and emphasize that the study protocol—including the prespecified primary and secondary outcomes, as well as the statistical analyses—was finalized prior to participant enrollment and before any data were inspected. The trial was conducted under the oversight of the Institutional Review Board at the principal investigator's hospital. This study used a six-month follow-up design to assess the sustainability of the intervention effects.
2.2 Participants and settings
Participants were recruited from a medical center in northern Taiwan, where evidence-based practice training has been an integral part of professional nursing development for over a decade. The hospital’s nursing department conducts four-hour evidence-based practice workshops twice annually, requiring nurses to pass an evidence-based practice report assessment to advance from N1 to N2. Additionally, the department organizes annual evidence-based practice competitions and actively promotes interdisciplinary collaboration by encouraging nurses to participate in hospital- or nationwide evidence-based practice competitions each year.
Eligible participants were full-time registered nurses aged ≥ 20 years with at least three months of clinical experience at the study hospital. Nurses who had previously submitted evidence-based practice reports, as well as head nurses and nurse practitioners, were excluded.
Sample size was estimated using G*Power (version 3.1.9.4;
2.3 Recruitment
Following approval from the head of the nursing department, recruitment materials were distributed via institutional email and posted on bulletin boards in the nursing units. The principal investigator visited each unit to provide study information in person and address any questions from potential participants. All interested nurses received an information sheet outlining the study’s purpose, procedures and measures for maintaining confidentiality.
2.4 Randomization and blinding
After providing informed consent, participants were randomly allocated to either the experimental group or the control group. The random sequence was generated using the RAND function in Microsoft Excel, with a 1:1 allocation ratio; no block randomization or stratification was applied. Allocation concealment was maintained using consecutively numbered, opaque, sealed envelopes prepared by an independent administrator who was not involved in recruitment or intervention delivery. Participants were enrolled by the principal investigator and group assignments were revealed by the independent administrator after informed consent was obtained. Outcome assessors were blinded to group allocation to minimize bias. However, owing to the nature of the intervention, blinding was not possible for instructors and participants.
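The unrestricted 1:1 sequence described above (an independent uniform random draw per participant, as Excel's RAND provides) can be sketched in Python. The seed and group labels below are illustrative, not taken from the trial.

```python
import random


def allocation_sequence(n: int, seed: int = 42) -> list[str]:
    """Unrestricted 1:1 randomization: each slot receives an independent
    uniform draw (analogous to Excel's RAND), so without blocking or
    stratification the final group sizes can differ by chance, as in
    this trial (54 vs. 48)."""
    rng = random.Random(seed)  # fixed seed only so the sketch is reproducible
    return ["experimental" if rng.random() < 0.5 else "control" for _ in range(n)]


# One sequence for 102 participants, e.g. for numbering sealed envelopes
seq = allocation_sequence(102)
```

In practice the generated list would be transferred to the consecutively numbered, opaque, sealed envelopes by the independent administrator, so that recruiters never see upcoming assignments.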
2.5 Intervention
The intervention was delivered as a single-day, four-hour workshop; five workshops were held to enroll a sample large enough for adequate statistical power. Each workshop comprised two sessions: (1) a 40-min lecture covering the Patient/Intervention/Comparison/Outcome (PICO) framework, literature search, critical appraisal and evidence application; and (2) case study activities, with the experimental group participating in gamified learning and the control group engaging in conventional small-group discussions.
2.5.1 Experimental group: gamified workshop
The gamified workshop was designed using the Octalysis framework, incorporating game mechanics such as points, badges, leaderboards, unlockable rewards, random scenario assignments and mission-driven report submissions, all mapped to the eight motivational core drives (Chou, 2019). Participants were divided into teams of three to five and each team appointed a leader responsible for coordinating task allocation. A summary of the alignment between the five steps of evidence-based practice competence, gamification elements and Octalysis core drives is presented in Table 1.
1. Ask (20 min): Teams matched clinical scenarios with the corresponding PICO framework components using worksheets. Correct matches earned points and badges, with scaffolding and immediate instructor feedback reinforcing accuracy. Team-based competitions and time-limited challenges maintained engagement.
2. Acquire (50 min): Participants completed a timed quiz on literature searches, identified keywords and Medical Subject Headings (MeSH) terms and retrieved relevant articles. Leaders distributed tasks and points were awarded for accuracy and speed. Unlockable hint rewards and time-limited competitions sustained motivation.
3. Appraise (70 min): Teams ranked levels of evidence under time constraints and critically appraised two research articles using the Critical Appraisal Skills Programme checklist. Badges were awarded for accurate and efficient appraisals. The random assignment of scenarios introduced unpredictability.
4. Apply (25 min): Teams designed implementation strategies using prompt cards that represented evidence, resources and patient preferences. Leaders guided group decision-making and presented solutions. Authentic clinical problem-solving tasks were rewarded with points, public recognition and time-limited challenges.
5. Assess (15 min): Individual and group performances were recognized through leaderboards, prizes, badges and public acknowledgment. In addition, participants were required to submit an evidence-based practice report within one month of the workshop as a mission-driven assignment, with unexpected rewards offered for timely submissions. This reinforced ownership, accountability and long-term application in practice.
To ensure intervention fidelity, the sessions were facilitated by a master’s-level instructor with more than six years of experience in teaching evidence-based practice and formal training in gamified teaching strategies. The instructor followed a standardized facilitation guide and used worksheets, slides and predefined game mechanics to deliver each step consistently.
2.5.2 Control group: case-based small-group discussions
The control group engaged in conventional case-based small-group discussions aligned with the hospital’s standard evidence-based practice training. Structured discussions guided by instructors helped nurses analyze sample clinical cases and apply evidence-based practice principles step-by-step. The sessions were supported by slide presentations and instructor-led explanations. Participants were divided into teams of four, with each team being guided by one instructor. Three instructors, all master’s-level nurse educators with at least three years of experience teaching evidence-based practice, received standardized training and followed a discussion guide to ensure fidelity and consistency across sessions. Unlike in the experimental group, no gamification elements, incentives, or competitive activities were incorporated.
2.6 Outcome measures
Assessments were conducted at baseline (T0), immediately post-intervention (T1) and six months post-intervention (T2).
2.6.1 The demographic questionnaire
Demographic data collected included gender, age, education level, years of nursing practice and prior evidence-based practice training.
2.6.2 Primary outcome: evidence-based practice competence
Evidence-based practice competence was assessed using the Assessing Competency in Evidence-Based Medicine (ACE) tool, a 15-item, clinical scenario-based instrument that covers question formulation (two items), literature search (two items), critical appraisal (seven items) and evidence application (four items; Ilic et al., 2014). Each correct response was scored one point (total range: 0–15), with higher scores indicating greater evidence-based practice competence. The ACE tool has demonstrated acceptable reliability (Cronbach’s α = 0.69; Ilic et al., 2014). A validated Chinese version used in this study exhibited high internal reliability (Cronbach’s α = 0.90). Different clinical scenarios were used at each assessment time point to minimize recall bias.
2.6.3 Secondary outcome
2.6.3.1 Self-efficacy and attitudes toward evidence-based practice
The 26-item Taipei Evidence-Based Practice Questionnaire (TEBPQ) was used to assess nurses’ self-efficacy in performing evidence-based practice steps (asking, acquiring, appraising and applying) and their attitudes toward evidence-based practice (Chen et al., 2014). Responses were rated on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree), with higher scores indicating greater self-efficacy. The TEBPQ has demonstrated high internal consistency (Cronbach’s α = 0.87; Chen et al., 2014) and the Cronbach’s α value was 0.92 in this study.
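For readers unfamiliar with the internal-consistency statistic reported for these instruments, Cronbach's α can be computed directly from item-level responses. A minimal sketch with made-up data (not study data):

```python
def cronbach_alpha(items: list[list[float]]) -> float:
    """Cronbach's alpha for a score matrix given as one list of
    respondent scores per item: alpha = k/(k-1) * (1 - sum of item
    variances / variance of respondent totals), using population
    variances throughout."""
    k = len(items)            # number of items
    n = len(items[0])         # number of respondents

    def var(xs: list[float]) -> float:
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var_sum = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var_sum / var(totals))


# Illustrative 3-item, 4-respondent example on a 1-5 Likert scale
alpha = cronbach_alpha([[4, 5, 3, 4], [4, 4, 3, 5], [5, 4, 2, 4]])  # ≈ 0.8
```

Values such as the 0.92 reported for the TEBPQ here would come from applying the same formula to the full 26-item response matrix.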
2.6.3.2 Satisfaction with the intervention
Workshop satisfaction was measured using an adapted student learning satisfaction questionnaire that assessed instructor expertise, teaching clarity, course pacing, clinical relevance and overall satisfaction (Topala and Tomozii, 2014). Each response was rated on a five-point Likert scale (1 = very unsatisfied, 5 = very satisfied). The Cronbach’s α value for this study was 0.94.
2.6.3.3 Evidence-based practice report assessment
Participants submitted an evidence-based practice report within six months of the intervention, which was independently assessed by two blinded evaluators. Reports were scored using a 100-point checklist to evaluate the appropriateness of PICO question formation, the effectiveness of the literature search, the rigor of critical appraisal and clinical applicability. The passing score was set at 60 points. In this study, a high inter-rater reliability was achieved (intraclass correlation coefficient = 0.871).
2.7 Data analysis
Statistical analyses were performed using SPSS version 23 (IBM Corporation, Armonk, NY, USA), with statistical significance set at p < 0.05. Independent t-tests and chi-square tests were used to examine baseline differences in demographic characteristics and outcome variables. Within-group changes over time (T1 vs. T0, T2 vs. T0 and T2 vs. T1) were evaluated using repeated-measures analyses of variance and post-hoc tests. To account for within-group correlations over time, generalized estimating equations (GEE) were employed to evaluate group × time interactions for evidence-based practice competence and self-efficacy (Liang and Zeger, 1986). For longitudinal outcomes, model-based marginal means (estimated marginal means [EMM] ± standard errors [SE]) were reported and standardized effect sizes were expressed as Cohen’s d with 95 % confidence intervals (CIs), calculated using pooled standard deviations.
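The effect-size computation described above (Cohen's d from the pooled standard deviation, with an approximate 95 % CI) can be sketched as follows; the input values are illustrative, not study data.

```python
import math


def cohens_d(mean1: float, sd1: float, n1: int,
             mean2: float, sd2: float, n2: int) -> tuple[float, tuple[float, float]]:
    """Cohen's d using the pooled standard deviation, plus an
    approximate 95 % CI from the large-sample variance of d."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    d = (mean1 - mean2) / pooled_sd
    # Large-sample approximation to Var(d)
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    half_width = 1.96 * math.sqrt(var_d)
    return d, (d - half_width, d + half_width)


# Illustrative group summaries (n = 54 vs. n = 48, as in this trial)
d, ci = cohens_d(12.0, 2.0, 54, 11.0, 2.0, 48)  # d = 0.5
```

The reported domain effects (e.g. d = 0.50 for "acquire") follow this same pooled-SD convention applied to the trial's marginal means.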
2.8 Ethical considerations
This study was approved by the Institutional Review Board of the principal investigator’s medical center (No. TSGHIRB: B202005016; March 6, 2020). Written informed consent was obtained from all participants prior to randomization. Participants were assured that their responses would remain confidential and anonymous, that participation was entirely voluntary and that they could withdraw at any time without consequence. Their decision to participate or withdraw would not affect their employment or working conditions. This study complied with the Declaration of Helsinki and adhered to ICMJE guidelines for research involving human participants.
3 Results
3.1 Participant characteristics and baseline analysis
Of the 112 nurses screened for eligibility, 10 declined to participate. The remaining 102 nurses were randomly assigned to either the experimental group (n = 54) or the control group (n = 48). All participants completed the intervention and follow-up assessments (Fig. 1).
The sample primarily comprised female nurses (93.1 %) with a mean age of 25.08 years (SD = 3.67). Most participants held a bachelor’s degree (80.4 %) and their clinical experience ranged from 0.75 to 6.42 years, with a mean of 1.81 years (SD = 1.16). Baseline homogeneity testing showed no significant differences between groups in gender, age, education level, years of nursing practice, or prior evidence-based practice training (all p > 0.05). Moreover, no significant between-group differences were observed in baseline outcome variables, including overall evidence-based practice competence, subdomain scores (ask, acquire, appraise, apply), self-efficacy, or attitudes (all p > 0.05).
3.2 Intervention outcomes: within-group changes
The control group followed a pattern similar to that of the experimental group: competence improved significantly at T1, particularly in the “acquire” (p = 0.008) and “appraise” (p = 0.001) domains; self-efficacy increased significantly across all four domains (p < 0.001); and attitudes toward evidence-based practice improved significantly (p < 0.001). As in the experimental group, these gains showed no further enhancement at T2.
3.3 Intervention outcomes: between-group comparison
3.3.1 Evidence-based practice competence
The GEE model (
Between-group comparisons based on model-based marginal means (
3.3.2 Self-efficacy and attitudes toward evidence-based practice
The GEE analysis (Table 4) indicated significant improvements in both self-efficacy and attitudes toward evidence-based practice over time in both groups (p < 0.001). However, no significant group × time interactions were detected, suggesting that both groups followed similar patterns of change.
3.3.3 Satisfaction with the intervention
At T1, the experimental group reported significantly higher satisfaction levels compared with the control group (111.41 ± 10.85 vs. 101.79 ± 14.02, p < 0.001), suggesting that gamification enhanced engagement and enjoyment in evidence-based practice learning (Table 2).
3.3.4 Evidence-based practice report completion
At T2, no significant between-group differences were observed in evidence-based practice report passing rates (experimental group: 55.6 % vs. control group: 54.2 %, p = 0.888; Table 2). However, nurses in the experimental group completed their reports significantly faster than those in the control group (3.37 ± 1.89 months vs. 4.40 ± 1.83 months, p = 0.043), suggesting that gamification facilitated the timely execution of evidence-based practice tasks.
4 Discussion
This study investigated the effectiveness of a gamified educational workshop, designed using the Octalysis framework to enhance nurses’ evidence-based practice competencies, self-efficacy and attitudes. Our findings demonstrated that the experimental group maintained significantly higher evidence-based practice competence at the six-month follow-up compared with the control group, particularly in the “acquire” and “appraise” domains. The experimental group also reported higher satisfaction with the intervention and completed their evidence-based practice reports in a significantly shorter time frame, although the passing rates were similar between the groups. These results suggest that gamified education not only promotes immediate competence gains but also supports sustained improvements in evidence acquisition and appraisal over time.
4.1 Evidence-based practice competence
The sustained improvement in evidence-based practice competence in the experimental group aligns with previous research, suggesting that gamification can enhance cognitive learning outcomes and retention (Sailer and Homner, 2020). In our study, the significant differences in the “acquire” and “appraise” domains are particularly noteworthy, as these represent critical aspects of evidence-based practice. The “acquire” domain improvements indicate that gamification effectively enhanced nurses’ abilities to formulate clinical questions and search for relevant evidence. This finding aligns with Garrison et al. (2021), who discovered that gamified learning increased knowledge acquisition and engagement among nurses. The enhanced performance in the “appraise” domain suggests that gamification supports critical evaluation skills, enabling nurses to better assess the validity and applicability of research evidence (Han et al., 2021). Importantly, improved acquisition and appraisal competence directly strengthens clinical decision-making, enabling nurses to reduce reliance on tradition, adapt care plans to patient needs and improve care quality and safety (Melnyk et al., 2021).
The greater gains observed in the experimental group may be explained by the mechanisms through which gamified elements influence learning. Features such as points, badges and leaderboards provide immediate feedback and recognition, which reinforce correct behaviors and sustain learner engagement (Sailer and Homner, 2020). Randomized scenario assignments and time-limited challenges foster active participation and cognitive stimulation, both of which are essential for deeper information processing and long-term retention (Chen T. S. et al., 2023). Mission-driven tasks, such as report submissions linked to rewards, encouraged learners to extend their learning beyond the workshop and apply knowledge in practice (Kim et al., 2024). Together, these gamification mechanics likely enhanced intrinsic motivation and persistence, thereby contributing to greater improvements in evidence acquisition and appraisal competencies compared with traditional case-based discussions.
4.2 Evidence-based practice self-efficacy
In our study, both groups showed improvements in self-efficacy immediately after the intervention; however, slight regression was noted at six months. It is possible that the requirement for all participants to complete an evidence-based practice report within six months helped maintain their confidence to some degree by reinforcing the practical application of evidence-based practice skills. This finding contrasts with a longitudinal study in South Korea, where the continuous integration of evidence-based practice content across the curriculum produced steadily increasing self-efficacy among nursing students (Song, 2024). Consistent with Melnyk et al. (2021), who emphasized the importance of mentorship and a culture of evidence-based practice in fostering the implementation of evidence-based practices, our results suggest that while short-term interventions such as gamified workshops or case-based group discussions can enhance confidence, reinforcement through structured follow-up activities—such as report writing, ongoing mentorship, or repeated exposure—is essential to maintain and strengthen evidence-based practice self-efficacy over time.
4.3 Attitudes toward evidence-based practice
Both groups demonstrated improved attitudes toward evidence-based practice immediately after the intervention; however, these gains declined by six months, returning close to baseline levels. This pattern suggests that initial enthusiasm may diminish without reinforcement, a finding that is consistent with prior research among emergency nurses in Finland, where short-term evidence-based training effects on attitudes faded over time (Koota et al., 2021). The decline in our study may also reflect the practical challenges of completing evidence-based practice reports, which could dampen motivation. As attitudes are a key determinant in translating evidence-based practice knowledge and skills into practice, ongoing institutional support is essential for preserving positive perceptions and encouraging the long-term adoption of evidence-based practice in clinical settings.
4.4 Satisfaction with evidence-based practice education
The significantly higher satisfaction levels reported by the experimental group provide valuable insights into the subjective experience of gamified learning compared with the control group. This finding aligns with the growing body of evidence suggesting that gamification elements, such as real-time feedback, rewards and social interaction, can increase learner engagement and enjoyment in educational settings (Chen C.-M. et al., 2023; Malicki et al., 2020). Enhanced satisfaction is critical in continuing education, where motivation for participation can have a significant impact on learning outcomes. The positive reception of the gamified intervention suggests that incorporating game elements into evidence-based practice education addresses knowledge acquisition and engagement challenges in nursing education (Garrison et al., 2021). As noted by Seo et al. (2024), gamification can improve learner motivation and confidence. Our findings indicate that gamification may help overcome these barriers and promote sustained gains in evidence-based practice competence by creating a more engaging and enjoyable learning experience.
4.5 Evidence-based practice report completion
An unexpected but promising finding was the significantly shorter completion time of the evidence-based practice reports in the experimental group, suggesting that the gamified intervention facilitated knowledge acquisition and the practical application of evidence-based practice skills. Although time constraints are frequently cited as barriers to the implementation of evidence-based practice (Crawford et al., 2023), the similar passing rates between groups indicate that, while gamification promoted timeliness, additional supports may be necessary to achieve higher report quality. Report outcomes are influenced by factors such as academic writing ability, access to resources and supervisory support, which were not directly addressed by the gamified elements. Rohan and Fullerton (2020) emphasized that developing advanced practice nurses’ writing competencies is a corequisite for the successful integration of evidence-based practice. Similarly, Karlsholm et al. (2024) found that writing a bachelor’s thesis significantly enhanced undergraduate nursing students’ evidence-based practice competence. Taken together, these findings suggest that while gamification can improve efficiency, optimal outcomes may require combining strategies with targeted support in academic writing to maximize both the timeliness and quality of evidence-based outputs.
4.6 Strengths and limitations
A key strength of this study lies in its methodological innovation. By combining a randomized controlled trial design with the Octalysis gamification framework, we evaluated the intervention using both subjective outcomes (competence, self-efficacy, attitudes and satisfaction) and objective indicators (report pass rate and completion time). The six-month follow-up further enhanced the rigor of the study by assessing the sustainability of the intervention’s effects rather than limiting the evaluation to immediate post-training outcomes.
Some limitations should be considered when interpreting the findings. First, participants were recruited from a single hospital, and the sample was composed primarily of early career nurses in a hospital that promotes a strong evidence-based practice culture. Moreover, most were female (93 %). These factors may limit the generalizability to more diverse nursing populations and practice environments. Second, although outcome assessors were blinded, instructors and participants could not be blinded to group allocation, which may introduce performance bias. The use of incentives and prizes to reinforce gamification may have also influenced motivation and performance independently of the intervention content. Third, the trial was retrospectively registered, which raises the potential risk of selective outcome reporting. To address this concern, we have clearly disclosed it in the manuscript and emphasized that the study protocol, including the prespecified outcomes and statistical analyses, was finalized before enrollment and data collection. Fourth, while the intervention was explicitly guided by the Octalysis framework, we did not directly measure learner motivation, which is a key component of the framework. Fifth, the reliance on self-reported measures may limit the ability to fully capture objective changes in competence or actual application in clinical practice. In particular, social desirability bias could have led participants to overreport positive attitudes, self-efficacy, or satisfaction with the intervention, thereby inflating perceived effects relative to actual behavior.
4.7 Implications
Our findings indicate that integrating gamification elements into evidence-based practice education programs can enhance engagement and knowledge retention. Gamification may facilitate the translation of evidence-based practice knowledge into practice by improving the efficiency of applying evidence-based practice principles. However, the lack of differences in report pass rates suggests that efficiency gains alone may not ensure higher-quality outputs, which are influenced by additional factors such as academic writing skills, resource availability and supervisory support. Future educational strategies may, therefore, benefit from combining gamification with targeted training in writing to maximize both efficiency and quality. Beyond educational outcomes, sustained improvements in evidence acquisition and appraisal competence have direct implications for patient care. By equipping nurses with the ability to efficiently identify, critically evaluate and apply high-quality evidence, gamified training can strengthen clinical decision-making, reduce reliance on tradition-based practices and improve the quality and safety of care. Future research should also link gamified evidence-based education to patient outcomes—such as quality indicators, safety events, or patient-reported outcomes—to further establish its clinical relevance and impact. To address the limitations of self-reported data, future studies should incorporate performance-based assessments, objective clinical indicators, or observational data to provide a more comprehensive evaluation of intervention effects.
5 Conclusions
This study provides evidence that a gamified, evidence-based practice training program can improve and sustain evidence-based practice competence among clinical nurses, particularly in evidence acquisition and appraisal skills. The higher satisfaction levels and faster evidence-based practice report completion times in the experimental group suggest that gamification may enhance both the subjective learning experience and the practical application of evidence-based practice skills. Importantly, by fostering stronger competencies in acquiring and appraising evidence, gamified education has the potential to directly enhance clinical decision-making. These findings support the integration of gamification into continuing nursing education programs to promote evidence-based practice and ultimately improve patient care.
Funding statement
This study was supported by the
CRediT authorship contribution statement
Tzeng Wen-chii: Writing – review & editing, Writing – original draft, Visualization, Validation, Supervision, Methodology, Formal analysis, Data curation. Chien Ling-Yu: Writing – original draft, Visualization, Validation, Software, Project administration, Methodology, Investigation, Funding acquisition, Formal analysis, Data curation, Conceptualization. Chiang Li-Chi: Validation, Supervision, Investigation, Funding acquisition, Formal analysis, Conceptualization. Hsiao Peng-Ching: Validation, Resources, Conceptualization.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Acknowledgments
We would like to thank all participants for their involvement in this study and the Department of Nursing at the Tri-Service General Hospital for their assistance.
Table 1
| Step (5A) | Intervention activities | Game elements operationalized |
| Ask | Match clinical scenarios with PICO worksheets (team leader coordinated) | Leader role (CD1, CD5); Points (CD2); Scaffolding (CD2); Instructor feedback (CD3); Team-based competition (CD5); Task allocation by leader (CD5); Time-limited challenge (CD6) |
| Acquire | (a) Timed quiz on string searches; (b) identification of keywords and MeSH terms; (c) retrieval of relevant articles (team leader allocated tasks) | Leader role (CD1, CD5); Points (CD2); Unlockable hint reward (CD6); Instructor feedback (CD3); Team-based competition (CD5); Task allocation by leader (CD5); Time-limited challenge (CD6) |
| Appraise | (a) Rank evidence levels under time limit; (b) critically appraise two articles (RCT, SR) with CASP worksheets (team leader allocated tasks) | Leader role (CD1, CD5); Points (CD2); Badges (CD2); Scaffolding (CD2); Instructor feedback (CD3); Task allocation by leader (CD5); Time-limited challenge (CD6); Random scenario assignment (CD7) |
| Apply | Group strategy design using prompt cards (evidence, resources, values); team leader guided decision-making and presentation | Leader role (CD1, CD5); Authentic clinical problem-solving task (CD1); Points (CD2); Instructor feedback (CD3); Public reporting (CD5); Task allocation by leader (CD5); Time-limited challenge (CD6) |
| Assess | Recognition of group and individual performance; submission of EBP reports within 1 month (proxy for clinical application) | Authentic clinical task: report submission as mission-driven assignment (CD1, CD4); Prizes (CD2); Badges (CD2); Team-based competition (CD5); Public recognition (CD5); Unexpected reward for timely submission (CD7, CD8) |
Table 2
| Variable | Experimental (n = 54) | Control (n = 48) | p |
| Demographic factors | |||
| Gender a | .117 | ||
| Male | 6 (11.1) | 1 (2.1) | |
| Female | 48 (88.9) | 47 (97.9) | |
| Age b | 24.48 ± 2.44 | 25.75 ± 4.62 | .093 |
| Education level a | .220 | ||
| Diploma | 8 (14.8) | 12 (25.0) | |
| Bachelor | 46 (85.2) | 36 (75.0) | |
| Years of nursing practice b | 1.71 ± 1.15 | 1.92 ± 1.17 | .373 |
| Prior EBP Training Experience a | 29 (53.7) | 32 (66.7) | .183 |
| Outcome variables | |||
| Competency b | 6.63 (2.81) | 7.27 (2.63) | .239 |
| Ask b | 1.02 (0.57) | 0.96 (0.50) | .574 |
| Acquire b | 1.24 (0.85) | 1.48 (0.71) | .130 |
| Appraisal b | 2.43 (1.72) | 2.85 (1.44) | .175 |
| Apply b | 1.94 (1.04) | 1.98 (0.93) | .860 |
| Self-efficacy b | 3.02 (0.55) | 2.92 (0.51) | .350 |
| Ask b | 3.37 (0.61) | 3.22 (0.54) | .173 |
| Acquire b | 2.88 (0.61) | 2.88 (0.61) | .963 |
| Appraisal b | 2.62 (0.91) | 2.57 (0.72) | .773 |
| Apply b | 3.19 (0.59) | 3.00 (0.54) | .095 |
| Attitude b | 3.60 (0.69) | 3.47 (0.64) | .351 |
Table 3
| | Experimental group (n = 54) | | | | Control group (n = 48) | | | |
| | T0 | T1 | T2 | Within p | T0 | T1 | T2 | Within p |
| Competency | 6.63 (2.81) | 8.70 (1.56) | 8.19 (1.57) | < .001 a | 7.27 (2.63) | 8.52 (1.22) | 7.46 (2.63) | .001 b | |
| Ask | 1.02 (0.57) | 1.15 (0.53) | 1.04 (0.43) | .344 | 0.96 (0.50) | 1.08 (0.35) | 0.94 (0.43) | .060 | |
| Acquire | 1.24 (0.85) | 1.89 (0.42) | 1.81 (0.44) | < .001 c | 1.48 (0.71) | 1.81 (0.45) | 1.58 (0.74) | .008 d | |
| Appraisal | 2.43 (1.72) | 3.63 (1.10) | 3.43 (1.18) | < .001 c | 2.85 (1.44) | 3.58 (0.96) | 3.02 (1.55) | .001 b | |
| Apply | 1.94 (1.04) | 2.04 (0.67) | 1.91 (0.78) | .595 | 1.98 (0.93) | 2.04 (0.58) | 1.92 (0.77) | .632 | |
| Self-efficacy | 3.02 (0.55) | 3.92 (0.48) | 3.37 (0.55) | < .001 a | 2.92 (0.51) | 3.88 (0.53) | 3.32 (0.49) | < .001 a | |
| Ask | 3.37 (0.61) | 4.12 (0.55) | 3.62 (0.59) | < .001 a | 3.22 (0.54) | 3.98 (0.52) | 3.56 (0.61) | < .001 a | |
| Acquire | 2.88 (0.61) | 3.86 (0.60) | 3.37 (0.69) | < .001 a | 2.88 (0.61) | 3.83 (0.60) | 3.25 (0.56) | < .001 a | |
| Appraisal | 2.62 (0.91) | 3.83 (0.59) | 3.21 (0.71) | < .001 a | 2.57 (0.72) | 3.83 (0.63) | 3.16 (0.70) | < .001 a | |
| Apply | 3.19 (0.59) | 3.88 (0.51) | 3.29 (0.64) | < .001 b | 3.00 (0.54) | 3.87 (0.58) | 3.33 (0.45) | < .001 a | |
| Attitude | 3.60 (0.69) | 4.06 (0.62) | 3.46 (0.71) | < .001 b | 3.47 (0.64) | 4.06 (0.60) | 3.58 (0.67) | < .001 b | |
| Satisfaction | - | 111.41 (10.85) | - | - | - | 101.79 (14.02) | - | - | |
| Application | |||||||||
| Passing Rate, n (%) e | - | - | 30 (55.6) | - | - | - | 26 (54.2) | - |
| Passing Time (Months) f | - | - | 3.37 (1.89) | - | - | - | 4.40 (1.83) | - |
Table 4
| Outcome | Between group: EG vs. CG, B [95 % CI], χ² (p) a | Time: T1 vs. T0, B [95 % CI], χ² (p) b | Time: T2 vs. T0, B [95 % CI], χ² (p) c | Group × Time (T1–T0), B [95 % CI], χ² (p) d | Group × Time (T2–T0), B [95 % CI], χ² (p) e |
| Competence | −0.64 [−1.69, 0.40], 1.44 (.230) | 1.25 [0.55, 1.95], 12.12 (<.001***) | 0.19 [−0.70, 1.08], 0.17 (.679) | 0.82 [−0.20, 1.85], 2.48 (.116) | 1.37 [0.19, 2.55], 5.19 (.023*) |
| Ask | 0.06 [−0.15, 0.27], 0.33 (.566) | 0.13 [−0.03, 0.28], 2.36 (.124) | −0.02 [−0.19, 0.15], 0.06 (.808) | 0.00 [−0.23, 0.24], 0.00 (.970) | 0.04 [−0.19, 0.27], 0.11 (.738) |
| Acquire | −0.24 [−0.54, 0.06], 2.43 (.119) | 0.33 [0.11, 0.55], 8.93 (.003**) | 0.10 [−0.17, 0.38], 0.54 (.463) | 0.31 [−0.02, 0.65], 3.42 (.065) | 0.47 [0.10, 0.84], 6.31 (.012*) |
| Appraisal | −0.43 [−1.04, 0.18], 1.90 (.168) | 0.73 [0.32, 1.14], 12.07 (.001***) | 0.17 [−0.35, 0.68], 0.40 (.525) | 0.47 [−0.17, 1.12], 2.10 (.148) | 0.83 [0.13, 1.54], 5.32 (.021*) |
| Apply | −0.03 [−0.41, 0.34], 0.03 (.857) | 0.06 [−0.20, 0.32], 0.22 (.639) | −0.06 [−0.32, 0.19], 0.23 (.630) | 0.03 [−0.36, 0.42], 0.02 (.879) | 0.03 [−0.41, 0.46], 0.01 (.908) |
| Self-efficacy | 0.10 [−0.11, 0.30], 0.91 (.341) | 0.96 [0.83, 1.10], 202.16 (<.001***) | 0.41 [0.26, 0.55], 29.35 (<.001***) | −0.06 [−0.24, 0.13], 0.36 (.549) | −0.05 [−0.27, 0.17], 0.19 (.662) |
| Ask | 0.16 [−0.06, 0.38], 1.96 (.162) | 0.77 [0.61, 0.92], 96.55 (<.001***) | 0.34 [0.19, 0.50], 18.28 (<.001***) | −0.02 [−0.24, 0.20], 0.04 (.845) | −0.10 [−0.32, 0.12], 0.75 (.388) |
| Acquire | 0.01 [−0.23, 0.24], 0.00 (.963) | 0.96 [0.79, 1.12], 127.16 (<.001***) | 0.37 [0.18, 0.55], 15.23 (<.001***) | 0.02 [−0.20, 0.25], 0.04 (.837) | 0.12 [−0.15, 0.39], 0.77 (.381) |
| Appraisal | 0.05 [−0.27, 0.36], 0.09 (.768) | 1.26 [1.06, 1.46], 155.79 (<.001***) | 0.58 [0.35, 0.82], 23.82 (<.001***) | −0.05 [−0.33, 0.23], 0.13 (.717) | 0.01 [−0.34, 0.36], 0.00 (.959) |
| Apply | 0.19 [−0.03, 0.40], 2.93 (.087) | 0.87 [0.73, 1.01], 143.68 (<.001***) | 0.33 [0.17, 0.49], 17.00 (<.001***) | −0.17 [−0.37, 0.03], 2.93 (.087) | −0.23 [−0.47, 0.02], 3.32 (.069) |
| Attitude | 0.12 [−0.13, 0.38], 0.90 (.342) | 0.58 [0.42, 0.75], 48.87 (<.001***) | 0.11 [−0.07, 0.29], 1.37 (.242) | −0.13 [−0.38, 0.13], 0.93 (.336) | −0.24 [−0.50, 0.02], 3.36 (.067) |
Table 5
| Variable | Experimental (n = 54), EMM (SE) | Control (n = 48), EMM (SE) | Δ(T2–T0) between groups | Cohen's d (95 % CI) | p |
| Competence | T0 6.63 (0.38); T1 8.70 (0.21); T2 8.19 (0.21) | T0 7.27 (0.38); T1 8.52 (0.18); T2 7.46 (0.38) | 1.37 | 0.45 (0.05, 0.85) | .026* |
| Ask | T0 1.02 (0.08); T1 1.15 (0.07); T2 1.04 (0.06) | T0 0.96 (0.07); T1 1.08 (0.05); T2 0.94 (0.06) | 0.04 | 0.07 (–0.33, 0.46) | .741 |
| Acquire | T0 1.24 (0.12); T1 1.89 (0.06); T2 1.82 (0.06) | T0 1.48 (0.10); T1 1.81 (0.06); T2 1.58 (0.11) | 0.47 | 0.50 (0.10, 0.90) | .014* |
| Appraisal | T0 2.43 (0.23); T1 3.63 (0.15); T2 3.43 (0.16) | T0 2.85 (0.21); T1 3.58 (0.14); T2 3.02 (0.22) | 0.83 | 0.45 (0.05, 0.85) | .025* |
| Apply | T0 1.94 (0.14); T1 2.04 (0.09); T2 1.91 (0.11) | T0 1.98 (0.14); T1 2.04 (0.08); T2 1.92 (0.11) | 0.02 | 0.02 (–0.37, 0.42) | .912 |
| Self-efficacy | T0 3.02 (0.08); T1 3.92 (0.07); T2 3.37 (0.08) | T0 2.92 (0.07); T1 3.88 (0.08); T2 3.32 (0.07) | –0.05 | –0.09 (–0.48, 0.31) | .669 |
| Ask | T0 3.37 (0.08); T1 4.12 (0.08); T2 3.62 (0.08) | T0 3.22 (0.08); T1 3.98 (0.08); T2 3.56 (0.09) | –0.10 | –0.17 (–0.56, 0.23) | .396 |
| Acquire | T0 2.88 (0.08); T1 3.86 (0.08); T2 3.37 (0.09) | T0 2.88 (0.09); T1 3.83 (0.09); T2 3.25 (0.08) | 0.12 | 0.17 (–0.22, 0.57) | .391 |
| Appraisal | T0 2.62 (0.12); T1 3.83 (0.08); T2 3.21 (0.10) | T0 2.57 (0.10); T1 3.83 (0.09); T2 3.16 (0.10) | 0.01 | 0.01 (–0.38, 0.40) | .960 |
| Apply | T0 3.19 (0.08); T1 3.88 (0.07); T2 3.29 (0.09) | T0 3.00 (0.08); T1 3.87 (0.08); T2 3.33 (0.07) | –0.23 | –0.35 (–0.75, 0.04) | .078 |
| Attitude | T0 3.60 (0.09); T1 4.06 (0.09); T2 3.46 (0.10) | T0 3.47 (0.09); T1 4.06 (0.09); T2 3.58 (0.10) | –0.24 | –0.36 (–0.76, 0.04) | .074 |
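The Δ(T2–T0) column is a difference-in-differences of the group means, and the standardized effect divides it by a pooled SD. The arithmetic for the competence row can be checked as follows; the choice of standardizer is an illustrative assumption (baseline SDs pooled across groups), since the manuscript does not state which SD was used for d.

```python
# Worked check of the change-score contrast behind Table 5 (competence row).
# Means/SDs are the published values from Tables 2 and 3; the pooled-SD
# choice is an assumption for illustration, not the authors' stated method.
import math

eg_t0, eg_t2 = 6.63, 8.19   # experimental group competence means
cg_t0, cg_t2 = 7.27, 7.46   # control group competence means

# Difference-in-differences: (EG change) - (CG change)
diff = (eg_t2 - eg_t0) - (cg_t2 - cg_t0)
print(round(diff, 2))  # prints 1.37, matching Tables 4 and 5

# One common standardizer: baseline SDs pooled across groups
n_eg, n_cg = 54, 48
sd_eg, sd_cg = 2.81, 2.63
pooled_sd = math.sqrt(((n_eg - 1) * sd_eg**2 + (n_cg - 1) * sd_cg**2)
                      / (n_eg + n_cg - 2))
d = diff / pooled_sd
print(round(d, 2))  # close to, but not identical to, the published d = 0.45
```

The small gap between this back-of-envelope d and the published 0.45 suggests the authors standardized by a different quantity (e.g., the SD of change scores).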
© 2025 Elsevier Ltd