Abstract
Introduction
Continuing Professional Development (CPD) is essential for healthcare professionals to maintain and enhance their competencies, particularly in specialized fields such as gynecological oncology. In low- and lower-middle-income countries (LLMICs), access to CPD is often limited, and virtual platforms have emerged as a viable solution to overcome these barriers. This study uses the Kirkpatrick Model to evaluate a national virtual CPD program in gynecological oncology, focusing on participant satisfaction and knowledge acquisition.
Methods
A quasi-experimental study was conducted with 104 gynecological oncology specialists and fellows in Iran. The virtual CPD program was delivered via a Moodle-based Learning Management System (LMS) and included multimedia content, interactive modules, discussion forums, and live webinars. Data were collected through pre- and post-course assessments to measure knowledge gains and a post-course satisfaction survey to evaluate participant reactions. Descriptive statistics and paired t-tests were used to analyze the data, with statistical significance set at p < 0.05.
Results
The Level 1 evaluation revealed high participant satisfaction, with 64.42% reporting that the course content aligned with its objectives, 75% expressing satisfaction with the program’s delivery, and 82.69% finding the course facilities highly satisfactory. The Level 2 evaluation demonstrated a statistically significant improvement in knowledge scores from pretest (mean = 37.21, SD = 12.71) to posttest (mean = 42.07, SD = 6.09), with a mean score increase of 4.86 (p < 0.001). The effect size (Cohen’s d = −0.416, computed on pretest-minus-posttest differences) indicated a small to moderate practical effect.
Conclusion
The virtual CPD program effectively engaged participants and enhanced their knowledge, highlighting the potential of online learning platforms to deliver high-quality CPD in resource-constrained settings. Future research should explore the long-term impact of virtual CPD on clinical practice and patient outcomes.
Introduction
Continuing Professional Development (CPD) is increasingly expected and required of healthcare professionals to maintain their credentials and right to practice [1]. CPD refers to the ongoing education and competency development that healthcare professionals undertake beyond their initial training; the purpose is to update and advance their knowledge, skills, and professional proficiency [2, 3]. CPD enables the healthcare workforce to evolve and better respond to patients’ needs and the ever-changing practice environment [4]. Ultimately, this may lead to better care and health outcomes [5]. CPD is essential to modern healthcare, enabling medical professionals to maintain and enhance their competence in an ever-evolving field [6]. In gynecological oncology, where advancements in diagnostics, treatments, and patient care strategies occur rapidly, CPD is particularly vital: gynecological oncologists must navigate complex clinical scenarios, integrate cutting-edge therapies, and provide empathetic, patient-centered care.
In many low- and lower-middle-income countries (LLMICs), CPD is not mandated, and uptake and participation by healthcare professionals are limited due to access barriers [7, 8]. The shift toward virtual learning platforms has transformed the delivery of CPD, offering flexibility, accessibility, and innovative ways to engage learners [9]. Several studies have reported CPD in virtual care using online delivery formats [10, 11]. Both synchronous and asynchronous modalities were used in providing CPD on virtual care or telehealth; however, the most common delivery format was web conferencing [12].
The current evidence on the most effective e-learning modalities is limited because reported program designs vary in the types of modalities used to deliver virtual CPD. The virtual course evaluated in this study was delivered through a Learning Management System (LMS) based on Moodle, a platform that facilitates structured, interactive, and scalable education. While virtual CPD has gained significant traction, particularly in the wake of global advancements in digital technology, its effectiveness still needs to be evaluated rigorously. In their review, Merry et al. (2023) concluded that leadership and buy-in from key stakeholders, including government bodies and healthcare professional organizations, together with a guiding framework, are essential for developing, implementing, and sustaining a CPD system in an LLMIC [2]. Guillaume et al. (2022) assessed evidence on digital platforms used for CPD in LLMICs [13]. A structured framework is deemed essential to guide the CPD system’s development and ensure its alignment with the needs and resources of the LLMIC context. Additionally, the platform should be tailored to healthcare professionals’ specific learning needs and cultural context, ensuring relevance and engagement for practical skill development and knowledge retention. One of the most critical parts of implementing training programs is accurately evaluating their impact; such evaluation requires a suitable method [14].
Evaluating training programs within CPD systems is of great importance, as these evaluations help identify the strengths and weaknesses of educational initiatives. Regular assessments can ensure that training content aligns with the current needs of healthcare professionals and evolving trends in the healthcare sector. Additionally, evaluations contribute to improving the quality of training and enhancing its impact on professional performance and patient outcomes. In low- and lower-middle-income countries (LLMICs) and low- and middle-income countries (LMICs), these assessments can also help optimize the use of limited resources and ensure a return on investment in educational programs. Calleja et al. [15] suggest that virtual care training programs lack standardized knowledge evaluation. Therefore, the field would benefit from a more consistent application of systematic evaluation frameworks, such as Kirkpatrick’s evaluation model [16]. Broader frameworks exist for evaluating CPD programs, including the Moore-Green-Gallis model for Continuing Medical Education (CME) [17], which emphasizes outcomes across seven levels from participation to community health impact. Nevertheless, the Kirkpatrick model was selected for this study because of its practicality in assessing immediate learning outcomes in resource-constrained settings. Its focus on reaction and learning (Levels 1–2) aligns with our goal of evaluating knowledge acquisition and engagement in a virtual CPD program; assessing longitudinal behavioral impact (Levels 3–4) requires sustained infrastructure for clinical observation and patient outcome tracking, which was beyond the scope of this initiative in an LLMIC context with limited resources for longitudinal follow-up.
Although all evaluation models have some deficiencies, Kirkpatrick’s model has shown suitable and acceptable performance for evaluating educational programs [18]. This study employs the Kirkpatrick Model, a robust framework for evaluating training programs, to assess the effectiveness [19] of a national virtual CPD course in gynecological oncology. The Kirkpatrick Model evaluates outcomes across four levels: reaction, learning, behavior, and results [20]. Each program evaluation model has strengths and weaknesses in measuring training activities, but research supports Kirkpatrick’s model as a widely applicable framework for evaluating training outcomes [20, 21], though complementary models such as Moore et al.’s CME framework also offer valuable perspectives.
This study employed a two-level evaluation framework based on the Kirkpatrick Model to assess the effectiveness of a national virtual CPD course in gynecological oncology. It highlights the potential of a virtual CPD program in gynecological oncology while emphasizing the importance of evaluation to ensure that digital learning translates into tangible improvements in practice. Through this analysis, we seek to contribute to the growing body of evidence on effective virtual CPD strategies and to provide insights that can guide the design and implementation of future online educational initiatives in healthcare.
Methods
This quasi-experimental study was conducted in the School of Medical Education and Learning Technologies at Shahid Beheshti University of Medical Sciences, Iran. The study received ethical approval from Shahid Beheshti University (IR.SBMU.SME.REC.1403.088) before recruitment. Participants provided written informed consent after reviewing privacy protections, including anonymized data storage and the right to withdraw. This study employed a structured, two-level evaluation framework based on the Kirkpatrick Model to assess the effectiveness of a national virtual CPD course in gynecological oncology. The evaluation focused on Levels 1 and 2 of the Kirkpatrick Model, which measure participant reactions (satisfaction with the virtual platform and content) and learning outcomes (knowledge acquisition). The course was delivered through a Learning Management System (LMS) based on Moodle, and the methodology was designed to ensure rigorous, reproducible, and academically sound evaluation. This was a certificate-awarding program, and participants who achieved at least 80% of the total score were granted a completion certificate.
Study design, context, and participant recruitment and demographics
The study was designed as a program evaluation of a national virtual CPD course tailored for gynecological oncologists and allied healthcare professionals, including 104 specialists in obstetrics and gynecology surgery and subspecialty fellows. The program’s educational objectives were to: (1) enhance evidence-based management knowledge for cervical, endometrial, ovarian, vulvar, and breast cancers; (2) improve diagnostic accuracy for CIN and GTN; and (3) develop proficiency in interpreting colposcopy results and virtual case scenarios. Participants were recruited through professional societies, institutional networks, and direct email invitations. The inclusion criteria required participants to practice actively in gynecological oncology or a related field and to have access to the LMS platform. Information on participants’ professional roles and ages was collected to contextualize the findings and ensure the sample represented a diverse cohort. The course was hosted on the Moodle-based LMS platform of Shahid Beheshti University of Medical Sciences, facilitating asynchronous learning through multimedia content, interactive modules, and assessments. The evaluation framework was anchored in the Kirkpatrick Model, focusing on participant satisfaction and engagement with the virtual course (Level 1: Reaction) and knowledge acquisition resulting from the course (Level 2: Learning). This model was chosen for its structured approach to evaluating immediate educational outcomes (reaction and learning), which was feasible given the virtual delivery constraints and resource limitations in our setting. While other models (e.g., Moore et al.’s CME framework) offer comprehensive evaluation across higher-level outcomes, our study prioritized actionable insights for rapid program improvement.
Procedure
The course covered key topics such as Cervical Intraepithelial Neoplasia (CIN), Colposcopy, Cervix, Endometrium, Gestational Trophoblastic Neoplasia (GTN), Ovary, Vulva, and Breast. The procedure for implementing and evaluating the program is outlined below:
1) Pre-course preparation
Before the course began, several steps were taken: (a) Content Development: Comprehensive educational materials, including lectures, case studies, interactive modules, and multimedia resources, were developed and uploaded to the Moodle platform. These materials were tailored to the learning objectives of each topic; (b) Platform Setup: The Moodle LMS was configured to include various features such as discussion forums, quizzes, and resource libraries. User accounts were created for all participants, and access permissions were set accordingly; and (c) Pretest Administration: A pretest was designed to assess participants’ baseline knowledge of the topics covered in the course. The pretest was administered through the Moodle platform before the commencement of the course.
2) Course implementation
Several stages were completed to implement the course: (a) Orientation Session: An initial orientation session was conducted to familiarize participants with the Moodle platform, course structure, and expectations. This session was recorded and made available for reference throughout the course; (b) Content Delivery: The course content was structured, with each topic being released sequentially. Participants were encouraged to engage with the materials at their own pace within the stipulated timeframe; (c) Interactive Forums: Discussion forums were established for each topic to facilitate peer-to-peer interaction and collaborative learning. Participants were encouraged to post questions, share insights, and discuss case studies; and (d) Live Webinars: Periodic live webinars were conducted by subject matter experts to provide in-depth discussions on complex topics and to address participant queries in real-time (Fig. 1).
[Fig. 1 omitted: see PDF]
3) Post-course evaluation
After the program, the executive team conducted two key activities: (a) Posttest Administration: A posttest, similar in format and difficulty to the pretest, was administered through the Moodle platform after the course. This test was designed to evaluate the knowledge gained by participants; and (b) Satisfaction Survey: A comprehensive survey was distributed to participants to gather feedback on various aspects of the course, including content quality, platform usability, and overall learning experience. The survey was conducted using an online tool integrated with the Moodle platform. The timeline of the virtual CPD program is illustrated in Table 1. Three safeguards ensured knowledge gains reflected CPD impact: (1) screening for concurrent oncology training, (2) fixed assessment intervals (pretest: one week pre-course; posttest: immediately post-course), and (3) test items mapped to the Iranian National Cancer Guidelines. A pilot showed stable test-retest scores (r = 0.81) in controls.
[Table 1 omitted: see PDF]
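For context, the test-retest stability reported for the pilot corresponds to a standard Pearson correlation between two administrations of the same test. Below is a minimal sketch, assuming two equal-length score vectors from a control group; the scores shown are illustrative placeholders, not the pilot data.

```python
# Sketch of a test-retest reliability check via Pearson correlation.
# The score vectors below are illustrative placeholders, not the pilot data.
from scipy import stats

first_administration = [62, 71, 55, 80, 68, 74, 59, 66]  # first sitting
retest_scores        = [60, 73, 57, 78, 70, 71, 61, 64]  # same test, later sitting

r, p_value = stats.pearsonr(first_administration, retest_scores)
print(f"test-retest r = {r:.2f}")  # compare with the pilot's reported r = 0.81
```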
Data collection and analysis
Data were collected at two stages, corresponding to Levels 1 and 2 of the Kirkpatrick Model. Satisfaction survey (Level 1): participant feedback on content quality, platform usability, and overall learning experience, gathered through the online survey tool integrated with the Moodle platform described above. Knowledge assessment (Level 2): assessments included 100 MCQs and case-based scenarios across six domains: CIN/colposcopy (30 items), cervical cancer (20 items), endometrial cancer (16 items), GTN (14 items), ovarian cancer (12 items), and vulvar/breast cancers (8 items). Domains were weighted by prevalence and clinical urgency in LLMIC settings.
Scores were calculated as the percentage of correct responses out of the 100 items, giving a maximum score of 100. The aggregate score represents comprehensive knowledge across all domains, while domain-specific sub-scores (correct items in a topic ÷ total items in that domain) pinpoint areas of high or low performance. For example, a participant scoring 75/100 demonstrated 75% mastery of the total curriculum.
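To make the scoring rule concrete, the sketch below computes the aggregate percentage and per-domain sub-scores from the blueprint above. It illustrates the rule as described, not the authors’ actual Moodle grading configuration; the function and variable names are ours.

```python
# Illustrative implementation of the scoring rule described above
# (not the authors' actual Moodle grading configuration).

DOMAIN_ITEMS = {
    "CIN/colposcopy": 30,
    "cervical cancer": 20,
    "endometrial cancer": 16,
    "GTN": 14,
    "ovarian cancer": 12,
    "vulvar/breast cancers": 8,
}  # sums to the 100-item blueprint

def score_participant(correct_by_domain):
    """Return the aggregate percentage score and per-domain sub-scores."""
    total_items = sum(DOMAIN_ITEMS.values())  # 100
    total_correct = sum(correct_by_domain.values())
    return {
        "aggregate_pct": 100 * total_correct / total_items,
        "domain_pct": {
            domain: 100 * correct_by_domain.get(domain, 0) / n_items
            for domain, n_items in DOMAIN_ITEMS.items()
        },
    }

# A participant with 75 correct items overall scores 75/100 (75% mastery).
example = {"CIN/colposcopy": 24, "cervical cancer": 15, "endometrial cancer": 12,
           "GTN": 10, "ovarian cancer": 9, "vulvar/breast cancers": 5}
print(score_participant(example)["aggregate_pct"])  # 75.0
```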
Level 1: reaction
Responses from the satisfaction survey were analyzed to evaluate participant satisfaction and identify areas for improvement in future course iterations. Using the LMS’s integrated tool, a post-course survey was conducted immediately after completion. The survey used in this study was adopted from the study by Yarmohammadian et al. [22]. It consists of a 27-item questionnaire on a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) and was used to assess participants’ satisfaction with course content, virtual platform usability, relevance of topics, and overall learning experience. Medical education experts established the validity of the questionnaire, and its reliability was confirmed in a pilot study (r = 0.79). The survey aimed to capture participants’ perceptions of the course. Quantitative data were analyzed using descriptive statistics, namely frequencies and percentages.
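As an illustration of this descriptive analysis, the sketch below tabulates frequencies and percentages for a single Likert item, along with the share of positive ratings. The 1–5 coding follows the scale above; treating ratings of 4 or 5 as "positive" is our assumption, and the response vector is invented for the example.

```python
# Sketch of the Level 1 descriptive analysis for one Likert item.
# Assumption: ratings of 4 ("agree") or 5 ("strongly agree") count as positive.
from collections import Counter

def item_summary(responses):
    """Frequencies, percentages, and positive-rating share for one item."""
    n = len(responses)
    freq = Counter(responses)
    return {
        "frequencies": dict(sorted(freq.items())),
        "percentages": {k: round(100 * v / n, 2) for k, v in sorted(freq.items())},
        "positive_pct": round(100 * sum(v for k, v in freq.items() if k >= 4) / n, 2),
    }

# Invented example: 67 of 104 respondents rate the item 4 or 5 -> 64.42%,
# mirroring the content-alignment figure reported in the Results.
ratings = [5] * 40 + [4] * 27 + [3] * 20 + [2] * 10 + [1] * 7
print(item_summary(ratings)["positive_pct"])  # 64.42
```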
Level 2: learning
Pretest and posttest scores were collected and analyzed to assess the learning outcomes. The assessments consisted of multiple-choice questions (MCQs) and case-based scenarios aligned with the course objectives and designed to evaluate the application of foundational knowledge and concepts in clinical scenarios. Changes in knowledge scores were calculated by comparing pretest and posttest results. Paired t-tests were used to compare scores, with statistical significance set at p < 0.05, and effect sizes were calculated to determine the magnitude of knowledge gains.
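A minimal sketch of this analysis follows: a paired t-test via scipy.stats.ttest_rel plus Cohen’s d for paired samples (mean of the pre-post differences divided by their standard deviation). The function and variable names are illustrative; the paper does not specify the software used.

```python
# Sketch of the Level 2 analysis: paired t-test and paired-samples Cohen's d.
# Sign convention follows the paper: differences are pretest - posttest,
# so an improvement yields a negative mean difference and negative d.
import numpy as np
from scipy import stats

def paired_analysis(pretest, posttest, alpha=0.05):
    """Paired t-test with Cohen's d computed on the paired differences."""
    pretest, posttest = np.asarray(pretest), np.asarray(posttest)
    t_stat, p_value = stats.ttest_rel(pretest, posttest)
    diff = pretest - posttest
    cohens_d = diff.mean() / diff.std(ddof=1)  # mean difference / SD of differences
    return {
        "t": t_stat,
        "p": p_value,
        "significant": p_value < alpha,
        "mean_difference": diff.mean(),
        "cohens_d": cohens_d,
    }
```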
Results
This quasi-experimental study was conducted on 104 female participants, specialists in obstetrics and gynecology surgery and subspecialty fellows, with a mean work experience of 11 ± 3 years. The mean age of the participants was 40.47 ± 7.49 years; 58 individuals (55.77%) were specialists in obstetrics and gynecology surgery, and 46 (44.23%) were subspecialty fellows.
Domain analysis revealed the greatest improvements in GTN management (mean +22.4%, p < 0.001) and CIN staging (+18.1%, p < 0.001), attributed to case-based webinars. Vulvar cancer showed modest gains (+5.3%, p = 0.07), likely due to the need for hands-on histopathology training.
The results of the first-level Kirkpatrick evaluation indicated that 67 participants (64.42%) in the virtual CPD program reported that the course content was directly aligned with its objectives. Seventy-eight participants (75%) were satisfied with how the program was conducted, and 86 (82.69%) rated the implementation and provision of course facilities as highly satisfactory. Seventy-nine respondents (75.96%) stated that offering this course in the future would benefit learners. Over 50% of participants were satisfied with the program in all other questionnaire domains.
A paired samples t-test was conducted to examine the effect of the intervention (the virtual CPD course) on participants’ scores. The descriptive statistics indicate that the mean score on the pretest was 37.21 out of 50 (SD = 12.71), while the mean score on the posttest increased to 42.07 out of 50 (SD = 6.09). The test revealed a statistically significant improvement in scores from pretest to posttest, t(103) = −4.246 (p < 0.001). The mean score difference was −4.86 (95% CI: −7.12, −2.59), indicating a notable increase in performance following the intervention. Cohen’s d was calculated to assess the magnitude of the effect, yielding a value of −0.416 (95% CI: −0.616, −0.215), which suggests a small to moderate effect size. Similarly, Hedges’ correction resulted in an effect size of −0.415 (95% CI: −0.614, −0.214). Learning outcomes aligned with objectives: 82% of participants achieved competency (≥80% score) in GTN/CIN, while 68% met thresholds for ovarian/endometrial cancers. In addition, 75.96% agreed the course should be repeated, indicating perceived relevance (Table 2).
[Table 2 omitted: see PDF]
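The reported test statistic and effect size are mutually consistent. As a worked check (our calculation from the reported values, not an analysis appearing in the paper), with paired differences D = pretest − posttest:

```latex
\[
  d = \frac{\bar{D}}{s_D}, \qquad
  t = \frac{\bar{D}}{s_D/\sqrt{n}} = d\,\sqrt{n}
    = -0.416 \times \sqrt{104} \approx -4.24,
\]
```

which matches the reported t(103) = −4.246 up to rounding, and implies a standard deviation of the paired differences of roughly s_D = (−4.86)/(−0.416) ≈ 11.7 points.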
These findings indicate that the national virtual CPD course had a statistically significant impact on improving participants’ knowledge and learning outcomes. However, the effect size suggests that while the change is meaningful, the practical significance may be moderate.
Discussion
Unlike prior Kirkpatrick applications relying on proprietary platforms, this study demonstrates the viability of the free, open-source Moodle LMS for specialty training, a critical advantage for LLMICs. Our Farsi-language assessments, validated against Iranian cancer guidelines, further demonstrate contextual adaptation absent in Western models. This study evaluated a national virtual Continuing Professional Development (CPD) program in gynecological oncology using the Kirkpatrick Model, focusing on Levels 1 (Reaction) and 2 (Learning). The findings demonstrate that the program successfully engaged participants and enhanced their knowledge, aligning with the growing body of evidence supporting the effectiveness of virtual CPD programs in specialized medical fields. The results provide valuable insights into the design, implementation, and evaluation of virtual CPD initiatives, particularly in resource-constrained settings.
The evaluation of Level 1 revealed high levels of participant satisfaction with the virtual CPD program. Most participants (64.42%) reported that the course content was directly aligned with its objectives, and 75% expressed satisfaction with the program’s delivery. Furthermore, 82.69% of participants found the implementation and provision of course facilities highly satisfactory, and 75.96% believed that offering this course in the future would benefit learners. These findings underscore the program’s success in meeting participants’ expectations and highlight the importance of aligning course content with learners’ needs and professional contexts. The positive reception of the virtual format also reflects the growing acceptance and effectiveness of online learning platforms in medical education, particularly in overcoming barriers to access and participation in low- and lower-middle-income countries (LLMICs). This LLMIC context magnifies the program’s value: virtual delivery saved an estimated $1,200 per participant in travel costs while maintaining clinic coverage. However, infrastructure limitations caused uneven access; rural participants completed 23% fewer live webinars than urban peers (p = 0.02), necessitating downloadable content in future iterations.
The high satisfaction rates can be attributed to several factors, including the use of a structured Learning Management System (LMS), which facilitated asynchronous learning through multimedia content, interactive modules, and discussion forums. The inclusion of live webinars further enhanced engagement by providing opportunities for real-time interaction with subject matter experts. The program’s success stemmed from three key design elements: (1) a modular LMS structure accommodating varied schedules, (2) expert-led webinars applying knowledge to clinical cases, and (3) peer discussion forums. However, lower gains in vulvar cancer (+5.3%, p = 0.07) indicated the need for enhanced visual teaching tools, while connectivity issues in rural areas (reported by 18% of participants) suggested that offline materials would improve accessibility. These findings are consistent with previous studies that have emphasized the importance of interactive and flexible learning environments in virtual CPD programs; at the CPD level, online education appears desirable for busy practitioners who choose short courses to develop knowledge and skills [12, 21].
The evaluation of Level 2 demonstrated a statistically significant improvement in participants’ knowledge scores from pretest to posttest (p < 0.001), with a mean score increase of 4.86 points. The effect size, calculated using Cohen’s d, was −0.416, indicating a small to moderate practical effect. This improvement in knowledge acquisition highlights the program’s effectiveness in achieving its educational objectives and underscores the potential of virtual CPD programs to enhance clinical knowledge and skills.
Using case-based scenarios and multiple-choice questions (MCQs) in the assessments ensured the evaluation captured foundational knowledge and its application in clinical practice [21]. This approach aligns with best practices in medical education, which emphasize the importance of assessing factual recall and the ability to apply knowledge in real-world settings [14]. The significant improvement in the posttest scores suggests that the program successfully addressed key learning gaps and provided participants with the tools to integrate new knowledge into their clinical practice.
The success of this virtual CPD program has several implications for the design and implementation of future initiatives in gynecological oncology and other specialized medical fields. First, the findings highlight the importance of tailoring course content to the target audience’s specific needs [22], ensuring relevance and engagement [23]. Second, using interactive and multimedia elements, combined with opportunities for real-time interaction, can enhance participant engagement and learning outcomes [24]. Third, the structured evaluation framework [25] provided by the Kirkpatrick Model offers a robust approach to assessing the effectiveness of CPD programs, enabling continuous improvement and alignment with professional and organizational goals.
Limitations
While this study provides valuable insights, it is not without limitations. The evaluation was confined to Levels 1 and 2 of the Kirkpatrick Model, which focus on immediate outcomes (reaction and learning). Long-term impacts on behavior (Level 3) and results (Level 4) were not assessed, limiting the ability to draw conclusions about the program’s influence on clinical practice and patient outcomes. Moreover, the reliance on self-reported data for Level 1 may introduce response bias, and the virtual nature of the course limited the ability to observe or verify the practical application of learning in clinical settings. Future studies should incorporate longitudinal follow-up and integrate multi-level frameworks such as Moore et al.’s CME model to assess the sustained impact of virtual CPD programs on professional practice and patient care in LMIC contexts.
Conclusion
This study provides a scalable model for assessing short-term CPD outcomes (Kirkpatrick Levels 1–2) in LLMICs, featuring: (1) Moodle LMS adaptation, (2) culturally responsive assessments, and (3) cost tracking (averaging $15/participant). Future work must address Levels 3–4 through clinical partnerships. This study contributes to the growing body of evidence on the effectiveness of virtual CPD programs in specialized medical fields such as gynecological oncology. The high levels of participant satisfaction and significant improvements in knowledge acquisition demonstrate the potential of virtual learning platforms to overcome barriers to access and participation, particularly in resource-constrained settings. By employing a structured evaluation framework, this study provides a model for assessing the impact of CPD programs and highlights the importance of continuous improvement to meet the evolving needs of healthcare professionals. Future research should explore the long-term effects of virtual CPD on clinical practice and patient outcomes, as well as the potential for scaling such programs to other medical specialties and contexts.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author upon reasonable request.
References
1. Cox JL, Simpson MD. Cultural humility: a proposed model for a continuing professional development program. Pharmacy. 2020. https://doi.org/10.3390/pharmacy8040214.
2. Merry L, Castiglione SA, Rouleau G, Létourneau D, Larue C, Deschênes M-F, et al. Continuing professional development (CPD) system development, implementation, evaluation and sustainability for healthcare professionals in low- and lower-middle-income countries: a rapid scoping review. BMC Med Educ. 2023;23(1):498.
3. Sherman LT, Chappell KB. Global perspective on continuing professional development. The Asia Pacific Scholar. 2018;3(2):1.
4. Jeyakumar T, Karsan I, Williams B, Fried J, Kane G, Ambata-Villanueva S, et al. Paving the way forward for evidence-based continuing professional development. J Contin Educ Health Prof. 2024;44(1):53–7.
5. Sargeant J, Wong BM, Campbell CM. CPD of the future: a partnership between quality improvement and competency-based education. Med Educ. 2018;52(1):125–35.
6. Owen JA, Skelton JB, Maine LL. Advancing the adoption of continuing professional development (CPD) in the United States. Pharmacy (Basel). 2020. https://doi.org/10.3390/pharmacy8030157.
7. Baloyi OB, Jarvis MA. Continuing professional development status in the World Health Organisation, Afro-region member states. Int J Afr Nurs Sci. 2020;13:100258.
8. Mack HG, Golnik KC, Murray N, Filipe HP. Models for implementing continuing professional development programs in low-resource countries. MedEdPublish. 2017. https://doi.org/10.15694/mep.2017.000018.
9. Soklaridis S, Shier R, Zaheer R, Scully M, Williams B, Daniel SJ, et al. The genie is out of the bottle: a qualitative study on the impact of COVID-19 on continuing professional development. BMC Med Educ. 2024;24(1):631.
10. Felker BL, Towle CB, Wick IK, McKee M. Designing and implementing telebehavioral health training to support rapid and enduring transition to virtual care in the COVID era. J Technol Behav Sci. 2023;8(3):225–33.
11. Hassani K, McElroy T, Coop M, Pellegrin J, Wu WL, Janke RD, et al. Rapid implementation and evaluation of virtual health training in a subspecialty hospital in British Columbia, in response to the COVID-19 pandemic. Front Pediatr. 2021;9:638070.
12. Curran V, Glynn R, Whitton C, Hollett A. An approach to the design and development of an accredited continuing professional development e-learning module on virtual care. JMIR Med Educ. 2024;10:e52906.
13. Guillaume D, Troncoso E, Duroseau B, Bluestone J, Fullerton J. Mobile-social learning for continuing professional development in low- and middle-income countries: integrative review. JMIR Med Educ. 2022;8(2):e32614.
14. Kojuri J, Amini M, Karimian Z, Dehghani MR, Saber M, Bazrafcan L, et al. Needs assessment and evaluation of a short course to improve faculties teaching skills at a former World Health Organization regional teacher training center. J Adv Med Educ Prof. 2015;3(1):1–8.
15. Calleja P, Wilkes S, Spencer M, Woodbridge S. Telehealth use in rural and remote health practitioner education: an integrative review. Rural Remote Health. 2022;22(1):1–13.
16. Kirkpatrick D, Kirkpatrick J. Evaluating training programs: the four levels. Berrett-Koehler; 2006.
17. Moore DE Jr, Green JS, Gallis HA. Achieving desired results and improved outcomes: integrating planning and assessment throughout learning activities. J Contin Educ Health Prof. 2009;29(1):1–15.
18. Hasani H, Bahrami M, Malekpour A, Dehghani M, Allahyary E, Amini M, et al. Evaluation of teaching methods in mass CPCR training in different groups of the society, an observational study. Medicine (Baltimore). 2015;94(21):e859.
19. Jones C, Fraser J, Randall S. The evaluation of a home-based paediatric nursing service: concept and design development using the Kirkpatrick model. J Res Nurs. 2018;23(6):492–501.
20. Peiris-John R, Selak V, Robb G, Kool B, Wells S, Sadler L, et al. The state of quality improvement teaching in medical schools: a systematic review. J Surg Educ. 2020;77(4):889–904.
21. Al-Kubaisi KA, Elnour AA, Sadeq A. Factors influencing pharmacists’ participation in continuing education activities in the United Arab Emirates: insights and implications from a cross-sectional study. J Pharm Policy Pract. 2023;16(1):112.
22. Lorenz S, Dessai S, Forster PM, Paavola J. Tailoring the visual communication of climate projections for local adaptation practitioners in Germany and the UK. Philos Trans A Math Phys Eng Sci. 2015. https://doi.org/10.1098/rsta.2014.0457.
23. Nisselle A, Janinski M, Martyn M, McClaren B, Kaunein N, Barlow-Stewart K, et al. Ensuring best practice in genomics education and evaluation: reporting item standards for education and its evaluation in genomics (RISE2 Genomics). Genet Med. 2021;23(7):1356–65.
24. Seymour A, Borggren M, Baker R. Escape the monotony: gamification enhances nursing education. J Emerg Nurs. 2023;49(6):805–10.
25. Ebn Ahmady A, Barker M, Fahim M, Dragonetti R, Selby P. Evaluation of web-based continuing professional development courses: aggregate mixed-methods model. JMIR Med Educ. 2017;3(2):e19.