
Abstract

Background: Electronic learning (e-learning) in postgraduate medical education has evolved rapidly; however, we tend to evaluate it only on its primary outcome or learning aim, whereas its effectiveness also depends on its instructional design. We believe an overview of all the methods currently used to evaluate e-learning design is important, so that a preferred method can be identified and the next steps needed to continue evaluating postgraduate medical e-learning can be outlined.

Objective: This study aimed to identify and compare the outcomes and methods used to evaluate postgraduate medical e-learning.

Methods: We performed a systematic literature review using the Web of Science, PubMed, Education Resources Information Center, and Cumulative Index of Nursing and Allied Health Literature databases. Studies that used postgraduates as participants and evaluated any form of e-learning were included. Studies without any evaluation outcome (eg, just a description of e-learning) were excluded.

Results: The initial search identified 5973 articles, of which we used 418 for our analysis. The types of studies were trials, prospective cohorts, case reports, and reviews. The primary outcomes of the included studies were knowledge, skills, and attitude. A total of 12 instruments were used to evaluate a specific primary outcome, such as laparoscopic skills or stress related to training. The secondary outcomes mainly evaluated satisfaction, motivation, efficiency, and usefulness. We found 13 e-learning design methods across 19 studies (4.5%, 19/418). The methods evaluated usability, motivational characteristics, and the use of learning styles or were based on instructional design theories, such as Gagné’s instructional design, the Heidelberg inventory, Kern’s curriculum development steps, and a scale based on the cognitive load theory. Finally, 2 instruments attempted to evaluate several aspects of a design, based on the experience of creating e-learning.

Conclusions: Evaluating the effect of e-learning design is complicated. Given the diversity of e-learning methods, there are many ways to carry out such an evaluation, and probably many ways to do so correctly. However, the current literature shows that we have yet to reach any form of consensus about which indicators to evaluate. There is a great need for a properly constructed, validated, and tested evaluation tool. This would provide a more homogeneous way to compare the effects of e-learning and would help the authors of e-learning continue to improve their products.

Details

Title
How We Evaluate Postgraduate Medical E-Learning: Systematic Review
Author
de Leeuw, Robert; de Soet, Anneloes; van der Horst, Sabine; Walsh, Kieran; Westerman, Michiel; Scheele, Fedde
Section
Reviews in Medical Education
Publication year
2019
Publication date
Jan-Jun 2019
Publisher
JMIR Publications
e-ISSN
2369-3762
Source type
Scholarly Journal
Language of publication
English
ProQuest document ID
2511975058
Copyright
© 2019. This work is licensed under https://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License.