Background
Video review is a feasible, commonly used learning tool, but the current literature lacks a comprehensive review of its impact on learning in postgraduate medical education. This systematic review aims to examine the learning effect of video review of resident performance in clinical practice during postgraduate medical education.
Methods
A systematic literature search was conducted from May 2023 to July 2023, with an update on 12 December 2023. The databases MEDLINE (PubMed), Web of Science, Embase and ERIC (through Webquest) were searched. Eligible articles had to describe the learning effects of video review in clinical practice in postgraduate medical education. The videos had to be actively recorded in a setting where a camera was not normally used for standard patient care. The investigated effect needed to be classified as at least Kirkpatrick level 2. We iteratively developed a standardised data extraction form to extract study characteristics. The methodological quality of the individual studies was assessed using the Medical Education Research Study Quality Instrument.
Results
Out of 9323 records remaining after deduplication, 11 studies were included. The designs were randomised controlled trials (n = 4) and single-group pre-test post-test trials (n = 7). The studies had outcomes related to knowledge and skills (n = 4), resident behaviours (n = 6) and patient outcomes (n = 1). All studies reported outcomes regarding learning effect.
Conclusions
Video review appears to have a positive impact on residents’ learning outcomes in postgraduate medical education. However, it is mostly not tailored to the specific learning needs of residents, and there is a lack of information regarding its optimal integration with other learning methods and within distinct clinical contexts. The heterogeneity observed among the included studies makes it challenging to formulate clear recommendations for the use of video review.
Introduction
Postgraduate medical education (PGME) historically relied on the assumption that participation in clinical practice, under experienced supervision, was sufficient for resident training. As medical practice becomes more complex, there is a growing societal demand to ensure that medical education produces competent doctors [1, 2]. This shift has prompted research into intentional and structured learning processes taking place in clinical practice [3, 4]. Emphasis is now placed on reflection, feedback and dedicating time to foster deep learning during workplace learning [5,6,7]. Additionally, recognising residents as adults with specific learning needs, there is a focus on self-regulated learning, in which goal setting shapes their educational journey [8]. Given the dynamic nature of PGME, achieving medical competence demands innovative, evidence-based, high-quality educational methodologies, with technology playing an important role in this evolution [9].
The introduction of video recordings over half a century ago provided opportunities for residents and their supervisors to optimise the learning experience with technology. During clinical work, where patient care often takes priority over learning, video review can offer valuable support. Video review, the analysis of residents’ clinical practice through recorded footage, fosters reflection and feedback by enabling both residents and supervisors to revisit events exactly as they unfolded [10, 11]. During video review, residents can focus solely on reflection. Cognitive load theory suggests that, because of the limitations of the human cognitive system, minimising extraneous mental effort enables deeper learning [12]. When reviewing a video, residents are freed from the mental effort associated with delivering clinical care. Supervisors likewise benefit from a reduced cognitive load when reviewing a resident’s video, as they have no concurrent responsibility for supervising clinical practice. This could potentially lead to more extensive and specific feedback [10, 11].
Video review is a feasible, commonly used tool and has already been investigated in different contexts [13,14,15,16,17,18,19,20]. However, transferring those research results to all clinical contexts in PGME may not be warranted, for several reasons. The use of video in simulation practice [15] differs from clinical learning, as the simulation environment does not fully replicate the clinical learning setting. The roles of undergraduate medical students [16] differ from those of PGME residents, whose increased responsibility for clinical care requires more specific and deeper learning. In surgical settings [17,18,19,20], residents are more familiar with camera use. By contrast, integrating a camera to record clinical care in settings where videos are not routinely used (e.g., recording consultations or clinical rounds on inpatient wards) may evoke a Hawthorne effect [21]. As we are uncertain about the various effects on residents’ learning processes, we cannot automatically apply the insights and recommendations from video use in these contexts to video review in PGME. This indicates a need to review the results in the specific context of video use for resident training in clinical, non-simulation settings where cameras are not commonly used. There is a need for meaningful comparisons and the establishment of comprehensive evidence and guidelines for video use in PGME.
Conceptualising the learning effect of an educational intervention (e.g. video) in medical education is difficult. The model of Kirkpatrick [22], which is used globally to evaluate the results of training and learning programmes, is commonly applied in adapted forms in medical education reviews [23]. It classifies learning outcomes and assesses the effectiveness of interventions. Much of the literature on video use in PGME predominantly evaluates participant experiences, known as Kirkpatrick level 1, and lacks insight into improvements at the three higher levels. Although educational interventions should positively impact residents’ learning, studies at Kirkpatrick level 1 do not indicate what residents have actually learned through video review, as this level only measures their opinion of the method used. More insight is needed into how video review changes the knowledge and skills residents observe while reviewing a video (level 2), and how residents change their behaviour by optimising their clinical care (level 3). Ideally, studies should examine how the change in residents’ care following review of their videos positively impacts patient and healthcare outcomes (level 4). Therefore, this systematic review focused on learning effects beyond participants’ opinions (level 1), namely Kirkpatrick levels 2 to 4. The following research question was established: what is the learning effect of using video review of resident performance in clinical practice during PGME?
Materials and methods
This systematic review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [24]. The checklist is provided in Appendix 1. The protocol was registered in the PROSPERO International Prospective Register of Systematic Reviews (CRD42023442318). The primary outcome measure was the learning effect of video review in clinical practice in PGME settings; secondary outcomes were participants’ perceptions of being recorded and costs (temporal and monetary).
Search strategy
A systematic literature search was conducted from May 2023 to July 2023. Scoping searches were conducted to refine the search strategy, a research librarian was consulted for advice, and an expert in the field of medical education was contacted to ensure that no relevant articles were omitted. The search strategy combined terms related to ‘resident’, ‘video’, and ‘learning’. Titles, abstracts, and keywords were searched by combining different search terms and applying database-specific standardised keywords where possible. The databases MEDLINE (PubMed), Web of Science, Embase and ERIC (through Webquest) were searched. The complete search strategy for each database can be found in Appendix 2. The references of eligible articles were manually reviewed. Finally, we updated the database search on 12 December 2023.
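For illustration only, and not the authors’ actual strategy (the complete, database-specific strategies are given in Appendix 2), a simplified PubMed-style combination of the three concept blocks could take the following form:

(resident*[tiab] OR "internship and residency"[MeSH]) AND (video*[tiab] OR "video recording"[MeSH]) AND (learn*[tiab] OR feedback[tiab] OR "education, medical, graduate"[MeSH])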
Eligibility criteria
To be eligible, articles had to describe the learning effects of video review in clinical practice in PGME. This included recordings of both general practitioners (GPs) and medical specialists in training. The videos had to be actively recorded in a setting where a camera was not normally used for standard patient care. The investigated effect needed to be classified as Kirkpatrick level 2 to 4 [23]. All study designs were eligible for inclusion. Only studies in English or Dutch were included.
Studies on video use with simulation, animals, summative feedback or evaluation, endoscopy, classroom teaching, or prerecorded educational videos were excluded. Conference papers and letters to the editor were not considered for inclusion due to their limited provision of sufficient detail for adequate data extraction and quality assessment.
Study selection
Duplicate records were removed in EndNote (Clarivate™, Philadelphia, PA, USA) using the approach described by Bramer et al. [25]. The screening process was conducted in the Rayyan web application for systematic reviews (Rayyan Systems, Inc., Cambridge, MA, USA) [26]. First, one reviewer (MR) conducted an initial screening of all titles and abstracts, while three reviewers (MVW, ME, HD) each independently performed one-third of the second screening. Any disagreements regarding eligibility were resolved through discussion between two of the reviewers (MR, MVW, ME, HD). The remaining full texts were then independently screened for inclusion by two reviewers (MR, MVW).
Data collection
We iteratively developed a standardised data extraction form to extract study characteristics. One reviewer (MR) performed the initial data extraction for all included articles, and a second reviewer (MVW) checked the data for completeness. All outcomes and variables for which data were extracted are listed in Appendix 3.
Quality assessment
The methodological quality of the individual studies was assessed using the Medical Education Research Study Quality Instrument (MERSQI), which has been shown to be valid and reliable for the assessment of medical education research [27, 28]. The MERSQI includes 10 items reflecting 6 domains of study quality: study design, sampling, type of data, validity of the evaluation instrument, data analysis and outcomes [27]. Two reviewers (MR, MVW) performed the initial quality appraisal of all included full-text articles. Disagreements were resolved by discussion.
Data extraction and analysis
The findings of each included study were summarised in tables containing the main characteristics of the study and the results in natural units as reported by the investigators. Based on the data presented in Appendix 4, a narrative synthesis was formulated.
Results
Study selection
A total of 15,864 studies were retrieved, of which 6541 were removed as duplicates. The remaining 9323 records were screened on title and abstract, resulting in 133 records for full-text screening. Thirteen articles were excluded because the full texts were not retrievable. Another 109 studies were excluded because they did not meet the eligibility criteria. Ultimately, 11 studies were included. The complete flowchart is available in Fig. 1.
Study characteristics
The details of the included studies (n = 11) are presented in Appendix 4. Study designs were either randomised controlled trials (RCTs) (n = 4) or single-group pre-test post-test designs (n = 7). Only one RCT specified the sequence generation. Study duration was specified in six studies and varied between 30 days and 15 months, with a median of three months. Study locations included the United States (n = 6), the United Kingdom (n = 2), France (n = 1), the Netherlands (n = 1) and Australia (n = 1). Studies included postgraduate residents from various medical specialist disciplines: surgical residents (n = 4), anaesthesiology residents (n = 2), GPs in training (n = 2), otolaryngology residents (n = 1), emergency medicine residents (n = 1), and dermatology and plastic surgery residents (n = 1). Postgraduate years ranged from the first to the last year. Participant numbers varied from 5 to 44 residents (median: 16). Decisions to record video were made by the study investigators, either for specific procedures (n = 6) or within selected time periods (n = 3), with no instances of resident-initiated recording; the remaining studies lacked information on how or when recording decisions were made. Studies often combined residents’ self-reflection or self-assessment with feedback from a supervisor (n = 9), while one study relied solely on self-reflection and another solely on external feedback. The number of videos recorded per participant ranged from 1 to 20 (median: 1.5). Residents reviewed recordings once (n = 7), twice (n = 2), or one to three times depending on their participation in video-feedback sessions (n = 2). The time period between recording and reviewing was between 1 and 3 days (n = 4), between 0 and 43 days (n = 2), or not specified (n = 5).
Primary outcome: learning effect
It was not possible to calculate summary measures or average effects across studies because of the diversity of intervention designs and study methods and the heterogeneity in outcome measures, which only became apparent during data extraction.
Kirkpatrick level 2
Four studies had outcomes related to knowledge and skills. Isreb et al. [29] compared the total scores of 10 surgical residents on a procedure-based assessment form, covering 6 general assessment domains, post-operation and post-video review. Only two of the 10 video reviews resulted in an additional point on a 10-point scale. Goldberg et al. [30] recorded GPs in training using a psychiatric screening questionnaire. Experienced GPs and GPs in training completed rating scales before and after video review. Significant differences in favour of video review were observed when comparing pre- and post-test scores between the index and control groups. Mazer et al. [31] compared evaluation questionnaires from surgical residents and supervisors addressing the interaction between them, and compared the content of discussions in the operating room with discussions during video review. They found no differences in the results of the evaluation questionnaires. The transcript analysis of the discussions showed some significant differences: anatomy and the steps of the procedure were discussed more intraoperatively, whereas intraoperative decision making was discussed more during video-based coaching. Hu et al. [32] conducted a follow-up study based on the transcriptions of Mazer et al. [31]. Additional insights included more teaching points per unit of time, residents and supervisors being more focused on resident education, and an increased depth of teaching in the video sessions.
Kirkpatrick level 3
Six studies reported learning effects on behaviour. Hays et al. [33] investigated three self-assessment scores and one feedback score per video for GPs in training, measured over two videos recorded 10 to 13 weeks apart. They found that the mean of all scores, including communication, history taking, and diagnostic and management skills, improved after the second video. Kava et al. [34] studied the leadership of emergency medicine residents during resuscitations. The video review group, which engaged in self-reflection after the first recording, exhibited improved leadership scores in the second resuscitation, whereas the control group did not. Parker et al. [35] measured medication prescribing errors in the periods before and after video feedback and self-assessment sessions with surgical residents, revealing a significant decrease in errors post-video review. Wouda et al. [10] included dermatology and plastic surgery residents who engaged in video communication assessment and feedback (video-CAF) sessions at different time points. After the video-CAF sessions, the following results were found: (1) an increased number of learning objectives; (2) a significant increase in one of four subcompetency scores on a patient-education rating scale; but (3) no difference in patients’ survey-reported opinions about residents. Birnbach et al. [36] combined two rating-scale scores evaluating the technical performance of anaesthesia residents at three time points. These assessor-graded scores consistently demonstrated a positive and significant improvement in the video review group compared with the non-video review group. Son et al. [37] had otolaryngology residents review 10 videos with feedback from patients and faculty regarding communication skills. On a second occasion, 10 additional videos were recorded and rated. Patient feedback improved significantly, and faculty feedback showed a significant increase in scores for 11 of 14 questions after the video intervention.
Kirkpatrick level 4
Only one study reported on the impact on patient outcomes. Guerrier et al. [38] partly evaluated the learning effect at Kirkpatrick level 4. They investigated anaesthesia residents who were recorded twice, with a 7-day gap, while performing an anaesthetic block. The video review group engaged in self-reflection and feedback and showed a greater increase in “akinesia score”, a greater decrease in the duration of the procedure, and less need for supplemental injection on day five compared with the non-video review group. All of these differences were significant.
Secondary outcomes: perceptions and costs
Four studies addressed perceptions of being recorded. Parker et al. [35] found that although residents were conscious of being recorded, it did not hinder them. In the study by Birnbach et al. [36], residents initially felt uncomfortable being recorded, but this feeling had disappeared by the end. Reviewing the recordings motivated them to improve their technique. Two studies [29, 33] stated that residents found reviewing the video recordings beneficial for their learning. In the study by Isreb et al. [29], residents even favoured video review feedback over verbal feedback using a standardised assessment form.
Only five studies gave an indication of the temporal cost by describing the duration of review sessions, which showed a substantial time investment: 30–45 min [35], 16–90 min with a mean of 44 min [29], approximately 45 min [30], 60–90 min [10], and 90 min [33]. Only one study described monetary costs by specifying the cost of the recording equipment, which was $1500 [37].
Quality assessment of the included studies
The quality of the studies was assessed using the MERSQI [27], which yields a total score ranging from 5 to 18. The mean total MERSQI score of the included studies was 11.53 (range: 7–13.5). No studies were excluded based on the MERSQI scores. A complete overview is given in Table 1.
Discussion
This systematic review explores the learning effect of incorporating video review, in which residents review their own actions, into resident education in PGME. It focuses on clinical settings where cameras, such as those used in endoscopic procedures, are not a regular component of daily clinical care. The results indicate positive learning effects associated with video review, with 8 studies reporting significant results [10, 30,31,32, 35,36,37,38]. Positive effects are reported at levels 2 to 4 of Kirkpatrick’s evaluation model. These findings align with studies in other contexts, suggesting favourable outcomes for using video review with residents [11, 16,17,18,19,20].
While residents are expected to be self-regulated learners [8], in all studies the investigators, not the residents, determined the video recording parameters, revealing a gap in aligning video practices with personalised learning trajectories. Wouda et al. [10] measured residents’ learning objectives; however, it is unclear whether the video recordings were aligned with the content of these objectives. More tailored educational practices could be achieved by fostering self-regulated learning, stimulating reflective practice and providing more helpful goal-based feedback [6,7,8]. Despite this lack of tailoring, residents expressed positive experiences and found video review useful, as indicated by other studies [13, 14].
The specific value of video review for feedback and formative assessment, and how it can be optimally integrated into resident education, remains difficult to establish for several reasons. First, only 11 studies could be included in this review despite more than half a century of video use. This seems a small number given the frequent use of video in the training of GPs and medical specialists [10, 13, 39]. The great variability in study design complicates this even further. Second, few studies [29, 32, 34] drew on theoretical educational frameworks, overlooking factors that influence the learning effect and lacking nuanced insights into how video impacted learning. Integrating and evaluating theoretical insights from various medical and non-medical contexts, such as teacher training [40] or sports [41], may help bridge this gap. Third, exploring the interplay between video review and other educational strategies is crucial. Most studies combined video review with feedback sessions or coaching [10, 29,30,31, 33, 35, 36, 38], complicating the assessment of video’s isolated impact and impeding the formulation of recommendations for the effective use of video review in resident education. The study by Mazer et al. [31] demonstrated that certain factual topics (e.g. anatomy) are discussed extensively intraoperatively, while clinical reasoning predominates in video review sessions with supervisors. This implies potential complementarity between video and other educational interventions.
The included studies demonstrate both strengths and weaknesses. Strengths include the measurement of objective data in 10 of the 11 studies. Weaknesses included the exclusively single-institution settings, the fact that only three studies reported a response rate, and the frequent omission of information on the validity of evaluation instruments. Additionally, most studies employed a single-group pre-test post-test design, potentially introducing bias due to the inherent learning effect over time. Although Isreb et al. [29] discussed the Hawthorne effect in their introduction, only Son et al. [37] explored its implications in their study context. Moreover, only one study [37] discussed monetary costs, even though these can be a significant burden when implementing new technology; future research could place more emphasis on this aspect to facilitate implementation. Lastly, only one study investigated outcomes at Kirkpatrick level 4. This is a challenging outcome to measure, but considering that improved patient and healthcare outcomes are the main goal of optimising residents’ training, it deserves more attention, and more studies should orient their investigations toward this aspect.
This review is subject to several limitations that should be considered when interpreting the findings. Firstly, only 11 studies met the inclusion criteria, of which four were RCTs, underscoring the restricted pool of available studies. The scarcity of RCTs may affect the robustness of the evidence base. Secondly, the heterogeneity observed among the included studies poses a challenge in synthesising results because of variations in study design, interventions, and outcome measures. This diversity introduces complexities in drawing overarching conclusions. Lastly, we chose to confine the scope of this review to specific clinical situations, namely those in which a camera is not routinely incorporated into daily clinical care. This restriction may limit the applicability of the findings to other, heterogeneous clinical contexts.
Conclusions
In conclusion, using video review in clinical practice settings where cameras are not routinely integrated appears to have a positive learning effect on residents. However, studies are scarce and heterogeneous in design, complicating the formulation of clear recommendations for video review in PGME. There is insufficient information regarding its optimal integration with other learning methods and in diverse clinical contexts. Additionally, current approaches lack customisation to residents’ learning needs. Future studies should not only investigate the impact of video on (self-regulated) learning at the different Kirkpatrick levels but also examine the mechanisms through which it enhances learning. Such research is essential for developing recommendations aimed at maximising video review’s learning potential in PGME.
Data availability
All data generated or analysed during this study are included in this published article and its supplementary information files.
Abbreviations
GPs:
General practitioners
MERSQI:
Medical education research study quality instrument
PGME:
Postgraduate medical education
PRISMA:
Preferred reporting items for systematic reviews and meta-analyses
RCT:
Randomised controlled trial
Video-CAF:
Video communication assessment and feedback
References
1. Ludmerer KM. Resident burnout: working hours or working conditions? J Grad Med Educ. 2009;1(2):169–71.
2. Patel M. Changes to postgraduate medical education in the 21st century. Clin Med (Lond). 2016;16(4):311–4.
3. Mann KV. Theoretical perspectives in medical education: past experience and future possibilities. Med Educ. 2011;45(1):60–8.
4. Miller BM, Moore DE Jr., Stead WW, Balser JR. Beyond Flexner: a new model for continuous learning in the health professions. Acad Med. 2010;85(2):266–72.
5. Sullivan GM. A mile wide but 1 cell thick: the need to prioritize learning in graduate medical education. J Grad Med Educ. 2016;8(4):488–91.
6. Mann K, Gordon J, MacLeod A. Reflection and reflective practice in health professions education: a systematic review. Adv Health Sci Educ Theory Pract. 2009;14(4):595–621.
7. Weallans J, Roberts C, Hamilton S, Parker S. Guidance for providing effective feedback in clinical supervision in postgraduate medical education: a systematic review. Postgrad Med J. 2022;98(1156):138–49.
8. Sandars J, Cleary TJ. Self-regulation theory: applications to medical education: AMEE Guide 58. Med Teach. 2011;33(11):875–86.
9. Harden RM. Ten key features of the future medical school-not an impossible dream. Med Teach. 2018;40(10):1010–5.
10. Wouda JC, van de Wiel HB. The effects of self-assessment and supervisor feedback on residents’ patient-education competency using videoed outpatient consultations. Patient Educ Couns. 2014;97(1):59–66.
11. Beckman HB, Frankel RM. The use of videotape in internal medicine training. J Gen Intern Med. 1994;9(9):517–21.
12. Szulewski A, Howes D, van Merrienboer JJG, Sweller J. From theory to practice: the application of cognitive load theory to the practice of medicine. Acad Med. 2021;96(1):24–30.
13. Eeckhout T, Gerits M, Bouquillon D, Schoenmakers B. Video training with peer feedback in real-time consultation: acceptability and feasibility in a general-practice setting. Postgrad Med J. 2016;92(1090):431–5.
14. Paul S, Dawson KP, Lanphear JH, Cheema MY. Video recording feedback: a feasible and effective approach to teaching history-taking and physical examination skills in undergraduate paediatric medicine. Med Educ. 1998;32(3):332–6.
15. Bokken L, Rethans JJ, Scherpbier AJ, van der Vleuten CP. Strengths and weaknesses of simulated and real patients in the teaching of skills to medical students: a review. Simul Healthc. 2008;3(3):161–9.
16. Hammoud MM, Morgan HK, Edwards ME, Lyon JA, White C. Is video review of patient encounters an effective tool for medical student learning? A review of the literature. Adv Med Educ Pract. 2012;3:19–30.
17. Ahmet A, Gamze K, Rustem M, Sezen KA. Is video-based education an effective method in surgical education? A systematic review. J Surg Educ. 2018;75(5):1150–8.
18. Herrera-Almario GE, Kirk K, Guerrero VT, Jeong K, Kim S, Hamad GG. The effect of video review of resident laparoscopic surgical skills measured by self- and external assessment. Am J Surg. 2016;211(2):315–20.
19. Green JL, Suresh V, Bittar P, Ledbetter L, Mithani SK, Allori A. The utilization of video technology in surgical education: a systematic review. J Surg Res. 2019;235:171–80.
20. Augestad KM, Butt K, Ignjatovic D, Keller DS, Kiran R. Video-based coaching in surgical education: a systematic review and meta-analysis. Surg Endosc. 2020;34(2):521–35.
21. McCambridge J, Witton J, Elbourne DR. Systematic review of the Hawthorne effect: new concepts are needed to study research participation effects. J Clin Epidemiol. 2014;67(3):267–77.
22. Kirkpatrick D. Evaluation of training. In: Craig R, Bittel L, editors. Training and development handbook. New York: McGraw-Hill; 1967. p. 131–67.
23. Hammick M, Dornan T, Steinert Y. Conducting a best evidence systematic review. Part 1: from idea to data coding. BEME Guide 13. Med Teach. 2010;32(1):3–15.
24. Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Int J Surg. 2021;88:105906.
25. Bramer WM, Giustini D, de Jonge GB, Holland L, Bekhuis T. De-duplication of database search results for systematic reviews in EndNote. J Med Libr Assoc. 2016;104(3):240–3.
26. Ouzzani M, Hammady H, Fedorowicz Z, Elmagarmid A. Rayyan-a web and mobile app for systematic reviews. Syst Rev. 2016;5(1):210.
27. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298(9):1002–9.
28. Cook DA, Reed DA. Appraising the quality of medical education research methods: the Medical Education Research Study Quality Instrument and the Newcastle-Ottawa Scale-Education. Acad Med. 2015;90(8):1067–76.
29. Isreb S, Attwood S, Hesselgreaves H, McLachlan J, Illing J. Synchronized video-review as a tool to enhance reflection and feedback: a design-based feasibility study. J Surg Educ. 2021;78(1):1–8.
30. Goldberg DP, Steele JJ, Smith C, Spivey L. Training family doctors to recognise psychiatric illness with increased accuracy. Lancet. 1980;2(8193):521–3.
31. Mazer LM, Hu YY, Arriaga AF, Greenberg CC, Lipsitz SR, Gawande AA, Smink DS, Yule SJ. Evaluating surgical coaching: a mixed methods approach reveals more than surveys alone. J Surg Educ. 2018;75(6):1520–5.
32. Hu YY, Mazer LM, Yule SJ, Arriaga AF, Greenberg CC, Lipsitz SR, Gawande AA, Smink DS. Complementing operating room teaching with video-based coaching. JAMA Surg. 2017;152(4):318–25.
33. Hays RB. Self-evaluation of videotaped consultations. Teach Learn Med. 1990;2(4):232–6.
34. Kava L, Jones K, Ehrman R, Smylie L, McRae M, Dubey E, Reed B, Messman A. Video-assisted self-reflection of resuscitations for resident education and improvement of leadership skills: a pilot study. Perspect Med Educ. 2022;11(2):80–5.
35. Parker H, Farrell O, Bethune R, Hodgetts A, Mattick K. Pharmacist-led, video-stimulated feedback to reduce prescribing errors in doctors-in-training: a mixed methods evaluation. Br J Clin Pharmacol. 2019;85(10):2405–13.
36. Birnbach DJ, Santos AC, Bourlier RA, Meadows WE, Datta S, Stein DJ, Kuroda MM, Thys DM. The effectiveness of video technology as an adjunct to teach and evaluate epidural anesthesia performance skills. Anesthesiology. 2002;96(1):5–9.
37. Son E, Halbert A, Abreu S, Hester R, Jefferson G, Jennings K, Pine H, Watts T. Role of Google Glass in improving patient satisfaction for otolaryngology residents: a pilot study. Clin Otolaryngol. 2017;42(2):433–8.
38. Guerrier G, Rothschild PR, Lehmann M, Azan F, Baillard C. Impact of video technology for improving success of medial canthus episcleral anesthesia in ophthalmology. Reg Anesth Pain Med. 2017;42(6):757–9.
39. Kilminster S, Cottrell D, Grant J, Jolly B. AMEE Guide 27: effective educational and clinical supervision. Med Teach. 2007;29(1):2–19.
40. Fukkink RG, Trienekens N, Kramer LJC. Video feedback in education and training: putting learning in the picture. Educ Psychol Rev. 2010;23(1):45–63.
41. Gil-Arias A, Del Villar F, Garcia-Gonzalez L, Moreno A, Perla Moreno M. Effectiveness of video feedback and interactive questioning in improving tactical knowledge in volleyball. Percept Mot Skills. 2015;121(3):635–53.