Background
Clinical reasoning (CR) is a critical competency in medical education, essential for effective decision-making in clinical practice. This study aimed to enhance CR skills among undergraduate medical students by comparing two instructional strategies: the E-learning by Concordance (e-LbC) approach and an interactive lecture-based method.
Methods
A quasi-experimental comparative study was conducted at the Faculty of Medicine, Suez Canal University, Egypt, during the 2021–2022 academic year. The study involved 60 fifth-year medical students, recruited through comprehensive sampling, and was implemented over one academic term. It consisted of three phases. In the first phase, an online Script Concordance Test (SCT), administered via the Wooclap platform, was used to assess students' baseline CR skills. The second phase comprised the educational intervention, in which the e-LbC method was used to teach the topic of painless vision loss, while the interactive lecture method was used for painful vision loss. In the final phase, a researcher-developed questionnaire assessed students' perceptions of each instructional method's impact on CR development, its difficulty level, and their satisfaction. The questionnaire's validity was established by medical education experts, and its reliability was confirmed using Cronbach's alpha.
Results
Statistical analysis using paired t-tests revealed no significant difference in pre-SCT scores between the groups. Post-SCT scores, however, showed a statistically significant improvement in both groups, with the e-LbC (painless vision loss) theme demonstrating a larger effect size (Cohen's d) and higher overall performance (p < 0.001). Additionally, 62% of students expressed satisfaction with the e-LbC method.
Conclusion
The e-LbC approach positively influenced students' clinical reasoning skills and engagement. Its integration with real-time assessment tools such as Wooclap, combined with its cost-effectiveness, flexibility, and user-friendliness, positions it as a valuable tool for enhancing medical education in diverse learning environments.
Introduction
Clinical reasoning (CR) is widely recognized as a foundational competence in both health professional education and clinical practice [1]. As such, structured clinical reasoning curricula are essential in undergraduate medical education. However, teaching this skill poses significant pedagogical challenges, particularly among undergraduate learners.
Clinical reasoning can be defined as a skill, process, or outcome by which clinicians observe, collect, and interpret clinical data to diagnose and treat patients [2]. Research shows that medical expertise is less about memory or problem-solving alone and more about how knowledge is structured and applied through individualized scripts developed via learning and experience [3]. Expert clinicians often make rapid and accurate diagnoses by drawing on these internalized cognitive frameworks known as illness scripts [4].
The theoretical underpinnings of this study draw on script theory and Croskerry’s dual-process theory, both of which illuminate how clinicians think and make decisions in complex clinical contexts. Script theory suggests that clinicians retrieve relevant mental models, illness scripts, based on prior knowledge and experience [2]. Croskerry’s dual-process theory differentiates between two modes of reasoning: intuitive, experience-based (Type 1), and analytical, deliberate (Type 2). Both are used in tandem depending on the clinical scenario [5, 6].
Learning by Concordance (LbC) builds directly on these theoretical foundations. It offers a structured and authentic method to engage learners in clinical reasoning under uncertainty, promoting the development of robust illness scripts and supporting both Type 1 and Type 2 reasoning processes [5, 7]. LbC simulates real-life decision-making by presenting learners with contextualized clinical scenarios, asking them to make judgments, and comparing their responses with those of expert panels.
LbC follows the model of a cognitive apprenticeship, allowing learners to observe expert thinking and progressively gain independence as their reasoning skills evolve [8]. Unlike traditional instructional models that emphasize knowledge acquisition followed by application, LbC integrates both processes within a single, interactive experience [9]. Its versatility allows it to be adapted to various domains, including clinical decision-making, ethics, and professionalism [10]. With advances in educational technology, LbC can now be delivered online (e-LbC), making it accessible, scalable, and convenient for a broad range of learners [11].
Prior studies have demonstrated the effectiveness of LbC in medical fields such as ECG interpretation and oral pathology. For example, Charton and colleagues found that LbC fostered deeper reflection among future general practitioners by prompting learners to compare their decisions with expert justifications [10]. Another study highlighted how LbC supported clinical reasoning development in dental education through a low-cost and scalable platform [12].
Roche et al. (2025) conducted a scoping review of the Learning-by-Concordance approach in health professions education and found that, although interest in LbC is growing, the literature remains limited and heterogeneous, reflecting its innovative and emerging nature [13]. The approach has been applied across various learner types, disciplines, and contexts [10, 11, 13, 14]. Most studies used cohort-based follow-up designs to investigate learner engagement and implementation strategies [10, 11, 13, 14].
Learners consistently reported positive experiences, including enhanced engagement, interactivity, and support for structuring clinical reasoning [5, 7, 11, 15,16,17,18]. The asynchronous digital format was especially appreciated for its flexibility, enabling learners to progress at their own pace [12, 19, 20]. However, these study designs do not allow determination of the added value of LbC compared with other educational approaches such as problem-based learning or case-based discussions [13].
Moreover, existing research has not sufficiently explored LbC in ophthalmology, a specialty that relies heavily on visual pattern recognition and diagnostic accuracy. Also, comparative evaluations between LbC and traditional instructional methods, such as interactive lectures, remain scarce in this domain. This gap limits our understanding of LbC’s broader applicability across medical specialties.
Accordingly, this study aims to evaluate the effectiveness of electronic Learning by Concordance (e-LbC) in enhancing clinical reasoning skills among undergraduate medical students during an ophthalmology clerkship. Specifically, it compares the effect of e-LbC instruction on the painless vision loss theme with that of traditional interactive lectures on the painful vision loss theme. The study poses the following question and hypothesis:
Research question
Does the e-LbC instructional method improve clinical reasoning in undergraduate medical students more effectively than traditional interactive lectures?
Hypothesis
Students exposed to e-LbC will demonstrate greater improvement in clinical reasoning, as measured by SCT scores, compared to those receiving traditional lecture-based instruction.
Materials and methods
Study design & participants
This quasi-experimental study (pre-test/post-test design) was conducted during the 2021–2022 academic year at the Faculty of Medicine, Suez Canal University. The Ophthalmology clerkship spanned two academic terms, each comprising three consecutive rounds. Each round lasted six weeks and included 20 fifth-year medical students, resulting in a total of 60 students per term. A comprehensive sample of 60 students enrolled in the second-term Ophthalmology clerkship was designated as the study group. These students participated in the intervention and were assessed for clinical reasoning using the Script Concordance Test (SCT).
An independent pilot sample of 30 students was selected from the 60 students enrolled in the first-term clerkship. This sample was used solely for piloting and validating the SCT instrument prior to its use in the main study. The pilot sample size followed the general rule of thumb for test validation—10% to 20% of the main study sample, or a minimum of 30–50 participants [10]—to ensure adequate content validity and clarity of the test items.
The Ophthalmology branch was selected from among the major branches of the fifth year of medical education because its staff were well trained in learning by concordance and already used the SCT in their formative assessments. All participants in the second-term (study) group were exposed to two modes of instruction: the painless vision loss theme through e-LbC and the painful vision loss theme through an interactive lecture. This design ensured that all students had a similar learning experience while avoiding the ethical concern of depriving one group of an experience, and it eliminated the need for a crossover design.
To ensure that any observed effect could be attributed to the instructional method rather than external factors, we adopted several measures:
1. Baseline assessment: A pre-test was administered to all students to establish their baseline knowledge and minimize the influence of prior information as a confounding variable.
2. Standardized assessment tool: All participants were evaluated using the same Script Concordance Test (SCT) to control for potential bias arising from variations in assessment methods.
3. Within-subject comparison of instructional methods: To ensure fairness and provide all students with a comparable learning experience, the same group of 60 students was exposed to both instructional methods, but across different thematic content. Specifically:
* The study condition involved teaching the painless vision loss theme using the e-Learning by Concordance (e-LbC) approach, with performance assessed through a Script Concordance Test (SCT).
* The control condition involved teaching the painful vision loss theme using an interactive lecture, also assessed using the same SCT format.
This within-subjects design allowed for a direct comparison of instructional methods while ensuring consistency in participant exposure and assessment.
The students were selected based on the following inclusion and exclusion criteria: students who were enrolled in the Ophthalmology clerkship during the academic year 2021–2022, students who consented to participate in the study, students who completed both the pre-test and post-test assessments, and students with no prior exposure to the content of the study in previous academic years. Exclusion criteria included students who were not enrolled in the Ophthalmology clerkship during the academic year 2021–2022 and students who did not consent to participate in the study.
The study design comprised three phases: pre-intervention, intervention, and post-intervention, as shown in Fig. 1.
[IMAGE OMITTED: SEE PDF]
Pre- and post-intervention phases
The students' clinical reasoning skills were assessed using an electronic Script Concordance Test administered via an online platform (Wooclap) [16]. The pre- and post-SCT were conducted in an online, proctored, closed-book setting in the faculty's electronic exam hall. Students were given general guidelines before starting the exam and were then asked to complete the SCT individually within a one-and-a-half-hour time frame.
Script concordance test
Although several tools exist for assessing clinical reasoning skills, a series of studies has shown that the script concordance test (SCT) has favorable psychometric properties in terms of reliability, face validity, and construct validity. Like LbC, the SCT was developed based on illness script theory. It was designed to assess clinical reasoning skills by measuring the degree of concordance between examinees' scripts and those of a panel of experts, and it can be administered in paper or electronic format [21, 22].
Each SCT item was designed with four components: (1) the patient's presentation; (2) a diagnostic hypothesis, investigative action, or treatment option relevant to that situation; (3) new information, introduced as a condition that might affect the diagnostic hypothesis, investigative action, or treatment option; and (4) a 5-point Likert-type scale to record the examinee's response. The examinee's task was to judge the effect of the new information on the status of the given diagnostic hypothesis, investigative action, or treatment option.
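To make the four-component structure concrete, the following is a minimal sketch of one SCT item modeled as a plain data record; the field names, the anchor wording, and the clinical example are illustrative assumptions, not part of the study's materials.

```python
from dataclasses import dataclass

# Illustrative anchor wording; actual SCT anchors vary with the question type
# (diagnostic, investigative, or therapeutic).
LIKERT_ANCHORS = {
    -2: "hypothesis practically eliminated",
    -1: "hypothesis less likely",
    0: "neither more nor less likely",
    1: "hypothesis more likely",
    2: "hypothesis practically confirmed",
}

@dataclass
class SctItem:
    presentation: str      # (1) the patient's presentation (clinical vignette)
    hypothesis: str        # (2) diagnostic hypothesis, investigation, or treatment
    new_information: str   # (3) condition that may affect the hypothesis
    response: int = 0      # (4) examinee's rating on the 5-point scale (-2..+2)

# Hypothetical example item for illustration only.
item = SctItem(
    presentation="A 65-year-old presents with sudden painless vision loss.",
    hypothesis="Central retinal artery occlusion",
    new_information="Fundoscopy shows a cherry-red spot at the macula.",
    response=2,
)
print(LIKERT_ANCHORS[item.response])
```

Each examinee's rating on an item is later compared with the distribution of expert panel ratings, as described under Scoring below.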
The online ophthalmology version of the script concordance test consists of two sections totalling 30 vignettes and 76 items (Supplementary 1). The SCT was designed to assess diagnostic, investigative, and treatment decisions, and it was divided into two sections based on the major theme of vision loss. The first section, containing seven clinical scenarios with fifteen vignettes of two to three items each, assessed students' clinical reasoning skills for the painful vision loss theme, which was taught through interactive lectures. The second section, containing eight clinical scenarios with fifteen vignettes of two to three items each, assessed students' clinical reasoning skills for the painless vision loss theme, which was taught through the electronic Learning by Concordance (e-LbC) approach.
Validation
To ensure the validity and reliability of the SCT used in this study, a multi-step validation process was conducted involving expert review and statistical analysis.
Expert panel review for face and content validity
The SCT was initially developed based on the illness script theory and reviewed by a panel of 10 subject matter experts in ophthalmology and medical education. These experts were selected based on their extensive experience in clinical reasoning and SCT development. They assessed the test items for relevance, clarity, and alignment with real-world clinical scenarios. Each expert provided qualitative feedback on the appropriateness of the cases, the plausibility of diagnostic options, and the wording of SCT items. Necessary modifications were made based on their input to enhance clarity and ensure clinical authenticity.
Pilot testing for construct validity
A pilot study was conducted with 30 students, separate from the main study participants, to further evaluate the SCT's validity. These students completed the SCT under exam conditions, and their responses were compared with those of the expert panel; a statistically significant difference (p < 0.05) between expert and student responses was observed. This finding is consistent with the premise of script concordance testing: examinees with more evolved illness scripts interpret data and make decisions in uncertain situations in ways that increasingly agree with those of experienced clinicians given the same clinical scenarios, and this performance can be measured using a five-anchor Likert-type scale [6]. The observation that SCT scores consistently rise with increasing levels of training supports this inference [23].
Reliability testing
The internal consistency of the SCT was measured using Cronbach’s alpha, which yielded a coefficient of 0.819, indicating good reliability.
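For readers who wish to reproduce this reliability check outside SPSS, the following is a minimal sketch of Cronbach's alpha computed from the standard formula (alpha = k/(k-1) multiplied by one minus the ratio of summed item variances to total-score variance); the function name and the toy data are assumptions for illustration only.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)       # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy example: 5 respondents x 4 items (hypothetical Likert responses).
demo = np.array([[4, 5, 4, 4],
                 [3, 3, 2, 3],
                 [5, 5, 5, 4],
                 [2, 2, 3, 2],
                 [4, 4, 4, 5]])
print(round(cronbach_alpha(demo), 3))
```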
Scoring
We invited ten expert ophthalmology faculty members to answer the SCT (the reference panel). These experts were selected based on their extensive experience in clinical reasoning and SCT development. The aggregate method was used to develop the scoring key, in which participants' answers are compared with those given by the reference panel. Credit for each answer was weighted by the expert responses: the credit is the number of panel members who chose that answer, divided by the modal value for the question [23]. With this method, every question has the same maximum (1) and minimum (0) value. The scores obtained on each question are summed to obtain a total test score, which is then divided by the number of questions and multiplied by 100 to yield a percentage score [24].
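As a worked illustration of this aggregate scoring rule, the sketch below scores one hypothetical item against a made-up panel distribution; the expert counts, anchor values, and function names are assumptions, not the study's actual data.

```python
# Hypothetical panel distribution for one item: how many of the ten experts
# chose each anchor of the 5-point scale (-2 .. +2).
expert_counts = {-2: 0, -1: 1, 0: 2, 1: 5, 2: 2}

def item_credit(answer: int, counts: dict[int, int]) -> float:
    """Credit = (number of experts choosing this answer) / (modal count)."""
    modal = max(counts.values())        # here 5, the count for anchor +1
    return counts.get(answer, 0) / modal

def percentage_score(answers: list[int], panel: list[dict[int, int]]) -> float:
    """Sum the per-item credits, divide by item count, multiply by 100."""
    credits = [item_credit(a, c) for a, c in zip(answers, panel)]
    return 100 * sum(credits) / len(credits)

# A student choosing +1 earns full credit (5/5 = 1.0);
# choosing +2 earns partial credit (2/5 = 0.4).
print(item_credit(1, expert_counts), item_credit(2, expert_counts))
```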
Wooclap application
Wooclap is an interactive online platform for enhancing classroom interaction and assessing students' comprehension in real time via cell phones or laptops. It uses straightforward methods to make studying more engaging for students.
It enables educators to design multiple types of interactive questions. Wooclap was selected because it offers an integrated template for the Script Concordance Test, created by Bernard Charlin, which simplifies adding the vignette and the three columns of the test. Furthermore, the Likert scale integrated into Wooclap supports the hypothesis-rating format required for the exam. Students respond to the questions via mobile device, tablet, or computer, and the findings are then presented live on the teacher's presentation screen. Wooclap allows tutors to collect, display, and compare the answers of students and experts on a single platform and to conduct script concordance exams online, as shown in Fig. 2.
[IMAGE OMITTED: SEE PDF]
When creating a Script Concordance Test on Wooclap, there are two possibilities: either the experts' opinions have already been obtained through some other method, or they are collected using Wooclap. In the current study we followed the first option: we collected the data from the panel of experts and then used Wooclap to collect the students' answers and compare them with the experts', following the steps below.
Option 1: the experts' data have already been gathered.
Step 1: Create the question. Select the Script Concordance Test in the list of interactions, then fill in the required fields (i.e., the case description, the hypothesis, and the additional information) and specify how many experts selected each answer on the Likert scale.
Step 2: Ask the question to your live audience. Use the "correct answer" button to display the experts' opinions alongside the students' answers [25].
We used the Wooclap platform to administer the Script Concordance Test (SCT) electronically and synchronously at the end of each session. The tool facilitated real-time data collection by allowing both students and expert panel members to submit their responses through the application. Wooclap enabled automatic compilation and comparison of student scores with the expert reference scores, providing immediate visualization of the concordance between learners and experts.
Intervention phase
Following the pre-test using the online SCT, the intervention phase began. One major ophthalmology theme (vision loss) was chosen and divided into two sub-themes. The painful vision loss sub-theme was taught through online interactive lectures followed by case examples, as shown in Fig. 3. The painless vision loss sub-theme was taught through e-LbC, as shown in Fig. 4. Both themes were delivered in instructional sessions led by a content expert in ophthalmology, as shown in Table 1.
[IMAGE OMITTED: SEE PDF]
[IMAGE OMITTED: SEE PDF]
[IMAGE OMITTED: SEE PDF]
Evaluation of student perceptions
At the end of the post-SCT, students' perceptions of the electronic LbC approach were assessed using an anonymous questionnaire. The questionnaire, composed of 24 items, was developed by the authors and adapted from previous studies [16, 17]. To ensure face and content validity, it underwent a validation process involving ten medical education experts with significant SCT experience. The experts completed an evaluation via an online form, which included an area for suggested improvements, and their feedback was used to refine and finalize the questionnaire, ensuring its relevance and clarity. The final validated version was then administered to the students. At the end of the session, students were reminded to complete the perception questionnaire, as outlined and approved in the informed consent. The questionnaire was administered once via Google Forms, disseminated through a shared link at the end of the sessions. All students in the study group were informed about the questionnaire and received it; all 60 responded, a 100% response rate. No missing data were identified, so all completed questionnaires were included in the analysis. Students rated each statement on a 5-point Likert-type scale from strongly disagree (1) to strongly agree (5).
Statistical analysis
Data were analysed using descriptive and inferential statistics, including paired t-tests, in SPSS version 24. P values less than 0.05 were considered statistically significant. Effect sizes were measured using Cohen's d.
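Although the study's analysis was run in SPSS, the same paired t-test and paired-design Cohen's d can be sketched in a few lines of Python; the score arrays below are placeholder data and the helper name is an assumption for illustration.

```python
import numpy as np
from scipy import stats

def paired_summary(pre: np.ndarray, post: np.ndarray) -> tuple[float, float, float]:
    """Paired t-test plus Cohen's d computed on the pre/post differences."""
    t_stat, p_value = stats.ttest_rel(post, pre)   # paired t-test
    diff = post - pre
    cohens_d = diff.mean() / diff.std(ddof=1)      # d for paired designs
    return t_stat, p_value, cohens_d

# Placeholder scores for 10 students (the actual study had n = 60 per theme).
pre = np.array([8.0, 7.5, 9.0, 6.5, 8.5, 7.0, 9.5, 8.0, 7.5, 8.5])
post = np.array([24.5, 25.0, 26.5, 23.0, 25.5, 24.0, 27.0, 25.0, 24.5, 26.0])
print(paired_summary(pre, post))
```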
Results
The study compared two educational methods across two themes, each evaluated before and after learning using the script concordance test (SCT). The results are presented in five sections: overall SCT score improvement, within-group comparisons, between-group comparisons, vignette-level analysis, and students' perceptions.
Overall SCT score improvement (Table 2)
Given that the total SCT scores were normally distributed, parametric tests were applied for their comparison. Students in both groups demonstrated statistically significant improvements in post-SCT scores relative to their pre-SCT scores (Table 2). Moreover, the e-LbC (painless vision loss) theme achieved a greater improvement, with a large effect size (Cohen's d = 4.16), than the interactive lecture (painful vision loss) theme (Cohen's d = 3.75).
Within-group comparisons (Table 2)
The average change in SCT scores was 17.45 (95% CI: 16.39–18.51) in the e-LbC group and 12.37 (95% CI: 11.54–13.20) in the lecture group, yielding a Cohen's d of 1.30, a large effect size that reinforces the instructional advantage of e-LbC (Table 2).
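For clarity, the between-group effect size reported here follows the conventional independent-groups form of Cohen's d applied to the change scores; because the group standard deviations are not reported in the text, the pooled-SD expression below is the standard formula rather than the study's exact computation.

$$
d = \frac{\Delta M_{\text{e-LbC}} - \Delta M_{\text{lecture}}}{SD_{\text{pooled}}},
\qquad
SD_{\text{pooled}} = \sqrt{\frac{SD_{\text{e-LbC}}^2 + SD_{\text{lecture}}^2}{2}}
$$

Working backwards from the reported means, $d = (17.45 - 12.37)/SD_{\text{pooled}} = 1.30$ implies $SD_{\text{pooled}} \approx 3.9$.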
[IMAGE OMITTED: SEE PDF]
Between-group comparison (Fig. 5; Table 2)
While baseline (pre-SCT) scores were comparable between the two groups (p = 0.667), post-SCT scores were significantly higher for the painless vision loss theme (mean = 25.18) than for the painful vision loss theme (mean = 20.19), indicating superior performance following e-LbC instruction (Fig. 5; Table 2). This analysis assumed normality of the data, which was assessed using the Shapiro-Wilk test (p = 0.03).
[IMAGE OMITTED: SEE PDF]
Item-level (Vignette) analysis (Table 3)
Unlike the total SCT scores, which were normally distributed and therefore suitable for parametric tests, the individual vignette scores did not meet normality assumptions, so the Mann–Whitney U test, which does not assume normality, was used (Shapiro–Wilk test across the 30 vignette scores of the developed SCT, p = 0.56). Post-SCT analysis showed that students in the e-LbC group outperformed their peers in 10 of the 15 vignettes, with statistically significant differences in 8 (p < 0.05), highlighting e-LbC's effectiveness across diverse clinical contexts (Table 3).
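A minimal sketch of this vignette-level comparison using scipy's Mann-Whitney U test is shown below; the two score arrays are invented placeholder data for a single vignette, not the study's results.

```python
import numpy as np
from scipy import stats

# Placeholder per-student credits on one vignette for each theme.
elbc = np.array([0.8, 1.0, 0.6, 1.0, 0.8, 0.4, 1.0, 0.8, 0.6, 1.0])
lecture = np.array([0.4, 0.6, 0.2, 0.8, 0.4, 0.6, 0.2, 0.4, 0.8, 0.6])

# Two-sided Mann-Whitney U test; no normality assumption required.
u_stat, p_value = stats.mannwhitneyu(elbc, lecture, alternative="two-sided")
print(u_stat, p_value)
```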
[IMAGE OMITTED: SEE PDF]
Student perception (Table 4)
The questionnaire results (Table 4) indicated that the use of electronic learning by concordance as an instructional method for the painless vision loss theme was generally well received by students. The reliability coefficient of the perception questionnaire was 0.823, indicating good internal consistency. Most students (90%) strongly agreed or agreed that the e-LbC approach helps improve clinical reasoning ability for the future, and 62% reported being satisfied overall with the instructional session. Moreover, 69% strongly agreed or agreed that the SCT can serve as a useful instructional method in the future, and 75% strongly agreed or agreed that the SCT is an effective assessment tool.
[IMAGE OMITTED: SEE PDF]
Discussion
This study introduces a novel electronic approach to teaching clinical reasoning in ophthalmology using the Learning by Concordance (e-LbC) method. It promotes student-centered, active engagement by allowing learners to apply knowledge in real time and receive immediate expert feedback through the Wooclap platform. This feature was especially beneficial during the remote learning challenges posed by the COVID-19 pandemic.
While this study focused on ophthalmology, the structured design of e-LbC, rooted in illness script theory and dual-process reasoning, makes it broadly applicable across medical specialties that involve diagnostic uncertainty or complex clinical judgment. Its adaptability across educational levels allows tailored implementation: novice learners can engage with guided cases and detailed feedback, while advanced learners can benefit from complex, ambiguous cases that support the transition from intuitive (Type 1) to analytical (Type 2) reasoning [3]. This adaptability was also observed in studies by Lafond et al. [26] and Charton et al. [10], who successfully implemented e-LbC in pulmonary and ECG training, respectively. Similarly, Vaillant-Corroy et al. [27] demonstrated that LbC can effectively foster the development of professionalism in dentistry, highlighting its versatility across different health professions and educational contexts.
Our findings highlight the potential of e-LbC to strengthen clinical reasoning, as shown by the significant improvement in post-SCT scores, particularly in the painless vision loss theme taught using e-LbC. This improvement may be attributed to the contextual, dynamic learning environment that mirrors real-life scenarios and supports illness script formation [28]. A systematic review also emphasized the value of structured reasoning programs in facilitating the transition from pre-clinical to clinical learning [29]. A Cohen's d of 4.16, as observed in the e-LbC painless vision loss theme, is an exceptionally large effect size, far exceeding the conventional threshold of 0.8 for a large effect. This suggests that the improvement in students' clinical reasoning skills following the e-LbC intervention was not only statistically significant but also educationally and practically meaningful, indicating that the instructional method influenced students' ability to reason through clinical cases involving diagnostic uncertainty [30]. The unusually large effect may also partly reflect a novelty effect: when learners encounter a new, interactive instructional method, their initial engagement and performance can be unusually high, even if the effect diminishes over time [31].
Furthermore, the confidence intervals for the change in SCT scores were narrow and non-overlapping (e-LbC: 95% CI 16.39–18.51; lecture: 95% CI 11.54–13.20), demonstrating a high level of precision in the estimates and implying that the observed differences between groups are statistically significant. The consistency of these intervals adds to the strength of the findings, as even the most cautious estimations show a significant improvement, particularly in the e-LbC group. This strengthens the intervention’s practical instructional impact.
Students appreciated the ability to compare their reasoning to that of expert panels, which reinforced their metacognitive skills and supported ongoing development through practice and feedback. This is consistent with Fernandez et al. [7], who noted that LbC fosters reflective thinking. Moreover, as Deschênes et al. [11] and Lecours et al. [5] reported, online LbC tools in clinical domains like dermatology helped students monitor their understanding and refine clinical knowledge through expert justifications.
Importantly, the e-LbC method also supports the development of learners’ tolerance for clinical uncertainty—a crucial competency in medical education. Encouraging the expression of uncertainty during LbC activities allows students to normalize and address it, as supported by research linking intolerance of uncertainty with poor clinical decision-making, burnout, and cognitive errors [30, 32]. However, the effectiveness of e-LbC may be influenced by factors such as prior knowledge, cognitive load, and case complexity, indicating the need for further research to optimize implementation conditions. Designing Learning-by-Concordance (LbC) cases presents the challenge of creating clinical scenarios that guide students in developing reasoning for complex and uncertain situations. LbC is particularly suited for cases where multiple responses may be appropriate, emphasizing the reasoning process over identifying a single correct answer [9].
The inclusion of SCTs, administered electronically via Wooclap, not only provided a methodologically aligned assessment strategy but also enhanced learner engagement. Students valued the SCT format both as a learning and assessment tool, aligning with findings by Lineberry et al. [33], and highlighting SCT’s role in promoting deeper understanding and reflective learning.
Overall, this study supports the scalability and practicality of e-LbC as an instructional strategy. Its simplicity, cost-effectiveness, and capacity to accommodate various group sizes make it suitable for diverse medical education settings. Student feedback in this study strongly endorsed the e-LbC method, consistent with findings across specialties that highlight its usability and pedagogical value [12, 33, 34].
Limitations
This study has some limitations, including a small, single-cohort sample within the ophthalmology specialty, which limits the generalizability of the findings. The quasi-experimental design without randomization may have introduced selection bias, and the use of different forms for the pre- and post-SCT questions raises concerns about test equivalence. Additionally, since the item-level analyses involved multiple comparisons, the risk of Type I errors cannot be ruled out, so those results should be interpreted with caution. Relying on self-reported feedback also introduces response bias, and the study may have emphasized positive outcomes while overlooking alternative explanations. Moreover, the lack of long-term follow-up limits insights into knowledge retention. While dividing the SCT into painless and painful vision loss versions helped differentiate the themes, it might have introduced content-related bias; however, both versions were carefully matched in structure and difficulty, and baseline scores indicated similar initial knowledge levels. The significant improvement in the painless vision loss theme therefore suggests that the instructional method, rather than content differences, drove the results. A final limitation is potential crossover effects: because the same students experienced both instructional methods (e-LbC and interactive lectures) on different topics, some overlap in knowledge or reasoning strategies may have occurred despite the thematic separation.
This study also has several strengths: it is the first to implement LbC in ophthalmology, it has a robust study design, it exposed students to two different teaching approaches, and it used a valid and reliable assessment method. Moreover, the use of Wooclap facilitated recording and comparing scores. Future studies should nevertheless include randomized designs, larger and more diverse samples, and validated assessment tools, and should incorporate qualitative and long-term follow-up analyses.
Conclusion
This study highlights the promise of combining the e-LbC method with SCT as a novel and effective way to develop clinical reasoning in Ophthalmology. Students valued both approaches pedagogically and suggested using them for other clinical subjects in their curriculum. As medical education advances, especially amid challenges like the COVID-19 pandemic, electronic formats such as e-LbC offer affordable and flexible solutions suitable for both small and large groups of students.
Data availability
Data can be obtained from the corresponding author upon request.
Abbreviations
SCT: Script Concordance Test
e-LbC: Electronic Learning by Concordance
CR: Clinical Reasoning
References
1. Lateef F. Clinical reasoning: the core of medical education and practice. Eur J Cardiovasc Med. 2021;11(3):1–9.
2. Daniel M, Rencic J, Durning SJ, Holmboe E, Santen SA, Lang V, Ratcliffe T, Gordon D, Heist B, Lubarsky S, et al. Clinical reasoning assessment methods: a scoping review and practical guidance. Acad Med. 2019;94(6):902–12.
3. Lubarsky S, Dory V, Audétat M-C, Custers E, Charlin B. Using script theory to cultivate illness script formation and clinical reasoning in health professions education. Can Med Educ J. 2015;6(2):e61.
4. Rencic J, Trowbridge RL, Fagan M, Szauter K, Durning S. Clinical reasoning education at US medical schools: results from a national survey of internal medicine clerkship directors. J Gen Intern Med. 2017;32:1242–6.
5. Lecours J, Bernier F, Friedmann D, Jobin V, Charlin B, Fernandez N. Learning-by-Concordance for family physicians: revealing its value for continuing professional development in dermatology. MedEdPublish. 2018;7:236.
6. Charlin B, Brailovsky C, Leduc C, Blouin D. The diagnosis script questionnaire: a new tool to assess a specific dimension of clinical competence. Adv Health Sci Educ. 1998;3:51–8.
7. Fernandez N, Foucault A, Dubé S, Robert D, Lafond C, Vincent A-M, Kassis J, Kazitani D, Charlin B. Learning-by-Concordance (LbC): introducing undergraduate students to the complexity and uncertainty of clinical practice. Can Med Educ J. 2016;7(2):e104.
8. Charlin B, Deschênes M-F, Fernandez N. Learning by concordance (LbC) to develop professional reasoning skills: AMEE guide 141. Med Teach. 2021;43(6):614–21.
9. Fernandez N, Deschênes M-F, Akremi H, Lecours L, Jobin V, Charlin B. What can designing learning-by-concordance clinical reasoning cases teach us about instruction in the health sciences? Perspect Med Educ. 2023;12(1):160.
10. Charton L, Lahmar A, Hernandez E, Rougerie F, Lorenzo M. Impact of an online learning by concordance program on reflection. BMC Med Educ. 2023;23(1):822.
11. Deschênes M-F, Goudreau J, Fernandez N. Learning strategies used by undergraduate nursing students in the context of a digital educational strategy based on script concordance: a descriptive study. Nurse Educ Today. 2020;95:104607.
12. Mainville G, Charlin B. Learning by concordance: a new tool for developing clinical reasoning in oral pathology and dental education. Oral Surg Oral Med Oral Pathol Oral Radiol. 2022;133(5):e127.
13. Roche A, Turcot AAR, St-Pierre A, Cherrier S, Audétat MC, Charlin B, Dyer JO. Learning-by-Concordance approach in health professions education: a scoping review. Perspect Med Educ. 2025;14(1):387–98.
14. Henriksen C, Jobin V, Deschênes M-F, Tremblay C, Charlin B, Fernandez N. Formation par concordance avec rétroaction multi-source aux questions qui émergent de la pratique médicale en contexte de pandémie COVID-19. Pédagogie Médicale. 2020;21:203–5.
15. Gisèle M, Hélène B, Bernard C, Marion S. Learning by concordance as a tool for paediatric dental traumatology education. Eur J Dent Educ. 2025;29(2):392–400.
16. Duprez F, Veleur M, Kania R, Zagury-Orly I, Fernandez N, Charlin B. Using learning-by-concordance to develop reasoning in epistaxis management with online feedback: a pilot study. Sci Prog. 2024;107:368504241274583.
17. Funk KA, Kolar C, Schweiss SK, Tingen JM, Janke KK. Experience with the script concordance test to develop clinical reasoning skills in pharmacy students. Curr Pharm Teach Learn. 2017;9(6):1031–41.
18. Tedesco-Schneck M. Use of script concordance activity with the think-aloud approach to foster clinical reasoning in nursing students. Nurse Educ. 2019;44(5):275–7.
19. Deschênes MF, Charlin B, Akremi H, Lecours L, Moussa A, Jobin V, Fernandez N. Beliefs and experiences of educators when involved in the design of a learning-by-concordance tool: a qualitative interpretative study. J Prof Nurs. 2024;54:180–8.
20. Verillaud B, Veleur M, Kania R, Zagury-Orly I, Fernandez N, Charlin B. Using learning-by-concordance to develop reasoning in epistaxis management with online feedback: a pilot study. Sci Prog. 2024;107(3):368504241274583.
21. Abouzeid E, Sallam M. Teaching by concordance: individual versus team-based performance. Innov Educ Teach Int. 2022;60:1–11.
22. Karila L, François H, Monnet X, Noel N, Roupret M, Gajdos V, Lambotte O, Benhamou D, Benyamina A. The script concordance test: a multimodal teaching tool. Rev Med Interne. 2018;39(7):566–73.
23. Lubarsky S, Dory V, Duggan P, Gagnon R, Charlin B. Script concordance testing: from theory to practice: AMEE guide 75. Med Teach. 2013;35(3):184–93.
24. Fournier JP, Demeester A, Charlin B. Script concordance tests: guidelines for construction. BMC Med Inform Decis Mak. 2008;8(1):18.
25. Wooclap. Script Concordance Test. https://docs.wooclap.com/en/articles/2743688-script-concordance-test
26. Chantal L, Driss K, Gagnon R, Bernard C, Fernandez N. Learning-by-Concordance of perception: a novel way to learn to read thoracic images. Acad Radiol. 2022;30(1):132–7.
27. Vaillant-Corroy AS, Girard F, Virard F, Corne P, Gerber Denizart C, Wulfman C, Vital S, Gosset M, Naveau A, Delbos Y, et al. Concordance of judgement: a tool to foster the development of professionalism in dentistry. Eur J Dent Educ. 2024;28(3):789–96.
28. Moghadami M, Amini M, Moghadami M, Dalal B, Charlin B. Teaching clinical reasoning to undergraduate medical students by illness script method: a randomized controlled trial. BMC Med Educ. 2021;21:1–7.
29. Si J. Medical students' self-directed learning skills during online learning amid the COVID-19 pandemic in a Korean medical school. Korean J Med Educ. 2022;34(2):145.
30. Sullivan GM, Feinn R. Using effect size, or why the P value is not enough. J Grad Med Educ. 2012;4(3):279–82.
31. Rodrigues L, Pereira FD, Toda AM, Palomino PT, Pessoa M, Carvalho LSG, Fernandes D, Oliveira EHT, Cristea AI, Isotani S. Gamification suffers from the novelty effect but benefits from the familiarization effect: findings from a longitudinal study. Int J Educ Technol High Educ. 2022;19(1):13.
32. Gheihman G, Johnson M, Simpkin AL. Twelve tips for thriving in the face of clinical uncertainty. Med Teach. 2020;42(5):493–9.
33. Lineberry M, Hornos E, Pleguezuelos E, Mella J, Brailovsky C, Bordage G. Experts' responses in script concordance tests: a response process validity investigation. Med Educ. 2019;53(7):710–22.
34. Gómez CI, Sequeros OG, Brugada GS, del Rey MLP, Martínez DS. Usefulness of SCT in detecting clinical reasoning deficits among pediatric professionals. Prog Pediatr Cardiol. 2021;61:101340.