Abstract
Introduction
Medical improvisation (improv) is gaining recognition as an effective method for enhancing communication skills among healthcare professionals. Successful implementation of medical improv training relies on dynamic refinement of curricula. In this project, we articulate a theoretically grounded framework for modifying a medical improv program, focusing on the crucial yet often overlooked process of curriculum refinement in this growing field.
Methods
We used the Collaborative Improvement Model [1] as a guiding framework for the curriculum revision. Our analytic approach involved a sub-analysis of quantitative and qualitative data collected as part of the Alda Healthcare Experience (AHE) mixed-methods process evaluation, including surveys, in-depth interviews, and observations. Quantitative and qualitative data were collected from key stakeholders of the AHE program, including clinical co-facilitators, improv co-facilitators, and participants. Our analysis was guided by interpretive descriptive design, drawing on inductive and deductive approaches as well as quantitative and qualitative mixed-methods triangulation.
Results
Inductive analysis of co-facilitator interviews revealed four dimensions for interpreting the identified areas of revision: desirability (of the revision), feasibility (of the revision), triangulation (whether different sources of data are divergent or corroborated), and evaluability (whether the revision would confound the program evaluation). We illustrate how this framework is implemented through specific examples from our curriculum revision.
Conclusion
Our manuscript articulates a theoretically grounded framework for revising medical improv curricula. This structured yet adaptable framework could be useful for curricular modification in other medical improv and experiential learning curricula.
Introduction
Medical improvisation (improv) training has been shown to enhance team communication among healthcare professionals [2]. Implementation of medical improv training is widely reported in the medical education literature. However, the effectiveness of such training depends on continual modification of curricula (i.e., the content and delivery of the training) to meet participants’ needs and contextual constraints. There is a pressing need for theoretically grounded, comprehensive process evaluations and subsequent curriculum modifications to ensure the effectiveness of such training. In this paper we aim to achieve two primary objectives: (1) to outline a framework designed to facilitate curricular modification for medical improv and other experiential learning-based training programs and (2) to illustrate how this framework is implemented through specific examples from our curriculum revision.
Medical improvisation training
Improvisation, often referred to as “improv,” is a form of theater characterized by spontaneous and collaborative actions between partners [3, 4]. The principles and practices of improv have been used in non-theatrical settings including clinical contexts [2, 3, 5]. Medical/clinical improvisational training draws upon applied improvisation theories and exercises to enhance communication skills of health professionals (e.g., interpersonal communication, teamwork/collaboration, adaptability, resilience in uncertain situations) [4,5,6]. During the training, participants engage in improv-based exercises designed to develop their skills for collaborative and spontaneous communication; subsequent debrief sessions then further help participants understand how these exercises can be applied in medical settings [6, 7]. Medical/clinical improv has been adopted by educational programs at academic medical centers due to its potential to enhance healthcare professionals’ communication, teamwork, and various professional skills crucial to future success. Empirical studies have revealed that participation in medical improv programs can result in improved responsiveness, communication skills, teamwork, wellbeing, and resilience among clinical health professionals [2,3,4,5,6,7].
A need for curriculum revision
A unique feature of medical improv, as with other applied improv training, lies in its adaptability to both participants and the specific contexts in which it is applied [8, 9]. At the core of improv training is the goal of enhancing participants’ “mind muscles” to respond effectively during interactions grounded in the practice of deep listening [8]. However, the exercises, reflection routines, and methods used to embody core improv principles can vary based on participants’ backgrounds, curricular design, and the specific context of application. Consequently, the success of medical improv training depends heavily on timely evaluations and the ongoing fine-tuning of curricula, including the content and delivery of the training [9]. Failure to revise improv curricula can result in ineffective training and wasted resources.
Furthermore, all facilitators of improv training in our organization have training in dramaturgy, a field that places great importance on continuous fine-tuning based on contexts and real-time dynamics [10]. Applying principles from dramaturgy to medical/clinical improv training, facilitators often hold routine debrief sessions among themselves after each workshop to fine-tune the curriculum [9]. Given this practice, it becomes essential to articulate the processes of curriculum revision and refinement for applied improv training. However, scant attention has been paid to explaining the processes of revising medical improv curricula or to conducting theoretically grounded curricular evaluations. This oversight hinders the advancement of medical/clinical improv training, especially its widespread dissemination and implementation.
The Alda healthcare experience training
The Alda Healthcare Experience training (AHE), the focus of our project, is an evidence-based improv communication workshop designed to enhance communication skills and resilience among health professionals [11]. The AHE training described in this manuscript consists of two components: an in-person, two-hour component that focuses on the application of improv principles in healthcare team communication, and a one-hour online “booster” session that reinforces the applicability of improv principles in healthcare team contexts. Nearly 500 diverse healthcare professionals from five departments at Stony Brook Medicine participated in the workshops over the course of 16 months.
The design of the AHE workshop was inspired by Kolb’s experiential learning theory and Rolfe’s reflective model [12, 13]. Each AHE workshop is co-facilitated by an improv-trained facilitator (the improv co-facilitator) and another facilitator with a clinical background (the clinical co-facilitator). During each AHE workshop, participants engage in experiential improv-based exercises to gain first-hand, concrete experiences. Those exercises embody two principles from improv theater, “yes…and…” (validate and build) and “make your partner look good,” and focus on many aspects of communication skills [11]. Specifically, AHE consists of experiential exercises that focus on identity perception, deep listening, relationship building, and situational awareness, with an emphasis on workplace communication in clinical contexts. These activities are followed by debrief and reflection sessions that aim to facilitate the transformation of experience and, thus, knowledge gain among participants. Specifically, debrief sessions in AHE include guided discussions of participants’ observations of their own experiences, analyses of those experiences, and applications to workplace communication.
The collaborative improvement model
Curriculum revision entails improving the quality of training according to established standards, including those set by authoritative consensus, those that emerged from prior practice, and those derived from theory [14]. Our curriculum revision, specifically, was inspired by the Collaborative Improvement Model (CIM) [1], a theory-driven framework used in medical curriculum revision. The CIM prescribes four phases of curriculum revision: preparation, discovery, interpretation, and implementation. The preparation phase sets the stage for curriculum revision; it usually includes obtaining leadership commitment and establishing stakeholder involvement. The discovery phase involves collecting information necessary for curriculum revision, such as needs assessments. The interpretation phase entails the analysis and interpretation of the obtained evidence and the subsequent decision-making processes. The implementation phase includes the allocation of resources to implement the revised curriculum [1].
We aim to fill the gap in rigorous curricular revision approaches by outlining a substantive framework for curricular modification of medical improv programs and other experiential medical training, illustrated through our AHE curricular modifications informed by a mixed-methods process evaluation.
Methods
Our analytic approach involved a sub-analysis of quantitative and qualitative data collected as part of the AHE mixed-methods program evaluation. A convergent design was used for the program evaluation [15]: we collected both quantitative and qualitative data and conducted independent and integrated analyses. The objective of the current sub-analysis was to characterize and articulate a framework for medical improv curriculum revision. Our analysis was guided by interpretive descriptive design, drawing on inductive and deductive approaches to qualitative analysis as well as quantitative and qualitative mixed-methods triangulation.
AHE program evaluation
Data included in the current sub-analysis were from the AHE mixed-methods program evaluation [11]. Quantitative and qualitative data were collected from both participants and facilitators (see Table 1 for the AHE process evaluation data collection schedule). Healthcare providers from Stony Brook Medicine who participated in the Alda Healthcare Experience workshops were recruited, and their participation was voluntary. Specifically, data for this sub-analysis came from 150 participants and facilitators; data collection for this study occurred from September 2022 to April 2023, during the first wave of our overall AHE intervention. Prior to data collection, participants were provided with information about the evaluation project, their voluntary participation, confidentiality measures, and compensation details. This project was funded by a cooperative agreement with the Health Resources and Services Administration (U3NHP45406). Participants received $5 gift cards for each completed survey and $15 gift cards for each interview. Facilitators were compensated for their roles, which included facilitating workshops and participating in interviews. The amount of compensation reflects the time and effort required from participants; it is not excessive and, because participation was voluntary, was unlikely to create injustice, exploitation, or coercion [16]. Additionally, incentivizing participants with gift cards is a widely used and accepted way to increase participation rates in research [17]. Our team members, all of whom had specialized training in their respective areas as reflected in their educational credentials, were responsible for designing the quantitative and qualitative data collection and analysis.
Quantitative data collection and analysis. Two online surveys of participants were conducted: one before the AHE (baseline, T1) and one at the conclusion of the AHE (T2). A separate survey was completed by improv co-facilitators immediately after each workshop (T2). Outcome variables in this sub-analysis include participants’ self-rated engagement in the workshop and improv co-facilitators’ ratings of participant engagement. Participants rated ‘how engaged you were in today’s workshop’ on a sliding scale ranging from 0 (not engaged at all) to 100 (extremely excited/engaged) [11]. Improv co-facilitators rated participants’ engagement on a scale ranging from 0 (none) to 3 (a lot) using the following items: the degree to which participants initiate questions, initiate applications (draw a connection between the exercise and a scenario in the workplace), and are willing to engage in exercises and debriefs during the workshop. The average of the three items (alpha = 0.78) was used as the co-facilitator-rated engagement score. Other variables of interest included the number of participants in each workshop and co-facilitator-reported disruptions and cohesion. Improv co-facilitators rated five kinds of disruptions, such as late arrivals and early departures, on a scale ranging from 0 (none) to 4 (many), and an average score was calculated to represent overall disruptions. For cohesion, improv co-facilitators rated ‘how would you rate the cohesion of the participants in today’s training’ on a sliding scale ranging from 0 (very disconnected) to 100 (very united). We used correlational analyses to examine the quantitative data.
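To make the scoring concrete, the sketch below computes a three-item composite engagement score, its internal consistency (Cronbach’s alpha), and a Pearson correlation with participant self-rated engagement. This is an illustrative sketch only, not the authors’ analysis code; all ratings, sample sizes, and variable names are hypothetical.

```python
# Illustrative sketch with synthetic data (not the authors' actual analysis).

def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item)."""
    k = len(items)                       # number of items
    n = len(items[0])                    # number of workshops rated
    def var(xs):                         # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_var_sum = sum(var(it) for it in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_var_sum / var(totals))

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical co-facilitator ratings (0-3 scale), one value per workshop:
initiate_questions    = [2, 3, 1, 2, 3, 2]
initiate_applications = [2, 3, 1, 1, 3, 2]
willing_to_engage     = [3, 3, 2, 2, 3, 3]

alpha = cronbach_alpha([initiate_questions, initiate_applications, willing_to_engage])

# Composite co-facilitator-rated engagement: mean of the three items per workshop.
composite = [(a + b + c) / 3 for a, b, c in
             zip(initiate_questions, initiate_applications, willing_to_engage)]

# Correlate with hypothetical mean participant self-rated engagement (0-100 scale).
participant_engagement = [78, 92, 55, 60, 95, 80]
r = pearson_r(composite, participant_engagement)
print(f"alpha = {alpha:.2f}, r = {r:.2f}")
```

With real data, library routines (e.g., `scipy.stats.pearsonr`) would typically replace these hand-rolled functions; the sketch only shows how an averaged multi-item score and its reliability coefficient are defined.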
Qualitative data collection and analysis. For qualitative data collection, semi-structured interviews were conducted with clinical co-facilitators after every AHE workshop. These individual interviews lasted an average of 21 min (range 11–46 min). In the interviews, clinical co-facilitators were asked about issues related to implementation of the AHE workshop and participants’ reactions. In addition to the quantitative survey questions given to the improv co-facilitators outlined in the previous paragraph, we asked one open-ended survey question, which prompted improv co-facilitators to provide detailed information about any curricular and/or logistical challenges they observed or experienced during the workshop. Additionally, we conducted semi-structured interviews with participants who attended the workshop; those interviews lasted about 60 min. The process evaluation questions covered participants’ overall experiences and their challenges and thoughts about the AHE workshop.
[IMAGE OMITTED: SEE PDF]
Integrated mixed-methods analysis for the current project
Following the Collaborative Improvement Model [1], we describe the curriculum revision process in the preparation, discovery, interpretation, and implementation phases. Since there is no pre-existing framework to guide interpretation of results, we used interpretive descriptive design to characterize dimensions in a framework that can guide curriculum revision decision making for medical/clinical improv workshops and other experiential curricula.
Analysis of the qualitative data was guided by interpretive description [18]. This method has been widely used in medical education research due to its power to identify themes and patterns among subjective perspectives while accounting for variations between individuals [19]. Instead of coding data deductively with small units of analysis, interpretive description strives to identify overarching themes and patterns; researchers thus generate codes in a “bottom-up” manner by immersing themselves in the data [18, 19]. Additionally, we drew upon quantitative and qualitative mixed-methods triangulation in the interpretation process.
Qualitative data were first analyzed inductively, guided by the broad practical research question, “What can be learned from co-facilitators and participants about the AHE curriculum and areas for improvement?” This resulted in a list of potential areas for revision in the AHE curriculum. Next, qualitative data from co-facilitators were analyzed inductively with a goal of identifying critical dimensions for guiding the curriculum revision processes and interpretation of data. Finally, quantitative data and qualitative data from different stakeholders were compared and triangulated. To address trustworthiness in qualitative data analyses, we used triangulation, peer debriefing, and critical reflexivity [20]. Specifically, triangulated data from various sources (e.g., co-facilitators, participants) using different methods (e.g., qualitative, quantitative methods) were compared iteratively against one another to ensure credibility and confirmability. Independent colleagues who were not directly involved in the evaluation were invited to critically review the analyses and preliminary results; this peer debriefing was used to ensure credibility and confirmability of our analysis. Additionally, we generated an audit trail of critically reflexive analytic memos to ensure the analyses were credible and dependable.
Results
Preparation and discovery phases (of the curriculum revision)
The preparation and discovery phases of the curriculum revision began with the design of the AHE program and its process evaluation. Prior to beginning the project, the AHE team planned to continuously assess and improve the curriculum. To this end, the leadership and AHE evaluation team used a process evaluation as the basis for the curriculum revision [11]. In the process evaluation, qualitative and quantitative data were collected from AHE informants (key stakeholders and participants) to understand individual- and group-level facilitators of and barriers to workshop implementation. After about half of the workshops were completed (wave 1), we analyzed the data and proposed a revised curriculum to implement in the latter half of the workshops (wave 2).
The lead AHE curriculum designer, an experienced improv facilitator, constructed the workshop to be modifiable. All three AHE improv co-facilitators have extensive experience in experiential learning curriculum design and facilitation. Previously, workshops were followed by routine debrief sessions to fine-tune the curriculum. This curriculum revision process therefore connected naturally to that existing, more informal process and built upon the facilitators’ previously held values of constant improvement and refinement.
To discover potential areas of change in the curriculum (Table 2), we used data from the AHE mixed-methods process evaluation [11]. Table 2 shows the main results of our integrated mixed-methods analysis. The first column lists the identified areas of revision, and the second column explicates the data sources from which we identified them. Potential areas of revision included changes in setting, such as group composition and size; changes in improv activities, including the context, flow, and relevance of activities; and changes in logistics, such as workshop timing (e.g., daytime, evening) and reducing disruptions. Column three shows the identified dimensions for decision-making about the revision processes (detailed in the “interpretation” section), and column four shows the final implementation of revisions (detailed in the “implementation” section). Columns three and four are color-coded to indicate the need for change (green = yes; orange = maybe; red = no). Please see the Appendix for exemplar quotations from qualitative interviews.
[IMAGE OMITTED: SEE PDF]
Interpretation phase
Inductive analysis of improv co-facilitator interviews revealed four dimensions for interpreting the identified areas of revision; in other words, we used those four dimensions to benchmark our decision-making processes in the curriculum revision. These dimensions were validated through iterative discussions among researchers and peer debriefing. The first dimension is desirability (desirable, undesirable, or unclear), which considers whether an identified area of change from the discovery phase was desirable. This dimension was operationalized through clearly defined criteria: whether the potential change aligns with the learning goal, whether it improves learner satisfaction, and whether it enhances learning effectiveness. The second dimension is feasibility (high, low, or varied), which considers whether the potential revision is feasible. The third dimension, triangulation, examines whether different sources of data are divergent or corroborated. The last dimension, evaluability (confounding or non-confounding), emerged because our curricular revision was embedded in a process and outcome program evaluation. It was operationalized through a clearly defined criterion: whether the potential change would confound the outcome evaluation or maintain the methodological rigor of the outcome evaluation effort.
Implementation phase
The implementation phase included evaluators working with the lead AHE curriculum designer to revise the curriculum based on what was discovered and the four interpretive dimensions (desirability, feasibility, triangulation, and evaluability). As can be seen in the Implementation column of Table 2, there was a binary decision (change vs. no change) for each identified area of revision. When an identified area of revision satisfied all four interpretive dimensions, the lead AHE curriculum designer collaborated with the evaluation team to incorporate changes into the revised curriculum. Specific revisions are described in the implementation column of Table 2.
A decision of no change could result from failing one or more of the four interpretive dimensions. For example, accounting for each participant’s career ranking was an identified area for curricular change mentioned by both participants and co-facilitators. However, we did not implement any revision in this regard. First, it was unclear how desirable this change was; specifically, it was not clear whether controlling for participants’ career ranking aligns with the learning goal, improves learner satisfaction, or enhances learning effectiveness. Second, the feasibility of controlling for participants’ career ranking is relatively low, since doing so requires high administrative effort. Third, the data were divergent on this potential area of revision: according to the interviews, some participants mentioned that being with higher-ranked participants increased anxiety, while others found that the career rankings of co-participants did not matter. Lastly, controlling for participants’ career ranking could confound the evaluative effort, because the AHE aims to enhance communication skills within healthcare teams, which in real-world settings consist of professionals with various career rankings.
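The decision logic described above (implement a revision only when all four interpretive dimensions are satisfied; a single failure yields no change) can be sketched schematically. The code below is our hypothetical encoding of that rule, not a tool the authors used; the category labels mirror those named in the Results.

```python
# Schematic encoding of the four-dimension decision rule (hypothetical).

from dataclasses import dataclass

@dataclass
class ProposedRevision:
    name: str
    desirability: str   # "desirable" | "undesirable" | "unclear"
    feasibility: str    # "high" | "low" | "varied"
    triangulation: str  # "corroborated" | "divergent"
    evaluability: str   # "non-confounding" | "confounding"

def decide(rev: ProposedRevision) -> bool:
    """Binary decision: implement only if every dimension passes; any
    single dissatisfaction yields 'no change'."""
    return (rev.desirability == "desirable"
            and rev.feasibility == "high"
            and rev.triangulation == "corroborated"
            and rev.evaluability == "non-confounding")

# The career-ranking example from the text fails on all four dimensions:
career_ranking = ProposedRevision(
    name="control for participants' career ranking",
    desirability="unclear",        # alignment with learning goals uncertain
    feasibility="low",             # high administrative effort
    triangulation="divergent",     # participant accounts conflicted
    evaluability="confounding",    # would confound the outcome evaluation
)
print(decide(career_ranking))  # prints: False
```

In practice the dimensions were weighed qualitatively through discussion rather than mechanically, but the sketch makes the conjunctive ("all four must pass") structure of the rule explicit.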
Discussion
Our project addresses a gap in the literature by articulating a theoretically grounded framework for modifying medical improv training curricula. Using a sub-analysis of a mixed-method process evaluation and guided by the Collaborative Improvement Model (CIM), we structured the AHE curriculum revision process into four phases: preparation, discovery, interpretation, and implementation. Preparing the AHE curriculum revision required leadership commitment, evaluation support, and a sustained collaboration between the lead curriculum designer and evaluation team.
Drawing on data collected from key stakeholders, we identified a list of areas for potential revision and four interpretive dimensions as decision-making benchmarks for our curriculum revision. Specifically, potential changes to workshop setting, improv activities, and logistics were interpreted using the four dimensions: (1) desirability (whether the identified areas of change were desirable with regard to learning goal alignment, learner satisfaction, and learning effectiveness), (2) feasibility, (3) triangulation (whether data from different sources are divergent or corroborated), and (4) evaluability (whether the identified area of change would confound the program evaluation). To our knowledge, ours is the first study to outline a framework for revising medical improv training curricula using multi-source empirical data. The framework and method used here may also be applicable to modifying other experiential learning curricula.
Our paper echoes and extends the existing literature on best practices for medical improv training. Several changes made in our curriculum have also been mentioned by others studying applied improv training. For example, Hoffmann-Longtin, Rossing, and Weinstein [9] have emphasized the importance of linking improv activities to real-world scenarios. Likewise, we identified the need to further contextualize the workshop by focusing more on healthcare team communication and to increase the number of connections made between improv activities and learning outcomes during the workshop. Extending this literature, our project highlights the importance of setting/context and logistics in medical improv workshop implementation. Furthermore, the four dimensions used to interpret curriculum revisions could serve as a stepping stone for future studies to explicate and formalize the decision-making processes in curriculum revision.
There are several limitations to this study. First, a key challenge in experiential training is its context-dependent nature. The substantive content in the areas of revision and the interpretive dimensions uncovered in our study (i.e., desirability, feasibility, triangulation, and evaluability) may be less applicable to training on different topics. Additionally, the implementation of curriculum revisions may differ as a function of resources and organizational setting. However, the way we explicate and implement the framework for curriculum revision should still be applicable, and the framework’s flexibility should offer a starting point for others to adapt it across diverse institutional and curricular contexts. For instance, when replicating this framework, programs should consider adding a rigorous evaluation involving key stakeholders. When interpreting and implementing revisions, programs should consider organizational constraints such as class sizes, scheduling, and facilitator expertise, while maintaining the interpretive rigor we outline here. Future studies comparing implementation across different settings would offer valuable insights into which aspects of the framework are transferable and which require more contextual adjustment.
Second, the implemented curriculum revisions were proposed by the lead AHE curriculum designer in collaboration with the evaluation team, and we lack longitudinal follow-up data controlling for confounding variables to assess whether the revised curriculum enhances the learning experience or has a sustained impact on participants. Future work should use longitudinal data and control for potential confounders to assess the extent to which our implemented revisions addressed stakeholders’ concerns and whether the revised curriculum enhances learning outcomes in the long run. Lastly, since the mixed-methods process evaluation was not designed for the sole purpose of curriculum revision, informants might have revealed other potential areas of revision if given more space and prompts. Additionally, future studies may benefit from incorporating interrater agreement for coding qualitative data as an alternative to the methods used in this study, to enhance the reliability of qualitative findings.
Despite those limitations, our manuscript articulates a theoretically grounded framework for revising medical improv curricula. By grounding our framework in the Collaborative Improvement Model and a mixed-methods design, we offer a structured yet adaptable approach to curricular modification that might be applicable in other experiential learning curricula. The findings presented here could serve as a starting point for other programs seeking to modify and revise their curricula. We look forward to partnering with others interested in pursuing this work, in the hope of providing rigorously evaluated experiential training that is key to supporting the next generation of healthcare professionals in our dynamic healthcare environment.
Data availability
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
References
1. Nosek CM, Scheckel MM, Waterbury T, MacDonald A, Wozney N. The collaborative improvement model: an interpretive study of revising a curriculum. J Prof Nurs. 2017;33(1):38–50. https://doi.org/10.1016/j.profnurs.2016.05.006.
2. Landry-Wegener BA, Kaniecki T, Gips J, Lebo R, Levine RB. Drama training as a tool to teach medical trainees communication skills: a scoping review. Acad Med. 2023;98(7):851–60. https://doi.org/10.1097/ACM.0000000000005121.
3. Gao L, Peranson J, Nyhof-Young J, Kapoor E, Rezmovitz J. The role of improv in health professional learning: a scoping review. Med Teach. 2019;41(5):561–8. https://doi.org/10.1080/0142159X.2018.1505033.
4. Kaplan-Liss E, Lantz-Gefroh V, Bass E, et al. Teaching medical students to communicate with empathy and clarity using improvisation. Acad Med. 2018;93(3):440–3. https://doi.org/10.1097/ACM.0000000000002031.
5. Mehta A, Fu B, Chou E, Mitchell S, Fessell D. Improv: transforming physicians and medicine. Med Sci Educ. 2021;31(1):263–6. https://doi.org/10.1007/s40670-020-01174-x.
6. Watson K, Fu B. Medical improv: a novel approach to teaching communication and professionalism skills. Ann Intern Med. 2016;165(8):591. https://doi.org/10.7326/M15-2239.
7. Fessell D, McKean E, Wagenschutz H, et al. Medical improvisation training for all medical students: 3-year experience. Med Sci Educ. 2020;30(1):87–90. https://doi.org/10.1007/s40670-019-00885-0.
8. Dudeck T, McClure C, editors. Applied improvisation: leading, collaborating and creating beyond the theatre. Methuen Drama, an imprint of Bloomsbury Publishing Plc; 2018.
9. Hoffmann-Longtin K, Rossing JP, Weinstein E. Twelve tips for using applied improvisation in medical education. Med Teach. 2018;40(4):351–6. https://doi.org/10.1080/0142159X.2017.1387239.
10. Proehl GS, Kugler DD, Lamos M, Lupu M. Toward a dramaturgical sensibility: landscape and journey. Fairleigh Dickinson University; 2012.
11. Preis H, Dobias M, Cohen K, Bojsza E, Whitney C, Pati S. A mixed-methods program evaluation of the Alda Healthcare Experience: a program to improve healthcare team communication. BMC Med Educ. 2022;22(1):897. https://doi.org/10.1186/s12909-022-03972-w.
12. Kolb DA. Experiential learning: experience as the source of learning and development. Prentice-Hall; 1984.
13. Rolfe G. Reflective practice: where now? Nurse Educ Pract. 2002;2(1):21–9. https://doi.org/10.1054/nepr.2002.0047.
14. Novak DA, Hallowell R, Ben-Ari R, Elliott D. A continuum of innovation: curricular renewal strategies in undergraduate medical education, 2010–2018. Acad Med. 2019;94(11S):S79–85. https://doi.org/10.1097/ACM.0000000000002909.
15. Creswell JW. Research design: qualitative, quantitative, and mixed methods approaches. 4th ed. SAGE; 2014.
16. Largent EA, Lynch HF. Paying research participants: regulatory uncertainty, conceptual confusion, and a path forward. Yale J Health Policy Law Ethics. 2017;17(1):61.
17. Ichimiya M, Muller-Tabanera H, Cantrell J, Bingenheimer JB, Gerard R, Hair EC, Donati D, Rao N, Evans WD. Evaluation of response to incentive recruitment strategies in a social media-based survey. Digit Health. 2023;9:20552076231178430.
18. Thorne S, Kirkham SR, MacDonald-Emes J. Interpretive description: a noncategorical qualitative alternative for developing nursing knowledge. Res Nurs Health. 1997;20(2):169–77. https://doi.org/10.1002/(SICI)1098-240X(199704)20:2<169::AID-NUR9>3.0.CO;2-I.
19. Thompson Burdine J, Thorne S, Sandhu G. Interpretive description: a flexible qualitative methodology for medical education research. Med Educ. 2021;55(3):336–43. https://doi.org/10.1111/medu.14380.
20. Guba EG. Criteria for assessing the trustworthiness of naturalistic inquiries. ECTJ. 1981;29(2):75–91.
© 2025. This work is licensed under http://creativecommons.org/licenses/by-nc-nd/4.0/.