Purpose
This paper is one of seven in this volume elaborating upon different approaches to quality improvement in education. This paper aims to delineate a methodology called Implementation Science, focusing on methods to enhance the reach, adoption, use and maintenance of innovations and discoveries in diverse education contexts.
Design/methodology/approach
The paper presents the origins, theoretical foundations, core principles and a case study showing an application of Implementation Science in education, namely, in promoting school-community-university partnerships to enhance resilience (PROSPER).
Findings
Implementation Science is concerned with understanding and finding solutions to the causes of variation in a program's outcomes relating to its implementation. The core phases are: initial considerations about the host context; creating an implementation structure; sustaining the structure during implementation; and improving future applications.
Originality/value
Few theoretical treatments and demonstration cases are currently available on quality improvement models that are commonly used in other fields and might have value in improving education systems internationally. This paper fills this gap by elucidating one promising approach. The paper also derives value from permitting a comparison of the Implementation Science approach with the other quality improvement approaches treated in this volume.
Introduction
Brief history of Implementation Science
Many experts suggest that Implementation Science arose in the field of healthcare in response to a persistent and documented form of service failure (Durlak and DuPre, 2008; Meyers et al., 2012; Kelly, 2013). Promising and empirically tested interventions and programs were not delivering expected results or showing a demonstrable impact on desired outcomes. Even when they did, failures of transferability (i.e. failure to get interventions to work in different contexts) brought an increasing concern about the complex nature of the links between existing scientific evidence on programs and their actual application (Kelly, 2013).
In the field of healthcare, concerns arose as early as the mid-1940s, when evidence began to accumulate that interventions rolled out in clinical settings did not produce the outcomes promised through empirical rounds of testing in controlled settings (Kelly, 2013). Initially, inquiries focused on why these interventions and programs were not implemented effectively and with fidelity. In the 1960s and 1970s, researchers also found that the design and focus of policy had little to do with the successful implementation of programs, even when the policy in question prescribed "empirically tested" programs (Pressman and Wildavsky, 1984). Glasgow et al. (2012, p. 1274) assert: "Despite demonstrable benefits of many new medical discoveries, we have done a surprisingly poor job of putting research findings into practice". The authors make the point that the discovery of new and improved interventions is important, but that to realize the benefits of these interventions, greater attention needs to be paid to dissemination and implementation to enhance the reach, adoption, use and maintenance of these new discoveries.
There is a growing body of literature asserting that the nature of implementation processes actually influences desired outcomes (Meyers et al., 2012; Kelly, 2012). Indeed, researchers have found a powerful link between the behaviors, beliefs and values of practitioners involved in implemented programs and the outcomes of that implementation (Aarons et al., 2012). Practitioners should not carry sole responsibility for the act of implementing tested interventions; rather, accountability for the quality of program implementation should also extend to developers and researchers (Meyers et al., 2012). Moreover, the role of intermediaries is emerging as a major requirement for ensuring high-quality and sustainable implementation.
Despite these findings, disciplined attention to program implementation remains, for the most part, an optional consideration of the scientific enterprise. Practitioners' behaviors and beliefs, contextual variables and implementation fidelity, among others, are not routine considerations when striving for program effectiveness. Implementation Science, then, is a product both of the increasing realization that the characteristics and dynamics of implementation matter greatly for program effectiveness, and of the sobering realization that most efforts overlook these aspects of programs. In 2006, the journal Implementation Science was launched to provide a scientifically rigorous platform for the discussion of these very issues. Today, platforms and networks are emerging with the objective of encouraging understanding and application of the principles of Implementation Science.
Brief history of Implementation Science in education
While Implementation Science has a relatively short history in the field of education (the first Handbook of Implementation Science for Psychology in Education, edited by Kelly and Perkins, was published in 2012), researchers across various contexts have highlighted many of the same program implementation issues as being of key importance. As in healthcare, a body of literature has accumulated over the course of decades suggesting that the implementation characteristics of educational interventions and programs (e.g. the individuals who implement them, their beliefs about the program and themselves, the context in which the program is implemented) hold a great deal of influence over program outcomes.
The well-known Change Agent Study, conducted by the RAND Corporation (Berman and McLaughlin, 1974, 1976), was a major turning point in educational research, orienting the field's attention toward understanding the process of implementation. The report series considered a number of federally funded programs ("change agents") to determine which of these supported educational change, especially within the instructional core of classrooms (the work in which schools and teachers engage), and which environmental factors in turn affected change agent programs. The reports were not evaluative; rather, they attempted to describe the changes that occur during a program, how and why they occur and what impact this has on the operations of educational organizations.
The links between scientific inquiry and practice are generally weaker in education than in healthcare. Education programs have historically been created and disseminated without much concern for their potential effectiveness (Kelly, 2013). Instead, interventions have been instituted on the basis of social and political considerations, and have been terminated without recourse to findings from program evaluations. Slavin (2002) argues that federal education funding has rarely been linked to direct evidence of program effectiveness or implementation fidelity. In the late 1990s, Congress appropriated $150m per year to provide funds for schools to adopt proven, comprehensive reform models. This allocation was increased to $310m within two years. However, few of these "comprehensive" reform models showed strong evidence of effectiveness. Herman (1999) reviewed research on 24 of these programs and categorized them according to differing levels of evidentiary warrant: strong, promising, marginal, mixed, weak or no research backing. Herman found that of 2,665 comprehensive school reform (CSR) grants made, only one in five was rated as showing strong evidence, while nearly two-thirds had mixed, weak or no research backing.
Subsequent investigation of the few CSR designs that did have strong research backing found that the reforms did not necessarily produce the results intended, nor did they substantially alter instructional practices (Correnti and Rowan, 2007; Cohen and Moffitt, 2010; Rowan et al., 2009). A key finding of this line of work was that "Although [program] adoption is quick and easy, implementation at local sites turns out to be difficult" (Correnti and Rowan, 2007, p. 299). Researchers also found that while there was a great deal of variation in program implementation and outcomes, this variation was not explained by differences in target populations, program features or evaluation techniques (Correnti and Rowan, 2007; Borman et al., 2003). On the other hand, evidence was emerging that well-defined and well-specified instructional improvement programs, strongly supported by on-site facilitators and local leaders who demand fidelity to program designs, can produce changes in teachers' classroom practices (Correnti and Rowan, 2007).
This work outlined a paradox in implementation research in education, namely, that making changes in American schools is extremely difficult (Berman and McLaughlin, 1975; Rivlin and Timpane, 1975; Darling-Hammond and Snyder, 1992; Cuban, 1993; Elmore and McLaughlin, 1998), but that faithful implementation of externally designed innovations is still possible (Crandall et al., 1982; Firestone and Corbett, 1988). Problems of implementation of educational programs were positioned in large part as problems of professional learning; teachers began to be viewed as the key "delivery mechanism" in innovations. Program developers, therefore, needed to devise strategies for helping teachers learn to use new instructional practices successfully in their specific work environments.
During the 1980s, 1990s and early 2000s, the groundwork for Implementation Science was laid in education, and research slowly began to reveal the factors most closely associated with both professional learning and a high degree of instructional change. Cohen and Hill (2001) found that innovation focused on changing very specific curriculum-embedded elements of instructional practice (as opposed to generic concepts of teaching) was associated with instructional change. Elmore and Burney (1997) assert that a program should have clearly defined goals for change, detailing what exactly is to be changed and the steps necessary for that change. These goals should be further clarified by written material and other supports for trainers to teach the new design to teachers (Peterson and Emrick, 1983). The new practices should also represent an ambitious and marked change from current practices (Huberman and Miles, 1984; Datnow and Castellano, 2000).
Finally, change programs require a knowledgeable external facilitator whose job is to work closely with teachers while program implementation is ongoing (McLaughlin and Marsh, 1978; Crandall et al., 1986). Research increasingly highlights that the teacher, or practitioner, is central to the intervention, powerfully impacting implementation quality, effectiveness and outcomes. Recent research allows us to identify and promote the type of instructional approach most likely to foster good implementation of programs, interventions or curricula. The most recent findings suggest that an overview of the theory underlying a given intervention or program needs to be taught to prospective implementers and practitioners in some detail. This allows practitioners to understand why the program is designed as it is and to reflect on the purpose and values supporting it. It also offers new implementers the opportunity to explore whether their own values and beliefs align with the proposed intervention; if not, successful implementation is unlikely. As Aarons (in Kelly and Perkins, 2012) discovered, misalignment of values negatively affects all aspects of the implementation process. In addition, coaching models that offer direct support via modeling, observation and feedback are significantly more effective in terms of implementation quality, outcomes and sustainability than other instructional models (Neufield and Donaldson in Kelly and Perkins, 2012). However, these approaches cannot stand alone, and additional strands such as those mentioned above are required.
Thus, while Implementation Science is a relatively recent addition to the field of education, concerns regarding effective program implementation and the contextual factors associated with it have occupied the interests of educational researchers for the better part of four decades. The following sections describe how Implementation Science looks in one school-community-university partnership, and how the method identifies and solves problems.
A case applying the Implementation Science approach in education: promoting school-community-university partnerships to enhance resilience (PROSPER)
The goal of PROSPER is to use the combined efforts of prevention scientists, the Cooperative Extension System and local schools and community leaders to develop community partnerships that strengthen families and help young people avoid substance abuse and behavioral problems (Spoth et al., 2008). In most countries, a central problem in programming aimed at preventing youth from engaging in such high-risk behaviors is how to operate programs with high levels of implementation quality, sustain that quality over time and demonstrate improved outcomes for targeted children and youth. One innovative attempt to solve this vexing set of problems is the formation of community-based coalitions of key leaders who bring together schools and agencies within a community to address the needs of those at risk. Figure 1 depicts the details of such an effort, with particular attention placed on the elements of program adoption and implementation.
With funding from the National Institute on Drug Abuse, Pennsylvania State University and Iowa State University developed such a model, called PROSPER (promoting school-community-university partnerships to enhance resilience). PROSPER takes the novel approach of linking three important US infrastructure systems - the land-grant university, the Cooperative Extension System (the outreach arm of land-grant universities) and the public school system - which together create a continuum from science to practice. By creating new linkages between these three systems, PROSPER's goal is to enact science-based community empowerment that demonstrates high-quality implementation and long-term sustainability of prevention programs in communities.
Model description
Since 2001, the PROSPER model has shown that strong collaboration among communities, school districts and university-based prevention scientists can promote the uptake and sustainability of partnerships that yield measurable public health benefits. Outcome data collected as part of the PROSPER trial bear this out, along with evidence of high-quality program delivery, long-term sustainability and the potential for PROSPER to expand into new areas of prevention.
The PROSPER multi-level partnership model is locally led by Cooperative Extension System educators and is intended to build community capacity to deliver evidence-based family and youth interventions (Spoth et al. , 2004). PROSPER has five core components, which together represent an effort to foster the translation of science into effective community practice:
1. The PROSPER community team.
2. A three-tier partnership structure based in the land-grant university system and supporting stable, proactive technical assistance.
3. A multi-phased partnership developmental process oriented toward sustainability.
4. Evidence-based interventions selected from a menu.
5. Ongoing process and outcome evaluation.
Model structure
As noted, the model involves local community teams that are provided with continuous, proactive technical assistance through state land-grant universities. PROSPER's main goals are to reduce rates of early substance use and problem behaviors, as well as to promote positive youth development and family competences (Spoth et al., 2004). Structurally, PROSPER entails a three-tier community-university partnership model (Figure 2; Spoth and Greenberg, 2005; Spoth et al., 2004).
Community-based teams are led by local Extension educators and co-led by school personnel. The Extension educators serve as linking agents between the local team and university-based prevention specialists and resources. Relatively small in size (8 to 12 people), these strategic teams include representatives from Cooperative Extension Services, local schools and community agencies, as well as parents and youth. The PROSPER teams were designed to achieve a focused set of intervention goals across a number of developmental phases (Spoth and Greenberg, 2005). Following team formation, team activities included the selection of a universal family-focused intervention and a school-based intervention from a menu of evidence-based options. A second primary activity was the recruitment of families into the program.
Local PROSPER teams received resources and support from a state-level team, including technical assistance from a prevention coordinator (PC) who functions as a liaison between the university prevention team and local teams. The primary role of PCs is to provide proactive, solution-focused technical assistance to local PROSPER teams concerning issues of program adoption, implementation and sustainability. This proactive technical assistance entails frequent, often weekly, contacts with local PROSPER team members to engage actively in problem-solving[1]. Moreover, PCs attend local PROSPER team meetings, facilitating and documenting overall partnership functioning, as well as facilitating effective two-way communication with the university-based and state-level groups. In addition, PCs as a group meet regularly to work through issues as they arise. This allows PCs to learn from each other and provides consistency in the type of technical assistance they offer to local teams.
The primary functions of this state-level team are:
* scientific guidance concerning preventive intervention selection and implementation;
* proactive technical assistance to the local PROSPER teams through the PCs (through solution-focused problem-solving technical assistance);
* administrative oversight;
* input on data collection and analyses;
* school-based reports of local PROSPER teams; and
* project reports.
Phases within the model
Similar to other partnership models, community teams proceed through a series of four developmental phases (Chinman et al., 2004; Hawkins et al., 2008, 2002; Livit and Wandersman, 2004; Stevenson and Mitchell, 2003). The first phase in PROSPER is the organizational phase. It lasts for six to eight months, with major tasks entailing partnership formation activities, including recruiting key members, receiving training in the model, establishing program goals based on local needs and resources and coalescing as a team (Feinberg et al., 2007a, 2007b). The second phase, the operations phase, lasts two to three years. Its major tasks involve implementing chosen programs and/or policies; applying a monitoring system; and initiating sustainability training and planning. The length of this phase depends on the model, the duration of initial funding and other considerations. The third phase, the "early sustainability phase", overlaps with phase two. Its focus is on sustaining the effective activities of the local community team, and it often involves engaging other community entities to create a permanent structure for the team's operations and sponsored activities (Feinberg et al., 2007a, 2007b; Spoth and Greenberg, 2005). The fourth and final phase, ongoing operations and sustainability, involves the continued strengthening of the community teams' internal and external functioning, and maintaining quality implementation of programs.
Model impact on implementation
One of the central questions of the PROSPER research trial was whether quality in program delivery could be maintained over time. Typically, in real-world settings, programs are poorly implemented, often leading to diminished effectiveness or no effectiveness at all. In PROSPER communities, teams were charged with monitoring quality through regular program observations and with providing feedback to implementers if quality was low. Data from the process evaluation showed that PROSPER programs were consistently implemented with high quality. Over a five-year period, quality ratings averaged 90 per cent or higher (Spoth et al., 2007). In the post-grant period, PROSPER teams continue to monitor their implementation quality, with many Cooperative Extension System team leaders including this as one of their regular PROSPER-related tasks. After the first three years of the project, PROSPER teams were increasingly responsible for generating their own programming resources, and continue to do so now, nine years after the original grant funding expired. PROSPER showed that proactive technical assistance and support, together with an ongoing quality monitoring system, are implementation factors clearly linked to rapid, effective and efficient translation of the science of preventive health intervention into community practice.
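To make the kind of observation-based quality monitoring described above concrete, the following sketch averages observer ratings per implementer and flags anyone falling below a chosen threshold so that feedback and coaching can be targeted. It is a minimal illustration in Python: the data schema, the 0-100 rating scale and the 80 per cent threshold are assumptions for this example, not elements of the PROSPER instruments.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class SessionObservation:
    """One observer's rating of a delivered program session (hypothetical schema)."""
    session_id: str
    implementer: str
    quality_rating: float  # 0-100 scale, e.g. share of components delivered as intended

def summarize_quality(observations, threshold=80.0):
    """Average ratings per implementer and flag those needing feedback.

    `threshold` is an illustrative cut-off, not a PROSPER-specified value.
    """
    by_implementer = {}
    for obs in observations:
        by_implementer.setdefault(obs.implementer, []).append(obs.quality_rating)

    summary = {}
    for implementer, ratings in by_implementer.items():
        avg = mean(ratings)
        summary[implementer] = {
            "average_quality": round(avg, 1),
            "needs_feedback": avg < threshold,  # trigger coaching/feedback if low
        }
    return summary

# Example with made-up observations
obs = [
    SessionObservation("s1", "teacher_a", 92.0),
    SessionObservation("s2", "teacher_a", 88.0),
    SessionObservation("s3", "teacher_b", 71.0),
]
print(summarize_quality(obs))
```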
Elucidation of the implementation science approach
How are problems identified and considered within Implementation Science?
Implementation Science, by definition, is focused on understanding the adoption, implementation and spread of interventions. As such, the specification of problems has less to do with program effectiveness (what actions might solve some problem) than with the adoption and implementation of interventions (thought to be effective) in complex learning environments (Glasgow et al., 2012). This is not to say that Implementation Science does not consider program-related outcomes, but rather that it assesses the continued effectiveness of an intervention as it is spread to, and adopted by, populations in diverse contexts. The method therefore considers its problems to be associated with organizations' capacities to implement programs, barriers to effective implementation and the failure of program transfer (when results cannot be replicated in other contexts).
A series of reviews of educational interventions has attempted to identify factors associated with program outcomes (Fixsen et al. , 2005; Greenhalgh et al. , 2005; Stith et al. , 2006; Durlak and DuPre, 2008; Blase et al. , 2012). The authors focused on associations between outcomes and the type of innovation, characteristics of program providers, community characteristics, the organization implementation system and the support system. While the reviews do not agree on all characteristics associated with program outcomes, they each found 11 characteristics of the organization implementation system that were associated with outcomes. These characteristics were: funding and resources, positive work climates, shared decision-making processes, coordination with other organizations, the formulation of tasks, leadership, program champions (supporters), administrative support, program providers' skill proficiency, training and technical assistance. These authors found that problems affecting program outcomes most often originate in the implementation process rather than within the program itself or in the system of supports structured around it.
Implementation Science also prioritizes barriers to implementation effectiveness as high-leverage problems (Kelly, 2012). While the method considers issues such as lack of fidelity to program design and specifications, lack of organizational capacity building within a program and low participation rates in the target population to be problematic, it treats the underlying causes of these phenomena as the problems to be solved. The method positions these phenomena as outcomes of processes operating in and around complex social contexts, stakeholder attitudes, perceptions and values, as well as resources and policy directives (Kelly and Perkins, 2012). These and other underlying issues may hinder educational interventions by causing, for example, a lack of program fidelity or low participation in implementation.
The failure to get program outcomes to transfer from one context to another is another important consideration in Implementation Science. While the method begins with evidence-based approaches tested in controlled settings, it is concerned with successful replication while remaining sensitive and responsive to contextual factors (Kelly, 2013). Barriers to program transfer, according to the method, often have their roots in behavioral change, which depends on developing stakeholders' learning skills and confidence in using these skills in the new interventions. Problems in this regard are mainly due to the failure to anticipate and take into account factors and processes that cause variation in program implementation (Kelly, 2012). In turn, this variation is sometimes found to be related to the characteristics of practitioners (e.g. their beliefs, background knowledge and predispositions) and contextual factors. However, program characteristics can also play a role in the success of implementation. For instance, the fit of the program, in terms of how well it addresses the designated problem and aligns with organizational goals and standards of operating, is sometimes related to transferability. In addition, the complexity of the program has been found to directly influence fidelity (particularly as it relates to what those who use the approach refer to as adherence to design, dosage, quality of delivery and participants' response to the program). According to Kelly (2013, p. 4), the:
[s]tate of the art [in Implementation Science] is in pulling together the critical steps in implementation processes and developing integrated, evidence-based approaches which act directly to counter negative effects of key aspects in design and delivery of implementation.
Where do solutions to these problems come from?
As Implementation Science is principally concerned with understanding and finding solutions to the causes of variation in program outcomes relating to implementation, solutions of necessity must come from both the research and practice sides of the partnership. Unlike other methods presented in this volume, Implementation Science does not prescribe how stakeholders and actors are to arrive at solutions. It does, however, specify that all roles should be involved in carrying out the functions of dissemination and implementation (Wandersman et al. , 2008). There are three levels or categories of roles in Implementation Science that should be involved in developing solutions (Wandersman et al. , 2008). First, the role of the synthesis and translation system (STS) is to synthesize and translate scientific theory and evidence into user-friendly interventions. The STS promotes evidence-based innovations that, in theory, can achieve the intended outcomes of the partnership. Second, the role of the support system (SS) is to work with those directly responsible for implementing the intervention to support quality implementation processes. The SS accomplishes this by building two types of capacities: innovation-specific (e.g. necessary knowledge, skills and motivation required for the effective deployment of the innovation) and generic (i.e. effective structural and functional factors). Third, the delivery system (DS) comprises the individuals, organizations and communities that carry out the day-to-day functions involved in the innovation (i.e. the front-line practitioners). The DS puts the innovation into practice to achieve the program outcomes.
Although the role of each of these systems in finding solutions to problems relating to implementation failures is different, the descriptions of Wandersman et al. suggest that all three systems, and the knowledge they possess, are essential to that activity. Practitioners, and the DS more generally, are intimately acquainted with issues relating to the context in which they operate, as well as the resource constraints they face. The SS understands what it takes to build practitioners' capacity to implement programs well and with integrity. Further, this system appreciates the importance of practitioners' beliefs, perceptions and background knowledge of program interventions in the act of implementation, and can plan for facilitating the evolution of practitioners' behavioral patterns accordingly. Having synthesized research findings into a usable intervention for the DS in a particular context, the STS has the knowledge and expertise to ensure the intervention, despite local adaptation, remains accountable to the research base.
Within a researcher-practitioner partnership in Implementation Science, solutions to problems of implementation or with the spread of innovations are worked out collaboratively and without a clear hierarchy. Many traditional researcher-practitioner partnerships position researchers in a favored position as the "knowers" who have solutions to practitioners' (the "doers") problems. Implementation Science, on the other hand, asserts that all stakeholders in the partnership, whether in the STS, the SS or the DS, are at once responsible and accountable for the implementation of interventions and their eventual success. It is in all parties' interest, therefore, to work collaboratively toward quality implementation and positive program outcomes.
Interviews conducted for this chapter revealed several criteria for weighing the potential of competing solutions[2]. First, resource constraints in terms of start-up and ongoing execution (sustainability) should be considered. A second consideration is the probability that the solution will alleviate barriers to implementation and therefore enhance program outcomes. Third, solutions should be weighed against the possibility of unintended consequences in other areas of the system in which they are operative.
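One simple way to operationalize these three criteria is a weighted scoring matrix in which each candidate solution is rated on resource feasibility, likely reduction of implementation barriers and risk of unintended consequences, and the ratings are combined into a single score. The sketch below is illustrative only; the weights, the 0-10 rating scale and the candidate names are assumptions, not values reported by the interviewees.

```python
# Hypothetical weighted scoring of candidate solutions against the three
# criteria named above; weights and scores are illustrative only.
CRITERIA_WEIGHTS = {
    "resource_feasibility": 0.3,   # start-up and ongoing (sustainability) costs
    "barrier_reduction": 0.5,      # likelihood of alleviating implementation barriers
    "low_side_effects": 0.2,       # inverse of the risk of unintended system consequences
}

def score_solution(ratings: dict) -> float:
    """Weighted sum of 0-10 ratings on each criterion."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

candidates = {
    "add_on-site_coach": {"resource_feasibility": 5, "barrier_reduction": 9, "low_side_effects": 8},
    "shorten_training":  {"resource_feasibility": 9, "barrier_reduction": 4, "low_side_effects": 6},
}

# Rank candidates from highest to lowest weighted score
ranked = sorted(candidates, key=lambda name: score_solution(candidates[name]), reverse=True)
for name in ranked:
    print(name, round(score_solution(candidates[name]), 2))
```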
How does Implementation Science enact and warrant them as improvements?
Literature on Implementation Science suggests that there are four phases of enacting and implementing improvements. From a review of 25 Implementation Science frameworks, Meyers et al. (2012) discerned that "quality implementation" could be divided into four temporal phases:
1. initial considerations about the host context;
2. creating an implementation structure;
3. sustaining the structure during implementation; and
4. improving future applications.
Although experts interviewed for this chapter warned that these phases are not as linear as Meyers et al. suggest, there does appear to be a great deal of agreement regarding the phases in general (Fixsen et al., 2005; Stith et al., 2006; Durlak and DuPre, 2008; Wandersman et al., 2008).
The first phase (initial considerations regarding the host setting) includes pre-implementation assessments, decisions about adaptation and capacity-building strategies. Pre-implementation assessments involve needs and resource assessments, a contextual fit assessment and a capacity/readiness assessment. The purpose of these tools is to determine the resources and supports necessary to enact quality implementation; the extent to which the intervention "fits" with the implementation context, as well as with the organization's goals; whether an organization currently has the capacity to implement the intervention; and how the organization's capacity can be enhanced.
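The sketch below gathers the pre-implementation assessments named above (needs and resources, contextual fit and capacity/readiness) into a single readiness summary that indicates whether to proceed, build capacity first or reconsider adoption. It is a hypothetical roll-up for illustration; the 0-1 scoring and the decision thresholds are assumptions rather than instruments prescribed by the method.

```python
from dataclasses import dataclass

@dataclass
class PreImplementationAssessment:
    """Hypothetical roll-up of the assessments described above (scores on a 0-1 scale)."""
    needs_and_resources: float   # are needs documented and resources/supports sufficient?
    contextual_fit: float        # does the intervention fit the setting and its goals?
    capacity_readiness: float    # does the organization currently have the capacity?

    def decision(self, minimum: float = 0.6) -> str:
        """Return an illustrative go / build-capacity / reconsider decision."""
        scores = (self.needs_and_resources, self.contextual_fit, self.capacity_readiness)
        if all(s >= minimum for s in scores):
            return "proceed to planning the implementation structure"
        if self.capacity_readiness < minimum <= self.contextual_fit:
            return "invest in capacity-building before implementation"
        return "reconsider adoption or adapt the intervention"

# Example: good fit but low current capacity -> build capacity first
print(PreImplementationAssessment(0.8, 0.7, 0.4).decision())
```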
Considerations for the host setting also include making evidence-based decisions about adaptation. Decades of research findings suggest that the spontaneous adaptation of educational interventions is more or less inevitable, as practitioners modify programs to fit their current contexts (Berman and McLaughlin, 1976; Durlak and DuPre, 2008). In a survey of school-based programs, Ringwalt et al. (2003, p. 387) assert:
We can thus say now with confidence that some measure of adaptation is inevitable and that for curriculum developers to oppose it categorically, even for the best of conceptual or empirical reasons, would appear to be futile.
Bumbarger and Perkins (2008, p. 56) state:
If we know that absolute fidelity never occurs in natural conditions, arguing for it is futile. Conversely, if we know that most adaptation is not innovation, but rather program drift, then the call for implementer flexibility is based on an unsubstantiated philosophical argument. Viewing the question as "either/or" has, in our view, unnecessarily polarized the issue and has prevented the field from addressing fidelity and adaptation in a way that promotes best practice.
The implication of these statements is that researchers and support providers must work with practitioners to decide collectively what to modify in the intervention and how the modifications will be accomplished. Thus, as Bumbarger and Perkins (2008, p. 56) suggest:
We can promote the highest degree of fidelity possible by working with implementers to make informed adaptation decisions which permit flexibility (in areas of program delivery not hypothesized to be directly responsible for program outcomes) but do not detract from an evidence-based intervention's (EBI's) theory of change.
This process will differ by program, but it may include pre-implementation discussion and consultation, or an iterative process of trialing and testing modifications in the host context.
A final consideration regarding the host setting is the identification and development of capacity-building strategies. In essence, this means the support providers - with both researchers and practitioners - determine which capacities need to be developed on both sides of the partnership, and plan for how the support system might facilitate the development of these capacities. The method takes care to emphasize that capacity development is not only for practitioners, but that researchers need to develop their abilities too, first, by conducting ecologically valid tests of implementation and, second, by interacting meaningfully and effectively with individuals who do the work of implementing programs.
The second phase of enacting improvements in Implementation Science, which also occurs before interventions are executed, is the design and creation of a structure to support the act of implementation. The method prescribes two such structural features: implementation teams and an implementation plan. Implementation teams comprise researchers, support providers and practitioners, and are accountable for the implementation of the educational intervention in the host context. The implementation plan is the roadmap that guides the work of the implementation team as it executes the program. The plan should also draw on practitioners' knowledge of the host setting, as well as the support providers' knowledge of effective support strategies. The creation of these structures, through which researchers, practitioners and support providers collaborate to facilitate quality implementation, is necessary before implementation takes place.
The third phase in Implementation Science occurs in parallel with implementation, and sustains the structural features of the partnership (i.e. the implementation team and implementation plan) through support strategies. That is, support providers should assist practitioners and researchers as they enact (and, if necessary, adapt) the implementation plan developed earlier. This support may take the form of technical assistance, coaching or the supervision of front-line practitioners. Two components of this support are process evaluation and feedback mechanisms. Process evaluation should be used to monitor ongoing implementation of the program in the host context. If adaptations to the program have been decided upon, the process evaluation should incorporate these and be designed to determine whether they are being adhered to. Feedback mechanisms facilitate a common understanding among all parties of the current state of implementation and the extent to which problems are emerging during the enactment of the implementation plan. These feedback mechanisms are intended to be informative - guiding, supporting and informing adaptation - rather than purely for the purposes of accountability. As the implementation team learns more about the nuances of program delivery within a context, that learning can be incorporated into the implementation plan and communicated to all stakeholders in the team.
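As an illustration of how a process evaluation might track adherence to agreed adaptations and turn it into feedback for the implementation team, the sketch below tallies, per site and adaptation, how often the agreed version was actually delivered and flags combinations whose adherence falls below a threshold. The record format, field names and threshold are assumptions for this example, not part of any specific Implementation Science toolkit.

```python
from collections import defaultdict

# Hypothetical process-evaluation records: each entry notes whether an
# agreed-upon adaptation was followed in a delivered session.
records = [
    {"site": "school_1", "adaptation": "shortened_parent_session", "adhered": True},
    {"site": "school_1", "adaptation": "local_examples_in_lessons", "adhered": False},
    {"site": "school_2", "adaptation": "shortened_parent_session", "adhered": True},
]

def adherence_feedback(records, flag_below=0.8):
    """Summarize adherence to agreed adaptations per site for team feedback."""
    tally = defaultdict(lambda: {"adhered": 0, "total": 0})
    for r in records:
        key = (r["site"], r["adaptation"])
        tally[key]["total"] += 1
        tally[key]["adhered"] += int(r["adhered"])

    feedback = []
    for (site, adaptation), counts in tally.items():
        rate = counts["adhered"] / counts["total"]
        if rate < flag_below:  # illustrative threshold for raising the issue with the team
            feedback.append(f"{site}: revisit '{adaptation}' (adherence {rate:.0%})")
    return feedback

print(adherence_feedback(records))
```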
The fourth phase in deploying Implementation Science, which occurs after the enactment of the implementation plan, is the improvement of future applications of the program. The purpose of this phase is to cull learning from the experience of implementation for all stakeholders (i.e. practitioners, support providers and researchers). It builds on the third phase by adding a retrospective analysis of what worked, what did not work and what could be improved with regard to sustained, quality implementation. The data for this assessment should come from self-reflection, feedback mechanisms between stakeholders, empirical data collected as part of the implementation plan and the adaptations that were monitored.
While these four stages may not occur in a completely linear process in Implementation Science partnerships, the above descriptions do point out that a great deal of work is done before program implementation even begins (Meyers et al. , 2012). The method attempts to ensure that quality implementation is well-planned, and that all stakeholders are on the same page before enacting the implementation plan. Important elements of Implementation Science partnerships, such as setting clear and specific standards for implementation, identifying adaptations to programs, developing an implementation plan and designing support structures, must all occur before program implementation begins.
How does Implementation Science provide for the spread of knowledge?
Implementation Science partnerships concentrate on two issues when thinking about knowledge spread: scaling up and sustainability (Glasgow et al. , 2012). In many other approaches, both of these issues are thought to be exogenous, or something to consider after the original programs have been implemented. Implementation Science, however, positions both scale-up and sustainability as important initial considerations during the development, design and execution of all programs. Indeed, spread is a concern even when the implementation team is initially selecting programs.
In Implementation Science partnerships, bringing programs to scale inherently involves thinking about and planning how to maintain high fidelity while making necessary adaptations to a program to fit particular and different host contexts. The diversity of implementation settings (e.g. districts and schools) creates a challenge for understanding how best to scale up successful interventions. While other methods tend to address issues relating to diverse implementation settings after the initial program has been executed, considering these issues is one of the main tenets of Implementation Science regardless of the stage of implementation. As discussed in the previous sections, the method emphasizes both robustness of results and sensitivity to local contexts. Many of the initial assessments in the first phase of implementation (considerations regarding the host context) are designed to help the implementation team understand and incorporate characteristics of the local context into the implementation plan. Identifying and collectively deciding on program adaptations is also done with an eye toward ensuring that programs fit the local context. In short, the method has a number of built-in mechanisms that help identify where there is a good fit between program and context, gauge amenability to scaling up and keep local practitioners involved throughout the steps of implementation, all of which facilitate transferring a program to multiple contexts.
With regard to sustainability, Implementation Science partnerships plan for the long-term integration of effective interventions within specific settings. Capacity building, led by support providers, plays a large role in planning for the eventual shift of responsibility from the implementation team and external experts to local practitioners. Cultivating community ownership of, and participation in, the process of program implementation is therefore part of the implementation plan. Shediac-Rizkallah and Bone (1998, p. 103) assert: "The literature overwhelmingly shows a positive relationship between community participation and sustainability". In addition, experts point to the importance of revisiting sustainability while the intervention is being implemented. Indeed, Greenberg et al. (2015) provide empirical evidence linking the early functioning of implementation teams to sustainability. Mechanisms within Implementation Science partnerships should allow for the re-examination of programs to determine how well they fit within the structure and workflow of the organizations in which they are delivered. While interviews conducted for this chapter revealed that Implementation Science partnerships typically last for a number of years, these formal mechanisms prompt the partnership to consider and plan for program sustainability from the beginning.
A specific mechanism in Implementation Science partnerships for eliciting information about the status of program implementation in local settings is a feedback loop between practitioners, support providers, and researchers on the implementation team. As mentioned in the previous section, feedback loops are developed to open lines of communication between team members to ascertain the extent to which adaptations are being used, and to determine whether further adaptations are necessary in the implementation process. These feedback loops carry information about the local setting back to other members of the implementation team to facilitate quick adaptation and to maximize sensitivity to local needs.
Conclusion
Implementation Science has its genesis in the recognition that unwanted variability in the implementation of a program is largely responsible for that program's failure to realize the outcomes promised by the research base supporting the intervention. As such, the method comprises a set of procedures and routines designed to ensure fidelity in implementation. However, its conception of fidelity is an enlightened one in comparison with rote conceptions (e.g. "do exactly what the designers did; do exactly what the designers say to do"). It specifically raises questions about adaptations necessary to ensure effectiveness in context, and it presses for fidelity to this enriched conception of the program (albeit with processes embedded to reflect on whether the adaptations, in fact, are working). Thus, the "problems" that the method addresses are those associated with implementation conceived in this way. It is less concerned with the problems of practice in themselves, deferring to more constitutive and translational forms of research to provide answers (or at least the suggestion of answers) to them. Beyond planning for implementation itself, particular strengths of the approach lie in its concern for scaling and sustainability, which it addresses from the very outset of engagement in the implementation. In summary, it is the concern for variability in performance as it derives from undesirable variability in implementation (along with its abiding concern for addressing that variability) that warrants the method's inclusion in the family of improvement models.
Notes
1. There is a question about the effectiveness of solution-oriented approaches to problem-solving in programs of this type - mainly that they avoid a full exploration of the problem components. Solution-oriented approaches have their roots in individual therapeutic interventions, postulating an avoidance of ruminating on past failure and a focus on future success. Formal problem-solving frameworks have evidence-based steps that are arguably more suited to the Implementation Science context than solution-oriented approaches (see Monsen and Woolfson in Kelly and Perkins, 2012).
2. According to interviewees, these criteria should not be considered in a linear manner (i.e. resource constraints first, etc.).
[Figures and tables omitted: See PDF]
References
[ref001]Aarons, G.A., Green, A.E. and Miller, E., (2012), "Researching readiness for implementation of evidence-based practice: a comprehensive review of the evidence-based practice attitude scale (EBPAS)", in Kelly, B. and Perkins, D.F., (Eds), Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York, NY, pp. 150-164
[ref002]Berman, P. and McLaughlin, M.W., (1974), Federal Programs Supporting Educational Change: I A Model of Educational Change, RAND Corporation, Santa Monica
[ref003]Berman, P. and McLaughlin, M.W., (1975), Federal Programs Supporting Educational Change: IV The Findings in Review, RAND Corporation, Santa Monica
[ref004]Berman, P. and McLaughlin, M.W., (1976), "Implementation of educational innovation", The Educational Forum, Vol. 40 No. 3, pp. 345-370
[ref005]Blase, K., Van Dyke, M., Fixsen, D.L. and Bailey, F.W., (2012), "Implementation science: key concepts, themes and evidence for practitioners in educational psychology", in Kelly, B. and Perkins, D., (Eds), Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York, NY, pp. 13-34
[ref006]Borman, G.D., Hewes, G.M., Overman, L.T. and Brown, S., (2003), "Comprehensive school reform and achievement: a meta-analysis", Review of Educational Research, Vol. 73 No. 2, pp. 125-230
[ref007]Bumbarger, B. and Perkins, D., (2008), "After randomised trials: issues related to dissemination of evidence-based interventions", Journal of Children's Services, Vol. 3 No. 2, pp. 55-64
[ref008]Chinman, M., Imm, P. and Wandersman, A., (2004), Getting to Outcomes 2004: Promoting Accountability through Methods and Tools for Planning, Implementation, and Evaluation, RAND Corporation, Santa Monica
[ref060]Cohen, D.K. and Hill, H.C., (2001), Learning Policy: When State Education Reform Works, Yale University Press, New Haven, CT
[ref009]Cohen, D.K. and Moffitt, S.L., (2010), The Ordeal of Equality: Did Federal Regulation Fix the Schools?, Harvard University Press, Cambridge, MA
[ref010]Correnti, R. and Rowan, B., (2007), "Opening up the black box: literacy instruction in schools participating in three comprehensive school reform programs", American Educational Research Journal, Vol. 44 No. 2, pp. 298-339
[ref011]Crandall, D., Bauchner, J., Loucks, S. and Schmidt, W., (1982), "Models of the school improvement process. A study of dissemination efforts supporting school improvement", paper presented at the annual meeting of the American Educational Research Association, New York, NY
[ref012]Crandall, D.P., Eiseman, J.W. and Louis, K.S., (1986), "Strategic planning issues that bear on the success of school improvement efforts", Educational Administration Quarterly, Vol. 22 No. 3, pp. 21-53
[ref013]Cuban, L., (1993), How Teachers Taught: Constancy and Change in American Classrooms 1880-1990, 2nd ed., Teachers College Press, New York, NY
[ref014]Darling-Hammond, L. and Snyder, J., (1992), "Curriculum studies and traditions of inquiry: the scientific tradition", in Jackson, P.W., (Ed.), Handbook of Research on Curriculum, MacMillan Publishing Company, New York, NY, pp. 41-77
[ref015]Datnow, A. and Castellano, M., (2000), "Teachers' responses to success for all: how beliefs, experiences, and adaptations shape implementation", American Educational Research Journal, Vol. 37 No. 3, pp. 775-799
[ref016]Durlak, J.A. and DuPre, E.P., (2008), "Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation", American Journal of Community Psychology, Vol. 41 Nos 3/4, pp. 327-350
[ref017]Elmore, R.F. and Burney, D., (1997), Investing in Teacher Learning: Staff Development and Instructional Improvement in Community School District #2, National Commission on Teaching & America's Future, New York, NY
[ref018]Elmore, R.F. and McLaughlin, M.W., (1998), Steady Work: Policy, Practice, and the Reform of American Education, RAND Corporation, Santa Monica
[ref019]Feinberg, M.E., Greenberg, M.T., Osgood, D., Sartorius, J. and Bontempo, D., (2007a), "Effects of the communities that care model in Pennsylvania on youth risk and problem behaviors", Prevention Science, Vol. 8 No. 4, pp. 261-270
[ref020]Feinberg, M.E., Chilenski, S.M., Greenberg, M.T., Spoth, R.L. and Redmond, C., (2007b), "Community and team member factors that influence the operations phase of local prevention teams: the PROSPER project", Prevention Science, Vol. 8 No. 3, pp. 214-226
[ref021]Firestone, W.A. and Corbett, H.D., (1988), "Planned organizational change", in Boyan, N., (Ed.), Handbook of Research on Educational Administration, Allyn & Bacon, Boston, MA, pp. 321-340
[ref022]Fixsen, D.L., Naoom, S.F., Blase, K.A., Friedman, R.M. and Wallace, F., (2005), Implementation Research: A Synthesis of the Literature, The National Implementation Research Network, Tampa, FL
[ref023]Glasgow, R.E., Vinson, C., Chambers, D., Khoury, M.J., Kaplan, R.M. and Hunter, C., (2012), "National institutes of health approaches to dissemination and implementation science: current and future directions", American Journal of Public Health, Vol. 102 No. 7, pp. 1274-1281
[ref024]Greenberg, M.T., Feinberg, M.E., Johnson, L.E., Perkins, D.F., Welsh, J.A. and Spoth, R.L., (2015), "Factors that predict financial sustainability of community coalitions: five years of findings from the PROSPER partnership project", Prevention Science, Vol. 16 No. 1, pp. 158-167
[ref025]Greenhalgh, T., Robert, G., Macfarlane, F., Bate, P. and Kyriakidou, O., (2005), Diffusion of Innovations in Health Service Organisations: A Systematic Literature Review, Blackwell, Oxford
[ref026]Hawkins, J.D., Catalano, R.F. and Arthur, M.W., (2002), "Promoting science-based prevention in communities", Addictive Behaviors, Vol. 27, pp. 951-976
[ref027]Hawkins, J.D., Catalano, R.F., Arthur, M.W., Egan, E., Brown, E.C., Abbott, R.D. and Murray, D.M., (2008), "Testing communities that care: the rationale, design, and behavioral baseline equivalence of the community youth development study", Prevention Science, Vol. 9 No. 3, pp. 178-190
[ref028]Herman, R., (1999), "An educator's guide to school wide reform"
[ref029]Huberman, A.M. and Miles, M.B., (1984), Innovation Up Close: How School Improvement Works, Plenum, New York, NY
[ref030]Kelly, B., (2012), "Implementation science for psychology in education", in Kelly, B. and Perkins, D.F., (Eds), Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York, NY, pp. 3-12
[ref031]Kelly, B., (2013), "Implementing implementation science: reviewing the quest to develop methods and framework for effective implementation", Journal of Neurology and Psychology, Vol. 1 No. 1, pp. 1-5
[ref032]Kelly, B. and Perkins, D., (Eds) (2012), Handbook of Implementation Science for Psychology in Education, Cambridge University Press, New York, NY
[ref033]Livit, M. and Wandersman, A., (2005), "Organizational functioning: facilitating effective interventions and increasing the odds of programming success", in Fetterman, D.M. and Wandersman, A., (Eds), Empowerment Evaluation Principles in Practice, Guilford, New York, NY, pp. 123-142
[ref034]McLaughlin, M.W. and Marsh, D.D., (1978), "Staff development and school change", The Teachers College Record, Vol. 80 No. 1, pp. 69-94
[ref035]Meyers, D.C., Durlak, J.A. and Wandersman, A., (2012), "The quality implementation framework: a synthesis of critical steps in the implementation process", American Journal of Community Psychology, Vol. 50 No. 4, pp. 462-480
[ref036]Peterson, S.M. and Emrick, J.A., (1983), "Advances in practice", in Paisley, W. and Butler, M., (Eds), Knowledge Utilization Systems in Education, Sage Publications, Beverly Hills, CA, pp. 219-250
[ref037]Pressman, J.L. and Wildavsky, A., (1984), Implementation: How Great Expectations in Washington Are Dashed in Oakland, 3rd ed., University of California Press, Berkeley
[ref038]Ringwalt, C.L., Ennett, S., Johnson, R., Rohrbach, L.A., Simons-Rudolph, A., Vincus, A. and Thorne, J., (2003), "Factors associated with fidelity to substance use prevention curriculum guides in the nation's middle schools", Health Education and Behavior, Vol. 30 No. 3, pp. 375-391
[ref039]Rivlin, A.M. and Timpane, P.M., (1975), Planned Variation in Education: Should We Give up or Try Harder?, The Brookings Institution, Washington, DC
[ref040]Rowan, B., Correnti, R., Miller, R.J. and Camburn, E.M., (2009), School Improvement by Design: Lessons from a Study of Comprehensive School Reform Programs, University of Pennsylvania Graduate School of Education, Consortium for Policy Research in Education, Philadelphia, PA
[ref041]Shediac-Rizkallah, M.C. and Bone, L.R., (1998), "Planning for the sustainability of community-based health programs: conceptual frameworks and future directions for research, practice and policy", Health Education Research, Vol. 13 No. 1, pp. 87-108
[ref042]Slavin, R.E., (2002), "Evidence-based education policies: transforming educational practice and research", Educational Researcher, Vol. 31 No. 7, pp. 15-21
[ref043]Spoth, R., Randall, G.K. and Shin, C., (2008), "School success through partnership-based family competency training: experimental study of long-term outcomes", School Psychology Quarterly, Vol. 23 No. 1, pp. 70-89
[ref044]Spoth, R.L. and Greenberg, M.T., (2005), "Toward a comprehensive strategy for effective practitioner-scientist partnerships and larger-scale community health and well-being", Journal of Community Psychology, Vol. 35 Nos 3/4, pp. 107-126
[ref045]Spoth, R., Greenberg, M., Bierman, K. and Redmond, C., (2004), "PROSPER community-university partnership model for public education systems: capacity-building for evidence-based, competence-building prevention", Prevention Science, Vol. 5 No. 1, pp. 31-39
[ref046]Spoth, R., Clair, S., Greenberg, M., Redmond, C. and Shin, C., (2007), "Toward dissemination of evidence-based family interventions: maintenance of community-based partnership recruitment results and associated factors", Journal of Family Psychology, Vol. 21 No. 2, pp. 137-146
[ref047]Stevenson, J.F. and Mitchell, R.E., (2003), "Community level collaboration for substance abuse prevention", Journal of Primary Prevention, Vol. 23 No. 3, pp. 371-404
[ref048]Stith, S., Pruitt, I., Dees, J., Fronce, M., Green, N., Som, A. and Linkh, D., (2006), "Implementing community-based prevention programming: a review of the literature", Journal of Primary Prevention, Vol. 27 No. 6, pp. 599-617
[ref049]Wandersman, A., Duffy, J., Flaspohler, P., Noonan, R., Lubell, K., Stillman, L., Blachman, M., Dunville, R. and Saul, J., (2008), "Bridging the gap between prevention research and practice: the interactive systems framework for dissemination and implementation", American Journal of Community Psychology, Vol. 41 No. 3, pp. 171-181
Lee E. Nordstrum: RTI International, Edina, Minnesota, USA
Paul G. LeMahieu: Carnegie Foundation for the Advancement of Teaching, Stanford, California, USA
Elaine Berrena: Bennett Pierce Prevention Research Center, Pennsylvania State University , University Park, Pennsylvania, USA
© Emerald Publishing Limited 2017
