The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.
Volume XV, Number 1, Spring 2010
Issue Topic: Scaling Impact
Promising Practices
Sarah-Kathryn McDonald of the University of Chicago describes a conceptual model designed to demonstrate the role of evaluation in the scale-up process.
Promising education interventions typically go through a developmental process that starts with implementation at a single site, and then works incrementally toward the goal of scaling up their successful practices to additional settings to impact larger, more diverse populations. Evaluation can help an intervention proceed from one stage to the next, with different evaluation strategies accompanying each step of the intervention’s development. The Data Research and Development Center, a research and technical center funded by the National Science Foundation as part of the U.S. Interagency Education Research Initiative, has developed a conceptual model that specifies the evaluation methods appropriate at different stages in the intervention’s development. The model describes five evaluation stages that culminate in an intervention’s widespread dissemination and adoption. Ideally, an intervention should proceed through each stage in order.
Stage 1: Proof of Concept. Stage 1 involves determining whether an intervention is sufficiently promising to develop and scale (including making any necessary improvements to the intervention). The goal is to produce evaluation data that can demonstrate which parts of the intervention can accommodate flexibility and which parts are not negotiable.
Stage 2: Establish Efficacy. This stage determines whether the intervention can achieve its intended results under ideal circumstances. At this stage, it is crucial that the intervention be implemented (and evaluated) with the features and in the context that are seen as optimal for success.
Related Resources
McDonald, S. (2009). Scale-up as a framework for intervention, program, and policy evaluation research. In G. Sykes, B. Schneider, & D. N. Plank (Eds.), Handbook of education policy research (pp. 191–208). New York: Routledge Publishers (for the American Educational Research Association). This article discusses the five stages of evaluation that educational researchers commonly use as they design, conduct, and implement evaluation studies: proof of concept, establish efficacy, demonstrate effectiveness, scale-up and test again, and postmarketing research. The article emphasizes the value of considering the primary audiences for the evaluation (i.e., who is interested in the results and who can benefit from them) as well as what these audiences plan to do with the results and in what time frame. The article also suggests how evaluators can disseminate and communicate their findings in ways that are likely to increase the chances that the data they collect will enhance policy and practice.
McDonald, S. K., Keesler, V. A., Kauffman, N. J., & Schneider, B. (2006). Scaling-up exemplary interventions. Educational Researcher, 35(3), 15–24. This article has three objectives. First, it articulates the goal of scale-up research in the field of education. Second, it discusses the importance of context in conducting scale-up research. Third, it offers practical guidelines for study designs that can help education researchers provide evidence that the impacts measured can be applied to additional settings.
Stage 3: Demonstrate Effectiveness. Stage 3 assesses whether an intervention can achieve its objectives outside the ideal context of Stage 2. The aim is to establish whether the intervention “works” in a real-world setting, with all of its complications.
Stage 4: Scale-Up and Test Again. Stage 4 aims to demonstrate the intervention’s impact when it is implemented among larger numbers of individuals across many contexts. This stage also examines the contextual factors that may influence the intervention’s impact in different settings. Data from this stage can help identify reasons for any observed discrepancies, and they can provide feedback to refine the intervention or to develop guidelines that ensure it operates as intended in particular contexts.
Stage 5: Postmarketing Research. The final stage explores the following questions: (a) If an intervention demonstrated to work at scale is then more widely adopted, what else do we need to learn about its effectiveness in additional contexts at larger scales? and (b) What can and should we seek to learn about the sustainability of its impact and relevance once a market has been saturated and the intervention becomes the status quo? Postmarketing research can enrich our understanding of how the impact varies among slightly different populations. It can also help identify the circumstances under which it might make more sense to adapt the intervention to the local context rather than adopt the original elements of the intervention.
Throughout the scale-up process, context must be kept front and center: the context of the original intervention as well as that of the new sites into which it is scaled. Because every setting has its own needs and circumstances, scale-up should not uncritically replicate the intervention. Rather, it should take a context-based approach supported by evaluation at each stage along the way.
Sarah-Kathryn McDonald, Ph.D., MBA
Executive Director
Center for Advancing Research and Communication in Science, Technology, Engineering, and Mathematics
National Opinion Research Center at the University of Chicago
Email: mcdonald-sarah@norc.uchicago.edu
The author would like to thank Barbara Schneider (principal investigator of the Data Research and Development Center and the Center for Advancing Research and Communication in Science, Technology, Engineering, and Mathematics) for her many contributions to, and continued support for, this work. This material is based upon work supported by the National Science Foundation under Grants No. 0129365 and 0815295. Any opinions, findings, conclusions, or recommendations expressed are those of the author and do not necessarily reflect the views of the National Science Foundation.