



FINE Newsletter, Volume VII, Issue 4
Issue Topic: Evaluation and Improvement Science in Action

Voices From the Field

There are several signs that the field of education is paying attention to the idea of improvement in new and exciting ways. The federal government, through the Institute of Education Sciences, is funding new research on continuous improvement and on how educational programs develop over time, rather than focusing solely on their end goals and outcomes after they have been implemented. Other educational organizations, such as the Carnegie Foundation for the Advancement of Teaching, have made the science of improvement part of their central mission as they work with practitioners to solve persistent problems in education. Although improvement science has long been used in other fields, such as manufacturing and health care, it is becoming more prevalent in education. I designed the graduate-level course I teach at the Harvard Graduate School of Education to introduce students to theories and frameworks of improvement, which I link closely to the practice of evaluation as a way to support and assess a program’s improvement as it develops over time.

The content and design of the course, Learning From Practice: Evaluation and Improvement Science, emerged from my interest in how practitioners engage in inquiry and improvement. I wanted students to tackle a range of questions, including:

  • How do we know if a program is working in practice?
  • How do we gather information about programs that will be useful in effecting change and growth?
  • How do people in these programs respond to this information?

These questions seemed to go beyond courses in summative evaluation that focus on impact and final outcomes, or courses built around large-scale experimental or quasi-experimental studies. My course instead focuses on formative and developmental evaluation—the ongoing, iterative assessment of progress and the process of working with the program to make recommendations for improvement.

To practice these skills, students in my course consider how education practitioners might learn from feedback, how evaluation data could support building organizational capacity, and what roles and relationships develop between evaluators and practitioners as they collaborate to formatively evaluate a program. In many ways, this course honors the work of renowned Harvard Graduate School of Education Professor Carol Weiss, who left a deep legacy in the evaluation world with her work on theory-driven evaluation and advocacy for knowledge utilization. The course is intended to help students understand the theory behind programs or interventions, how to assess whether they are making progress, and what changes to recommend.

The course’s primary pedagogical approach requires students either to be in a practicum setting or to partner with a school, program, or organization in order to collect information about that program. Working closely with that program, which we call the “partner organization,” they come to understand the program’s goals and generate a focus for evaluation. As their culminating project, students write an evaluation proposal that contains a description of the program and key stakeholders, a diagram of the program’s theory or logic model, proposed evaluation questions, and potential sources of data to answer those questions. Students also have to justify the approach they would take to evaluation, given the program’s current infrastructure and readiness for evaluation. For example, a nascent program that does not have a fully articulated theory for how its activities will result in outcomes might be a good candidate for developmental evaluation. This branch of evaluation views the evaluator as more of a reflective partner who supports program stakeholders in documenting the shifting and uncertain changes that occur as a young program gets started.

Through this course, I have had the opportunity to work with students and partner organizations as they explore the complexities of program evaluation. Along the way, my students have helped me learn three insights that I document below.

1. Improvement Without Evaluation Is Incomplete
Many times when we think about “improvement,” we think about moving forward—creating action plans that lay out what will happen next, step by step, to help us get to where we want to be. The issue in education, however, is that many programs and initiatives are full of action plans but have no way to evaluate their results. With formative and developmental evaluation, programs can spend some time looking backward—getting immediate information about the initial implementation of their model and how the people involved perceive the program from the beginning. This evidence is sometimes dismissed because it is seen as “too early to tell” what the impact of the program will be, or as based on “imperfect” conditions because the program is just getting started. Evaluators will argue that this is the perfect time to gather data about how the program is working. Instead of trying to judge success or failure, they can work directly with stakeholders to understand how the process—in all its messiness—is unfolding. The purpose of gathering data at this stage is to support ongoing routines for improvement in which both halves of the improvement cycle are represented, and organizations learn to look back in order to move forward.

2. Listening Is as Important as Data Analysis in Evaluation
At the end of the course, many of my students tell me how much they valued working with their partner organization, and they often reflect on how important it is to listen closely to the program’s stakeholders. This means listening not just to their main liaison or the program’s director, but to as many stakeholders as they can reach. Students have learned that hearing multiple perspectives on how stakeholders see their roles in the program—and what successes and challenges they face—changes their understanding of how the program is supposed to work. Formative and developmental evaluators hone listening skills as they talk to stakeholders to draft program theory and develop data collection instruments that are sensitive to local and cultural contexts. They also listen for the logical connections that are supposed to link program inputs to eventual outcomes—and for when those connections are missing. When we discuss Carol Weiss and theory-based evaluation in the course, we explore how finding the gaps in logic within a program’s theory can be a fruitful place to focus an evaluation. In this way, program stakeholders can get valuable information about whether their activities occur the way they assume, or whether those assumptions are simply wishful thinking.

3. The Act of Evaluation Builds Program Capacity
I have several evaluation colleagues who joke that their goal is to “put themselves out of work.” Although I’m sure they want to keep working, they are alluding to the notion of “building evaluation capacity.” If more programs had their own internal practices for understanding program theory, developing measures of progress, analyzing data responsibly, and applying what they learn to next steps, there might be no need to hire external evaluators. Students have reported that partner organizations really value the time to think through their program’s goals and how they will know whether those goals are being met. Michael Quinn Patton, former president of the American Evaluation Association, has referred to this as “process use”—by engaging in the act of evaluation, programs become more aware of their own logic, of how they expect their activities to result in the outcomes they want, and of what evidence they want to collect. Rather than advocate for a detached evaluator stance, in the course we discuss how evaluators help programs build an infrastructure and a process for conducting future evaluations. This requires working with stakeholders to develop a friendly relationship with evaluation, even when evaluation is so often seen as punitive and threatening.

Ultimately, I hope students leave the course with a deeper understanding of improvement science; various models of evaluation that connect research, theory, and practice; and some tools for using feedback for program improvement. I also hope that students have a meaningful experience with the messiness of program improvement in authentic, real-life settings of practice. Working toward improvement is about understanding context, having information about the current status, and supporting people through the process of change—skills that are also essential for evaluation.

About the Author:
Candice Bocala is an adjunct lecturer at the Harvard Graduate School of Education, where she teaches a course on evaluation and improvement science and co-chairs the Data Wise Summer Institute, a program that helps educators improve their instruction by using data. She works on a variety of small- and large-scale evaluation projects as a Research Associate at WestEd. Bocala was first introduced to evaluation while completing her master’s thesis at Stanford University, which was a program evaluation of an afterschool initiative for youth in San Francisco. She has since earned an Ed.D. in Education Policy and Leadership from the Harvard Graduate School of Education.

This resource is part of the November FINE Newsletter. The FINE Newsletter shares the newest and best family engagement research and resources from Harvard Family Research Project and other field leaders. To access the archives of past issues, please visit www.hfrp.org/FINENewsletter

1 Institute for Healthcare Improvement. (2015). Science of improvement. Retrieved from http://www.ihi.org/about/Pages/ScienceofImprovement.aspx;
Langley, G. J., Moen, R. D., Nolan, K. M., Nolan, T. W., Norman, C. L., & Provost, L. P. (2009). The improvement guide: A practical approach to enhancing organizational performance. San Francisco, CA: Jossey-Bass.

2 Scriven, M. (1991). Beyond formative and summative evaluation. In M. W. McLaughlin, & D. C. Phillips (Eds.), Evaluation and education: At quarter century (pp. 18‒64). Chicago, IL: The University of Chicago Press.

3 Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell, A. C. Kubisch, L. B. Schorr, & C. H. Weiss (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts (pp. 65‒92). Washington, DC: The Aspen Institute.

4 Patton, M. Q. (2011). Developmental evaluation: Applying complexity concepts to enhance innovation and use. New York, NY: Guilford Press.

© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project