



FINE Newsletter, Volume VII, Issue 4
Issue Topic: Evaluation and Improvement Science in Action

Commentary

Carolina Buitrago is a Senior Research Analyst at Harvard Family Research Project.

When considering program evaluation, staff members face a number of challenges. As their busy day-to-day work takes priority, they question their internal capacity to conduct an evaluation and wonder whether they have the resources, time, and technical knowledge needed. Paired with this question of capacity is the pressing need to demonstrate results. Because positive outcomes often serve as the basis for their organization’s sustainability, staff wonder which instruments will most accurately help them track the outcomes of interest. Then there is the common dilemma of how to get started. This question is particularly challenging because programs aimed at improving outcomes for students and families deal with great complexity. In most cases, the programs have a number of components and multilayered strategies, and the question becomes how to make strategic choices to get the evaluation rolling. In other words, where do they start?

We often hear about these and other evaluation-related dilemmas at Harvard Family Research Project (HFRP). We regularly receive calls from educational and community-based programs seeking guidance on how to evaluate their initiatives. Regardless of where they are in the process, we provide a consistent answer: “Slow down, and think for a moment or two.” What do we suggest they think about? Below are some of the ideas we ask program staff to consider as they plan their evaluation.

1. Evaluation does not equal measurement.
A predominant paradigm in program evaluation privileges tracking and measuring a program’s impact on expected outcomes. This has led the field to develop a sense of urgency around outcome measurement. At HFRP, many of the inquiries that come our way concern the specific instruments or tools programs can or should use to measure outcomes. While determining whether a program is making a difference is a fundamental task in evaluation, it is only part of the process.

At HFRP, we ask programs to temporarily set aside the urge to jump straight to measurement. We encourage them to step back and give serious consideration to how the strategies they are implementing relate to the outcomes they expect to achieve. We also suggest they examine their assumptions about why and how these strategies will work. In other words, we invite program staff to define the theory of change that underlies what they do every day. Why is this important? As Carol Weiss, a pioneer in the field of program evaluation, suggested more than two decades ago: “We want to know not only what the outcomes of a program are but also why those outcomes appear—or fail to appear.” This means that before programs measure outcomes and impact, they need to spell out the mechanisms of change implied in their work so that those mechanisms can be tested against the results.

Time and energy spent defining a program’s theory of change is time well spent. It prevents an organization from embarking on, and allocating resources to, evaluation efforts that are not aligned with what the program intends to do. In some unfortunate cases, we encounter evaluation reports that assess whether a program has a significant impact on a series of outcomes, yet the outcomes measured are not exactly the ones the program was aiming for. Thus, theories of change are not merely a means of conceptualizing assumptions and outcome modifications. They are instrumental in framing evaluation plans that yield usable and actionable information.

2. Evaluation is an ongoing cycle of inquiry.
Evaluations are neither one-shot deals nor periodic check-ins on program impact. Actual program improvements are the result of ongoing cycles of inquiry in which strategies are repeatedly tested and reevaluated. As programs tinker with new strategies and interventions, their underlying theories of change will need to be updated. Programs engaged in ongoing improvement cycles look back and examine whether what they conceive as their main means of achieving their goals still holds true after program adjustments are implemented. Again, the idea is ultimately to evaluate what programs are actually doing to generate the outcomes they intend to achieve. Because both strategies and outcomes might be revised, theories of change need to catch up to new conditions in order to remain valid and to help frame the next evaluation cycles.

3. Evaluation is not just for evaluators to do: We can all learn from doing evaluation.
Program evaluators are often contacted to evaluate a program and report findings to the organization. That is, evaluators are usually charged with developing and implementing evaluation plans. While limits on program capacity may require external support to conduct evaluations, we have learned that evaluation is also a collective enterprise: it cannot be fully delegated to external evaluators. It is in the process of collaborative work, as program evaluators learn about the intricacies of the program and its context and program staff reflect on what they are doing, why they are doing it, and what they want to solve, that true opportunities for improvement emerge.

Evaluators bring technical knowledge of design and methods, but the program itself has the local expertise that ensures evaluation efforts are useful. Not only do evaluators and programs need each other, but the collaborative and dialogical evaluation they do together ultimately enables organizations to learn: understanding the organization and its goals, hearing diverse perspectives from different program stakeholders, and formulating program modifications all contribute to that learning. The good news is that this learning can happen whether a program conducts its own evaluation or hires an evaluator. As program staff conduct evaluations and wrap up evaluation cycles, they often wonder how to use the information gained, usually meaning how to translate evaluation findings into program improvements. But an equally important dimension of evaluation is the transformative learning¹ that can occur in the process of evaluation itself.

This issue of the FINE Newsletter further explores these ideas and provides practical accounts of doing evaluation on the ground. We draw upon the work of Candice Bocala, a lecturer at the Harvard Graduate School of Education, who has designed a course to help future evaluators apply the fundamental principles of program evaluation and improvement science. We also share experiences and lessons learned from the fieldwork that students in her course did with their partner organizations as they jointly conceived, developed, and conducted their evaluations.

 


¹ Preskill, H., & Torres, R. T. (2000). The learning dimension of evaluation use. New Directions for Evaluation, 2000(88), 25–37. doi:10.1002/ev.1189

 



© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project