


James Sanders of the Evaluation Institute discusses the utility of cluster evaluation as a way to examine multiple programs.

Cluster evaluation is an approach to program evaluation that was developed at the W. K. Kellogg Foundation during the late 1980s to address certain outcome questions. The need for a different way of doing program evaluation arose when multiple grants (sites) were awarded to address common problems, and grantees were allowed to operationalize the problems independently, develop strategies to address the problems they defined, and develop their own project evaluation plan. The types of general problems that were being addressed included improving science literacy of children and adults in Michigan, improving agricultural safety, improving health practices of adults in rural communities, increasing public policy participation of rural citizens, and improving public knowledge about environmental issues connected to ground water in Michigan.

Typically, the sites would modify their proposed project designs during the first year of their three-year grant and then continue to change them as they learned from their successes and failures. Each project adapted to its local environment: human resources, existing and past programs, priorities (including institutional commitment), and physical resources all shaped how each site tried to achieve its defined outcomes.

The board of the Foundation rightfully wanted to know, given a sizeable investment, whether any progress had been made in the general problem area and, if so, what that progress was. In each case the question was, “What happened and why?”

The individual project evaluations were insufficient to answer this question holistically. Because of the intentional lack of standardization and control, aggregating findings was not feasible. There was a sizeable collection of case studies (each cluster had anywhere from 5 to 50 funded sites), but anyone wanting to draw conclusions from the total collective experience would have been hard-pressed to find a cost-effective way to do so.

Cluster evaluation was created to address the following questions:

  1. Overall, have changes occurred in the desired direction? What is the nature of these changes?
  2. In what contexts have different types of changes occurred and why?
  3. Are there insights to be drawn from failures and successes that can inform future initiatives?
  4. What is needed to sustain changes that are worth continuing?

The basic element needed to make cluster evaluation work is collaboration—working together across all sites as a team of evaluators with a common goal and learning from everyone's collective experiences to answer these questions. The key component of cluster evaluation has been networking conferences, during which information is shared and analyzed by all of the grantees. They learn together, as if enrolled in a course in which cooperative learning was the instructional model. This takes planning.

The steps in cluster evaluation are as follows:

  1. Grants are made based on proposals for projects addressing a general problem. Each proposal includes a project-level evaluation plan.

  2. A cluster evaluator is hired by the funder.

  3. The cluster evaluator visits each site, collects documents, becomes oriented to the local project and local evaluation plan, makes suggestions, and negotiates roles—a partnership involving the funder; the grantee and its staff, including evaluation staff; and the cluster evaluator.

  4. The first networking conference is dedicated to a search for commonalities and uniqueness in outcome objectives across projects, and to the development of categories of strategies being planned to achieve different outcomes. In general, it serves to develop a common understanding of mission and conceptual clarity.

  5. Networking conferences held at six-month intervals continue to refine operational definitions, develop common instruments, and share findings. Cluster-level questions evolve, along with plans for data collection and analysis and for reporting findings at future networking conferences. Outside consultants are invited to stimulate new ways of thinking about the general problem or about evaluation methods. Later conferences focus on discussing findings and examining why they occurred. They also include debate of tentative evaluative conclusions and a look at missing documentation, which opens those conclusions to outside interpretation.

Cluster evaluation is intrusive and affects the thinking and practices of all who are involved—project staff, funders, and evaluators. In the end, it supports or rejects claims that were made at the beginning in the form of intended outcomes. It accumulates a wealth of explanatory information from diverse settings. It involves many minds, leading to considerable certainty about the collective evaluative conclusions that are reached.

Important elements of cluster evaluation are:

  • Clear role definitions and understandings
  • Cooperation
  • Conflict resolution among the partners
  • Trust
  • Giving credit when it is due

There are both strengths and limitations to the cluster evaluation approach. On the positive side, cluster evaluation is an evolutionary approach to program evaluation; is participatory and collaborative; helps build capacity; produces information and understandings beyond those found in individual project evaluations; is outcome driven; pushes for clarity; and involves multiple perspectives. Its limitations stem from the opportunities it creates for bias and co-optation. Cluster evaluation is also time-bound and depends on goodwill, cooperation, coordination, and stability of cluster membership. All partners should share the vision of the collective effort to bring about change and need to take their responsibilities to the cluster seriously.

If done well, a cluster evaluation can be exciting and productive, producing new understanding and clear results. It is a practice worth trying.

Parts of this article are based on Sanders, J. R. (1997). Cluster evaluation. In E. Chelimsky & W. R. Shadish (Eds.), Evaluation for the 21st century. Thousand Oaks, CA: Sage.

James Sanders
Associate Director
The Evaluation Center
Western Michigan University


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project