


Over the past three decades, many evaluators have become much more focused on process issues. Their motives for doing evaluation are often based on understanding which aspects of a program explain observed outcomes, whether programs are implemented as intended, and whether program theories—as implicitly or explicitly conceptualized—are valid or in some sense reasonable. With an emphasis on the conceptual or educational benefits of evaluation, they stress how important it is that systematic inquiry be responsive to the needs of program sponsors, developers, and implementors. At the same time, however, they note a need for a sufficient degree of technical rigor in planned inquiry so as not to mislead or misdirect efforts to improve programs.

We believe our form of participatory evaluation strikes an appropriate balance between the needs for technical rigor and responsiveness in evaluation. Our approach differs from other forms of participatory evaluation in that it does not hold explicitly the goals of emancipating oppressed groups, ameliorating social inequities, or redefining power relationships. While these are important goals, our primary purpose for using participatory methods in evaluation is utilization rather than empowerment: we seek to maximize the usefulness of evaluation data for intended users. Here, we summarize the defining characteristics of our approach, consider its potential consequences, and review what has been learned about participatory evaluation from a growing body of empirical research.

Defining Features
We see three ways in which participatory evaluation is an extension of the traditional stakeholder-based model. First, we assume that control of the evaluation project is jointly shared by researchers and practitioners. Second, we argue that there should be a limit to the number of stakeholders involved. Third, we seek an unusual depth of participation among non-researchers.

In the traditional approach, stakeholders are consulted early to define the focus of the evaluation, and later to help interpret data. Rarely are they involved during the data-gathering and write-up phases. In our model, “jointly sharing control of the evaluation” means that the evaluation agenda is mutually determined and controlled during all research phases.

In the traditional approach, stakeholders participating in the evaluation might be a very diverse group: from program sponsors, developers, and managers to program beneficiaries and special interest groups. In participatory evaluation, stakeholder participation is limited to primary users—those with program responsibility or a vital interest in the program. This limit on participation is urged for practical reasons and because data are more likely to be used if those in a position to do something with them help to inform the evaluation.

Finally, in the traditional stakeholder-based model, participation is generally consultative, early and late in the process. In participatory evaluation, members of the program community are involved in defining the evaluation, developing instruments, collecting data, processing and analyzing data, and reporting and disseminating results. We believe that to the extent that practitioners are privy to and participate in making sense of raw data, their understanding will be deeper and more meaningful than if they were merely to process someone else's interpretation.

The primary benefit of utilization-focused participatory evaluation is that it enhances the usefulness of evaluation data for intended users. While discrete decisions about program structure or administration may result, deepened conceptual understandings of program components and their interrelationships and consequences are more common.

To the extent that participation in evaluation activities by practitioners is ongoing, and that lessons learned are shared with other program staff, organizational changes are likely. Specifically, participatory evaluation may establish evaluation as an organizational learning system that facilitates the development of shared understandings of organizational operations, and of cause and effect relationships. New functions and infrastructure to support evaluation activities, as well as continued development and application of skills of systematic inquiry, are organizational consequences that transcend the bounds of any specific evaluation.

Further Reading

Cousins, J. B. (1994, October). Consequences of researcher involvement in participatory evaluation. Paper presented at the annual meeting of the American Evaluation Association, Boston. (Contact author for copy.)

Cousins, J. B. (in press). Understanding organizational learning for educational leadership and school reform. In K. A. Leithwood (Ed.), International handbook of educational leadership and administration. Boston: Kluwer Academic Publishers.

Cousins, J. B., Donohue, J. D., & Bloom, G. (forthcoming). Collaborative evaluation: Survey of practice in North America. Paper accepted for presentation at the joint meeting of the Canadian Evaluation Society and the American Evaluation Association, Vancouver, November 1995. (Contact author for copy.)

Cousins, J. B., & Earl, L. M. (Eds.). (1995). Participatory evaluation in education: Studies in evaluation use and organizational learning. London: Falmer Press.

Cousins, J. B., & Earl, L. M. (1992). The case for participatory evaluation. Educational Evaluation and Policy Analysis, 14(4), 397-418.

Cousins, J. B., & Leithwood, K. A. (1993). Enhancing knowledge utilization as a strategy for school improvement. Knowledge: Creation, Diffusion, Utilization, 14(3), 305-333.

Lessons Learned
A recently published collection of empirical studies (Cousins & Earl, 1995) provides some interesting insights into the viability of participatory evaluation partnerships.

  1. Partners derive a powerful sense of satisfaction and professional development from their participation.
  2. Data are used in program decision-making and implementation, but are often subject to political influences within the program context.
  3. Evaluation is sometimes established as an organizational learning system. Exploring this more fully, however, requires an extended timeframe that accommodates repeated cycles of evaluation and sensitivity to change in organizational culture.
  4. Evaluations that restrict participation to those with little clout often meet with frustration and disappointment when participants try to make program changes based on data. Participation of primary users with the organizational authority and power to act on data once generated is essential.
  5. Close involvement by evaluators can create unrealistic expectations for what the evaluations can accomplish. Evaluators may operate more effectively as technical resources and consultants.
  6. Organizational support for participatory evaluation is vital. The time required to carry out an evaluation is often severely underestimated, leading participants to neglect their primary responsibilities; this unanticipated burden can cause stress among organization members.
  7. Highly technical activities such as quantitative data analysis are often best handled by evaluators or consultants. Training practitioners to do these may not be time well spent in light of the demands on their time.

Participatory evaluation may stimulate the program community to learn about and improve their programs, and enhance the capacity of organizations to learn. Still, these consequences likely depend on the degree to which organizations already know how to learn and change as a result of systematic inquiry. Since few organizations are equipped to do this, evaluators need to be aware of the limits on what an organization can learn from a single evaluation. Educating practitioners about the power and potential of evaluation, and developing realistic views of its challenges and promises, is time well spent by evaluators committed to both participatory evaluation and organizational change.

J. Bradley Cousins
Faculty of Education
University of Ottawa
145 Jean-Jacques Lussier
Ottawa, Ontario
Tel: 613-562-5800 x4088
Fax: 613-562-5146

Lorna M. Earl
Scarborough Board of Education


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project