

Ernest House, Emeritus Professor at the University of Colorado, argues that democratic evaluation calls for more ingenuity than other forms of evaluation and that as a result its methods can take many forms.

Traditionally, evaluation has been shaped by an “expert” model of decision making. The government initiates programs and sponsors their evaluation, while evaluators take direction from the decision makers, conduct the studies, and report back to the government, which takes action as it sees fit. Democratic evaluation is an attempt to make evaluation more democratic by:

  • Representing a wide array of views and interests in evaluation studies
  • Encouraging stakeholder participation in evaluation processes
  • Providing opportunities for extended deliberation

By incorporating a broader array of views, values, and interests (often via direct participation), democratic evaluators expect studies to be better informed, accepted, and used. I favor an approach called deliberative democratic evaluation that includes all relevant stakeholders, promotes dialogue with and among stakeholders, and involves stakeholders in extended deliberation processes.

Its three key components are inclusion, dialogue, and deliberation. Inclusion means working with underrepresented and powerless groups as key stakeholders in the evaluation, not just with sponsors and powerful stakeholders. Extensive dialogue increases the chances that evaluators understand stakeholders and that stakeholders understand each other. Extended deliberation is careful, reasoned discussion of issues, values, and findings by all concerned.

A Range of Methods
Democratic evaluation methods take many forms. For example, in evaluating a social work program in Sweden, Ove Karlsson¹ interviewed stakeholder groups—politicians, program professionals, parents, and the youth involved. He presented each group's interview findings to the other groups, collected their reactions, and circulated those reactions in turn. Finally, he gathered representatives of all the groups to discuss the findings face to face. The deliberations from these meetings became part of the study.

I conducted an evaluation of a court-ordered bilingual education program in Denver, a case that was extremely politicized. I convened a panel of lawyers and school administrators representing the contending parties in the case, designed the evaluation with their feedback, and collected data. I then presented my preliminary findings to them in face-to-face meetings twice a year, giving them chances to react to the findings and to each other, as well as to refocus the study as it progressed. Although some meetings were rancorous, the acrimony and distrust diminished over time, due to the transparency and joint deliberation.

In Britain, Ian Stronach² concluded an evaluation by stating his tentative findings in a survey sent to all participants. Participants were asked to indicate whether they agreed with the findings and to give reasons where they did not. Based on these responses, Stronach revised the conclusions he presented to the sponsor in the final report. These studies featured iteration, feedback, involvement, discussion, collaboration, argument, dialogue, and deliberation, even while employing traditional social research methods.

The Importance of Ingenuity
How one democratizes an evaluation is open to the ingenuity of the evaluator, though in my view participation should be structured carefully. For example, Jennifer Greene³ held an open public forum on a controversial high school science program, and she reports that the meeting was completely unsuccessful. Unstructured meetings are unlikely to lead to careful deliberation. Over time we hope to develop a set of procedures to complement traditional research methods. (We have provided a checklist on the website of the Evaluation Center at Western Michigan University, www.wmich.edu/evalctr/checklists/dd_checklist.htm, to prompt evaluators about some things to consider.)

Finally, democratic evaluation may pose a challenge to current policies. For example, in principle there is no conflict between the mandate du jour from Washington—randomized experiments—and democratic evaluation. One can enlist stakeholders to identify issues, select outcome variables, and help interpret findings in randomized studies. However, I suspect that one purpose of the mandate is to control studies in terms of who is involved, what variables are examined, what issues are considered, what outcomes are sought, and what interpretations are presented. (My suspicion is not an argument against randomized studies, which are valuable.) By contrast, democratic evaluation moves away from information control toward making evaluations more open and transparent.

¹ Karlsson, O. (1998). A critical dialogue in evaluation: How can interaction between evaluation and politics be tackled? Evaluation, 2, 405-416.
² Stronach, I., & MacLure, M. (1997). Educational research undone: The postmodern embrace. Philadelphia: Open University Press.
³ Greene, J. C. (2000). Challenges in practicing deliberative democratic evaluation. New Directions for Evaluation, 85, 13-26.

Ernest R. House
Emeritus Professor
School of Education
University of Colorado at Boulder
856-F Walnut Street
Boulder, CO 80302
Email: ernie.house@colorado.edu

