

The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.



Jennifer Greene

Jennifer Greene is a professor in the Department of Educational Psychology in the College of Education at the University of Illinois at Urbana-Champaign. Her research focuses on the intersections between social science and social policy and emphasizes evaluation as a venue for democratizing dialogue about critical social and educational issues. Through her work in educational and social program evaluation, she seeks to advance the theory and practice of alternative forms of evaluation, including qualitative, participatory, and mixed-method evaluation.

You write about evaluation as a force for democratizing public conversations about important public issues. What do you mean by this?

I refer primarily to the inclusion of more stakeholder perspectives, interests, concerns, and values than are normally included in public conversations. While I don't think there are efforts to exclude people, the policy and academic conversations—or access to these conversations—are usually dominated by people in positions of authority and power. The public conversations tend to reflect these interests and are not necessarily inclusive of the interests of other people who are normally not part of the conversation.

I use the phrase democratic pluralism to capture three leverage points for being as inclusive as possible in an evaluation. First is inclusion in setting the evaluation agenda, that is, the questions and concerns to be addressed in an evaluation. Second is establishing criteria for judging quality. How are we going to know if this is a good program? I think that these types of judgments are often defined by default and that we need to include more people in the conversations about what constitutes a good program. A third point of leverage is interpreting results and whatever action implications might follow from them.

How does the theory of democratic evaluation connect with practice?

Overall, there's a lot more theorizing about, than implementation of, approaches that could be specifically labeled democratic. I think one of the important reasons for this is that the democratic approaches to evaluation are less about particular methods or strategies or tools—the usual technical aspects of evaluation—and more about evaluator roles, stances, and value commitments. I've grouped these other aspects of evaluation into two clusters: one on the positioning of evaluation in society (for example, whose interests are addressed and what the purpose of the evaluation is) and the other on the character of evaluation practice, meaning the kinds of relationships established in the field between the evaluator and stakeholders and the kinds of interactions and communications the evaluator strives to foster among stakeholders.

If you take the House and Howe approach of deliberative democratic evaluation as an example, they feature processes of dialogue and deliberation.¹ These are processes about communication, about learning from other people, about voice, about positioning. They are not methods. What democratic evaluation is endeavoring to accomplish is much harder to make concrete, put into practice, and be reflective about.

How does participatory evaluation differ from democratic evaluation?

In participatory approaches, there's an effort to involve the people being studied as co-evaluators, as a way of enabling them to assert their voice and views and as a capacity-building effort. Democratic evaluation is not a participatory process in the sense that members of the setting are involved as co-evaluators. That's an important distinction. Democratic evaluation is inclusive in the points of leverage I mentioned: setting the agenda, ascertaining quality, and interpreting results. Those are parts of an evaluation process in which one needs to be as pluralistic and inclusive as possible. But that's not the same as having people work alongside the evaluator as co-evaluators.

What do you think are some of the challenges of implementing democratic evaluation?

One big challenge is simply the acceptability of an approach to evaluation that is more explicitly values-engaged or ideological in its orientation. This contrasts with the general image of evaluation as an objective, fair, neutral, impartial process that renders judgments of truth. So there's a considerable challenge to the acceptability of this approach as a viable way of doing evaluation. It's a challenge to persuade people who commission evaluations that it's in their interest to share power more equitably with program staff and beneficiaries. It's also difficult to persuade program staff who have very full agendas, as well as people who are the intended beneficiaries of a program, that it's in their interest to participate. There have been many instances in which help was offered to people on the margins of our society and then never delivered.

Democratic evaluation is a very demanding process for participants and the evaluator. I wonder whether it's feasible in large-scale, multisite evaluations, and even in a statewide evaluation of a program. I see it as challenging in terms of resources and time. There are lots of lofty words and concepts thrown around, but making them concrete and enacting them in a particular context remains challenging.

I also think there is a tension between commitments to inclusion, pluralism, participation, and democratization of agendas on one side, and methodological quality on the other side. Many years ago, I set up a very inclusive process in an attempt to establish evaluation questions with people in the community, and the questions that were generated through this highly democratic process were not the kinds of questions that would serve the program or any stakeholder's interests very well. I learned a lot from that one! There's a tension between being democratic and trying to do good work in terms of methodological soundness. It's an inherent tension that just perhaps needs to be acknowledged as part of the practice. Not necessarily resolved, but acknowledged.

So how do you prepare evaluators to apply democratic principles in their work?

I don't think our typical evaluation training programs have anything in them that would help us do this. Evaluators typically don't have a background in theories of democracy, in what it means to be democratic and the challenges thereof, or in issues of social justice. I don't think we can use democratic approaches carefully without serious reflection on what that means for ourselves.

Democratic evaluation also draws on all the group process skills, such as facilitation and communication. The evaluator is expected to run a deliberative forum where people bring their own concerns and interests to a common place—and it needs to be a place where people feel safe to express what's on their mind. Evaluators don't have the training background that gives them the expertise to do this, and facilitation becomes an enormously difficult challenge.

So, democratic evaluators need reading, contemplation, and practice with the weighty concepts associated with democracy and with the process skills so central to the actual practice of this approach.

Related Resources


Greene, J. C. (2000). Challenges in practicing deliberative democratic evaluation. New Directions for Evaluation, 85, 13–26.

Greene, J. C., Millett, R. A., & Hopson, R. H. (2004). Evaluation as a democratizing practice. In M. T. Braverman, N. A. Constantine, & J. K. Slater (Eds.), Foundations and evaluation: Contexts and practices for effective philanthropy (pp. 96–118). San Francisco: Jossey-Bass.

Greene, J. C. (in press). A value-engaged approach for evaluating the Bunche-Da Vinci Learning Academy. New Directions for Evaluation.

What can be done to advance the practice of democratic evaluation?

I think evaluation always advances some particular set of values and somebody's interests and not others'. A pluralistic, inclusive set of values, which is a minimal definition of democracy, is the most defensible set to advance. I definitely think it's important to advance democratic evaluation.

First of all, it needs to be legitimized. Any approach to evaluation that is explicitly values-engaged in some way remains on the fringe of practice. That was true for qualitative approaches to evaluation for quite some time, and we had long debates about interpretive methodologies and different approaches to data gathering and interpretation. I would say that, within the field as a whole, both in theory and practice, qualitative methodologies and qualitative approaches have been legitimized. That doesn't mean everybody in the community accepts them, but certainly, they're widely legitimized. I think that the same legitimization is entirely possible for democratic evaluation approaches. It will just take time.

The second thing that I think is extremely important is to provide some empirical evidence for the claims of these theories. Brad Cousins has written a wonderful empirical summary of the evidence about participatory evaluation and the claims in that tradition of the link between stakeholder participation and utilization of the evaluation results.² That kind of empirical work needs to be done in democratic evaluation so that all those gaps between theory and practice are narrowed. We don't have an empirical literature, just a few scattered studies. I think what's needed is an extensive effort to try these theories in practice, do good data-gathering about what is happening, and build a body of evidence that can support, or not support, the theories in practice.

¹ For more information on this approach, see the article by Ernest House in this issue.
² Cousins, J. B. (2003). Utilization effects of participatory evaluation. In T. Kellaghan & D. Stufflebeam (Eds.), International handbook of educational evaluation (pp. 245–267). Boston: Kluwer.

M. Elena Lopez, Senior Consultant, HFRP


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project