
The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.



Saville Kushner of the Centre for Research in Education and Democracy at the University of the West of England suggests ways that an evaluation's participants can make evaluations more democratic.

It is 25 years since Ernest House started writing about social justice in evaluation¹ and 30 years since Barry MacDonald promoted his model of democratic evaluation.² These men were linked—not only by their close personal friendship but by the interconnection of their themes. Social justice has to do with how we distribute social goods; democracy is about the way we agree on the criteria for fair distribution of social goods—basically, who gets a say and what clout their word carries.

House and MacDonald argued that social justice and democracy define both the goals of evaluation and its roles, i.e., what evaluation is designed for and what it is expected to do in particular situations. House went on to argue that all evaluation methodologies represent choices about how to see and differentiate the world, and, as such, methodologies embody assumptions about social justice. The choice of how to measure people's actions and accomplishments is more than the application of appropriate method; it is the mobilization of certain assumptions about people and their right of access to evaluation. MacDonald argued that each program evaluation is a case study of political culture, in that each program enacts societal structures of power and authority and raises questions about general societal rights. His model of evaluation defined a democratic practice of information exchange and inclusion, regulated by democratic principles of procedure. He modeled evaluation on democratic civic life.

At the time, neither House nor MacDonald sparked debate beyond a few like-minded evaluation theorists. Nowadays it is hard to sustain a conversation about program evaluation without addressing democracy and social justice. Evaluation has been overtaken by social change. From its earliest days as a practice and a discipline, evaluation has positioned itself in relation to power—government is its main sponsor. It has served government, though it has also sought to act as its conscience. Now we live in Western societies characterized by a crisis in public trust; there are growing gaps in mutual understanding and tolerance between government, corporations, and citizens; more assertive governments—progressive technocracies, perhaps—are preoccupied with exerting sufficient control to enforce loyalty to their policies, often irrespective of the priorities of their electorates.

There are signs that evaluation, in response, is seeking to reposition itself and draw closer to communities and corporations in collaborative relationships—constrained, of course, by the fact that both often lack resources and organization. If evaluation can help rebuild public trust and improve understanding between government, corporations, and citizens, it is best off starting at the electoral base. Part of this repositioning is its claim to be democratic. What, then, should you expect of an evaluation that claims to be democratic when its evaluators come to evaluate you? What part do you play? Where possible, I suggest that you:

1. Insist that we evaluators negotiate our ethics. Don't be content with a consent letter or certification from an ethics review body. Insist that we evaluators express your views and priorities as you choose to frame them. But insist, too, on your confidentiality until you are content to release your data. Remember, you are the sole owner of data on your life and work.

2. Assert your rights to help shape the evaluation agenda. Your questions should have the same status as anyone's. If it's about your life and work, the evaluation belongs to you as much as to our sponsors. Remember, your viewpoint (as a professional or a citizen) may not be liked, but is undeniable.

3. Make methodological demands on us to portray you and your work as you feel it needs to be represented. Don't just accept our (or our sponsor's) choice of method.

4. Insist on seeing, and having the right to comment on, a draft of our report. Seeing a transcript of your interview matters less; it is the report that carries judgments about you.

5. Ask us to be explicit about the criteria against which we or those who we report to make judgments about your work or your program. Question the criteria that are unfair to or ignore your accomplishments.

Above all, remember that democratic evaluators have aspirations that go beyond our constraints. We have a tendency to promise more than conditions allow us to deliver; we often appear more democratic when negotiating access than we are ultimately able to be when it comes to reporting. The more support democratic evaluators have from you, the more leverage we can exert against the constraints of our contracts.

¹ House, E. (1980). Evaluating with validity. Thousand Oaks, CA: Sage.
² MacDonald, B. (1976). Evaluation and the control of education. In D. Tawney (Ed.), Curriculum evaluation today: Trends and implications (pp. 125–136). London: Macmillan.

Saville Kushner
Director
Centre for Research in Education and Democracy
University of the West of England
Coldharbour Lane
Bristol, BS16 1QY
UK
Email: saville.kushner@uwe.ac.uk


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project