


Katherine Ryan, Associate Professor of Educational Psychology at the University of Illinois, describes three approaches to democratic evaluation and argues that they can provide field-tested methods for addressing equity and inclusion issues in evaluations of programs for children, youth, and families.

Democratic evaluation has been conducted successfully in such diverse domains as education, healthcare, and public-sector training. It fits particularly well with programs that raise issues of equity and inclusion for children, youth, and families, and of how these groups might be engaged in dialogue about policy and program change. This article presents an overview of three democratic evaluation approaches, emphasizing methodology and evaluator and participant roles, and includes a brief discussion of the role of democratic evaluation approaches in programs for children, youth, and families. The three approaches are democratic evaluation, deliberative democratic evaluation, and communicative evaluation. Each offers a set of strategies for addressing issues such as who should make policy, how to elicit and represent various stakeholder groups' values, and the relative social standing and power of stakeholders.1

Democratic Evaluation2
The democratic approach to evaluation addresses critical evaluation issues such as dealing with power relations among stakeholders, including stakeholders' perspectives, and providing useful information to programs. The primary methodology is a case study. Using this methodology, the program is viewed as a case with the evaluator representing the groups involved. The evaluator brings to the table practitioners' definitions, language, and theories about the program.3 In addition, the case study involves a form of critical reflection whereby practitioners learn to understand themselves and others (including policymakers).

The role of the democratic evaluator as an independent knowledge broker is key to leveraging a redistribution of power relationships in an evaluation. With a commitment to serve the public (i.e., ordinary citizens with different viewpoints who are interested in or affected by the concerns the program addresses), the evaluator acts as a go-between for all groups, sharing information with them about the program. Power redistribution is accomplished by “democratizing knowledge” and holding all groups, including the client, mutually accountable. Local program knowledge is valued as much as (or more than) other kinds of knowledge, such as social science-based knowledge. In this process, program knowledge is distributed more equally than power is.4

Deliberative Democratic Evaluation5
Deliberative democratic evaluation holds that evaluation should contribute to advancing democracy in a democratic society. This approach is characterized by three principles: deliberation, inclusion, and dialogue. Deliberation is defined as reasoning reflectively about relevant issues, including identifying preferences and values of all stakeholder groups. Inclusion is defined as including all relevant interests, stakeholders, and other citizens with specific concerns. The approach is also dialogical, engaging stakeholders and evaluators in dialogues during the evaluation process. Through dialogue, stakeholder interests, opinions, and ideas can be portrayed more accurately and completely.

Dialogues among and within stakeholder groups are potentially valuable for such important activities as defining evaluation questions, interpreting evaluation findings, making judgments, and planning responses to the evaluation findings. Key to this process is a commitment by stakeholders and other groups to put aside narrow self-interest and address issues among themselves through respectful, reciprocal, conversation-enabling deliberation. In this way, “legitimate knowledge” is broadened to incorporate views of less powerful stakeholder groups.

In deliberative democratic evaluation there is particular emphasis on identifying stakeholder groups, including more of their perspectives than in a typical evaluation, and ensuring these groups are represented in conversations and decisions. This work may be done in committees, focus groups, one-time or recurring public forums, and surveys. The evaluator employs an array of mixed methods, including those just mentioned and others (e.g., assessments), to draw a conclusion about the program. The process enables issues to surface in the evaluation that may remain hidden or undisclosed under other evaluation approaches. This methodology requires the evaluator to promote a democratic sense of justice; to be skilled at mediation, conflict negotiation, and managing groups; and to be prepared to take positions on vital political and moral issues.

Communicative Evaluation6
Communicative evaluation is still in the early stages of development. Its primary purpose is to create spaces for communication about critical issues and themes emerging from the program and its context. That is, the evaluation can be considered a space for conversation among local stakeholder groups, who also form a network for communicating ideas about evaluation issues. These locally created ideas are intended to circulate in public discourse through channels such as private and public conversations, print media, organized forums (e.g., town meetings), televised reporting, and websites. Such widespread communication of ideas has the potential to lead to new or different public policies or programs.

Communicative evaluation is related to participatory action research (PAR) or self-evaluation.7 It is characterized by shared ownership of the evaluation, a community-based view and analysis of the educational or social problem, and an orientation toward community action. The relationship between stakeholders and the evaluator is different in communicative evaluation. Using mixed methods, stakeholders become co-evaluators with the evaluator, and evaluators become collaborators and co-participants with the stakeholders in improving some educational or social problem (e.g., improving family literacy). The evaluator role is no longer authoritative. In contrast to the more traditional evaluator role as “information carrier,” the evaluator enables and extends communication among participants and groups. Further, communicative evaluation is not intended to replace traditional evaluation designed to meet the needs of decision makers. Each of these evaluation types is best considered a different lens, offering a unique, in-depth image of a program from a distinct perspective.

Evaluating Programs for Children, Youth, and Families
Federal government policies encourage scientifically based evaluation and research that use experimental and quasi-experimental designs. The premise of scientifically based evaluation is that it creates evidence-based practices for programs that serve children, youth, and families and that these practices will translate to effective field-based practices. A claim of what works based on scientifically obtained knowledge for this diverse set of programs (such as family support, family involvement in education, early literacy, youth development, and out-of-school time programs) is a forceful argument in this climate where social and educational programs compete for scarce resources.

Scientifically based evaluation certainly provides an important understanding about what works in programs. On the other hand, historically, these kinds of evaluation results were intended for large-scale policymaking, not local decision makers and stakeholders. While the role that stakeholders' perspectives play is continually increasing in experimental design,8 implementing a comprehensive evaluation strategy that includes other approaches is critical to increasing the knowledge base about what works in various family- and child-serving programs. Results from different kinds of program evaluations will provide different understandings about program practices.

Democratic evaluation approaches are likely to provide just this kind of different understanding. They extend the knowledge base from the lab to the field and from the experts to children, youth, families, and program practitioners. Democratic evaluation makes it possible to collect important information that scientifically based evaluation does not, including promising practices that lack an evidence base, adaptations of proven program models to local contexts, and innovations from the field.

Democratically oriented evaluations provide a means for children, youth, and families to be involved in considering what kinds of practices are effective and in evaluating these practices. They provide a set of field-tested resources for addressing equity and inclusion issues and engaging children, youth, families, and practitioners in dialogue and deliberation about programs and policies.

1 Ryan, K. E. (2004). Serving the public interests in educational accountability. American Journal of Evaluation, 25(4), 443–460.
2 MacDonald, B. (1976). Evaluation and the control of education. In D. Tawney (Ed.), Curriculum evaluation today: Trends and implications (pp. 125–134). London: Macmillan.
3 Simons, H. (1987). Getting to know schools in a democracy: The politics and process of evaluation. New York: Falmer.
4 Simons (1987).
5 House, E., & Howe, K. (1999). Values in evaluation and social research. Thousand Oaks, CA: Sage Publications.
6 Niemi, H., & Kemmis, S. (1999). Communicative evaluation. Lifelong Learning in Europe, 4, 55–64.
7 Kemmis, S., & McTaggart, R. (2005). Participatory action research: Communicative action and the public sphere. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
8 Cook, T. D. (2002). Randomized experiments in educational policy research: A critical examination of the reasons the educational evaluation community has offered for not doing them. Educational Evaluation & Policy Analysis, 24(3), 175–199.

Katherine E. Ryan
Associate Professor
Educational Psychology
University of Illinois at Urbana-Champaign
260B Education Building
1310 S. 6th St., MC 708
Champaign, IL 61820
Email: k-ryan6@uiuc.edu

