



On November 3, 1994, Harvard Family Research Project hosted Michael Q. Patton as the third speaker in its Evaluation Seminar Series. Patton, a member of the national faculty of the Union Institute, is well known for his work evaluating Minnesota’s Early Childhood and Family Education (ECFE) program. What follows is a summary of his talk, “Sneetches, Zax, and Empty Pants: Processes in Developmental Evaluation.”

Professor Michael Q. Patton is known for his developmental approach to evaluation, a technique that uses evaluation to empower program participants and staff. It shifts the evaluator’s role from judge to facilitator and gives program participants and staff time to develop their own criteria for what constitutes a good evaluation. “Evaluation is essentially about how we know whether what we have done is good,” Patton explained.

What Patton calls developmental evaluation is alternatively called empowerment or participatory evaluation by others. Essentially, this new approach to evaluation involves the following four factors:

  • Data collection is built into the program intervention itself and often involves teaching program staff and participants how to collect data.

  • Evaluation supports the intervention—it is used to enhance and strengthen the program, rather than just summarize its strengths and weaknesses.

  • The evaluator works as a facilitator, engaging the staff, administrators, and participants in the actual work of assessment.

  • Developmental evaluation must fit the values of the program it seeks to assess (e.g., if a school-based health care clinic wants to assess whether students find its services helpful, an evaluation that looks at utilization data but does not interview students will not reflect the values of the program or answer its question).

The primary hurdle to involving program staff and participants is the term evaluation. It frightens people. “It often conjures up memories of third grade math tests and times spent in the assistant principal’s office,” Patton claimed. Thus, most of the time, evaluators are not warmly embraced. In working with those who are not accustomed to combining evaluation with their program work, Patton uses a variety of methods to break down barriers and stereotypes.

First, Patton explains that any work that seeks to answer “how do you know?” or “what are your criteria?” is evaluation. “I try to help people overcome their incredible ability not to see the world the way it is,” he said. He does so by showing them that they owe it to themselves—and to their hard work—to find out whether or not they are doing what they think they are doing.

Next, Patton engages new evaluators in a type of cultural immersion. He explains evaluation’s language, hierarchy, and way of thinking by using Dr. Seuss stories as metaphors or case studies for engaging people in this new way of working. “Dr. Seuss is rich in wisdom about the human experience. Evaluation is, after all, about human experience. It’s hard for people to see me as arrogant when I stand up and read Dr. Seuss. By reading Dr. Seuss, I’m able to bring evaluation, which comes from one culture (the culture of academia and science), into the culture of another group. I help them see that the wisdom that comes from their own lore is a way of connecting to this new material,” he elaborated.

Patton has found that this approach to evaluation has led to staff assessing their programs more rigorously than outside evaluators would. He told a story of how family support program directors generated a list of outcomes that they wanted to see, collected data on those outcomes, and then analyzed the data. They were, Patton noted, much tougher on themselves than a traditional evaluator would have been, and this partnership between program and evaluation ultimately had programmatic benefits. Program directors were compelled by the data to continue developing their program and approaches. In particular, they chose to focus more intentionally on soliciting parents’ experiences and using that as the basis for program change.

Developmental evaluation can also bring about conflict among staff who discover they disagree on how best to serve families. It is easier for an external consultant to evaluate a program without engaging staff in messy disagreements about basic assumptions. Yet, because a developmental or participatory approach to evaluation often brings to light important, previously hidden differences of opinion, programs can benefit from the opportunity to work through these disagreements.

True to form, Patton’s method of encouraging cooperation among staff is to read Dr. Seuss’ story of two stubborn “Zax” who must each learn to give a little so that they may both move ahead. “The Dr. Seuss approach to evaluation has served us very well,” Patton said.

Further Reading

Patton, M. Q. (1986). Utilization-focused evaluation. Newbury Park, CA: Sage.

Patton, M. Q. (1990). Qualitative evaluation and research methods. Newbury Park, CA: Sage.

Patton, M. Q. (1994). Developmental evaluation. Evaluation Practice, 15(3), 311–320.

Elaine Replogle, M.T.S., Research Assistant, HFRP


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project