




Michael Quinn Patton

Michael Quinn Patton directs an organizational development consulting practice based in St. Paul, Minnesota. He is also a professor with the Union Institute & University Graduate School, a nontraditional university offering interdisciplinary doctoral degrees in applied fields. Patton is a widely known and prolific contributor to the evaluation field and the author of many books, including "Utilization-Focused Evaluation" and a new third edition of "Qualitative Research and Evaluation Methods" (2002). Prior to his current pursuits, Patton spent 18 years on the faculty at the University of Minnesota. He is the recipient of numerous evaluation awards and served as president of the American Evaluation Association in 1988. Patton has a Ph.D. in Organizational Development and Sociology from the University of Wisconsin.


Taking advantage of Michael Quinn Patton’s long-standing position in the evaluation community and his interdisciplinary perspective, we asked him four questions about historical and emerging trends in evaluation practice.

 

What do you consider to be three or four "breakthrough ideas" over the last 10 to 15 years that have really helped define and influence modern evaluation practice?

At the top of the list is process use. Historically, the discussion about evaluation utilization has focused on what happens with an evaluation’s findings, and who uses them, after the evaluation is over. Utilization-focused evaluation, in contrast, says that an evaluation’s use should be considered throughout the evaluation process. Use is determined both by how real people in the real world apply evaluation findings and by what they gain from being part of the evaluation process.

In line with this idea, process use focuses on the ways in which being engaged in an evaluation can have an impact on those involved. It is about how people who participate in the evaluation learn to use and apply the logic, reasoning, and values that underlie the evaluation profession.¹ In this sense, we have seen more attention in our profession to process use. We now see evaluations being used for learning and capacity building, rather than just to generate findings.

A second important change in the evaluation field is that we have developed an extensive and diverse menu of evaluation choices. We have moved well beyond the idea that there are only formative and summative evaluations. In fact, in the last revision of my book, Utilization-Focused Evaluation², I identified over 100 possible evaluation approaches. This diversity means that both the evaluator and the user have the opportunity to figure out the kind of evaluation that is most appropriate for their particular situation and information needs.

A third breakthrough is the end of the qualitative-quantitative debate. Evaluators increasingly recognize that mixed methods have much to offer - and that the primary methodological issue is determining which methods are appropriate for the particular evaluation situation and the needs of those who will use the findings. Quantitative and qualitative methods do different things. Each, however, is valuable. And each set of methods continues to develop.

I just completed the third edition of Qualitative Research and Evaluation Methods.³ That gave me the opportunity to review the last decade of changes in qualitative methods. Following the end of the qualitative-quantitative debate we’ve seen the emergence of a rich variety of qualitative evaluation approaches, including some that are very creative, such as more artistic representations for both collecting and reporting data.

And finally, there is a recognition that we need to adapt evaluations to cross-cultural environments. With the globalization of evaluation and increasing attention to cross-cultural diversity, evaluators are being challenged to adapt their practices in ways that are culturally sensitive and that respect diversity. This is enriching the variety of evaluation options available and pushing us to examine the culture-based assumptions that undergird traditional evaluation methods and models.

For example, with groups whose cultures are highly oral and story-oriented, such as Native American groups, native groups in Latin America, and indigenous groups in Africa, the language and approach evaluators use in logic modeling, which relies on flowchart-like diagrams to show relationships between strategies and outcomes, can be very off-putting. Logic models are typically linear and formal, and the language of theory is fairly intimidating. Instead, we can adapt logic modeling to culturally preferred modes of communication to capture the understandings and experiences of people in oral cultures. With these cultures we can use storyboarding, where stories are arranged into a sequence of images, rather than the arrows and boxes favored by people who draw flowcharts.

In family support, we have seen a trend toward programs that help families and communities access or collect data to secure the supports they need (e.g., jobs, child care, safety). How does the notion of assisting families and communities to get and use data in this way fit with the theory and approach of utilization-focused evaluation?

The centerpiece of utilization-focused evaluation is that one must define the primary intended users of an evaluation and work with them to identify and achieve their intended uses. Identifying families and community members as primary intended users and working with them to articulate their needs and desired uses is completely compatible with utilization-focused evaluation. In addition to generating relevant findings for users, utilization-focused evaluations allow evaluators to facilitate the evaluation process in ways that strengthen the users’ capacity to engage in evaluative thinking and undertake evaluations. This is an example of process use that I referred to in answering the first question. So, for example, family or community members who participate in an evaluation will not only help generate relevant findings about whatever is evaluated, but they will also learn to think evaluatively and may learn or deepen skills in interviewing or making sense of data.

Let me give you an example that I cite in the new edition of my qualitative book. Rainbow Research in Minneapolis studied the feasibility of developing a transitional housing program for prostituted women. To assist in data collection, the evaluators recruited five women who had been prostituted, trained them in focus group facilitation, and had them interview women leaving prostitution. For these interviewers the experience was empowering and transformational. They were excited about learning a new skill, pleased to be paid for the work, and found it rewarding. In pilot testing, they critiqued the interview guide, and as a result the evaluators edited its language, content, order, and length. During the interviews it was clear they had rapport with their peers, based on shared discourse and experience. For example, during a group simulation, the interviewers loudly and provocatively bantered with one another as they might have on the street. They gathered information that others would have been hard pressed to secure.

The interviewers were proud of their contribution. At the project’s end they requested certificates acknowledging the training they had received and the interviews they had successfully conducted. For all concerned it was a positive experience, with findings that most definitely shaped the recommendations in the final report.

There seems to be an emerging trend toward the use of "scientifically-based research" in defining acceptable policy and practice on issues that include family support, early childhood, and education. What do you think "scientifically-based research" means and what, if any, are the implications for evaluators?

I think that this language is primarily political. It is a corollary to other terms that are part of the trend you’ve observed, like "evidence-based practice," "best practices," and "lessons learned." Most blatantly political is the phrase "scientifically-based research." It pretends that scientists agree about findings and about what constitutes good research when, in fact, these are the very things scientists debate. Consider the current debate in the medical community about the effectiveness of mammograms. Within the social sciences there are very few findings of any kind about which one could pound one’s fist on the table and say, "This is scientifically true." The world is more complex than that.

Asserting that some preferred approach derives from "scientifically-based research" is usually an effort by people with political agendas to wrap their own preferred policies in the mantle of science. In this so-called "information age" we find that people use the jargon of knowledge and the trappings of science in order to give more credibility to what remain fundamentally value-based proposals. They attempt to give their value and ideological preferences the cachet of being scientific. It’s not that research doesn’t offer direction. But findings have to be interpreted and adapted to the new cultural and societal contexts in which they are introduced. You can’t just plop these ideas - no matter how "scientifically-based" they supposedly are - from one setting into another without adapting those ideas to the new setting.

I think one has to proceed quite cautiously and help people examine the actual claims on which a proposal is based. It means that evaluators and the people we work with have to critically and thoughtfully examine the evidence that is purported to be "scientific" and draw their own conclusions. Let me hasten to add that, as an evaluator, I support evidence-based practice and I support using scientific methods to examine effectiveness. But I’ve also seen these phrases politicized and abused. Consumers of evaluations need to attend to "truth in packaging." Look beyond the label or assertion that some proposal is "science-based" to examine where the evidence came from and what it really shows.

Thinking about your comments to the previous questions, what do you feel are the main skills or competencies that evaluators need to have or develop in order to keep pace with emerging trends?

My colleague Jean King at the University of Minnesota and her students have developed a helpful taxonomy of essential evaluator competencies.⁴ I would commend their work, which was published in the Spring/Summer 2001 issue of the American Journal of Evaluation.

But to answer your question: evaluators need to be not only methodologically competent but also skilled at situational analysis; they need to bring cultural sensitivities to their work, be politically sophisticated about the ways in which data and methods are used in the knowledge age, and understand how evaluation intersects with politics at all levels. Evaluators also need to be very good communicators and facilitators, at least if they are doing utilization-focused evaluation and aiming to work with families and community members in mutually respectful and helpful ways.

Julia Coffman, Consultant, HFRP

 

¹ See also Chapter 5 in Utilization-Focused Evaluation (1997) for a discussion of process use.
² Patton, M. Q. (1997). Utilization-Focused Evaluation: The New Century Text (3rd ed.). Thousand Oaks, CA: Sage.
³ Patton, M. Q. (2002). Qualitative Research and Evaluation Methods (3rd ed.). Thousand Oaks, CA: Sage.
⁴ King, J., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a Taxonomy of Essential Evaluator Competencies. American Journal of Evaluation, 22(2), 229-247.

