
David Chavis, President of the Association for the Study and Development of Community, outlines the “best of the worst” evaluator practices when it comes to building relationships with evaluation consumers.¹

Being an evaluator is not easy. I’m not referring to the technical problems we face in our work, but to how people react to us and why. Telling someone that you’re an evaluator is like telling them you’re a cross between a proctologist and an IRS auditor. The news evokes a combination of fear, loathing, and disgust, mixed with the pity reserved for people who go where others don’t want them to go.

I have developed this perspective through providing hundreds of evaluators and evaluation consumers with technical assistance and evaluation training, through overseeing the “clean up” of evaluations by “prestigious” individuals and institutions, and through conducting many evaluations myself.

In each case I heard stories about the evaluators who make evaluation consumers regard all of us with contempt and suspicion and, on good days, as a necessary evil. I began to wonder: Who is this small minority messing things up for the rest of us?

Just about every evaluator I spoke with said it was the previous evaluator who made his or her work so difficult. I realized that, to be occurring at this scale, these bad experiences either grew out of an urban myth or were the work of a band of renegade, number-crunching, ultra-experimentalist, egomaniac academics.

Then it hit me—it’s all of us. Maybe we all are contributing to this problem. As Laura Leviton said in her 2001 presidential address to the American Evaluation Association (AEA), “Evaluators are not nice people.” As nice as we might be privately, we generally don’t know how to build and maintain mutually enhancing and trustful relations in our work. Threads on EvalTalk, the AEA listserv, frequently demonstrate the difficulties we have getting along.

Many evaluators think consumers react negatively to us out of fear we will reveal that their programs aren’t working as well as they think, and because they believe we have some special access to The Truth. Think about how you’d feel if someone talked to you for five minutes and then told you how to improve yourself. Who has the ability, or the nerve, to do that?

I’ve seen evaluators borrow from telephone psychics to help consumers overcome these fears—we provide insights no one can disagree with, like: “Your funders don’t understand what you are trying to accomplish,” and follow with the clincher: “I am really excited by your work.”

But are we? Or are we excited about what we can gain from their work? Are we fellow travelers on the road to the truth about how to improve society or are we just about the wonders of the toolbox (i.e., methods)?

Bruce Sievers and Tom Layton² recognize that while we focus on best practices we neglect worst practices, even though we can learn a lot from them—especially about building better relations. In the interest of learning, the following are some of the worst evaluator and participant relationships my colleagues and I have seen.

Not Listening or Not Acting on What We’ve Heard
Stakeholders often tell us that although they want something useful from their evaluation—not something that “just sits on a shelf”—all they get is a verbal or written report. When challenged, we say they should have paid for it or given us more time. We also often hear that practitioners are afraid negative results will affect their funding. We assure them we’ll take care of it, while thinking to ourselves there’s nothing we can do. After all, we aren’t responsible for negative results—we just tell the truth.

In most of these cases the evaluator simply hasn’t listened and thought through how to deal with the situation. Often the stakeholders’ real question is: Will you struggle with us to make this program better, or will you just get the report done?

It is essential to conduct active and reflective interviews with stakeholders. We need to agree on how we can improve their program as part of the evaluation process. Even if there is a small budget, that relationship-building time must be considered as important as data analysis.

Branding: Evaluation as a Package Deal
In a world where the label on your jeans says who you are, it’s not surprising evaluators sell “brands” of evaluation. At the recent AEA meeting, six leaders in the field presented their brands. They recognized some overlap, but emphasized the uniqueness of their approaches. For me, each approach reflected a different decision I might make on any given day, but I couldn’t see a big difference. What I fear is having to describe to evaluation consumers what I do based on these brands: “I’m doing a theory-driven, responsive, transformative, utilization-focused, empowerment, goal-free, collaborative, participatory, outcome-focused evaluation.” Would I still have anybody’s attention?

Often the brands represent rigid packages that become more important than their appropriateness within a specific context. In one recent situation, when the program operators told the evaluator that his method wasn’t working for their organization, he responded that they didn’t understand his approach, that they shouldn’t have agreed to it, and that, after all, it works just fine all over the world—just read his book.

As evaluators, we must get beyond our brands and labels, except for one: multi-method. The needs of the context must be the driving force, not the promotion of our latest insight or evaluation package.

Keeping Our Methods a Mystery to Maintain Our Mastery
David Bakan,³ a noted philosopher of science, described the mystery-mastery complex as a means by which psychologists maintain their power over people. When we present our methods, our analysis, and our ability to be objective in a way that’s above the understanding of the public, we create power over the consumer. Did we take a course in keeping our opinions to ourselves and making “objective” judgments? No, but we would hate to dispel the myth of our objectivity—without it, what would make us special? We need to dedicate ourselves to educating the public about the diversity of our methods, our approach to knowledge development, and the limitations and profound subjectivity of our work.

Thinking We Can Hold Everyone Accountable But Ourselves
Many evaluators think we should be allowed to do what we see fit—that we need neither monitoring nor review of our work. Many consumers think we do exactly what we want to do. As evaluators, we are getting what we want, even if others don’t like it. Many other professionals have systems of accountability, including physicians, accountants, lawyers, and architects. Even if these systems are flawed, their mere existence shows that these professionals and the public hold their work in high esteem.

Contracting problems are plentiful in the evaluation field. Evaluators still frequently enter engagements without contracts specifying the deliverables, and there are widespread misunderstandings over the end results of the evaluator’s work. How do we hold ourselves accountable? Is it driven solely by the market? (That is, as long as someone is paying you, you’re cool?) We need to evaluate our own work in the same manner we profess to be essential for others.

Going It Alone While Overlooking Our Limitations
I have my own pet theories. All professions can be divided up into dog professions and cat professions. Law, for example, is a dog profession—lawyers work well in packs and they bark. Evaluation is a cat profession—independent, aloof, sucks up to no one. Plus, we evaluators know how to get out quick and hide when there’s a loud noise.

There is great pressure on us to know and do everything. We are asked to facilitate, conduct strategic planning sessions and workshops, produce public information documents, and give advice, frequently without much training or direct experience ourselves. Rarely do I see us working in teams, let alone with other “experts.” We tend to go it alone, giving it the ol’ educated guess. We need to develop relations with other experts with complementary practices.

Forgetting We Are Part of the Societal Change Process
The work we evaluate exists because there are people out there who have a deep passion to change society, or their little piece of it. Often we see those with passion as more biased, more motivated by self-interest, and less knowledgeable than ourselves. When practitioners criticize the sole use of traditional experimentalism for determining effectiveness, we consider them misguided. We think their attitude stems from self-interest. We don’t see our own conflict of interest: Who is going to benefit immediately from a requirement to perform randomized trials? Us. We see stakeholders as having conflicts of interest, but not ourselves.

We can’t ignore that we are part of a larger struggle for societal change. We need to acknowledge the ramifications of our actions and make sure the information we provide is used responsibly.

Moving Forward—Building Better Relations
I have great hopes for our profession. Some may write off this article as self-righteous rambling, but that is a symptom of the problem—we think it’s always others causing the problems. The problem of how to relate to consumers does exist. While many call for reflection, a symposium, or a special publication on the topic, I would suggest that we look more structurally. The first step is to recognize that we are accountable to the public. Accountability and respect go together. On this front we need large-scale changes, like voluntary certification or licensure.

The next step is to recognize that we want to have a relationship with evaluation consumers. We should think about how we can get along and mutually support each other’s needs—and apply what we learned in kindergarten: to be nice, to share, to not call each other names, and to play fairly.

¹ Any similarities to individual evaluators are unfortunate, but coincidental. I make gross generalizations that apply to all of us, though admittedly to some more than others.
² Sievers, B., & Layton, T. (2000). Best of the worst practices. Foundation News and Commentary, 41(2), 31–37.
³ Bakan, D. (1965). The mystery-mastery complex in contemporary society. American Psychologist, 20, 186–191.

David Chavis
President
Association for the Study and Development of Community
312 S. Frederick Avenue
Gaithersburg, MD 20877
Tel: 301-519-0722
Email: dchavis@capablecommunity.com

