

Evaluator, educator, and author Carol Weiss shares her thoughts about what the next century might mean for the field of evaluation, the training of evaluators, and the connection between evaluation and policymaking.

Over the years, the field of evaluation has changed in response both to growing knowledge and expertise and to changes in programs and policies. As we move into the 21st century, the evaluation field will continue to face new challenges as well as new opportunities. We asked Carol Weiss to discuss some of her views on evaluation in the next century. A prominent evaluator, Dr. Weiss is a professor of education at Harvard University and has published 11 books, the most recent of which is Evaluation: Methods of Studying Programs and Policies. Although noting that “social scientists do not do a good job of predicting,” Dr. Weiss agreed to share some of her thoughts and observations with us.

What challenges do you think face the field of evaluation in the next century?

One of our greatest challenges is that our methods are really not up to the questions that we seek to answer. We want to know not only what the outcomes of a program are but also why those outcomes appear—or fail to appear. When we deal with community-based programs, where many sets of actors collaborate, cooperate, plan, and implement activities, our methods are not adequate to understand the processes and outcomes. Community-based approaches often have multiple dimensions (health, education, job training, transportation, etc.). Many of them are opportunistic, in the sense that they address whatever community problems become susceptible to intervention as circumstances allow. The programs employ a range of strategies and reach a shifting array of recipients. Under these conditions, evaluation doesn't have the appropriate tools and techniques to understand fully what is going on.

Theory-based evaluation is one approach that has a great deal of promise. But trying to use theory-based evaluation is difficult when programs do not have any explicit—or even implicit—theories, when programs are amorphous, or when they shift significantly over time. Foundations and governments will exert pressure on evaluators to develop new ways of evaluating these complex multi-sector programs, and the need to answer harder questions will force us to develop new approaches.

One of the issues that persistently arises in a discussion of evaluation methods is the utility of experimental design. Some people have been much too ready to reject random-assignment designs. I recognize that in many cases experimental design is not workable. But comparison groups are often feasible, and some comparison is usually better than none at all.

Concepts from experimental design, such as “threats to validity,” need to be taken seriously. Mosteller and Tukey identified four factors that are necessary to establish a causal link. First, there has to be responsiveness of the outcome; in other words, the outcome has to follow the intervention. Second, you have to eliminate plausible alternatives that could explain the results. Third, you need to identify the mechanisms that lead to the results. Finally, you have to replicate the results. Note that the authors do not discuss random-assignment design as the only way to establish that the program was the cause of the observed results. They lay out the logical requirements for establishing causality. That last step, replication, is important. One evaluation is not going to settle an issue. Rather, we need to have multiple evaluations of the same kind of program using different techniques. If the different evaluations converge on the same results, we have pretty strong evidence that we know how successful that kind of programming is. I think that this is one way that evaluation should go. We can do this with qualitative and quantitative work; we can do it with small programs and large programs. But it takes a lot of evaluation capacity.

Another direction in which evaluation is moving is toward greater appreciation of the need to understand a program and how it works. Evaluators are used to switching from one field to another—evaluating a preschool program one day, a delinquency prevention program the next, and a nutrition program the third. They rely on their methods. But to do a first-class job, they need to become savvy about the mechanisms of change in the different fields. Such knowledge can come from specializing in a single field long enough to gain profound knowledge of the ways in which interventions produce effects and the conditions under which effects are most likely to appear. If evaluators move from field to field, they need to be sure that they gain as much knowledge as possible, whether from other team members, consultants, reading, or program practitioners. Evaluators can no longer rely solely on their expertise in research methodology. They have to understand the program field.

The other missing piece is on the program side. Practitioners need to be much more aware of theory and research. Programs need to be planned more systematically, with close attention to evidence, rather than solely on the basis of unexamined intuition. I don't mean to downplay the wisdom of practice. Practitioners learn a great deal from experience, and experienced practitioners are wonderfully knowledgeable. But they have to examine their knowledge, their assumptions, and their beliefs about why a program should be done in a certain way and how it is going to attain the results they want. They need to look at research evidence as well as the experiences of other programs. There is probably more of a tradition of rational planning in public health programs than in areas such as education, criminal justice, or social work. These fields need to catch up. We have lost a lot of knowledge about effective programming because practitioners and program planners have not systematically examined their experience and the evidence or brought the knowledge to bear on the development of new programs.

What do you think this implies for how evaluators and others are educated/trained?

Very few places offer comprehensive training for evaluators. Most university departments offer one or two courses in evaluation to go along with the courses in research methods and statistics. If that is the case, there should be more opportunity for supervised apprenticeship activities. I think that apprenticeship is the best way to learn. Most of us got some education and then went out into the working world. If we were lucky enough to work with good people in our early jobs, we learned a lot.

Practitioners also need a basic understanding of evaluation—what I might call “evaluation appreciation.” They need to understand what evaluation is all about, what it takes to do a good study, how to recognize a good study when they see it, what they should look for, and what to do with evaluation results.

As this implies, we need to train more than evaluators. Colleges and universities should train people who will be requesting evaluations, contracting for evaluations, reviewing evaluations, and applying the results of the work to organizational problems. The use of evaluation results is an obviously important issue. We can demand a lot of evaluators when it comes to supplying useful evaluation reports, but there have to be informed and willing “users” on the other end. Evaluators, of course, have a significant responsibility. They have to address the right issues, they have to provide a quality study, they have to communicate it well, and they have to communicate it to multiple audiences. But there has to be receptivity on the receiving side. Those in program and policy positions have to want to know what evaluation can tell them and how it can help them do a better job and make policy and practice more effective. And they have to have the will to do it.

Influencing and educating the media and policymakers are other challenges. In my research, I have found that the way federal policymakers tend to hear about research is not through journals or books but through the New York Times or Newsweek. That means that research has to be news. Reporters have to find it newsworthy for one of several reasons: it is on a topic that attracts many readers, it contradicts taken-for-granted assumptions, it is counterintuitive, or it has human interest. Evaluators need to learn how to present their findings in ways that meet journalistic criteria. These days policymakers tend to be well-educated, and many have familiarity with the social sciences. I think they have greater understanding of evaluation than previous generations. But they, like journalists, need to have the evidence convincingly communicated to them. We also have to pay more attention to ways of making evaluation results visible, beyond the words on the page. With all the marvels of computers and the World Wide Web, we ought to be able to think of ways to get the evaluation messages out.

What do you think this implies for how evaluation can better contribute to policymaking?

First, we need to be very good evaluators. Evaluators should not undertake a study if the conditions for good evaluation are not there. Evaluation takes time, resources, and skill. Evaluators should not take on studies when they know they cannot do a good job. I recognize that this is easy for me to say sitting here in a university. But evaluators do not have to be passive in accepting whatever conditions the sponsor sets. They can argue back, explain that the time is too short, the requisite data are unavailable, appropriate comparisons are missing, the money is insufficient for the size of the task, or whatever the problems may be. It is not easy to do. Evaluators want the contract. But I believe that many sponsors, including government sponsors, would sometimes (although not always) be willing to listen to good arguments that what they are asking for is not going to lead to a credible study. They may be willing to change the request if they realize that the kind of study they are asking for is going to yield unpersuasive evidence and will easily be discredited by anyone who dislikes the way the results come out. This is not to say that most evaluators do not do a good job. Most of them do, and many of them do a very good job. But the field is tarnished by the clunkers.

A big issue is improving the influence of evaluation on policy. To have a sustained influence, evaluations have to be well-designed and conducted. And there has to be an accumulation of evidence. We shouldn’t think of evaluation as a set of one-shot studies. We should think of it as a continuing effort. In some areas, such as job training, there has been a conscientious accumulation of evidence since the 1960s. Authors have collected and summarized results critically, and they have drawn conclusions about the kinds of programming that work well. In some areas we have not done these kinds of critical reviews to help inform practice and policy. Evaluators should view themselves as part of an ongoing enterprise to develop knowledge for action.

Several things can be done to develop this stock of knowledge. Meta-analysis has been extremely useful in getting together the quantitative evidence and examining effect sizes, and we need to do more of that. We might take this a step further and set up groups that systematically look at all the evidence that has accumulated about mechanisms of change in particular practice fields and the environmental conditions that are conducive to success. Lee Cronbach once suggested the formation of Social Problem Study Committees to take an ongoing look at such social issues as teen pregnancy or school dropouts. Such groups would study all the evidence that becomes available on their topic and periodically summarize what they have learned and what still needs to be known. If the people in these groups were well respected and had access to policymakers and the media, their words might carry considerable weight. Such work would help us address our greatest concerns as society moves into the next century.
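To make the pooling step Weiss mentions concrete, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis, one common way of combining effect sizes across studies. The study names and numbers are purely illustrative assumptions, not results from any actual evaluation, and real syntheses would also need to weigh study quality and heterogeneity.

```python
# Minimal sketch of a fixed-effect, inverse-variance meta-analysis.
# Assumes each study reports a standardized mean difference (e.g., Cohen's d)
# and its variance; study names and values below are hypothetical.
import math

studies = [
    ("Study A", 0.30, 0.020),
    ("Study B", 0.45, 0.035),
    ("Study C", 0.15, 0.015),
]

# Weight each study by the inverse of its variance.
weights = [1.0 / var for _, _, var in studies]
pooled = sum(w * d for (_, d, _), w in zip(studies, weights)) / sum(weights)

# Standard error and 95% confidence interval of the pooled effect.
se = math.sqrt(1.0 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

print(f"Pooled effect size: {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```

In this illustration, larger and more precise studies (smaller variances) pull the pooled estimate toward their results, which is the sense in which meta-analysis "gets together the quantitative evidence" rather than treating each evaluation as a one-shot verdict.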

Karen Horsch, Research Associate, HFRP

