In this issue, we speak with Michael Scriven, professor of psychology at Claremont Graduate School and immediate past president of the American Evaluation Association. Dr. Scriven has written or edited books, periodicals, and articles in the areas of philosophy and psychology, word processing, turbine engines, artificial intelligence, critical thinking, and evaluation. In this article he shares with us some of his insights about the challenges facing evaluation, about evaluation as a distinct discipline, and about links between evaluation and practice, including organizational learning.

What do you see as some of the major challenges facing evaluation and evaluators in this new century?

I think the biggest challenge we have is developing recognition of evaluation as an autonomous discipline. We know that in evaluation, there is a body of knowledge, a common logic that people will have to master to do this work well. We have increasingly been recognizing the highly specialized skills that are also required, but we need to go beyond that.

For example, knowing when to use what type of evaluation approach is a skill—and it’s a skill that takes some knowledge of the underlying logical distinctions between grading, ranking, and apportioning. On the other hand, cost analysis is a skill that has been developed very well by specialists, although it is still not widely practiced in a sophisticated form by most evaluators. So there is room for improvement in the basic skills as well as the logical ones. Another skill, one that is still not well developed anywhere, is knowing how to set standards after you determine performance levels. Increasingly, however, we are realizing that evaluation differs from the standard scientific paradigm in that it relies on investigative skills rather than on hypothesis testing. Finally, evaluation has a set of ethical standards unique to its work.
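To make that logical distinction concrete, here is a minimal Python sketch (the program names, scores, cutoffs, and budget are invented for illustration, not drawn from the interview) showing how grading, ranking, and apportioning answer different questions about the same performance data.

```python
# Illustrative only: the scores, cutoffs, and program names below are invented.
# Grading, ranking, and apportioning are three logically different operations
# applied here to the same performance data.

scores = {"Program A": 82, "Program B": 67, "Program C": 91, "Program D": 58}

# Grading: compare each program against absolute standards (hypothetical cutoffs).
def grade(score):
    if score >= 85:
        return "excellent"
    if score >= 70:
        return "adequate"
    return "unsatisfactory"

grades = {name: grade(s) for name, s in scores.items()}

# Ranking: order programs relative to one another; no absolute standard is implied.
ranking = sorted(scores, key=scores.get, reverse=True)

# Apportioning: divide a fixed (hypothetical) budget across programs in
# proportion to their scores.
budget = 100_000
total = sum(scores.values())
allocation = {name: round(budget * s / total) for name, s in scores.items()}

print(grades)      # {'Program A': 'adequate', 'Program B': 'unsatisfactory', ...}
print(ranking)     # ['Program C', 'Program A', 'Program B', 'Program D']
print(allocation)  # proportional shares of the hypothetical budget
```

The same data support three logically different conclusions; which operation is appropriate depends on the evaluation question being asked.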

I think another challenge we have is that we must break from the widespread attachment to the idea of interactive or collaborative evaluation as the standard model. While collaborative approaches to evaluation are important, some very good evaluations are done completely separately from the program being evaluated—they are not in any sense collaborative. In some cases, this is out of sheer necessity (e.g., in historical evaluation), but in other cases it may be a more deliberate choice (e.g., in contexts where the preservation of independence is crucial).

Should evaluation be considered a separate discipline?

I think the importance of evaluation as a separate discipline is slowly being recognized. The field of statistics evolved out of mathematics in the same way. But there are as yet no universities developing evaluation departments or even chairs in evaluation. It is significant, I think, that we went from 9 to 23 national associations of professional evaluators worldwide in the past year. To get greater recognition for the field of evaluation, we still have to improve our professional training. For in-service training, there are now certificates of advanced study and summer institutes in which evaluators can upgrade their skills. We need to improve on that system with graduate degrees that involve a major in evaluation, not just in educational evaluation or policy analysis.

Partly because we are still treating evaluation as simply a “tool” discipline, we have so far virtually overlooked two other important areas in evaluation: the use of evaluation in other disciplines (such as physics) and meta-evaluation (evaluation of evaluation). Physicists have to evaluate everything they deal with: theories, data, instruments, scientific papers and proposals for funding, candidates, and students. They learn how to do this as part of their scientific training; but, unlike everything else in that training (e.g., the mathematics they use), it is never explicitly addressed as a logical discipline. The history of science makes it clear that very large improvements in practice occur from explicating implicit principles. We already see this in the improvements in proposal evaluation that have been shown to be possible in the sciences.

Some argue that evaluation needs to be better linked to the policymaking process. What is your opinion about this?

I do agree that program evaluation needs to be better linked with policymaking about the program evaluated. But our duty in this area is to produce valid, comprehensible, and appropriate evaluations. The evaluator’s business is simply that of determining the merit, worth, or significance of what he or she is looking at. The service we provide is telling what is or is not working and, sometimes, finding the reasons why. I don’t think we should be in the business of telling policymakers what they should be doing next—that is the role of policy analysts. These people look at the broader picture—taking into account the decision makers as well as the many other variables involved in a political decision. These variables are not addressed in program evaluation. I do think the program/policy analyst needs better evaluation skills, but this is still the person who has the responsibility and the knowledge to make policy recommendations.

For example, I was sole evaluator for a community foundation. They asked me to look at their youth leadership program, a program that trained youth to be leaders in the community. In my evaluation, I found several things: there was no evident need to train youth in the community; the approach they used had no basis in theory; and the data from three years of work showed no evidence that the program had any effect. My tendency was to say, “Shut it down.” However, in reality the situation depended on some political matters. The board of trustees for this foundation is completely unpaid. As a result, the members each implicitly get a “wildcard”—a program they like and want to fund because they think it is a good thing, without debate. That was the case with the youth leadership program—no matter what the evaluator says, the program is not going to be abandoned. This is the reality of the political process, of real decision making.

That is not to say that evaluators can have no influence or should not seek to inform the policy process. Evaluation has a major contribution to make, but we need to reach the right audience and we need to be realistic in our ambitions. What we should be good at is serving policymakers who need answers about what works and why. We can also inform policymakers through legislative evaluators, those state and federal legislative offices that do program evaluation or policy research for legislators. These are the unseen beavers who get the dam built. They respond to committee and legislator requests for information on particular topics. If evaluators want to get their stuff out there and see it better used, these are the folks that they ought to be reaching—and these are the folks who are often more accessible.

Some argue that evaluation needs to be better linked to on-the-ground practice. What is your opinion about this?

I think we are a remarkable discipline in that the usual academic versus application distinction has been almost entirely absent. It’s still true that there is room for improvement in codifying best practices. There need to be mechanisms (e.g., research secretaries on each project) to link evaluation findings to methodological work in a field. Some are trying to do this by linking with training and higher education institutions. But we also need to take some action on our own—publish in other journals beyond academic ones, make sure our findings reach program/policy analysts, and attend congressional hearings.

How do you think evaluation (and accountability requirements) can be better used for organizational learning and continuous improvement of programs?

People involved in “organizational learning” reforms need good program evaluation skills and good personnel evaluation skills. The detail of this learning is tricky—how do you really decide what the lesson was? How do you design a study to learn those lessons? What is the methodology involved? These are tough evaluation questions, and the skills are not taught in business schools.

Organizations have to develop a plan to integrate policymaking and programming—what questions they want answered, how data will be reviewed, how data will be used. A good evaluator helps with these questions. One of the most recent changes has been the focus on performance measurement. I don’t think this focus on outcomes is bad—I’ve seen programs take years to get up and running, and then have nothing to show for their work. Performance measurement is important—it is the bottom line. However, performance measurement as it is now practiced inflates one aspect of evaluation into the whole. We used to make exactly the complementary error, in the guise of process evaluation. I think you need to balance the two; you cannot ignore how you got to the outcomes. The most important way to do that is to negotiate expectations early on in the contracting process.
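As one illustrative sketch (not a method the interview prescribes; the indicators, targets, and figures are hypothetical), an evaluation report might keep outcome attainment and process fidelity side by side rather than collapsing everything into a single outcome number:

```python
# Hypothetical illustration: report outcome and process evidence together
# rather than reducing a program's performance to outcomes alone.

program = {
    "outcomes": {"participants_employed": 0.62, "negotiated_target": 0.70},
    "process": {"sessions_delivered": 45, "sessions_planned": 50,
                "participant_retention": 0.81},
}

# How close the program came to the target negotiated during contracting.
outcome_attainment = (program["outcomes"]["participants_employed"]
                      / program["outcomes"]["negotiated_target"])

# How much of the planned work was actually delivered (the process side).
delivery_fidelity = (program["process"]["sessions_delivered"]
                     / program["process"]["sessions_planned"])

report = {
    "outcome_attainment": round(outcome_attainment, 2),  # 0.89
    "delivery_fidelity": round(delivery_fidelity, 2),    # 0.9
    "participant_retention": program["process"]["participant_retention"],
}
print(report)  # both dimensions stay visible to the report's readers
```

The point is only that process indicators stay in the report alongside the outcome figure—the balance the interview describes—rather than being squeezed out by a single bottom-line number.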

Karen Horsch, Research Associate, HFRP

