




Paul Light
Paul Light is a Senior Fellow of Governance Studies, Director of the Center for Public Service, and a Senior Adviser to the Presidential Appointee Initiative at the Brookings Institution in Washington, D.C. He is also an instructor at the John F. Kennedy School of Government at Harvard University. Previously he was Director of the Public Policy Program at the Pew Charitable Trusts. He has written 14 books, including most recently Pathways to Nonprofit Excellence.¹ Light received his Ph.D. from the University of Michigan.


Q: Your new book, Pathways to Nonprofit Excellence, asserts that continuous improvement is at the core of what it means to be a high-performing organization. How can evaluation help nonprofits to be high performers?

A: There are three questions that all organizations must address, whether they’re nonprofit, private, or governmental:

  1. Why do they exist?
  2. Whom do they serve?
  3. How will they know they are successful?

I believe that rigorous measurement should be at the center of a nonprofit’s mission. If you don’t have some sort of measurement system for tracking activities, outputs, and hopefully outcomes, you’re not going to know if you are on the right track. Every nonprofit that I’ve come to admire over the years has started with a commitment to rigorous measurement as part of their core ethic. When I ask how they will know if they are successful, they are able to articulate clearly the outcomes and measures they use to assure they are headed down the right path.

Also, a commitment to careful measurement and evaluation is essential to the task of innovation. Most innovation does not involve “mad-scientist thinking.” Innovating organizations correct themselves over time and fine tune their programs through careful evaluation. They have an extraordinary commitment to measurement because they know that: a) if they are going to challenge prevailing wisdom through innovation, they better be prepared for the prevailing wisdom to push back and challenge them to prove that their new ideas work, and b) their new ideas are not going to work unless they commit themselves to trial and error, which is guided by careful evaluation.

Q: How is the “outcomes movement” affecting the performance of nonprofits?

A: I think the outcomes movement, with its focus on measuring real outcomes rather than activities or short-term outputs, is a very positive development, as long as it is not focused only on “compliance.” I think the best outcomes measurement is developed by the nonprofit sector in collaboration with the party requesting it, rather than imposed from the outside.

Also, outcomes measurement should not be another paperwork exercise that doesn’t matter in any tangible way to the allocation of dollars and resources. For the last 10 years outcomes have been pushed very hard at the federal level with, I think, mixed results because in some cases it has not really mattered for the things that federal agencies pay attention to—budget, personnel, financial rewards for personnel, etc.

We interviewed federal employees over the last six months about morale. One federal employee talked at length about how his agency has done such a good job at producing strategic plans and developing very detailed metrics for assessing impacts, and that all that has occurred is that Congress has more ammunition with which to pummel away at his agency. The reality is that you can have very high performance and not get a budget increase, and you can have very poor performance and get a significant budget increase.

The outcomes movement is going to have the same results in the nonprofit sector if it is not intimately tied to things that matter to nonprofits—meaning dollars for programmatic activity—and if it is not done fairly.

Q: Can you give examples of federal agencies that have used outcomes in a meaningful way?

A: The Department of Transportation (DOT) is considered the best-run federal department in terms of outcome measurement. The Federal Highway Administration (FHWA) within DOT in particular has set very broad strategic goals for itself that deal with things like reducing highway fatalities. That’s a very difficult outcome to control, but the FHWA has embraced it and said, “If we are going to do our job, we’re going to reduce highway fatalities, and we are going to do that by increasing the use of seatbelts, getting better at enforcing speed limits, etc.”

DOT is setting a very high mark in saying, “You’ve got to take responsibility for some outcomes that may be just beyond your reach.” Traditionally, the agency would say the number of interstate highway deaths is not something it could control. But DOT felt its rules on seatbelt use, crash survivability, airbags, etc., were designed to make accidents more survivable, and so it should hold itself accountable for that. This approach can really energize an agency and a workforce. Every day that FHWA staff go into work they should be thinking, “We are about saving lives in this agency.”

Q: Many nonprofits feel they do not have the time, staff, and other resources for evaluation. What would you say in response?

A: I understand the pressures that nonprofits face, especially small ones for whom the choice to evaluate may mean cutting back on desperately needed staff or technology. But I think evaluation is a wise investment. I’d say to small, struggling nonprofits, “If you’re coming to work every day because of the mission, you need to have some understanding of whether you’re actually making headway. And that’s got to be more than hunch and intuition.”

Funders can help by not cutting their evaluation and capacity-building budgets in times of retrenchment. Capacity building and evaluation are usually the first things to go. Foundations echo their grantees: “If we have to make a choice between evaluation or capacity building and saving trees, for example, we’re going to focus on saving trees.” I think that could be a terrible investment decision, even though it’s not a proven fact.

It denies the possibility that investments in evaluation and capacity more generally can improve productivity and increase efficiencies in ways that will stretch those dollars much further than they would otherwise go. It could be that during periods of retrenchment, the most important investment foundations can make is in organizational capacity building, precisely because that’s what many organizations cut first.

There was a survey of California nonprofits last fall in the wake of September 11. Most nonprofits contacted were already starting to cut back on staffing, management information system (MIS) investments, pay raises, bonuses, and new benefits programs. I spent some time in California over the last two months and you can see the effects of those cutbacks. Organizations became more stressful places to work, started to burn out their employees, and saw higher turnover. That can’t be good for saving trees in any respect.

Q: How should capacity-building efforts be evaluated?

A: We are working very hard on that question right now. The traditional view has been to evaluate capacity building in terms of its outputs. It’s relatively easy to look at capacity building on fundraising, for example, and measure the number of letters that went out, the number of successful contacts, and the amount of money brought in. But evaluations that focus on outputs alone almost encourage funders to reduce their capacity-building support during periods of retrenchment, because it’s something that they believe can be delayed.

The real challenge in evaluating capacity building is to demonstrate its impact on actual programmatic outcomes. It’s very difficult to do. The evaluation community has not been very effective at designing evaluations that ask how capacity building improves the efficiency or productivity of an organization in actually doing things like saving trees, educating children, feeding the hungry, supporting HIV/AIDS patients, and so forth.

We need to know, “What are the linking mechanisms that come out of capacity-building efforts that produce more trees saved?” For example, in theory, an MIS should increase productivity, and productivity is a linking mechanism that should increase programmatic impact. Efficiency, strategic alliances, morale, and job satisfaction are other linking mechanisms. Even though this makes intuitive sense, we haven’t done a good job of proving it.

Q: What is currently missing in evaluations of the nonprofit sector? What might be future directions?

A: I think evaluators generally neglect organizational variables as possible causes of programmatic failure. Program evaluators tend to look at program design as the most important factor in whether a program succeeds or fails. Evaluations should also look at staff training, organizational configuration, access to technology, and everything from supplies to strategic planning.

We’ve got a bunch of evaluators who do program evaluation; we’ve got a very small number of evaluators who do capacity-building evaluation. I’d like to see more interaction between the two, because I suspect their interaction could result in some methodology to figure out how to attribute program failure and success to program design and organizational capacity.

Also, a barrier to evaluation is that many organizations think it’s a very costly enterprise that’s going to take a long time. And we all know better. But perhaps the evaluation community can come up with a different, deeper set of products that can help organizations at different points in their lifecycle, and help organizations with varying amounts of money. This is not to be a substitute for full-blown, empirically anchored evaluation, but is to get organizations to think about measurement more effectively. Evaluators need to think about different levels of evaluation and ways to get organizations to think of evaluation as something that’s a constant benefit rather than an eventual trial.

Julia Coffman, Consultant, HFRP
M. Elena Lopez, Senior Consultant, HFRP


¹ Light, P. C. (2002). Pathways to nonprofit excellence. Washington, DC: Brookings Institution Press.


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project