Ann Dykman of MPR Associates describes how an organization's culture and mindset shape its success in using evaluation for continuous improvement.

Evaluation is essential to any organization’s continuous improvement, but eliciting the full cooperation of staff who are unfamiliar with the concept can require more than simply providing top-notch tools and training. Before getting down to the nitty-gritty of performance measures and data collection, evaluators may first have to help create a workplace culture that values inquiry and critical self-review.

In effect, they may have to be cheerleaders for the process itself.

This is a key lesson that evaluation consultants Georgiana Hernández and Mary Visher¹ learned from their involvement in the two-year Working on Workforce Development (WOW) project, sponsored by the James Irvine Foundation in 1998. The San Francisco-based foundation theorized that if nonprofit organizations knew how to gather and use performance data, their services would improve. WOW was created to test that theory by sending expert evaluators to help six Bay Area employment and training organizations refine their performance measurement systems.

Together with Irvine, the evaluators crafted a training plan around several performance measurement workshops and conducted follow-up visits to help agency staff apply the concepts within their own workplaces. Soon after the first workshop, however, Hernández and Visher sensed a challenge to their well-laid plans. Not only were the participating agencies at different levels of readiness for evaluation, they also had varying views about whether the effort was even worthwhile. And, as it happened, motivation had the most impact on an agency’s progress.

“It takes a conscious effort to foster agency-wide shifts in mindset, norms, and practices,” wrote Hernández and Visher in Creating a Culture of Inquiry, the Irvine monograph that details the WOW project. “It takes organizational changes in belief systems about the value of grounding decisions that affect clients in hard data.”

In the end, the experiment successfully increased the evaluation capacity of most of the nonprofits, and both the consultants and the foundation felt it yielded valuable lessons for other funders, evaluators, and technical assistance providers.

Lesson One – Take Time to Build Relationships and Trust
Hernández and Visher visited each agency twice in the year before the training was set to begin. They interviewed managers, collected data where possible, toured the facilities, and talked to other staff to aid their understanding of the agencies’ capacity for self-evaluation. These initial assessment visits gave the evaluators time to build rapport with participants and provided a baseline against which to measure progress. It was during this stage that Hernández and Visher first realized their training plan had to change.

“We thought we could go in cold,” without much understanding of each agency’s culture, Visher recalls. “We learned that it takes time for an organization’s employees to trust you.” For example, agency employees were wary of opening their performance up to scrutiny, worried that doing so might affect their relationship with the foundation. “Early on we had to assure them that the outcome data reported in their performance measurement systems would not be used as a basis for funding,” Visher says.

In retrospect, both the consultants and the foundation also questioned whether offering grants was a good idea, given that the most important determinant of commitment and progress turned out to be an agency’s enthusiasm for the project rather than grant size, organization size, stability, finances, or reputation.

Lesson Two – Involve More Staff Throughout the Process
The foundation and the consultants had already decided on the structure of the technical assistance before they first visited the project participants. Soliciting more upfront involvement from the participants would have moved the project along much faster, say Hernández and Visher. Instead, it took many months to get everyone on board.

The WOW organizations that were most successful in developing their performance measurement plans seemed to be those that rallied the broadest participation of staff, while keeping midlevel and top managers active throughout the process.

Lesson Three – Clarify Roles, Responsibilities, and Expectations
“We should have laid our cards out from the beginning and scaled back on the more grandiose, overreaching goals [such as expecting to see improved performance at the end of two years],” says Martha Campbell, Irvine’s director of evaluation. “Instead, what you need are shorter-term, mutual expectations.”

Both the foundation and the consultants suggest that these steps be followed in any performance measurement project:

  • Ask each organization to set up a project team, with a lead person, to spearhead the planning process.
  • Develop a joint memorandum of understanding that articulates the project’s purpose, the expected “deliverables,” the timeline and time requirements, and the roles and responsibilities of designated agency staff and technical assistance providers.
  • Make sure everyone agrees on the level of staff time to be invested.

Hernández stresses the need for “constant reality checks” that restrain overly enthusiastic agency staff who want to take on too much. For example, three directors decided to develop plans that encompassed not only their employment and training services but their entire agencies, including facilities, administration and fundraising, and, for those who had them, for-profit business divisions. Others wanted to document and measure every single action case managers took every day.

“There can be an excitement about what can be measured, but it can be exhausting to do it all,” Hernández says. “In our follow-up conversations we learned that some of the agencies have dropped half of the indicators they came up with.”

Lesson Four – Create a Schedule That Works
Because of potential staff turnover and the differing levels of commitment among agencies, the evaluation consultants found they got better results by working intensively with participants over shorter periods rather than spreading less frequent visits across the year. But this is not a rigid rule. Ultimately, the organization trying to measure and improve its performance should set a pace it is comfortable with, then develop a schedule and commit to it.

Indicators of Inquiry-Minded Cultures

  • Organizations that create new lines of communication across and within their divisions.
  • Organizations that move from using performance measurement in one department or division to using it throughout.
  • Organizations that revise their performance measurement plan to adapt to new information, conditions, ideas, or policies.
  • Organizations that use performance findings as a springboard for discussions about their values, mission, and hidden assumptions in their practices.

Source: Hernández, G., & Visher, M. G. (2001). Creating a culture of inquiry. San Francisco: The James Irvine Foundation. Available at www.irvine.org.

Lesson Five – Nurture Deep Cultural Changes
To their surprise, Visher and Hernández found they had to spend a good deal of time nurturing a change in organizational mindset around the value of self-evaluation even as they were helping organizations improve their technical capacity to use data. They had to get agency staff interested in thinking about how to quantify their day-to-day work and how to ground programmatic decisions in hard data. This was a new approach for many, and for some it felt quite threatening.

Visher suggests an initial brainstorming session that teases out the types of work staff members do each day with clients. These are the practices that will be linked with desired outcomes. Once staff begin to see their work being valued, Visher says, the change in mindset is so tangible “you can almost hear the click.” As the evaluation consultants continued to work with the agencies on their performance measurement systems, they began to identify these changes in mindset and to foster them (see box).

Continuous improvement depends on the adoption of performance measurement as a core value, says Hernández. “I believe wholeheartedly that the process helps staff chart and then move along the same path within the organization, so in the end, managers don’t have to spend so much time handling crises and putting out fires.”

Ann Dykman
Senior Associate for Communications
MPR Associates, Inc.
77 Warren St. #5
New York, NY 10007
Tel: 212-608-5567
Fax: 212-608-2557
Email: adykman@mprinc.com


¹ Hernández is president of Hernández & Associates and Visher is associate director of program evaluation and planning at MPR Associates.

