The After-School Corporation of New York (TASC) and Big Brothers Big Sisters (BBBS) are large out-of-school time organizations with a demonstrated track record of using evaluation for both accountability and learning. We asked four experts affiliated with these organizations to reflect on what they have learned about using evaluation for program improvement. The panel includes TASC president Lucy Friedman and TASC evaluator Elizabeth Reisner, a principal at Policy Studies Associates, Inc., as well as Rebecca Fain, director of agency development for BBBS of America, and BBBS evaluator Jean Grossman, senior vice president for research at Public/Private Ventures.

Lesson 1: Develop a Mindset in Favor of Evaluation for Program Improvement

Rebecca Fain: A true learning organization is driven by a desire for quality and continuous improvement. An “If it ain’t broke, don’t fix it” attitude needs to be converted to one of “How can we do what we do better?” The federated structure of Big Brothers Big Sisters of America (BBBSA) is a key component of our identity as a learning organization. We have a piloting process for new ideas, directions, and program improvements that enables us to build agency support for change. New ideas are rigorously tested to determine the what and how of achieving success and the degree to which that success can be replicated. This approach allows us to prove the worth of an idea before rolling it out to all agencies. Agencies hear directly from those in the field why and how a measure must be taken to achieve outcomes before they have to implement it.

Jean Grossman: A promising strategy for developing an evaluation mindset is to be sure that staff members understand the value of evaluation. Don’t go from zero to 60. Start small and start with things that are useful to staff. For example, collect basic participant satisfaction information that staff can discuss in meetings. Use evaluation results as internal benchmarks, comparing results from one quarter to the next and reflecting on the differences for program improvement. Then, gradually expand your questions to address how participation affects outcomes.

The organizations with which Public/Private Ventures (P/PV) has had successful evaluation partnerships have been more mature organizations like BBBS. They have extra staff for non-direct service work, such as writing proposals, thinking through programs, attending conferences, participating in evaluation meetings, and getting other people to collect and consider data. This extra capacity enables organizations to absorb evaluation data thoughtfully. Another factor critical to developing an evaluation mindset is organization members’ belief that evaluation is useful. An organization’s members should think, “I can do a better job if I know what’s going on.”

Elizabeth Reisner: TASC’s commitment to evaluation stemmed from its understanding of evaluation as a source of feedback for program partners and public agencies, and for program improvement. TASC and its board made it clear to Policy Studies Associates (PSA) from the beginning that they were interested in a lot of feedback, and the more reporting we at PSA did, the more the demand grew. TASC made sure the community-based organizations that applied for grants understood what they were getting into in terms of evaluation requirements and how the evaluation would benefit them. And then it made sure that grantees and project-level staff received regular feedback from the evaluation.

Lucy Friedman: Lead organizations need to play a cheerleading role in order to promote an evaluation mindset. When we at TASC started evaluating our programs in 1998—when no one was evaluating after school—we told organization administrators that running the programs wasn’t just about serving the 300 kids in their program, but that they were pioneers in a larger national effort. It was a mobilizing message that was critical to ensuring participation and cooperation with PSA’s evaluation. And it worked!

Lesson 2: Build Evaluation Into Program Design

Lucy Friedman: Building evaluation into TASC from the outset served to strengthen the TASC model. We understood that if we wanted to learn something across all the sites, we needed a program model that had characteristics common across sites, but also was sufficiently flexible to respond to the needs of individual schools and communities. So we developed the set of core elements that have become the basis for the entire TASC model. If we hadn’t been thinking about evaluation at the very beginning, we might not have developed such a strong program model. We made it clear to grantees from day one that evaluation was an integral part of receiving TASC funds, so grantees were never taken by surprise when PSA asked for their cooperation with the evaluation.

Rebecca Fain: Out-of-school time organizations should build evaluation into all aspects of their programs, including securing funding to cover its costs. Program designers and leaders need to understand the importance of demonstrating results for ongoing support and funding, for staff reinforcement, and for engaging other clients, families, and stakeholders.

Jean Grossman: Keep in mind that there are opportunity costs in conducting evaluation, especially for small programs that could be trading off service delivery for evaluation. These programs need some staffing breathing room before they can really tackle evaluation questions. I recommend a phased approach in which evaluation starts small and grows with the program. But even little programs can and should collect some basic implementation data that may feed into a larger evaluation and can be used to strengthen the program. My advice is to collect just a few things: it’s much better to get three high-quality measures than 10 to 15 measures that mean nothing. Participant attendance, for instance, is an important piece of data all programs should collect.

Lesson 3: Manage Data Burden With Staffing, Expectations, and Relationships

Elizabeth Reisner: For large-scale evaluations, to the extent possible, place the burden of data collection on the external evaluators. Also, in some cases, other agencies have already collected data that can inform a program’s evaluation, so it is important to develop good relationships with agencies that control the data you need.

Rebecca Fain: The performance metrics that BBBSA created have led to a culture of performance management, which has become critical for managing data for program improvement. BBBSA employees have a clear idea of what they need to do, and at what level, to achieve outcomes. Agencies know what data points to look at regularly to determine whether they are on track or adjustments are needed.

Lesson 4: Be Intentional About the Logic of Your Program

Jean Grossman: All program operators should think through, step-by-step, how they expect participants to be changed by their programs. Then they can figure out a few things to measure that will help them assess if these changes occur. I like to collect data to capture three phases of a program: early, middle, and late. Early data tell something about participant characteristics. Middle phase data capture something about dosage and about whether the program is reaching its intended participants. Late phase data offer information about in-program changes or outcomes.

Lucy Friedman: Having a core set of elements across all TASC programs helped PSA build consistency into the evaluation across sites. Even with flexibility within programs, TASC’s basic theory of change¹ does not vary from site to site, so we can use it as a framework for evaluation.

Lesson 5: Disseminate Evaluation Results Strategically

Elizabeth Reisner: Incorporate strategic communications into evaluation and disseminate evaluation results both within an organization and to the public.

Jean Grossman: Sharing results broadly within the organization is a key to promoting high quality implementation.

Rebecca Fain: I make a distinction between internal and external dissemination. BBBSA’s appreciation for the importance of program evaluation began, in part, with the release of the first P/PV evaluation report both inside and outside of the organization.² Our funders, who supplied an external perspective, rewarded our evaluation efforts; their response reinforced an internal attitude of appreciation for evaluation. Internally, the taste of impact data made us understand its tremendous value and the need for more. Sharing the results within BBBSA created a compelling case for using data for continuous improvement.

Lucy Friedman: To ensure the success of after-school programs nationwide, we must get key evaluation messages to policymakers. From its inception, the mission of TASC has been to shape public policy. We knew in the beginning that if we wanted to get the attention of policymakers, we would need data. So from the very beginning we partnered with PSA to ensure that we got the data we needed. We had a vision that the TASC effort would grow beyond providing after-school programs in specific communities to become part of a new thrust to develop and expand high-quality after-school programs nationwide.

For more information about the BBBS and TASC evaluations, visit the HFRP Out-of-School Time Program Evaluation Database. For both summaries and full text of the most recent interim reports of the TASC evaluation, visit www.policystudies.com.

¹ A program’s theory of change explicitly articulates how the activities provided lead to the desired results by linking service receipt to intermediate changes (such as changes in attitudes or in-program behaviors) and, in turn, to short- and longer-term outcomes in participants.
² Grossman, J. B., & Tierney, J. P. (1998). Does mentoring work? An impact study of the Big Brothers Big Sisters program. Evaluation Review, 22(3), 402–425.

Priscilla M. D. Little, Project Manager, HFRP

