



Casey Morrigan of the Foundation Consortium for School-Linked Services describes her organization's two-day meeting, which paired a roundtable dialogue between evaluators and funders with a discussion of issues raised in local program evaluations of some of California's comprehensive, integrated supports and services initiatives.

What do we know about the approaches and methods of evaluating comprehensive, integrated supports and services (CIS) for children and families? What lessons can be shared from statewide and local evaluations of these initiatives? What are the roles of evaluators and stakeholders in CIS evaluations? These were the questions discussed at a two-day meeting of evaluators, funders and practitioners sponsored by the Foundation Consortium for School-Linked Services in Burlingame, CA, on June 20–21, 1996. Day one of the meeting consisted of a roundtable dialogue between evaluators and funders; day two focused on issues raised in local program evaluations of some of California's CIS initiatives.

A “theory of change” approach to evaluating CIS guided the discussion on both days. As defined by Carol Weiss in a 1995 paper, this approach requires stakeholders to specify a set of propositions as to “why and how an initiative works.” The process involves determining long-term as well as intermediate outcomes, the types of activities that can make them happen, clear definitions and measures of activities and outcomes, the resources to support these activities, and individual and collective accountability.
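To make the parts of such a theory concrete, the sketch below lays out these components as a simple data structure. It is purely illustrative and not part of Weiss's framework or the roundtable discussion; the language (Python) and every name in it (TheoryOfChange, Activity, Outcome, and the individual fields) are assumptions made for this example.

    from dataclasses import dataclass, field
    from typing import List

    # Illustrative sketch only: Weiss (1995) describes these components in prose;
    # the structure and names below are invented for this example.

    @dataclass
    class Outcome:
        name: str
        measure: str        # clear definition of how the outcome is measured
        intermediate: bool  # True for an intermediate outcome, False for long-term

    @dataclass
    class Activity:
        name: str
        definition: str               # clear definition of the activity
        measures: List[str]           # how implementation of the activity is tracked
        resources: List[str]          # resources needed to support the activity
        accountable_party: str        # who is individually or collectively accountable
        intended_outcomes: List[str]  # names of outcomes the activity is meant to produce

    @dataclass
    class TheoryOfChange:
        proposition: str  # why and how the initiative is expected to work
        outcomes: List[Outcome] = field(default_factory=list)
        activities: List[Activity] = field(default_factory=list)

        def unsupported_outcomes(self) -> List[str]:
            """Outcomes that no listed activity is expected to produce."""
            covered = {name for a in self.activities for name in a.intended_outcomes}
            return [o.name for o in self.outcomes if o.name not in covered]

Writing the theory down this explicitly makes gaps visible, for example an intermediate outcome that no planned activity is expected to produce.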

Several challenges and lessons in evaluating CIS were highlighted:

  • Evaluations must be designed flexibly because the object of analysis is “a moving target.” CIS initiatives are complex and changing, so theories of change and evaluation designs must be dynamic. Initial indicators and typologies may lose relevance as a program evolves and should be dropped when they do; as new services are added to a CIS, new measurement indicators must be developed.

  • Stakeholders have different informational needs that require different types of evaluation designs. It would be unrealistic for a statewide, multi-site evaluation to develop theories of change at each site and evaluate the entire initiative against them all; at the same time, aggregate data are less useful than site-specific information to local sites seeking to improve their programs. As with the California Healthy Start initiative, resources must be allocated for both statewide and local evaluations.

  • Educating funders about what can realistically be expected from a CIS evaluation is an important task for evaluators. Foundations want to know the impact of their initiatives on the community, but they also need to consider the burden that multiple data collection efforts place on local sites, the need for flexibility in measurement, and their own theories of change.

Evaluators also need to reassess their relationships with various types of stakeholders, such as policymakers, programs, and communities, and to balance distance from and closeness to an evaluation. Evaluators at the roundtable described their roles in relation to stakeholders as “enabler,” “coach,” and “critical friend.” They seek to enlighten and persuade stakeholders on evaluation issues rather than impose evaluation designs. Doing so is not easy and requires both technical and interpersonal skills.

While different perspectives enriched the discussion, participants agreed that evaluation is a continuous process of developing and utilizing knowledge for program and policy improvement.

Further Reading

Wagner, M., & Golan, S. (1996, April). California's Healthy Start school-linked services initiative: Summary of evaluation findings. Menlo Park, CA: SRI International.

Weiss, C. H. (1995). Nothing as practical as good theory: Exploring theory-based evaluation for comprehensive community initiatives for children and families. In J. P. Connell et al. (Eds.), New approaches to evaluating community initiatives: Concepts, methods, and contexts. Washington, DC: The Aspen Institute.

The Foundation Consortium for School-Linked Services is a collaboration of 18 California-based foundations. Its mission is to work with the public and private sectors to improve the well-being of children, youth and families through the provision of comprehensive, community-based services.

Casey J. Morrigan
Senior Program Analyst
Foundation Consortium for School-Linked Services
1321 Garden Highway
Sacramento, CA 95833

