



The following are excerpts from an evaluation panel at the conference, “Nurturing Strong Full Service Schools: Building Bridges with Communities,” that took place on May 20, 2002. It was the fifth in a series of national conferences about full service schools organized by Margot Welch and the Collaborative for Integrated School Services at the Harvard Graduate School of Education. Panelists shared their evaluation findings and lessons learned.

Joy Dryfoos, Moderator
Writer and Researcher, New York, New York

The Collaborative has invited four of the country’s leading researchers, each of whom is looking intensively at different components of community schools, particularly after school programs and family resource centers. They have been asked to comment on the state of the art in school-community partnership programs and address the questions, “What works and what doesn’t? How can practitioners approach evaluating their own programs?”

Robert J. Illback, Executive Director
R.E.A.C.H., Louisville, Kentucky

In evaluating the Kentucky Family Resource and Youth Services Centers, we learned the importance of getting people to focus on self-evaluation and program improvement, as opposed to merely gathering data for self-justification or accountability purposes. One of the challenges evaluators face is that in the process of “selling” the initial program concept, large and hard-to-measure goals are often specified to show the importance of the intervention. When program people feel pressure to quickly “prove” that the program works (“because the government is spending all this money”), this can lead to an unrealistic and infeasible evaluation strategy. The net result can be a focus on measures that no one thinks are sensitive to what the program is actually doing, with the potential to grossly misrepresent the program’s effectiveness. In some ways, you can set yourself up for failure.

Nonetheless, it is important at the outset to consider what information policymakers deem important, because evaluators have a responsibility to funders to report whether the program is moving in the intended direction. But it is also important to stave off the summative questions for as long as possible. Oftentimes we sell programs based on big concepts like “we want all children to learn at high levels” or “we’re going to improve family life or transform communities.” Even if we were to accomplish these things, we couldn’t prove they had anything to do with the program.

As evaluators, our challenge is to focus program planners on evaluating more proximal outcomes. So, rather than talk about those big issues, let’s reframe our focus to consider what we expect to happen in the classroom when a child comes to school, how we will know whether change has occurred, and how we can measure it reliably.

My tips for designing your evaluation are:

  • Invest upfront in understanding the logic of your program. A successful evaluation needs consensus at the beginning about what you’re trying to accomplish. Through logic modeling, you can arrive at consensus among the various stakeholders (a minimal sketch follows this list).
  • Be aware of the political context and the issues at play. Programs are changing and evolving, the climate is always shifting, and legislators and administrators come and go. Recognize that there needs to be a core consensus about what information is being sought from the evaluation, and that this can change over time.
  • Be realistic. Many researchers and psychologists are enamored with the idea that we can have controlled experiments. In community-school partnership evaluations, you need to drop these pretenses. The evaluation is about helping people gather data so that they can achieve understanding.
  • Manage information wisely. It’s not about the right software program—it’s about conceptualizing the information at the outset.
  • Have clear questions. We have lots of questions, but we need to hammer down what’s doable.
  • Look at proximal, not distant, outcomes.
  • Keep your eyes on the prize. You have a limited amount of evaluation capital to spend so focus on core questions.
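
To make the first and sixth tips concrete, here is a minimal sketch of what a logic model might look like once it is written down. The program, activities, outcomes, and measurement instruments below are invented for illustration; they stand in for whatever a real program’s stakeholders actually agree on. The sketch shows how a logic model steers evaluation questions toward proximal outcomes that can be measured reliably, rather than distal goals used to sell the program.

# A minimal, hypothetical logic model for an after school program.
# All names, activities, and instruments are illustrative assumptions.

logic_model = {
    "inputs": ["site coordinator", "certified tutors", "grant funding"],
    "activities": ["daily homework help", "family resource center referrals"],
    "outputs": ["hours of tutoring delivered", "number of families served"],
    "proximal_outcomes": {
        # Near-term changes the program can plausibly influence and measure reliably
        "homework completion rate": "teacher-reported weekly log",
        "class attendance": "school attendance records",
        "on-task behavior": "classroom observation checklist",
    },
    "distal_outcomes": {
        # Long-range goals often used to "sell" the program; hard to attribute to it
        "learning at high levels": "state assessment scores",
        "transformed communities": "not directly measurable by the program",
    },
}

def evaluation_questions(model):
    """List answerable evaluation questions tied to the proximal outcomes."""
    return [
        f"Did {outcome} change, as measured by {measure}?"
        for outcome, measure in model["proximal_outcomes"].items()
    ]

for question in evaluation_questions(logic_model):
    print(question)

Writing the model down this way, even informally, gives stakeholders something concrete to agree or disagree with before any data are collected, which is the consensus-building work the first tip describes.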

Related Resources


Kalafat, J., & Illback, R. J. (1999, December). Evaluation of Kentucky’s School-Based Family Resource and Youth Services Center, part 1: Program design, evaluation conceptualization, and implementation evaluation. Louisville, KY: R.E.A.C.H. of Louisville, Inc.

Harvard Family Research Project’s Out-of-School Time Evaluation Database. 

Grossman, J. B., et al. (2002, June). Multiple choices after school: Findings from the Extended-Service Schools Initiative. Philadelphia: Public/Private Ventures. Available at www.ppv.org/ppv/youth/youth_publications.asp?section_id=8.

University of Minnesota, College of Education and Human Development, Center for Applied Research and Educational Improvement. (2001). Use of continuous improvement and evaluation in before- and after-school programs: Final report. Minneapolis, MN: Author. Available at www.education.umn.edu/CAREI/Reports/summary.html#MF-ContImprove.

Jean Baldwin Grossman
Public/Private Ventures, Philadelphia, Pennsylvania

Public/Private Ventures and the Manpower Demonstration Research Corporation have just completed an evaluation of the Extended-Service Schools Initiative, which began in 1997. Twenty communities that participated in this initiative adopted one of four nationally recognized extended-service school models: Beacon Schools, Bridges to Success, Community Schools, or the West Philadelphia Improvement Corps.

We found that these after school programs could be put in place quickly and that they matured over the three years. The programs became much better at figuring out what their core goals were and how to develop activities to meet them. They got better at strategically including children and, by the second year, had become accepted partners in the schools.

The after school programs provided developmental supports for children and youth. What mattered for the strength and quality of the program was not the topic or activity or the skill taught; what mattered was the ability of staff to engage and stretch the children.

It takes three or four semesters of participation before you see program effects, especially academic ones. Staff had to make continuous efforts to attract and hold onto older children, starting with those in the fourth and fifth grades. The choices that programs make affect who shows up. Older children and youth need activities of their own that interest them.

Children and youth who participated in these programs were likely to show improved school attitudes and behaviors and to stay out of trouble. Children were more likely to handle anger appropriately, to pay attention in class, and to be proud of their school. They were also less likely to start skipping school or to start drinking alcohol. However, we cannot know with certainty whether these effects were due to the program or whether the children who were better behaved and doing better academically were the ones who participated in the first place.
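
That caveat about selection is worth pausing on. The sketch below is not part of the Public/Private Ventures analysis; it is a small, hypothetical simulation with invented numbers, showing how a naive comparison of participants and non-participants can overstate a program’s effect when better-behaved children are more likely to enroll in the first place.

# Hypothetical simulation of selection bias in an after school program evaluation.
# All numbers are invented; this is not drawn from the study discussed above.
import math
import random

random.seed(0)
TRUE_PROGRAM_EFFECT = 0.5  # assumed true effect on a behavior/attitude score

records = []
for _ in range(10_000):
    baseline = random.gauss(0, 1)  # pre-existing behavior and attitudes
    # Children with better baseline behavior are more likely to enroll.
    enrolls = random.random() < 1 / (1 + math.exp(-baseline))
    outcome = baseline + (TRUE_PROGRAM_EFFECT if enrolls else 0.0) + random.gauss(0, 1)
    records.append((enrolls, outcome))

participants = [o for e, o in records if e]
nonparticipants = [o for e, o in records if not e]
naive_gap = (sum(participants) / len(participants)
             - sum(nonparticipants) / len(nonparticipants))

print(f"true program effect: {TRUE_PROGRAM_EFFECT:.2f}")
print(f"naive participant vs. non-participant gap: {naive_gap:.2f}")
# The naive gap exceeds the true effect because baseline behavior drives both
# enrollment and outcomes, which is exactly the uncertainty described above.

Under these assumptions the naive gap comes out well above the true effect, which is why the panelists keep returning to realistic designs, proximal measures, and modest claims.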

Mark Dynarski, Senior Researcher
Mathematica Policy Research, Princeton, New Jersey

When full-service school programs are funded by short-term grants, an evaluation’s results can be fed back to the community to build support. The school district listens to its customers, the parents; community organizations have local boards and listen to residents. These parents and community groups are the ultimate clients who can push for the stability of grant-funded programs. If you don’t have a base of support among these ultimate clients, programs will be short-lived.

The risk in evaluation is that you might not show improvement early on and your client base erodes, or that you will be asked to show improvement too early, with the same result. My advice is not to overstate the potential gains of the partnership in the work plan. Statements of possible outcomes might sound tremendous, but they set expectations the program cannot meet.

For example, one health program set a goal of a 50% decrease in the infant mortality rate. The only place you see an impact of that size is in developing countries, where the water system is so bad that any improvement to it reduces diarrhea and other diseases and contributes to dramatic decreases in infant mortality. This is not the case in the U.S. The health program did not meet its goal; it was too large an outcome to go after.

Set realistic and sensible goals, and acknowledge that impacts will be incremental. Don’t aim for the largest effect you’ve seen in the research; expect outcomes to be average rather than extreme.

Heather Weiss, Director
Harvard Family Research Project, Cambridge, Massachusetts

We need to move away from a model where we do an evaluation, use it with our funders if it shows success, and, if it doesn’t, put it in a drawer and hope that nobody asks for it. We need to move from evaluation as a one-time effort to evaluation as a continuous learning process. Programs that get ahead of the game will get and use data in an ongoing way to show that they are delivering value to their community over time.

Evaluators and practitioners need to think in terms of continuous improvement and building a learning system for the field of full service schools. Our next steps should focus on framing the full service school research and evaluation agenda to be pursued individually and collectively. Evaluators and other stakeholders have to tap and deliver on what people want to know. As we answer questions, new ones open up for inquiry; as we accumulate knowledge, we begin to connect the information to deepen our understanding, and also to identify the gaps that need to be investigated in future research and evaluation. For example, we have to better understand and demonstrate the direct or indirect relationship between after school programs or support services for children and families and educational outcomes.

As part of the learning system, it is important to build a knowledge base that community and school programs can get access to and contribute to. There have been very limited investments in evaluation and knowledge development, and even more limited investments for getting evaluations broadly disseminated so that people can get their hands on them and discuss their implications.

In addition, there’s a growing demand for evaluator-practitioner forums that promote cross-sector dialogue and learning. The time has come for funders to invest in virtual and in-person meetings where evaluators talk with practitioners about the evaluation findings and their implications for the continuous improvement of programs and for learning and field building.

This is what it means to build a learning system for a field. I encourage the dialogue to continue about what this system should look like for the field of full service schools.

Past articles on school-linked services can be found in The Evaluation Exchange, Vol. III, No. 2, 1997.

