


This special report offers expert commentary on the implications of When Schools Stay Open Late: The National Evaluation of the 21st Century Community Learning Centers Program, First Year Findings1 for future evaluation and research. It includes a short summary of evaluation findings from that report (box below), and we encourage our readers to read the full report, available at www.ed.gov/pubs/21cent/firstyear.

On February 3, 2003, the U.S. Department of Education released the first year findings from the national 21st Century Community Learning Centers (21st CCLC) program evaluation. Conducted by Mathematica Policy Research, Inc., the evaluation examined the characteristics and outcomes of typical 21st CCLC programs. Citing the “disappointing initial findings from a rigorous evaluation of the 21st Century Community Learning Centers program,” the President’s Fiscal Year 2004 Education Budget Summary and Background Information, also released February 3, decreased the funding request for the 21st CCLC program by 40%.2 According to the budget summary, “the evaluation indicates that the centers funded in the program’s first three years are not providing substantial academic content and do not appear to have a positive impact on student behavior.”3

The report’s release, coupled with the decision to cut the 21st CCLC program by 40%, has sparked fervent debate among researchers, evaluators, advocates, practitioners, and others about the merits of evaluating first-year programs, the soundness of making generalized statements about program effectiveness on the basis of a single evaluation report, and the “fairness” of holding the original set of 21st CCLC grantees to the new standards of scientifically based research (SBR) set forth in the No Child Left Behind Act a full three years after many of the evaluated programs began operating. In response to this rising debate, HFRP sought commentary from seven experts in the field to help shed light on the following question: Given the recent push for scientifically based research, coupled with the release of the first year evaluation findings from the 21st CCLC programs, where do we go from here in using research and evaluation to support the development of high quality out-of-school time programs?

The National Evaluation of the 21st-Century Community Learning Centers Program, First Year Findings*


Research Design
The quasi-experimental research design included a nationally representative sample of 4,400 middle school students, made up of 21st CCLC participants and matched comparison students. The experimental component of the study included 1,000 students from seven elementary school districts, randomly assigned to either a treatment or a control group. Baseline and follow-up data were collected in the 2000–2001 school year.

Selected Implementation Findings
Attendance – Participants attended the programs an average of less than two days per week. Many centers’ policies allowed students to participate on a drop-in basis.
Program Staff – About one-third of coordinators and three-fifths of other staff members were teachers. Survey data showed that middle school teachers believed that, as a result of working with students at the centers, they improved their teaching skills and had better relationships with some students.
Collaboration – In general, centers contracted with community agencies to provide specific after school sessions rather than partnering with them through shared governance or combined operations.

Selected Impact Findings
Academic – Overall, programs did not tend to have any effect on homework completion, grades, or test scores. However, teachers reported slightly increased classroom effort by middle school participants compared to nonparticipants, and African-American and Hispanic participants showed some significant academic gains relative to their nonparticipant peers, especially in math achievement.
Supervision – The program changed the type of care students received, shifting supervision from parents and older siblings to program staff.
Safety – Participants’ feelings of safety did not differ significantly from those of nonparticipants.
Parent Involvement – Parents of middle school participants were more likely to be involved in their child’s schooling than parents of nonparticipants. Centers serving elementary students significantly increased the percentage of parents who helped their child with homework at least three times in the last week and increased parents’ involvement in after school events.
Youth Development – Programs had little influence on developmental outcomes (e.g., students’ ability to plan, set goals, or work with a team).

* Presentation of these summary findings is based solely on information from the first year report and does not reflect HFRP’s interpretation or endorsement of Mathematica’s methods or presentation of results.

 

Steve Gunderson
Manager, Washington, DC Office, The Greystone Group, Inc.
Former Congressman from Wisconsin

In 1996, as part of the Congressional reauthorization of the Elementary and Secondary Education Act, I introduced legislation to create “community learning centers.” Our goal was to find ways to more efficiently use school resources, especially in rural and inner-city areas, for all citizens all year. The Clinton administration strategically directed this broad language to create today’s after school program, funded at $1 billion annually in FY 2003.

Under the No Child Left Behind Act, the program transitioned to a state grant program. Now the administration seeks to apply new standards to all federally funded programs, standards I call the “three A’s”—academics, access, and accountability. We would be wise to respond positively to this new focus. After school programming is an important and growing component in the development of today’s youth. Yet we need to target these programs to those most in need, in ways that will enhance students’ academic progress and ensure that limited public dollars meet the test of accountability.

New research and evaluation are desperately needed to improve federal support for this program. Certainly one study (Mathematica’s) does not justify ending the program. But with limited resources and the new focus on academics, we must learn what works—especially for at-risk students. Then we must restructure our programs to best achieve this goal. So let’s get on with improving a good idea rather than defending the status quo. To do anything less is to contribute to the death of the most significant expansion in federal support for any K–12 education program in recent years.

Kathleen McCartney
Professor, Harvard University Graduate School of Education, Cambridge, Massachusetts

Hard-won lessons of evaluation research4 have been lost in the administration’s response to the Mathematica evaluation of the 21st CCLC program. To evaluate the administration’s response, ask yourself these five questions.

1. Were the findings used as part of an ongoing innovation cycle? The answer is clearly no. Many child advocates had hoped that this evaluation would be used to promote continuous improvement. Instead, the administration has acted based on first year data, collected during the implementation phase of the study.

2. How were the effect size data interpreted? The Mathematica researchers highlight in their executive summary that the small effect sizes were most likely due to the low attendance rates, the length of the follow-up period, and the lack of sustained, substantive academic support in most programs. Although it is easy to dismiss the effects as small, that conclusion is no doubt premature, especially given that this is an ongoing evaluation.

3. Were the findings from the Mathematica study synthesized with existing data on after school programs in order to make an informed decision? No, again. Instead, the administration embraced the Mathematica report as providing the only relevant information with which to inform funding considerations.

4. Did the administration have fair and reasonable scientific expectations? Scholars agree that no one should expect the 21st CCLC program evaluation to yield short-term effects on test scores, echoing Zigler’s early warnings concerning Head Start.5 By what criteria are the findings “disappointing”?

5. Were the findings subjected to professional scrutiny? Given that the administration’s recommendations coincided with the release of the report, the answer is no. This is the most troubling aspect of the administration’s response. Policy recommendations should not precede reactions from the scientific community.

Accountability efforts and SBR can be used either to generate knowledge that informs effective practices or to serve as a political lever to cut programs and expenditures on child and family services. Here we have a sad example of the latter—another case of death by evaluation.6

Karen J. Pittman
Executive Director, Forum for Youth Investment, Baltimore, Maryland

The administration’s proposed cut to the 21st CCLC budget is not surprising. It is a rare elected official who expands rather than downsizes the pet programs of a predecessor from the opposing party. Dozens of social programs will suffer cutbacks in the next budget. What is surprising is that the administration has broken its own rules for bringing science into policy discussions. By announcing the cuts just as it released the report, the administration effectively cut off opportunities for the research and policy community to conduct a “rigorous, objective and scientific review” (Elementary and Secondary Education Act, Title IV), discuss the findings, and debate responses in light of findings from other equally scientific studies.

Research should play a more central role in decisions to expand, redefine, or reduce programs. When used correctly, it can be a powerful counterweight to limit the big pendulum swings frequently associated with popular programs, to accelerate the growth of effective programs, and even to curtail the expansion of popular but ineffective programs. The Mathematica report includes promising findings and valuable lessons that can inform both practice and policy. This and other studies should serve as platforms for much needed conversations about how to augment program quality and encourage longer and more intense participation. By using the study to justify cuts, the administration has curtailed conversation about a range of responsible strategies for improving the program in light of these and other findings. Our concern would be the same had the proposed program budget been doubled.

For a more detailed version of this commentary, visit the Forum for Youth Investment website at www.forumforyouthinvestment.org/resspeech.htm.

Mindy DiSalvo
Program Director, Family Technology Resource Center, Decatur, Georgia

Because we want to know if our after school programs are the best that they can be, we eagerly welcomed the opportunity to be a part of the national evaluation of the 21st CCLC program. From the outset of the evaluation we were candid about our strengths and weaknesses, providing honest information about a program in its infancy. As a participant in the evaluation, I have two concerns about how the evaluation results have been emphasized and used.

First, there were very positive findings in the 21st CCLC evaluation—findings that could serve as a road map to improve existing and new programs, not a reason to close their doors. Based on our concurrent evaluation data, we expanded our curriculum, developed a parent/teacher/student homework completion policy, translated materials into two languages, extended hours of operation, and hired a nurse. We learned how to make our program better by using data. Therefore, it seems that the emphasis should be on what we can learn from the new evaluation report, not only on what it told us about academic impact.

Second, student achievement isn’t, and never will be, solely a result of after school programs. Student achievement isn’t a result of textbooks either, but we spend a fortune on them and no one is talking about cutting them from a budget! Improved student achievement is the result of a combination of components in a child’s life, including how children spend their nonschool hours. Before student achievement can become a priority for many of our after school programs, a safe place with a caring adult, friends, a healthy snack, and a promise of security must come first.

Our metropolitan Atlanta centers provide safe, high quality after school programs where children in over 10,000 families have developed both academic and social skills. Our evaluations, both internal and external, show increases in student and family participation, parent involvement, perceived feelings of safety in the school and community, student attendance, student behavior, and student achievement. It is these victories we should celebrate. We should place the new 21st CCLC report in the larger context of successful after school programs nationwide and continue efforts to learn what works.

Tiffany Berry
External Evaluator, LA’s BEST, Los Angeles, California

LA’s BEST uses evaluation data by transforming findings about program outcomes into organizational tools for program improvement. Since the program’s inception in 1988, LA’s BEST has placed a high priority on evaluation, and we encourage feedback from the program’s diverse stakeholders. Program quality has been monitored through internal sources (e.g., random site visits by the board of directors, site activity logs, and opportunities for communication between field and management staff), as well as through data from external sources (e.g., the Center for the Study of Evaluation at UCLA). These data sources have yielded valuable insights, which have been fed back into program operations.

One of the most robust findings about the LA’s BEST program relates to the duration and intensity of participation. Our evaluation reports indicate that, compared with nonparticipants, LA’s BEST participants have fewer absences from their regular school, higher achievement on standardized tests in mathematics, reading, and language arts, and higher rates of redesignation to English proficiency.7 In addition, we have found that the relationship between participation intensity in one academic year and academic achievement was mediated by regular school attendance. This suggests that participating in LA’s BEST resulted in better school attendance, which in turn related to higher academic achievement.
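To make the mediation finding above concrete, here is a minimal sketch, on simulated data, of the kind of regression logic (in the spirit of a Baron and Kenny–style check) behind a statement such as “the relationship between participation intensity and achievement was mediated by school attendance.” The variable names, effect sizes, and the use of Python with statsmodels are illustrative assumptions, not a description of the actual LA’s BEST analysis.

```python
# Toy mediation check: intensity -> attendance -> achievement (simulated data).
# Illustrative sketch only; NOT the LA's BEST evaluation model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500

intensity = rng.normal(size=n)                       # days/week of program participation (hypothetical)
attendance = 0.5 * intensity + rng.normal(size=n)    # regular-school attendance
achievement = 0.4 * attendance + rng.normal(size=n)  # standardized test outcome

# Path a: does participation intensity predict school attendance?
path_a = sm.OLS(attendance, sm.add_constant(intensity)).fit()

# Paths b and c': does attendance predict achievement, controlling for intensity?
X = sm.add_constant(np.column_stack([intensity, attendance]))
path_bc = sm.OLS(achievement, X).fit()

# If intensity's coefficient is near zero once attendance is included (c' ~ 0)
# while attendance's coefficient is positive, attendance mediates the relationship.
print("intensity -> attendance:", path_a.params)
print("achievement ~ intensity + attendance:", path_bc.params)
```

In this toy setup, intensity’s direct coefficient shrinks toward zero once attendance enters the model, which is the pattern the finding describes: more intense participation leads to better school attendance, which in turn relates to higher achievement.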

The implications of these findings are important—provide a stimulating environment that makes children want to attend, and then keep it fresh and exciting to keep them coming back. The challenge is to create a cognitively stimulating environment that enhances the curriculum of the regular school day, cultivates children’s creativity, and entices all children to return every day. One way LA’s BEST meets this challenge is by using evaluation data to help identify the educational tools that field staff need to develop programs around the personal interests, individual talents, and academic needs of all children. The administration could follow the example of how LA’s BEST uses its evaluation data to improve, rather than cut funding for, its programs.

James P. Connell
President, Institute for Research and Reform in Education, Toms River, New Jersey

Common sense tells us that public investment in programs serving youth should start with a research-based rationale, or theory of change. This theory should tell us how and why the proposed activities, in this case after school programs, can reasonably be expected to produce the designated academic and social outcomes.

In the absence of alignment between program activities and expected outcomes, the failure of the 21st CCLC program to produce its desired outcomes was virtually preordained. Two remedies present themselves: start with the outcomes you want and change program activities to those with a reasonable shot at achieving the outcomes, or start with the activities you have and adjust your expectations to outcomes they can achieve.

Either course makes some sense. The 21st CCLC evaluators call for remedy number one—enriching after school programs with more research-based activities tied to the desired outcomes. Many commentaries on the evaluation make energetic pleas for remedy number two—holding programs accountable simply for providing positive activities for young people while many of their parents are at work.8

The following steps could lead to better alignment between after school programs, academic outcomes, and evaluation: (1) develop educational and recreational activities to help students meet a small number of broad academic standards that reflect the schools’ goals for their students, (2) give staff the resources to actively engage young people in these activities in different ways, and (3) assess the quality of implementation of these activities, their intended outcomes, and the connection between the two. We have seen such a theory of change approach help bring both realism and accountability to the work of changing public education. We expect it could do the same for after school programming.

Jacquelynne S. Eccles
McKeachie Collegiate Professor of Psychology, Women’s Studies and Education, University of Michigan

As Chairperson of the National Research Council (NRC) committee that produced Community Programs to Promote Youth Development,9 I offer these comments out of concern over the administration’s decision to cut the funds for the 21st CCLC program based on one evaluation report. This seems a very strange decision for an administration that stresses both the need for evidence-based practice and the importance of supporting healthy adolescent development. Our comprehensive report outlined the characteristics of many programs scientifically shown to have positive effects on many different aspects of adolescent development and provided examples of many high quality programs with rigorously demonstrated effectiveness.

We also discussed what is needed for adequate evaluation to improve these programs and make sound policy decisions. We proposed that the real challenge for the field is to increase the availability and sustainability of high quality programs, especially in the context of unpredictable funding streams. We concluded that increased funding for the 21st CCLC program was one step the federal government could take to help increase the predictability of funding.

Consequently, I was appalled at the decision to cut these funds based on one quite limited evaluation. Very little attention was paid in this evaluation to the characteristics of the programs being evaluated. Instead, great attention was paid to the quasi-experimental and experimental evaluation designs used. As we discussed in the NRC report, these designs are powerful methodological tools, but they are not particularly useful if we do not know the quality of the programs being evaluated. If members of this administration truly value evidence-based practice, then they should pay more attention to the evidence-based reports that are so carefully put together by the NRC rather than use the results of one report to justify funding cuts. The existing evidence suggests to me that the administration should increase the funds for 21st CCLC, but require better specification of the exact characteristics of the programs eligible for funding.

Related Resources

The Afterschool Alliance, a nonprofit organization dedicated to raising awareness of and advocating on behalf of after school programs, has compiled reactions to the administration’s 21st CCLC budget decisions from voices across the nation, including its own Executive Director, Judy Samelson. To view these reactions, as well as other relevant after school evaluation information, go to www.afterschoolalliance.org/voices_budget_cut.cfm.

Harvard Family Research Project is preparing an Issues and Opportunities in Out-of-School Time Evaluation brief on this same topic, with expanded evaluation resource information. Check for it on our website at www.hfrp.org in early May 2003.

Where Do We Go From Here?
Heather Weiss, Director, Harvard Family Research Project, Cambridge, Massachusetts

We are now playing in a “new evaluation game” with new players and new rules. The game is different because not only are research and evaluation helping to shape policy, but the reverse is also true. Evaluation has always been played out within a political frame, but the No Child Left Behind Act helped define that frame by setting new rules or standards for research and evaluation with its five principles for SBR in education.

Not everyone has agreed that the new evaluation game is being played with the right set of rules. Some would argue that even if the end goal is to comply with this set of rules, there needs to be a learning curve during which programs are held incrementally accountable for implementing the new scientifically based standards. For example, the Mathematica impact evaluation, underway well before the new SBR principles were signed into legislation, was prematurely subjected to the new rules, and the result has had potentially dire policy consequences. As Pittman, Eccles, and McCartney point out in their commentaries, using one evaluation report to justify a policy decision runs counter to the new rules of SBR—in fact, they argue that subjecting the impact report to the new rules violates some of those rules’ very premises. Further, as Connell points out, in order to accurately assess program impact, there must be alignment between program activities and desired outcomes. In the case of the 21st CCLC programs, old programs were held accountable for new outcomes, thereby almost “preordaining” failure. Moving forward, all players must strive for alignment between desired outcomes and program strategies.

Despite disagreement over the rules, we are now in a position where, like it or not, the new rules are in play and we have to learn how to “get in the game.” So how do we play in this new game? There are at least two approaches.

First, as Gunderson points out, “we must learn what works,” and we must acknowledge that the new game rules require after school programs to be accountable, demonstrate results, and improve their quality. However, as noted evaluator Mark Lipsey points out, “individual evaluation studies, however useful they may be to sponsors and stakeholders, yield approximate estimates of intervention effects and the relationships of those effects to the features of the program under assessment.”10 Further, he points out that perhaps the most useful and informative contribution to program managers and policymakers alike may be the consolidation of our piecemeal knowledge into broader pictures of the program and policy spaces at issue, rather than individual studies of specific programs.11

Second, we need to shift from a system of “gotcha accountability” to a system of learning for continuous improvement. This means that the new evaluation game requires new players, or new skills of old players. Our two practitioner commentators (Berry and DiSalvo) agree that, even in the context of heightened accountability, being a good evaluation player requires a commitment to using data for continuous improvement as well as to demonstrating impact. A thorough reading of the Mathematica report reveals many promising implementation findings that need to be brought into the light and used for program improvement.

Years of evaluation research have taught us lessons that are too expensive to learn again, such as: don’t make large-scale investments in evaluation unless you are learning about program implementation along the way, and don’t evaluate a program until it is proud.12 Moving forward, our responsibility as evaluators is to take advantage of the unintended window of opportunity provided to us by the administration to engage in a dialogue about how to apply these lessons to the new game, so that future research and evaluation of after school programs is used to improve the overall quality of after school programming, not only to justify program reduction.

This special report was compiled by Priscilla Little, Project Manager at HFRP.

1 U.S. Department of Education, Office of the Under Secretary. (2003). When schools stay open late: The national evaluation of the 21st-Century Community Learning Centers program, first year findings. Washington, DC: Author. Available at www.ed.gov/pubs/21cent/firstyear.
2 U.S. Department of Education. (2003, February 3). Fiscal year 2004 education budget summary and background information. Retrieved March 24, 2003, from www.ed.gov/about/overview/budget/budget04/summary/edlite-section2a.html#clcs
3 Ibid.
4 McCartney, K., & Weiss, H. (2003, March). Data in a democracy: The evolving role of evaluation in policy and program development. Paper presented at the Festschrift in honor of Edward Zigler. In Child development and social policy: Knowledge for action. Georgetown University, Washington, DC. Publication forthcoming.
5 Zigler, E., & Muenchow, S. (1992). Head Start: The inside story of America’s most successful educational experiment. New York: Basic Books.
6 Datta, L. (2001, January). Avoiding death by evaluation in studying pathways through middle childhood: The Abt evaluation of the Comer Approach. Paper presented at the MacArthur Invitational Conference on Mixed Methods Research, Santa Monica, CA.
7 Huang, D., Gribbons, B., Kim, K. S., Lee, C., & Baker, E. L. (2000, June). A decade of results: The impact of the LA’s BEST after school enrichment program on subsequent student achievement and performance. Los Angeles: University of California Los Angeles, Center for the Study of Evaluation.
8 See, for example, commentaries posted on the Afterschool Alliance website at: www.afterschoolalliance.org/voices_budget_cut.cfm.
9 Eccles, J., & Gootman, J. A. (Eds.). (2002). Community programs to promote youth development. Washington, DC: National Academies Press.
10 Lipsey, M. (1997, January). What can you build with thousands of bricks? Musings on the cumulation of knowledge in program evaluation. New Directions for Evaluation, 76, 7–23.
11 Ibid.
12 For a review of these and other lessons learned from evaluation, see McCartney, K., & Weiss, H. (2003, March). Data in a democracy: The evolving role of evaluation in policy and program development. Paper presented at the Festschrift in honor of Edward Zigler. In Child development and social policy: Knowledge for action. Georgetown University, Washington, DC. Publication forthcoming.

