
Thomas J. Kane, professor of policy studies and economics at the University of California, Los Angeles, distills lessons for future research from his review of four recent after school program evaluations.¹

Over the past decade, after school programs have moved from the periphery to the center of national education policy debate. Understandably, policymakers are eager to see evidence that investments in these programs are paying off. To draw the lessons learned so far, I recently reviewed the results of four recent evaluations of after school programs: the first-year results of the national evaluation of the 21st Century Community Learning Centers Program, the After-School Corporation’s After-School Program (TASC) evaluation, and evaluations of the Extended-Service Schools Initiative and the San Francisco Beacons Initiative. This article highlights four lessons for future evaluation designs to consider.

Focus on Participation
When programs operate on a drop-in basis, attendance is often sporadic, with the average participant attending only one to two days per week. (The major exception was the TASC program in New York, where participants attended three to four days per week.) With such irregular attendance, programs are unlikely to yield lasting benefits. As a result, programs should rethink their strategies for increasing enrollment. One approach would be to require parents to commit to regular attendance as a prerequisite for participation. This was one of the strategies used by the TASC program to raise enrollment. But such requirements risk excluding the most disadvantaged youth, whose parents may be unable to make such a commitment.

Another strategy would be to devote additional resources to actively recruiting participants and following up with parents whose children are not attending regularly. As programs experiment with alternative ways to increase attendance, future evaluators should collect evidence on the impact of different strategies for increasing attendance.

Experimental Versus Non-Experimental Designs
Many of the existing non-experimental evaluations have relied on comparisons of participants and nonparticipants attending the same schools. Given the strong presumption that participants—who have chosen to attend the after school programs—likely differ from nonparticipants in their access to alternative after school care arrangements, in their feelings of safety after school, or in other outcomes, it is unlikely that such studies will ever be persuasive.

At the least, evaluators need to be able to demonstrate that, at the baseline before the program existed, the statistical method being used—either regression adjustment or matching—is able to eliminate any prior difference between participants and nonparticipants on critical outcomes such as after school care arrangements and academic achievement. Unfortunately, few of the studies collected baseline data on such outcomes, and where such data were available, differences often remained between participants and nonparticipants.
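The baseline check described above can be sketched in a few lines: compare participants and nonparticipants on a pre-program outcome and ask whether the gap is distinguishable from zero. The data below are simulated purely for illustration (the slightly higher participant mean mimics selection bias); none of the numbers come from the evaluations discussed here.

```python
import random
import statistics

random.seed(0)

# Simulated baseline reading scores, measured BEFORE the program existed.
# Participants are drawn with a slightly higher mean to mimic selection bias.
participants = [random.gauss(52, 10) for _ in range(200)]
nonparticipants = [random.gauss(50, 10) for _ in range(200)]

def baseline_gap(a, b):
    """Difference in baseline means and a simple two-sample t statistic.

    If |t| is large, the groups already differed before the program --
    so a later "effect" may just reflect who chose to enroll."""
    gap = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / len(a) + statistics.variance(b) / len(b)) ** 0.5
    return gap, gap / se

gap, t = baseline_gap(participants, nonparticipants)
print(f"baseline gap = {gap:.2f} points, t = {t:.2f}")
```

If the adjustment method cannot drive this baseline gap to roughly zero, there is little reason to trust its post-program comparisons.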

Given the selection bias problem in after school participation, to evaluate program impacts, future evaluation studies should consider random assignment in oversubscribed programs (as was done in the 21st Century elementary school evaluation) or other non-experimental strategies (such as comparing students at schools with and without after school programs).
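In an oversubscribed program, random assignment amounts to a lottery over the applicant pool. A minimal sketch of that design (the student names and slot count are hypothetical):

```python
import random

def lottery(applicants, slots, seed=42):
    """Randomly assign an oversubscribed applicant pool: winners get the
    program, the rest form the control group. Because assignment is random,
    the two groups are comparable in expectation on all characteristics,
    measured or unmeasured -- eliminating the selection bias problem."""
    rng = random.Random(seed)
    pool = list(applicants)
    rng.shuffle(pool)
    return pool[:slots], pool[slots:]  # (treatment, control)

treated, control = lottery([f"student_{i}" for i in range(120)], slots=80)
print(len(treated), len(control))  # prints: 80 40
```

The key practical requirement is oversubscription: there must be more applicants than slots, so that a control group exists among families who all wanted the program.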

Think Harder About Magnitudes of Academic Impacts
The 21st Century Community Learning Centers evaluation was designed to detect a .2 standard deviation impact on Stanford 9 reading scores. Designing studies to detect effects of this size has become common practice in the field of education evaluation for a wide range of interventions. Rather than continue to use such arbitrary benchmarks, we need to be more realistic about what it takes to create discernible effects on achievement test scores. For example, in the national samples used to norm the Stanford 9 achievement test, fifth grade students scored only one-third of a standard deviation higher than fourth graders on reading and one-half of a standard deviation higher on math.
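A little arithmetic makes the comparison concrete: set against the norming-sample gains just cited, a .2 standard deviation benchmark amounts to a large share of an entire school year of learning. (The figures come from the paragraph above; the calculation is just division.)

```python
# Annual gains in the Stanford 9 norming samples (fourth to fifth grade),
# expressed in standard deviation units, as cited in the text.
annual_gain_reading = 1 / 3
annual_gain_math = 1 / 2

benchmark = 0.2  # the conventional minimum detectable effect

# A .2 SD effect expressed as a fraction of one school year's growth.
print(f"reading: {benchmark / annual_gain_reading:.0%} of a year's gain")
print(f"math:    {benchmark / annual_gain_math:.0%} of a year's gain")
```

In other words, the conventional benchmark asks a few hours a week of after school programming to produce 60 percent of a full year's reading growth.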

In other words, everything that happens to a student between the end of fourth grade and the end of fifth grade—a whole school year of full-day classroom instruction, interactions with family, conversations with friends, and homework—is associated with an important but not huge gain on an achievement test.

With this as a backdrop, consider the typical after school program with youth attending one to two days per week for two to three hours per day. While it is reasonable to expect that after school activities can affect performance as measured by achievement tests, it is likely that such effects will be small. This is particularly true for reading scores, which have traditionally been less responsive to intervention than mathematics scores.

Therefore, even if the programs are helping, effects on achievement tests are likely to be hard to detect statistically. We should balance a focus on test scores with an examination of intermediate effects such as greater parental involvement in school-related activities, more diligent homework completion, better school attendance, and better grades—efforts that may pay off in improved test performance over time. Although the studied programs often did not have statistically significant effects on achievement test scores, some did have promising effects in these other areas.
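One way to see why small effects are statistically elusive is the standard two-group sample-size formula, n per group ≈ 2(z_α/2 + z_β)² / d². The sketch below applies it at conventional values (5% significance, 80% power); the effect sizes are illustrative, not drawn from the evaluations discussed here.

```python
def n_per_group(effect_sd, alpha_z=1.96, power_z=0.84):
    """Approximate sample size per group needed to detect an effect of
    `effect_sd` standard deviations with ~80% power at the 5% level
    (normal approximation to the two-sample t test)."""
    return 2 * (alpha_z + power_z) ** 2 / effect_sd ** 2

for d in (0.2, 0.1, 0.05):
    print(f"d = {d:>4}: about {n_per_group(d):,.0f} students per group")
```

Detecting a .2 standard deviation effect already takes roughly 400 students per group; halving the effect size quadruples the required sample, which is why plausible after school impacts are so easy to miss.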

Collect Better After School Care Data
After school programs are often justified to policymakers and the public as a way to reduce the number of children caring for themselves after school. Despite the prominence of so-called latchkey children in the rationale for after school programs, it is surprising that only one of the four recent evaluations kept track of students' after school care arrangements. Even more surprisingly, the researchers in that evaluation found that many program participants would not have been on their own, but with a parent or sibling, had the programs not been available. Reducing sibling care is perhaps a good thing, but it is less obvious that time spent in an after school program is more worthwhile than time spent with a parent after school.

As noted above, it will be difficult to discern the impact of after school programs on academic achievement. It may be even more difficult to identify impacts on hard-to-measure outcomes such as the risk of committing or being the victim of a crime, or leadership skills. But if programs truly are reaching a large number of youth who would otherwise be on their own after school, policymakers and the public will be more willing to give the programs the benefit of the doubt.

Accordingly, evaluators must document the nature of the impacts on after school care arrangements. Future studies should collect detailed time diary data on students’ whereabouts after school, and keep track of any net impacts on the proportion of children under adult supervision after school.
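Once diary data exist, the net-impact bookkeeping suggested above is simple: classify each child's primary after-school arrangement and compare the supervised share across groups. A toy sketch with made-up categories and counts (nothing below comes from the actual evaluations):

```python
from collections import Counter

# Arrangements that count as adult supervision (a hypothetical coding scheme).
SUPERVISED = {"program", "parent", "other_adult"}

def supervised_share(diaries):
    """Fraction of children whose primary after-school arrangement
    involves adult supervision."""
    counts = Counter(diaries)
    return sum(counts[c] for c in SUPERVISED) / len(diaries)

# Hypothetical diary codes: one primary arrangement per child.
treatment = ["program"] * 60 + ["parent"] * 20 + ["sibling"] * 10 + ["self"] * 10
control = ["parent"] * 45 + ["sibling"] * 25 + ["self"] * 30

impact = supervised_share(treatment) - supervised_share(control)
print(f"net impact on adult supervision: {impact:+.0%}")  # prints: +35%
```

The point of the net calculation is exactly the one raised above: a program slot only increases supervision if the child would otherwise have been unsupervised, not merely with a parent.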

Conclusion
After school programs may be unaccustomed to holding center stage in the national education policy debate, but that is unlikely to change anytime soon. Some of the evidence so far is forcing a reconsideration of the magnitude of impacts we might reasonably expect. Other evidence, such as the failure to find much impact on adult supervision after school, is even more surprising. Such surprises, and the reassessments they provoke, are to be expected early on. It is still too soon to determine whether the impacts of after school programs are sufficient to justify the costs. But it is probably not too early to know that greater effort should be devoted to raising attendance and reaching the children who would have been on their own after school.

Moreover, it is not too early to know that evaluators need to do a better job of measuring the impacts the programs are having on after school care arrangements. Policymakers and the public may be willing to be patient until more of a consensus develops. But the evaluation community and after school programs will need to demonstrate that they are willing to learn from the evidence that is forthcoming.

To read evaluation profiles of the four programs discussed in this article, visit the HFRP Out-of-School Time Program Evaluation Database.

¹ Some information in this article was drawn from two previously published articles: Kane, T. J. (2004). The impact of after-school programs: Interpreting the results of four recent evaluations (www.wtgrantfoundation.org/usr_doc/After-school_paper.pdf [Acrobat file]), and Granger, R. C., & Kane, T. J. (2004, February 18). Improving the quality of after-school programs. Education Week, pp. 76, 52.

Related Resources


Granger, R. C., & Kane, T. J. (2004, February 18). Improving the quality of after-school programs. Education Week, pp. 76, 52.

Grossman, J. B., Price, M. L., Fellerath, V., Jucovy, L. Z., Kotloff, L. J., Raley, R., et al. (2002). Multiple choices after school: Findings from the Extended-Service Schools Initiative. Philadelphia: Public/Private Ventures.

Reisner, E. R., Russell, C. A., Welsh, M. E., Birmingham, J., & White, R. N. (2002). Supporting quality and scale in after-school services to urban youth: Evaluation of program implementation and student engagement in the TASC After-School Program’s third year. Washington, DC: Policy Studies Associates.

U.S. Department of Education, Office of the Under Secretary. (2003). When schools stay open late: The national evaluation of the 21st Century Community Learning Centers Program. First year findings. Washington, DC: Author.

Walker, K., et al. (2003). [San Francisco Beacons Initiative final report]. Draft.

Welsh, M. E., Russell, C. A., Williams, I., Reisner, E. R., & White, R. N. (2002). Promoting learning and school attendance through after-school programs: Student-level changes in educational performance across TASC's first three years. Washington, DC: Policy Studies Associates.

 

Thomas J. Kane
Professor of Policy Studies and Economics
University of California, Los Angeles
3250 Public Policy Building
Los Angeles, CA 90095-1656
Tel: 310-825-9413
Email: tomkane@ucla.edu

 


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project