Christopher Wimer from HFRP describes three promising methodological approaches to studying out-of-school time program quality.

Increased attention to program quality in the out-of-school time (OST) field requires researchers and evaluators to use evaluation methods that generate reliable and valid information about what constitutes program quality. This article highlights three methods for generating such information—mixed-methods approaches, cross-site participant-adjusted approaches, and intraprogram randomized trials—and provides examples of how these methods have been used to study OST program quality.

Mixed-Methods Approaches
As accountability and science-based research movements take hold in the evaluation community, researchers and policymakers often call for rigorous quantitative indicators of program benefits through quasi-experimental or experimental research designs. While these methods allow more confidence in inferring whether programs are working, relying on such methods alone can leave sizable gaps in our knowledge about why a program may or may not have worked. One way to keep sight of program quality is to use mixed-methods approaches, whereby quantitative results are enriched and expanded through qualitative inquiry (Rossman & Wilson, 1994).

In recent evaluations of the After-School Corporation’s (TASC) programs (Reisner, Russell, Welsh, Birmingham, & White, 2002; Welsh, Russell, Williams, Reisner, & White, 2002), evaluators combined quasi-experimental impact estimates with interviews, focus groups, reviews of program documents, and in-depth site observations. This approach enabled evaluators to identify both likely program impacts (e.g., increased math performance and school attendance) and strong program components that seemed likely to have contributed to these impacts (e.g., intensity of activities and integration with host schools). Mixed-methods approaches provided a more holistic picture of the program and of how features of program quality might lead to youth outcomes.

Cross-Site Participant-Adjusted Approaches
Another issue arises when evaluators compare outcomes across multiple program sites. It is tempting in such situations to assume that the “best” sites are those that appear to achieve the greatest gains in youth outcomes (e.g., sites where 50% of participants graduate high school versus sites where only 30% graduate). Some sites, however, may serve substantially different populations than others. Adjusting the magnitude of outcomes across sites by statistically accounting for features of the populations served is a useful methodological approach for addressing this issue.

For example, one can first estimate expected outcomes for different types of participants (e.g., by examining gains across all sites for disadvantaged students from high-poverty schools) and then calculate what a particular site’s outcomes should be, given the observed characteristics of the participants it serves. The highest quality sites would then be those whose actual outcomes most exceed these expectations, rather than simply those with the highest raw outcomes.
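To make the logic of the adjustment concrete, the sketch below works through it in Python with entirely hypothetical data and variable names (it is not drawn from any of the evaluations discussed here): participants are pooled across sites, expected outcomes are modeled from participant characteristics, and each site is then ranked by the gap between its observed and expected outcomes.

    # Illustrative sketch of a cross-site participant-adjusted comparison.
    # All data, sites, and variable names are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)

    n = 600                                    # participants pooled across three sites
    site = rng.integers(0, 3, size=n)          # which site each participant attends

    # Sites serve different populations: higher-numbered sites serve more disadvantaged youth
    poverty = np.clip(rng.random(n) + 0.25 * site, 0, 1)   # e.g., school poverty rate
    prior_score = rng.normal(50 - 4 * site, 10)            # prior achievement

    # Simulated outcome: participant characteristics plus a site "quality" effect
    site_quality = np.array([0.0, -1.0, 2.0])
    outcome = 0.8 * prior_score - 5.0 * poverty + site_quality[site] + rng.normal(0, 5, n)

    # Step 1: estimate expected outcomes from participant characteristics across ALL sites
    X = np.column_stack([np.ones(n), poverty, prior_score])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    expected = X @ beta

    # Step 2: a site's adjusted performance is how far its participants' actual outcomes
    # exceed (or fall short of) what their characteristics alone would predict
    for s in range(3):
        mask = site == s
        adjusted = (outcome[mask] - expected[mask]).mean()
        print(f"Site {s}: raw mean = {outcome[mask].mean():.1f}, "
              f"adjusted (observed - expected) = {adjusted:+.2f}")

In this simulation, the site given the largest quality effect also serves the most disadvantaged youth by construction, so its raw mean understates its adjusted performance; that is the kind of pattern the participant-adjusted comparison is designed to surface.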

Ferguson and Clay (1996) take this approach in their evaluation of YouthBuild, a national youth and community development program for older youth. Evaluators found that the strongest sites were not those with the largest apparent gains in selected outcomes, but rather those with strong outcomes relative to the populations they served. The evaluators were thus able to identify program features of these sites likely to have resulted in such gains, such as strong recruitment, screening, and selection criteria for youth.

Intraprogram Randomized Trials
Experimental OST evaluations typically assign youth randomly to either a treatment or control group. While this research design can be powerful for identifying program impacts overall, it is less helpful in identifying program elements leading to such impacts. One alternative is to randomly assign participants to different aspects of a program rather than to the program as a whole or to no program at all.

Evaluators have used this method in two ways. One way involves randomly assigning youth to multiple treatment conditions (in addition to a control condition) to identify the value added by a particular constellation of program components. For example, in the multi-component program Across Ages, LoSciuto, Rajala, Townsend, and Taylor (1996) randomly assigned youth to a control group, a treatment group receiving a standard set of services, or a treatment group receiving the standard services plus an older mentor.
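As a minimal sketch of how the assignment step in such a multi-arm design might be set up, the fragment below randomly splits a hypothetical roster of 90 youth evenly across three conditions. The condition labels echo the Across Ages example, but the roster and procedure are illustrative assumptions, not the evaluators' actual protocol.

    # Minimal sketch of intraprogram random assignment to multiple conditions.
    # Condition labels echo the Across Ages example; the roster is hypothetical.
    import random

    conditions = ["control", "standard_services", "standard_plus_mentor"]
    youth = [f"participant_{i:03d}" for i in range(1, 91)]   # hypothetical roster of 90 youth

    random.seed(42)        # fixing the seed makes the assignment reproducible and auditable
    random.shuffle(youth)

    # Dealing the shuffled names out in turn keeps the three arms the same size
    assignment = {cond: youth[i::len(conditions)] for i, cond in enumerate(conditions)}

    for cond, group in assignment.items():
        print(f"{cond}: {len(group)} youth, e.g., {group[0]}")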

A second method entails taking a new program component (e.g., Friday arts classes) and randomly assigning participants from a preexisting program to either receive or not receive this new component, thus identifying whether the new component is a worthwhile addition to the program. A Youth Opportunities Unlimited summer program in Arkansas used an intraprogram randomized trial to examine a new problem-solving training component (Glascock, 1999) and found that this new component led to increases in youth’s math performance.
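At its simplest, the analysis step for this second variant reduces to comparing outcomes between the randomly formed groups. The sketch below shows a bare-bones difference-in-means comparison using simulated math scores; the numbers are invented for illustration and are not data from the Arkansas evaluation.

    # Bare-bones comparison of outcomes between the two arms of an intraprogram trial.
    # Simulated scores; not data from the evaluation cited above.
    import numpy as np

    rng = np.random.default_rng(1)

    with_component = rng.normal(72, 10, 60)      # randomly assigned to receive the new component
    without_component = rng.normal(68, 10, 60)   # continued with the existing program only

    diff = with_component.mean() - without_component.mean()
    se = np.sqrt(with_component.var(ddof=1) / with_component.size
                 + without_component.var(ddof=1) / without_component.size)

    print(f"Estimated effect of the new component: {diff:.1f} points (SE {se:.1f})")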

Measuring program quality involves significant methodological complexity. Though none of the methods described here is foolproof, and though some entail greater costs for programs than others, each presents unique opportunities for uncovering what works in OST programs for youth.

References
Ferguson, R. F., & Clay, P. L., with Snipes, J. C., & Roaf, P. (1996). YouthBuild in developmental perspective: A formative evaluation of the YouthBuild program. Cambridge, MA: MIT Department of Urban Studies and Planning.

Glascock, P. C. (1999). The effects of a problem-solving training on self-perception of problem-solving skills, locus of control, and academic competency for at-risk students. Unpublished doctoral dissertation, Arkansas State University, Jonesboro.

LoSciuto, L., Rajala, A. K., Townsend, T. N., & Taylor, A. S. (1996). An outcome evaluation of Across Ages: An intergenerational mentoring approach to drug prevention. Journal of Adolescent Research, 11(1), 116–129.

Reisner, E. R., Russell, C. A., Welsh, M. E., Birmingham, J., & White, R. N. (2002). Supporting quality and scale in after-school services to urban youth: Evaluation of program implementation and student engagement in TASC After-School Program’s third year. Washington, DC: Policy Studies Associates.

Rossman, G. B., & Wilson, B. L. (1994). Numbers and words revisited: Being “shamelessly methodologically eclectic.” Quality and Quantity, 28, 315–327.

Welsh, M. E., Russell, C. A., Williams, I., Reisner, E. R., & White, R. N. (2002). Promoting learning and school attendance through after-school programs: Student-level changes in educational performance across TASC’s first three years. Washington, DC: Policy Studies Associates.

Christopher Wimer, Research Assistant, HFRP
Email: wimer@fas.harvard.edu

 

