Karen Walker, director of community studies at Public/Private Ventures (P/PV), focuses on youth development and community initiatives. Currently, Dr. Walker is principal investigator on the San Francisco Beacons Initiative evaluation and co-principal investigator on the Extended Service Schools Initiative, funded by the DeWitt Wallace-Reader’s Digest Fund. P/PV’s evaluation of the Beacons was designed around the sites’ theory of change and includes both implementation and outcome components. In addition to the outcomes specified in the initiative’s theory of change, evaluators identified intervening outcomes that may be precursors to changes in academic performance.

Given the strong pressure for out-of-school time programs to show academic results, what evaluation approaches can be used to understand the connection between academic outcomes and program activities?

Karen Walker Responds
“It is important to ask whether the program, by its intent and structure, lends itself to an evaluation that examines academic outcomes. Many out-of-school time programs do not. Some—such as those run by the Boys & Girls Clubs—provide safe havens where youth have opportunities to develop positive relationships with adults. Such opportunities are crucial to youth development: young people who have positive relationships with adults, both family members and non-family members, are more likely to succeed academically than those who do not. But the link between positive adult–youth relationships and academic success is a complex one. Burdening a program that intends only to provide youth with adult role models and a safe place to be with the expectation that participants will improve academically puts it at unnecessary risk of being judged a failure.

“Program schedule and structure can also influence whether a program is appropriate for an evaluation of academic outcomes. For example, Beacon Centers—such as those in New York, Denver, and San Francisco—provide an array of activities during the week and on weekends. One youth may participate in a dance group two afternoons a week and a science club one afternoon, another may join a youth social advocacy group, and still another may simply play basketball. It is important to take into account the complexity and number of activities provided, as well as differences in youth participation, when determining how and what you will evaluate.

“Evaluating youth outcomes across an entire program is a complex, labor-intensive task, and it is probably beyond the capacity of program staff who are already stretched for time. Practitioners wishing to conduct such a broad evaluation should consider contracting the work out to researchers.

“However, program staff can begin by evaluating a limited number (between one and three) of the activities they offer. For example, assume that you wish to evaluate an academic activity that meets several times per week and has a reasonable chance of positively affecting a young person’s scholastic achievement. The assessment should take several issues into consideration:

“Participation: It is important to document how often youth attend the activity. In an activity with mandatory attendance, we might assume that most youth attend regularly and may experience beneficial effects on their academic achievement. In many activities, however, participation is voluntary and may be inconsistent, a fact that lessens the chance of the program having a significant impact in the classroom. Documenting attendance and developing a good information system can be a challenging and labor-intensive task. Paper systems that use standard enrollment forms and attendance sheets are the least expensive method. While program staff can collect the data from the activity providers, some programs have successfully used youth employees for this task. Higher-cost options include computer databases and other technology. Some programs, including the San Francisco Beacons and the DeWitt Wallace-Reader’s Digest Fund’s Extended Service Schools (ESS) Initiative, have developed systems to track individual student participation by activity. The ESS Initiative is pilot testing the use of Palm Pilots to collect attendance data, in the hope that minimizing arduous data entry will streamline the process. Other organizations use swipe-card systems that also cut down on data entry. Consider contacting other programs to learn what systems they use and what their challenges and advantages are.
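To make the record-keeping concrete, below is a minimal Python sketch of a per-activity participation log. It is illustrative only: the activity names, youth IDs, and attendance threshold are hypothetical, and it is not modeled on the actual Beacons or ESS data systems.

```python
from collections import defaultdict
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AttendanceLog:
    """Participation log: activity -> youth ID -> set of dates attended."""
    records: dict = field(
        default_factory=lambda: defaultdict(lambda: defaultdict(set))
    )

    def check_in(self, activity: str, youth_id: str, day: date) -> None:
        """Record that a youth attended one session of an activity."""
        self.records[activity][youth_id].add(day)

    def sessions_attended(self, activity: str, youth_id: str) -> int:
        """Count the distinct sessions a youth attended in one activity."""
        return len(self.records[activity][youth_id])

    def regular_attendees(self, activity: str, min_sessions: int) -> list:
        """List youth who reached a chosen attendance ('dosage') threshold."""
        return [
            youth
            for youth, days in self.records[activity].items()
            if len(days) >= min_sessions
        ]

# Hypothetical usage with invented IDs and dates.
log = AttendanceLog()
log.check_in("science club", "Y001", date(2001, 3, 5))
log.check_in("science club", "Y001", date(2001, 3, 12))
log.check_in("science club", "Y002", date(2001, 3, 12))
print(log.sessions_attended("science club", "Y001"))          # 2
print(log.regular_attendees("science club", min_sessions=2))  # ['Y001']
```

A paper roster or spreadsheet can capture the same fields; the point is simply to record who attended what, and when, so that participation levels can later be linked to outcomes.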

“Quality: It is important to know the quality of the activity. Examine how well the activity is structured and managed by staff: Do staff show up on time? How do they handle disputes among youth? Do the activity’s day-to-day tasks address its stated goals? Examine relationships between youth and staff: How responsive are the youth to the staff’s direction? How effective are staff at supporting youth in accomplishing their tasks? What is the emotional tenor of the relationship? Examine the activity’s level of challenge: Do youth report that they are challenged? Do staff monitor youths’ frustration levels and provide clear direction when frustration mounts? Make these assessments by observing the activity and asking youth about their perceptions through short surveys or focus groups. Without such assessments, it is difficult to interpret the results of any activity evaluation. Knowing the activity’s quality gives program operators information useful for two purposes: as a tool to improve program quality and as a device to interpret findings on youth outcomes.

“Fit: The outcomes and indicators selected should relate closely to the activity’s goals and specific tasks. For example, a model-building activity might aim to improve youth’s skills in calculating the area of surfaces and determining proportions. Assessments should focus on those specific skills—not math skills in general. Be prepared to be flexible in your assessment methods. Portfolios of a youth’s work can be used to track progress in a skill-based activity and can include written work as well as visual arts. Pre- and posttests—assessments of individual performance taken before and after the program—can look specifically at the knowledge or skill set that the program aims to develop. An effective homework help program, for example, should show that students who participate are more likely to complete their homework and to do it well. Note, however, that you should also assess the program’s quality to ensure that those helping students do not simply give them answers but provide them with tools for solving problems.
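As a sketch of the pre- and posttest comparison described above, the following Python snippet computes the average post-minus-pre change on the targeted skill, assuming each participant has one score before and one after the activity. The youth IDs and scores are invented for illustration.

```python
def average_gain(pre: dict, post: dict) -> float:
    """Mean post-minus-pre change for youth who have both scores.

    `pre` and `post` map a youth ID to a score on the specific skill
    the activity targets (e.g., calculating surface area), not on math
    ability in general.
    """
    gains = [post[y] - pre[y] for y in pre if y in post]
    if not gains:
        raise ValueError("no youth has both a pretest and a posttest score")
    return sum(gains) / len(gains)

# Hypothetical scores on a 20-point skill assessment.
pretest = {"Y001": 8, "Y002": 11, "Y003": 9}
posttest = {"Y001": 13, "Y002": 14, "Y003": 12}
print(round(average_gain(pretest, posttest), 2))  # 3.67
```

A simple gain score shows change, not attribution; pairing it with the quality checks above helps in judging whether the activity plausibly produced the change.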

“The pressure to show that out-of-school time programs can improve youth’s academic performance is fierce, and future funding will likely depend on it. However, the most readily available outcomes—standardized test scores and course grades—are unlikely to improve unless a program specifically targets the knowledge areas covered in those tests or classes. We should therefore resist the pressure to measure these outcomes when the program itself does not explicitly aim to improve youth’s academic performance.”
