




Elizabeth Reisner of Policy Studies Associates discusses how the learning gains of a children’s literacy program relate to the program’s scaling process. 

A major goal of Save the Children (STC) in the United States is to improve the literacy skills of children who live in the nation’s poorest rural communities, including locations in Appalachia, the Southeast, the Mississippi River Delta, the Gulf Coast, the Southwest, and California’s Central Valley. 

The organization’s literacy program began in 2003 with 1,800 children in 15 pilot sites. These sites implemented the program’s after school literacy model, which at that time emphasized guided independent reading practice and employed leveled books (i.e., books organized by their degree of difficulty), computer-assisted quizzes (i.e., quizzes administered on a computer), and periodic assessment using a standardized, norm-referenced reading test. 

Also in 2003, STC hired Policy Studies Associates, a Washington, DC research firm, to launch a program evaluation. STC asked us to measure program participants’ change in literacy skills and to analyze these results in light of participants’ differences in program attendance, baseline literacy skills, and grade in school. 

Because we initiated the evaluation just as STC began program implementation, we had a chance to recommend program features that would facilitate evaluation. In particular, we encouraged STC to design its attendance database to record the number of books that each child read as part of the literacy program and each child’s scores on quizzes taken after reading each book. We also helped design the attendance database so that it could be merged with records of each child’s scores on the program’s reading test. 
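To make the value of that design concrete, here is a minimal sketch of how engagement records can be linked to test scores through a shared child identifier. The column names, file layout, and use of the pandas library are illustrative assumptions, not the actual STC or Policy Studies Associates database design:

    # Illustrative sketch only: assumed column names and CSV layout,
    # not the actual STC/Policy Studies Associates data system.
    import pandas as pd

    # Attendance/engagement records: one row per child per program year
    engagement = pd.read_csv("engagement.csv")   # child_id, days_attended, books_read, quizzes_passed

    # Reading test records for the same children
    scores = pd.read_csv("reading_tests.csv")    # child_id, fall_nce, spring_nce

    # Merging on the shared child identifier lets engagement and outcomes
    # be analyzed together (e.g., gains by attendance level)
    merged = engagement.merge(scores, on="child_id", how="inner")
    merged["nce_gain"] = merged["spring_nce"] - merged["fall_nce"]

    # Example cut: average gain among children attending 55 days or more
    print(merged.loc[merged["days_attended"] >= 55, "nce_gain"].mean())

Designing the attendance database with a consistent child identifier from the outset is what makes this kind of merge, and the analyses built on it, straightforward.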

Using these data, we have produced a series of six annual evaluation reports that describe the baseline educational characteristics of participating children, their program attendance, their level of engagement in the program (measured by books read, quizzes taken, and quiz scores), and their change in reading proficiency (measured in comparison with the reading test publisher’s national norming group). We have also produced tailored annual reports for each program site and for clusters of sites with shared characteristics. 

By the end of the 2008–09 program year, the initiative had grown to 146 sites serving almost 15,000 children. In addition to guided independent reading practice, the program now includes skill-building tutorials as well as activities to improve fluency and vocabulary, any of which can be offered in school or after school. 

Evaluation findings show that 60% of 2008–09 participants increased their literacy performance by at least 2 NCEs (normal curve equivalents) over and above the learning gain that would typically result from another year in school. Participants’ average literacy gain was 5.8 NCEs beyond the typical expected annual gain. Among children who participated in the program 55 days or more in 2008–09, the proportion improving by at least 2 NCEs was 63%. Among children who attended 55 days or more and who scored below grade level at the fall 2008 baseline, 66% gained 2 or more NCEs. Higher numbers of books read and quizzes passed were statistically associated with higher NCE gains. 
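For readers unfamiliar with the metric: NCEs are an equal-interval rescaling of national percentile ranks with a mean of 50 and a standard deviation of about 21.06, and because they are normed by grade, a child who learns at exactly the typical national rate keeps the same NCE from fall to spring. Any positive change therefore represents growth beyond the expected annual gain. The following is a rough, purely illustrative sketch of that arithmetic, not the evaluators' actual analysis code:

    # Illustrative sketch only: how NCEs relate to percentile ranks and how a
    # "gain beyond the expected annual gain" can be read off the NCE scale.
    from scipy.stats import norm

    def percentile_to_nce(percentile):
        """Convert a national percentile rank (1-99) to a normal curve equivalent."""
        z = norm.ppf(percentile / 100.0)   # z-score corresponding to that percentile
        return 50 + 21.06 * z              # NCE scale: mean 50, SD about 21.06

    # Hypothetical child: 30th percentile in fall, 40th percentile in spring
    fall_nce = percentile_to_nce(30)       # about 39.0
    spring_nce = percentile_to_nce(40)     # about 44.7

    # Because the norms already build in a typical year of growth, the
    # difference is the gain beyond the expected annual gain.
    gain_beyond_expected = spring_nce - fall_nce   # about +5.7 NCEs
    print(round(gain_beyond_expected, 1))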

Although these findings are subject to selection bias because children attended the after school sessions voluntarily, they clearly suggest that the literacy program was making a difference for participating children at the same time that it was scaling up. These observations prompt a question: Did evidence of participant learning influence the growth and further development of the program? 

For the Evaluation Exchange, I asked STC program leaders to describe how program growth and documented learning gain intersected, if at all. Mark Shriver, STC’s vice president for U.S. programs, said that the relationship was positive and powerful. When he meets with educators and policy leaders to promote the program, he said, evaluation findings are what he talks about after he says hello. In the current tough economic climate, according to Shriver, state and local leaders will not consider any new education initiative unless it provides explicit evidence of effectiveness. Andrew Hysell, STC’s associate vice president for policy and advocacy, said that educators and policy leaders who make program decisions want to see evaluation evidence that is both rigorous and focused on state educational priorities such as literacy. 

Ann Mintz, STC education advisor for U.S. programs, has another perspective on the relationship of evaluation and scale-up. For her, evaluation is an essential tool in tracking program effectiveness, determining what works best in the program, and providing informed feedback to improve the program. For example, based on evaluation findings about which children benefit most, STC urges program leaders to recruit and retain children who fit the literacy program’s individual success profile (e.g., struggling readers who are capable of regular attendance and active engagement in the program’s literacy activities). Because regular program attendance appears to be so important in producing gains, STC encourages programs to educate parents about the value of regular attendance and to create “authentic” ways of encouraging attendance and personal engagement in program opportunities (no pizza parties for this health-conscious organization). According to Mintz, rewards for high attendance and engagement in STC programs include gifts of new books, the chance to read to kindergarten students, and the opportunity to eat lunch with the school principal. 

Mintz and her colleagues review evaluation findings with each program site to identify areas for improvement and to chart progress over time in order to achieve the largest gains for the greatest number of struggling readers. Evidence about learning needs among the program’s youngest participants was a factor prompting the development of STC’s emerging reader program for kindergarten and first-grade children. 

The most recent annual evaluation report in the series, directed by Richard White at Policy Studies Associates, is available at: www.policystudies.com/studies/school/STC.html

Elizabeth R. Reisner
Principal
Policy Studies Associates, Inc.
Email: ereisner@policystudies.com

