
The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.


Sarah Levin Martin, currently with the Centers for Disease Control and Prevention, describes an innovative and cost-effective way to collect and report evaluation data for program quality improvement.

In the fall of 2002, the Nike company funded a new after school physical activity program in 31 Boys and Girls Clubs across the country. The program, called NikeGO, had two primary goals: (1) to get more kids moving and (2) to get kids already moving to move even more. Each club received Nike equipment and training in the use of the SPARK® curriculum. SPARK, or Sports, Play, and Active Recreation for Kids, is a nationwide program devoted to promoting physical activity for both students and teachers. With input from its 9- to 15-year-old clients, each club designed its own program. Although programs shared the same goals, they took very different approaches to implementation: Across clubs, activities ranged from street hockey to hip hop dance to yoga.

The Evaluation
Nike’s evaluation challenge was how to evaluate the programs equitably and efficiently with limited funding. The company asked an independent evaluation team (husband-and-wife faculty members at Morehead State University) to conduct a systematic assessment that could be used for continuous improvement. With the evaluation budget limited to approximately 1% of each program’s total operating budget, the team needed to capture both process measures and impact measures. Hence, Program Evaluation Across the Nation Using Technology (PEANUT) was conceived. PEANUT was used to measure the effectiveness of NikeGO against its two main goals, described above.

The Process
The PEANUT evaluation team recruited, screened, and hired research assistants (RAs) from universities and colleges across the United States as data collectors. These RAs were then trained to use technology such as conference calling and email to carry out scientific inquiry. Once trained, the RAs visited clubs to collect data through observation and structured interviews both with adult activity leaders and with participating youth. The RAs then submitted their findings electronically to the team for synthesis and evaluation.

The evaluation team graded clubs on three areas: activity level, participation rate, and implementation process. Activity level was graded on several factors, including the intensity of the activity (light, moderate, vigorous), the degree to which the skill and/or activity lent itself to continued participation beyond the program (lifelong activity), the number of times the activity took place each week (frequency), the length of the activity sessions (duration), and the continuity of the programming (timetable). Participation rate was graded on the reach of the program (attendance rates) and the number of sedentary kids attracted (i.e., youth who had not participated in the past). Implementation process was graded on the club’s ability to run the program (organizational capacity) and the degree to which the youth dictated the activities (kid input). Clubs were not compared to one another, but to a set of standards; each received its own score, derived from the “call for proposals” put forth by Nike in its initial announcement of the available grants.

For scoring purposes, the reports adapted terminology from the game of horseshoes, which was operationalized to match the expectations of NikeGO:

  • Ringer – a program showing evidence of a greater-than-passing grade for activity level and participation rate (including all or nearly all subcategories) and at least a passing grade for implementation process
  • Leaner – a program showing evidence of a greater-than-passing grade for activity level or participation rate (including most subcategories) and at least a passing grade for implementation process
  • Shoe-in-Count – a program showing evidence of a passing grade for at least one subcategory from each of the three areas
  • Almost – a program showing weak evidence of passing in at least one of the three areas
  • Shoe-out-of-Count – a program showing no evidence of passing in any of the three areas
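The five horseshoe categories above amount to a decision procedure over the three graded areas. The sketch below is a hypothetical illustration of that logic, not the actual PEANUT rubric: the real grade scales and subcategory weights are not published in this article, so the grade values and the "all or nearly all" / "most" thresholds are assumptions.

```python
# Hypothetical grade levels; the actual PEANUT scales are not given in the article.
PASS, ABOVE = 1, 2  # passing, greater-than-passing (0 = not passing)

def score(activity, participation, implementation):
    """Classify a program. Each argument maps subcategory name -> grade."""
    def nearly_all_above(area):
        # greater-than-passing for all or nearly all subcategories (assumed: all but one)
        above = sum(1 for g in area.values() if g >= ABOVE)
        return above > 0 and above >= len(area) - 1

    def most_above(area):
        # greater-than-passing for most subcategories (assumed: more than half)
        return sum(1 for g in area.values() if g >= ABOVE) > len(area) / 2

    def some_pass(area):
        return any(g >= PASS for g in area.values())

    impl_pass = all(g >= PASS for g in implementation.values())

    if nearly_all_above(activity) and nearly_all_above(participation) and impl_pass:
        return "Ringer"
    if (most_above(activity) or most_above(participation)) and impl_pass:
        return "Leaner"
    if all(some_pass(a) for a in (activity, participation, implementation)):
        return "Shoe-in-Count"
    if any(some_pass(a) for a in (activity, participation, implementation)):
        return "Almost"
    return "Shoe-out-of-Count"
```

For example, a club graded above passing on every activity and participation subcategory, with passing implementation grades, would classify as a Ringer, while a club passing in only one area would fall to Almost.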

The Lesson
Evaluation results were shared with stakeholders at an annual meeting in Florida last spring. While the summaries were expressed in simple “horseshoe” terminology that was fun and easy to understand, Nike was given a more detailed report to help improve the quality of its program nationally. NikeGO is now entering an expanded second phase based largely on the first round of evaluation findings.

As for PEANUT, evaluators have furthered the use of technology. Web-based surveys have largely replaced interviews with club administrators, and the process of synthesizing the collected data has become partially automated.

For more information about the NikeGO evaluation, visit the HFRP Out-of-School Time Program Evaluation Database.

Sarah Levin Martin, Ph.D.
Centers for Disease Control and Prevention
Division of Nutrition and Physical Activity
4770 Buford Highway, NE
Mailstop K-46
Atlanta, GA 30341
Tel: 770-488-5413
Email: sjl2@cdc.gov

© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project