
Program Description

Overview Voyager Expanded Learning, a company based in Dallas, Texas, offers in-school, after-school, and summer programs to more than 750,000 children across the country each year. The Voyager Summer Program provides summer school for kindergarten through eighth-grade students who are struggling academically and require additional support. Voyager's mission is to focus the experience and resources of its founders, board members, and staff on helping public schools ensure that every child has a successful educational experience and that no child is left behind. One main goal of the company is to close the achievement gap. The summer program aims to prevent summer learning loss, particularly among disadvantaged urban children. The company partners with organizations such as the Smithsonian Institution and the Discovery Channel to develop its curricula and to apply the latest instructional technology, such as video and online resources.
Start Date 1994
Scope national
Type summer/vacation
Location urban, suburban
Setting public schools
Participants kindergarten through middle school students
Number of Sites/Grantees over 1,000 districts in 46 states
Number Served 400,000 students during the summers of 1999 and 2000
Components The Voyager Summer Program provides an 80-hour, four-week summer intervention. The curricula are organized so that children learn by exploring, role-playing, and engaging in activities designed to be fun and interesting. The reading curricula engage students with intriguing content in an adventure format while improving their language arts proficiency. Voyager's core curriculum focuses on reading intervention, with the goal of closing the achievement gap; the reading program is a strategic intervention designed for struggling readers. Voyager offers a number of different curricula for each of the target age groups served by the program.
Funding Level $82 per child
Funding Sources Title I, Teacher & Principal Training (Formerly Eisenhower & Class Size Reduction), Innovative Programs, Special Education, Reading First, Early Reading First, Comprehensive School Reform, 21st Century Community Learning Centers grants, Grants to Eligible Partnerships for Professional Development, Fund for Improvement of Education.


Evaluation

Overview The evaluation sought to determine the effect of the Voyager Summer Program during the summer of 2000 on the reading skills of program participants, as well as students', teachers', principals', and parents' perceptions of the program.
Evaluator Greg Roberts, Ph.D., Texas Center for Reading and Language Arts, The University of Texas at Austin
Evaluations Profiled Technical Evaluation Report on the Impact of Voyager Summer Reading Interventions
Evaluations Planned Evaluations of each program are provided annually. A number of districts provide additional independent evaluations.
Report Availability Roberts, G. (2000). Technical evaluation report on the impact of Voyager summer reading interventions. Austin, TX: The University of Texas at Austin.

Available at: www.voyagerlearning.com/difference/publications.jsp


Contacts

Evaluation Greg Roberts
Nursing School
Campus Mail Code: D0100
University of Texas
Austin, TX 78712
Tel: 512-471-9911
Email: gregroberts@mail.utexas.edu
Program Dr. Jeri Nowakowski
Senior Vice President
Voyager Expanded Learning
1125 Longpoint Avenue
Dallas, TX 75247
Tel: 214-932-3250
Email: brawlinson@iamvoyager.com
Profile Updated January 16, 2003

Evaluation: Technical Evaluation Report on the Impact of Voyager Summer Reading Interventions



Evaluation Description

Evaluation Purpose To determine the impact of the summer 2000 Voyager Summer Program on the reading skills of participating students and to assess the reactions of students, teachers, and parents to the Voyager Summer Program.
Evaluation Design Quasi-Experimental and Non-Experimental: Data were collected from schools across the country that were implementing the Voyager Summer Program, with two main samples providing outcome data for the evaluation. The first sample consisted of schools across the country, enrolling a total of 325 students, that were selected to submit pretest and posttest data for the Stanford Diagnostic Reading Test IV (SDRT-IV). These schools were chosen based on the following criteria: anticipated willingness to participate in the evaluation, expected level of program implementation, representation of the socioeconomic diversity of Voyager students, and representation across Voyager's three TimeWarp programs. Identified schools were then asked to participate, and those agreeing to do so constituted the first sample. Some students attending similar Voyager programs at similar schools also provided survey data. The second main sample consisted of 9,000 Voyager summer students from Washington, DC and New York City for whom pretest and posttest data existed on Voyager-developed skill tests in reading and math.
Data Collection Methods Surveys/Questionnaires: Two groups of Voyager students were surveyed, along with their parents, teachers, and principals. The first group consisted of the SDRT-IV sample (n=193), and the second consisted of students in similar Voyager programs at similar schools who were not part of the SDRT-IV sample (n=67). Some students (n=72) were also surveyed whom the evaluators could not identify as belonging to either the SDRT sample or the non-SDRT sample; they are referred to as the “unknown group.” The surveys assessed students' satisfaction with the program, parents' (n=121) assessments of whether the program had affected their children's learning, principals' (n=4) satisfaction with the program, and teachers' (n=26) satisfaction with the curricula as well as their assessments of the program's impact on children's learning.

Tests/Assessments: The evaluation made use of two types of tests: the SDRT-IV and the Voyager-developed skill tests for the two main samples of children included in the evaluation. Three hundred twenty-five students took the SDRT-IV, a test designed to diagnose students' strengths and weaknesses in the major components of reading. Three levels of the SDRT-IV were used: students in the TimeWarp Egypt program took the Orange level (grades 2.5–3.5), students in the TimeWarp Greece program took the Purple level (grades 4.5–6.5), and students in the TimeWarp The Americas program took the Blue level (grades 9–13).

Voyager participants in Washington, DC and New York City took the Voyager-developed skill tests. In Washington, DC, students enrolled in the TimeWarp Egypt, TimeWarp Greece, and TimeWarp The Americas programs took the respective TimeWarp Egypt, TimeWarp Greece, and TimeWarp The Americas language arts tests. In New York City, the assessments used were the Pre+Med Community Hospital Program (grades 1–3) tests for reading and math and the Pre+Med Emergency Room (grades 4–6) tests for reading and math. The pretests were given just before students began the Voyager Summer Program, and the posttests were given just after they completed it.
Data Collection Timeframe Data were collected during the summer of 2000.


Findings:
Formative/Process Findings

Activity Implementation Ninety percent of surveyed students indicated that they would like to know more about their Voyager adventure.
Recruitment/Participation The New York City sample of students had a program attendance rate of 89–90%, and attendance did not vary significantly by program (Pre+Med Community Hospital vs. Pre+Med Emergency Room).
Satisfaction Ninety-two percent of surveyed students liked being part of the Voyager team and 95% liked the Voyager learning stations.

Ninety-four percent of surveyed students felt that Voyager would help them do better at school.

Surveyed students from the non-SDRT sample liked the guest speakers more than students from the SDRT sample.

The majority of surveyed parents felt that Voyager was exciting for their child, of higher quality than other summer programs, and a factor in increasing their child's learning.

Surveyed parents expressed a willingness to recommend the Voyager program to other parents.

Parent surveys from the “unknown” sample yielded higher rates of satisfaction with the program on a few survey items including willingness to recommend the program to others, perception that their child would benefit from another Voyager adventure, and belief that Voyager learning will carry over into the classroom.

Voyager teachers who completed a survey were satisfied with the program's effectiveness on the whole. Eighty-five percent of surveyed teachers reported that Voyager made a difference in their students' learning while 92% felt that Voyager increased students' interest in learning.

Ninety-two percent of surveyed Voyager teachers felt that the Voyager materials were engaging and interest-building for students.

Ninety-two percent of surveyed Voyager teachers believed that the use of a single theme throughout the program was a factor in students' improved ability levels, that the Curriculum Guides were well-designed and easy to use, and that the program contained content that was appropriately challenging.

Surveyed principals reported being satisfied with the Voyager program.


Summative/Outcome Findings

Academic Voyager students in the SDRT-IV sample showed an average program effect size (ES) of .42. The effect size was largest for the TimeWarp The Americas group (.55), followed by TimeWarp Egypt (.41) and TimeWarp Greece (.26). (An effect size is defined by the evaluator as a standardized measure of change in test scores from pretest to posttest.)
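
The report summary does not reproduce the evaluator's formula. A minimal sketch, assuming a standardized mean-change (Cohen's d-style) calculation against the pooled pretest/posttest standard deviation, would be:

\[ ES = \frac{\bar{X}_{post} - \bar{X}_{pre}}{SD_{pooled}}, \qquad SD_{pooled} = \sqrt{\tfrac{1}{2}\left(SD_{pre}^{2} + SD_{post}^{2}\right)} \]

Read this way, the overall ES of .42 means the average posttest score sat roughly four tenths of a pooled standard deviation above the average pretest score.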

The average effect sizes for Voyager students in the SDRT-IV sample on each of the components of the test were as follows: .32 for phonics, .30 for vocabulary, .29 for comprehension, and .45 for scanning.

There was a wide range of effect sizes for Voyager students in the SDRT-IV sample in the 13 participating Voyager programs. One school had an effect size as low as .05, while another had an effect size of .97.

Normal curve equivalent (NCE) scores for Voyager students in the SDRT-IV sample increased by an average of .32 of a standard deviation from pretest to posttest. The average NCE gains for TimeWarp Egypt, TimeWarp Greece, and TimeWarp The Americas were 8 points, 5 points, and 7 points, respectively.
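
As context for these figures, the normal curve equivalent is conventionally a normalized score with a mean of 50 and a standard deviation of about 21.06, so NCE-point gains can be translated into standard-deviation units (assuming that conventional scaling):

\[ NCE = 50 + 21.06\,z \quad\Rightarrow\quad \Delta z \approx \frac{\Delta NCE}{21.06}, \qquad \frac{7 \text{ NCE points}}{21.06} \approx 0.33 \text{ SD} \]

which is consistent with the reported average gain of roughly one third of a standard deviation.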

Average NCE improvements for Voyager students in the SDRT-IV sample were between 22% and 32% for all test subparts including phonics, vocabulary, comprehension, and scanning.

A large proportion of students taking the Orange level (grades 2.5–3.5) SDRT-IV test moved from below-average standing at pretest to average or above-average standing at posttest in phonics (between 4 and 19 percentage points more students at average or above-average standing at posttest across the various measures of phonics), vocabulary (7–14 percentage points), and comprehension (2–16 percentage points).

A moderate proportion of students taking the Purple level (grades 4.5–6.5) SDRT-IV test moved from below-average standing at pretest to average or above-average standing at posttest in vocabulary (between 1 and 7 percentage points more students at average or above-average standing at posttest across the various measures of vocabulary) and comprehension (1–8 percentage points).

The New York City sample of 3,261 matched pairs of reading pretest and posttest data and 3,275 matched pairs of math pretest and posttest data revealed that, on average, students answered more test items correctly at posttest than at pretest. For reading, students in grades 1–3 enrolled in the Pre+Med Community Hospital program gained 1.2 correct items on their 35-item test, and students in grades 4–6 enrolled in the Pre+Med Emergency Room program gained 2.5 correct items on their 40-item test. For math, the Community Hospital students (grades 1–3) gained 2.8 correct items from pretest to posttest on their 35-item test, while the Emergency Room students (grades 4–6) gained 2 correct items on their 40-item test.

The New York City sample also showed improved performance from pretest to posttest in both reading and math. The Community Hospital students (grades 1–3) went from 82% correct on the reading pretest to 85% correct on the reading posttest, while the Emergency Room students (grades 4–6) improved from 68% correct at pretest to 74% correct at posttest in reading. The Community Hospital students improved from 65% to 73% correct in math; the Emergency Room students went from 46% correct on the math pretest to 51% correct on the posttest.
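
These percent-correct figures line up with the item-level gains reported above; as a quick arithmetic check on the grades 1–3 reading result, for example:

\[ \frac{1.2 \text{ items gained}}{35 \text{ items}} \approx 3.4 \text{ percentage points} \approx 85\% - 82\% \]

and similarly for grades 4–6 reading, 2.5 items out of 40 is about 6 percentage points, matching the move from 68% to 74% correct.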

Using the New York City sample, the average effect size for reading was .29, while the average effect size for math was .36. Both effect sizes are notable in that they reflect approximately one third of a standard deviation of growth from pretest to posttest.

For the New York City sample, students who attended the programs more than 90% of the time displayed larger effect sizes in both reading and math across both programs. In reading, the effect sizes for the higher attending groups (>90% attendance) were .34 and .43 for the Community Hospital and Emergency Room students, respectively. In math, the effect sizes were .47 and .35 for the Community Hospital and Emergency Room students, respectively.

For the Washington, DC sample of 6,573 matched pairs of pretest and posttest data on the TimeWarp language arts tests, the average number of correct items gained from pretest to posttest was 2.1. (For grades 1–3, the test had 35 items; for grades 4 and up, the test had 40 items.) Specifically, for the TimeWarp Egypt and TimeWarp Greece programs, the average number of correct items increased from 15.5 to 17.7. For TimeWarp The Americas, the average number of correct items increased from 18.9 to 20.5.

For the Washington, DC sample, the average effect sizes of the programs were .43 for TimeWarp Egypt/Greece and .26 for TimeWarp The Americas.

For the Washington, DC sample, the correct-response rate increased on average from 62% at pretest to 70% at posttest. It increased 9 percentage points for students in TimeWarp Egypt and TimeWarp Greece and 5 percentage points for students in TimeWarp The Americas. Sixty-seven percent of students in TimeWarp Egypt and TimeWarp Greece improved their test scores from pretest to posttest, while 60% of TimeWarp The Americas students did so.

 

© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project