


An in-depth look at the challenges presented by the evaluation of the Early Head Start program, an evaluation that required the cooperation of multiple layers of research and program partners.

Conducting research with program partners and multiple levels of research partners represents a relatively new paradigm in evaluative research. The impetus for such a paradigm arises from the need to provide more usable and timely information on complex, multidimensional, multigenerational, community-responsive programs. This new model strives to overcome the shortcomings of traditional evaluation approaches, which require program interventions to be highly standardized across individuals and sites. It is difficult to apply traditional designs to national programs that are deliberately flexible enough to fit the needs of diverse families and a variety of communities. This article examines some of the benefits of research-program partnerships, and creative responses to their challenges, in the context of a large-scale evaluation.

The Structure of the Partnerships
The Early Head Start program is an initiative to serve low-income pregnant women and families with infants and toddlers. After its establishment in 1994, the Administration on Children, Youth, and Families (ACYF) embarked on the national Early Head Start Research and Evaluation Project. ACYF designed this project to encompass five major research and evaluation components: 1) a rigorous, experimental cross-site impact study¹, 2) an assessment of program implementation, 3) local research on the mediators² and moderators³ of program impacts, 4) policy studies on child care, fathers, and children’s health, and 5) strategies for continuous program improvement.

Partnership has been central to the design of this research and evaluation. The study was conducted by the Early Head Start Consortium, which consists of:


  • A national contractor, Mathematica Policy Research, Inc. (MPR), and its subcontractor, Columbia University’s Center for Children and Families at Teachers College
  • 15 university-based teams of local researchers who both conduct local research and subcontract with MPR to collect national data
  • The central funding agency, ACYF, which is part of the U.S. Department of Health and Human Services
  • The 17 participating local Early Head Start programs

Related Resources

Professor Edward Melhuish of the Institute for the Study of Children, Families, and Social Issues leads the six-year National Evaluation of Sure Start, England’s early intervention initiative for children from birth to four years, their families, and communities. The evaluation consists of five modules: implementation, impact, local context analysis, cost-effectiveness evaluations, and support for local program evaluations. The evaluation’s website features information about each of the modules and presents frequent updates about the progress of the evaluation.

Knitzer, J., & Adely, F. (2001). Learning from Starting Points: Findings from the Starting Points Assessment Project. New York, NY: National Center for Children in Poverty. This evaluation of a Carnegie-funded systems change initiative to improve the well-being of young children examines the initiative’s strategies, successes, and challenges in four cities and seven states.

Racine, D. (Ed.). (2001). Identifying, Replicating and Improving Successful Early Childhood Programs: A Conference for Funders. Philadelphia, PA: Replication and Program Strategies, Inc. This publication contains the papers and proceedings of a conference about advancing the early childhood field and the crucial role of evaluation in replication efforts. The report points to the need to strengthen the market for knowledge within the early childhood field.

The Benefits of the Partnerships
Each level of partnership had a special function and contributed to the synergy of the entire consortium.

The national evaluation and national program partnership made certain that the research informed the national training and technical assistance efforts and that the evaluation met the needs of the national office. Specifically, it ensured that: 1) the measurement of implementation fit well with the intended program process, 2) the selected outcomes for measurement were those that the program intended to affect, and 3) the national program would obtain data on program implementation early in the program’s development.

The national evaluation and local program partnerships created trusting, committed, and open relationships that contributed to successfully implementing the random assignment of families to the study. These relationships also helped maintain families’ participation in the study, enabled the national team to collect good implementation study data, and facilitated increased research sophistication among program staff.

The national evaluation and local research partnerships enabled local research teams to collect unique local data that would complement and supplement the national contractor’s cross-site data. These partnerships led to collaboration among local research teams and the addition of local perspectives on national questions. Local researchers provided substantive expertise that no single national evaluation contractor could provide, while the national contractor added dimensions of continuity and synthesis that a collection of local studies could not have achieved.

Local research and local program partnerships enabled local research teams to better address the local programs’ unique interests. The local programs used the research data to help them meet their continuous improvement goals.

Meeting the Challenges of a New Partnership Paradigm
Any new evaluation poses a number of challenges, and multiple levels of partnership add more. Often we were able to address these challenges by resolving the tensions created by the partners’ unique perspectives. We identified four key tensions that existed within our evaluation; our resolutions for each of them brought benefits to the project as a whole.

1. Balancing the rigor required by experimental design with issues raised when families did not participate in the program. The commitment to an experimental design was made by ACYF. Programs and researchers agreed to participate in this design as a condition of funding. Implementing random assignment when families applied to the program posed surprisingly few difficulties. However, many discussions followed about the ethics of analyzing service use and outcomes from families who had left the program. Cutting-edge methodologies made it possible to estimate impacts that accounted for program participation within the experimental design. Researchers in the partnership are continuing to analyze data to learn more about the role of service variation over time.

2. Balancing the traditional question posed by evaluations about whether the "program worked" overall with questions focused on "what worked for whom and under what conditions." Early Head Start programs are designed to respond to community needs and to allow families to select from program options, whether home-based, center-based, or a combination of both. Program families are diverse both across the country and within communities. The program is not restricted to a particular group, such as first-time mothers. The contractor and Consortium carefully designed a system of targeted analyses to address questions of what worked for whom within the framework of the experimental design.

3. Recognizing the contributions and roles of a large number of partners while maintaining confidentiality, unity, and a shared vision. Over 100 persons, representing ACYF, the national study contractors, the 15 local research teams, and the 17 local program partners, made up the partnership. Each type of partner brought its own pressures and perspectives. In 13 meetings over the six-year life of the project, Consortium members worked through the details of their differences. Many offshoot projects and lively discussions ensued from the divergent interests of group members, yet policies were proposed and agreement was reached on confidentiality, publications policies, local-national contributions to reports, representation on presentations, and the timing of returning data to programs for their continuous improvement.

4. Conducting an implementation study that informed program improvement along with an impact study that informed Congress about the value of the investment (ACYF 2001). The program announcement specified the seemingly contradictory intention of conducting a traditional experimental design while providing timely information feedback to the program at both national and local levels. The Consortium addressed this tension by establishing policies for timing the release of data, so that implementation data were available early while impact data were withheld until the 24-month data collection was complete in all sites.

The collaborating entities and individual participants in the Early Head Start Research Consortium have discovered how much effort multi-layered partnerships entail. As strong individual entities, we have had to commit time and energy to our mutual goals, acknowledge and respect the importance of our structure, and be open to change as we continued along the path to developing, disseminating, and using research-based knowledge about effective Early Head Start programs.

Dr. Helen Raikes
The Gallup Organization
300 S. 68th Street Place
Lincoln, NE 68510

John M. Love
Ellen Eliason Kisker
Mathematica Policy Research

Rachel Chazan-Cohen
Louisa B. Tarullo
Judith Jerald
Administration on Children, Youth, and Families

Martha Staker
Judith J. Carta
Project EAGLE, University of Kansas Medical Center, and University of Kansas program-local research partnership


¹ Experimental study designs all share one distinctive element: random assignment to treatment and control groups. Experimental design is the strongest design choice when you are interested in establishing a cause-effect relationship. Experimental designs for evaluation prioritize the impartiality, accuracy, objectivity, and validity of the information generated. These studies look to make causal and generalizable statements about a population or impact on a population by a program or initiative.
² Scientific research generally looks for causal relationships between two types of variables—independent and dependent. Independent variables are those we can manipulate, and dependent variables represent the outcomes being examined. A mediating variable helps explain the relationship between the independent and dependent variables.
³ A moderator variable refers to specific factors that can reduce or increase the influence of an independent variable on a dependent variable. In the social sciences, factors such as income, ethnicity, age, and gender are common examples of moderator variables. While a mediating variable has an explanatory function, i.e., it explains how and why an outcome is manifested, a moderator variable describes the specific conditions under which a relationship between independent and dependent variables is found.
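The subgroup analyses described above (“what worked for whom”) amount to estimating treatment-control differences separately within levels of a moderator variable. The sketch below illustrates the idea with simulated data; the subgroup flag (first-time mothers, mentioned in the article as an example group), the effect sizes, and the sample size are all hypothetical and are not drawn from the Early Head Start data.

```python
import random

random.seed(42)

def simulate(n=10000):
    """Simulate a moderated program effect: random assignment to
    treatment, plus a subgroup flag (the moderator) that changes
    the size of the treatment effect. All effect sizes are made up."""
    rows = []
    for _ in range(n):
        treated = random.random() < 0.5       # random assignment
        first_time = random.random() < 0.5    # moderator: subgroup flag
        effect = 2.0 if first_time else 0.5   # effect depends on subgroup
        outcome = random.gauss(10.0, 1.0) + (effect if treated else 0.0)
        rows.append((treated, first_time, outcome))
    return rows

def impact(rows, subgroup):
    """Mean treatment-control difference within one subgroup."""
    t = [y for tr, g, y in rows if tr and g == subgroup]
    c = [y for tr, g, y in rows if not tr and g == subgroup]
    return sum(t) / len(t) - sum(c) / len(c)

rows = simulate()
print(f"estimated impact, first-time mothers: {impact(rows, True):.2f}")
print(f"estimated impact, other families:     {impact(rows, False):.2f}")
```

An overall impact estimate would average these two subgroup effects together; splitting by the moderator reveals that the (simulated) program works much better for one group than the other, which is exactly the kind of question the targeted analyses were designed to answer.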

The work reported here is based on research conducted by the Early Head Start Research Consortium, as part of the national Early Head Start Research and Evaluation Project, funded by the Administration on Children, Youth, and Families (ACYF), U.S. Department of Health and Human Services, under contract 105-95-1936 to Mathematica Policy Research (MPR), Princeton, NJ. For more information, visit the ACYF website and the MPR website.

Administration on Children, Youth, and Families. (2001). Building Their Futures: How Early Head Start Programs Are Enhancing the Lives of Infants and Toddlers in Low-Income Families. Washington, DC: U.S. Department of Health and Human Services.

Administration on Children, Youth, and Families. (1999). Leading the Way: Characteristics and Early Experiences of Selected Early Head Start Programs. Executive Summary, Volumes I, II, and III. Washington, DC: U.S. Department of Health and Human Services.


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project