


Jack Shonkoff, from the Heller School for Social Policy and Management at Brandeis University, reflects on the difficulties created by the highly politicized environment of program evaluation.

Early childhood programs have been a part of the nation's social policy landscape for decades. Beginning with the establishment of Head Start and the Handicapped Children's Early Education Program in the 1960s and extending into the debates over early care and education in the 2000s, the call for public investment has been impassioned and the demand for accountability has been persistent.

A Historical Perspective
Looking back over the past 40 years, it might be useful to think about the early childhood evaluation enterprise as a succession of three stages. The first could be named “Don't just stand there, do something!” as the compelling nature of the needs focused attention on the urgency of action rather than the value of research.

The second stage could be labeled “Don't just do something, stand there!” During this period, thoughtful leadership began to focus on the need to step back and reflect on a core set of important questions about why programs do what they do and what they are trying to accomplish. When they were done well, these efforts resulted in the construction, testing, and ongoing refinement of highly useful theories of change.¹

In 2000 a comprehensive report was released by the National Research Council and Institute of Medicine of the National Academy of Sciences entitled From Neurons to Neighborhoods: The Science of Early Childhood Development. Based on a critical analysis of extensive research, the committee identified the essential features of effective programs. These include:

  • Individualized service delivery
  • High quality program implementation
  • Appropriate knowledge and skills of service providers
  • Positive relationships between parents and professionals

The report concluded the following:

The general question of whether early childhood programs can make a difference has been asked and answered in the affirmative innumerable times. This generic query is no longer worthy of further investigation. The central research priority for the early childhood field is to address more important sets of questions about how different types of interventions influence specific outcomes for children and families who face differential opportunities and vulnerabilities. (p. 379)

This agenda defines the parameters of the third stage in the evolution of early childhood evaluation: “How do we know what's really making a difference?” This is where the rubber truly needs to hit the road. This is the time to ask and answer the tough questions about what kinds of services have what kinds of impacts, on what kinds of children, in what kinds of families, under what kinds of circumstances, and at what cost.

The Current Challenge
Currently we should be well into the third stage. The problem is that many of the most important questions are very hard to answer. Much has been written about the difficulties of evaluating early childhood programs. The limitations of non-experimental and quasi-experimental designs have been well described; the imperative of randomized, controlled studies to answer causal questions has been hammered home again and again. The logistical and financial barriers that must be scaled to successfully conduct high quality longitudinal studies are legendary. The ethical concerns about random assignment of vulnerable children and their families to “no treatment” control groups have been debated endlessly. When all is said and done, however, the most vexing obstacles to truly informative evaluation research may well be less a matter of science and more a matter of politics.

The Politicized Context of Program Evaluation
In order to understand the complex politics of early childhood evaluation research, it is necessary to recognize the difference between two very different pursuits of knowledge: knowledge for understanding and knowledge for advocacy.

Knowledge for understanding is typically referred to as scholarship or science. Its primary purpose in the early childhood arena is to disentangle the complicated dynamics of human development and elucidate the multiple influences on selected outcomes. Generally speaking, this type of research is a fascinating but relatively low-stakes enterprise that is engaged in an impartial search for “truth.” In its purest form, it is cautious, conservative, and focused on what we don't know.

Related Resource

Research Connections, a collaboration between the National Center for Children in Poverty, the Child Care Bureau, the U.S. Department of Health and Human Services, and the Inter-University Consortium for Political and Social Research, promotes high quality research in child care and early education and the use of that research to inform policymaking. The collaboration connects the child care and early education communities, offers research, data sets, and syntheses from multiple disciplines to integrate the early childhood field, and makes evidence readily accessible to policymakers.

Knowledge for advocacy is what some people call lobbying. Its primary aim is to use data to influence the formulation of a particular policy or the delivery of a specific service. In most circumstances, this type of pursuit is a challenging and relatively high-stakes enterprise that is engaged in a dedicated campaign to prove a point. In its most common form, it is bold, assertive, and focused on how much we do know.

The difference between these two agendas strikes at the heart of the dilemma facing the early childhood arena. In theory, objective and independent knowledge for understanding would be the lifeblood of the field. In the best of all worlds, it would be welcomed by providers, recipients, and funders of services as a major source of new ideas about how to promote the healthy development of young children. Toward this end, the field would move beyond documenting success and would direct attention toward interventions that appear to be least effective in order to generate alternative strategies that can be tried, assessed, and refined continuously over time.

In reality, a great deal of evaluation research seeks knowledge for advocacy. This is encountered most commonly when evaluation is linked directly to decisions about core program funding, particularly when the results are used to determine whether or not a program should continue to exist. Under such circumstances, the rational strategy for any service provider is to assure that data are generated and presented in a way that demonstrates a program's success rather than questions its impact. Conversely, the agenda for an opponent of the program is to demonstrate its failure.

When program evaluation is conducted in a high-stakes political environment, reflective thinking is minimized, the status quo is reinforced, and critical thinking is stifled. Studies that demonstrate positive impacts are more likely to be disseminated, those that show nonsignificant effects frequently end up in a file drawer, and honest attempts to generate constructive criticism in the service of improving quality are seriously undermined. A field that shines a bright light on its failures and searches for lessons to be learned is more likely to remain healthy and grow. A field that focuses exclusively on its accomplishments and buries its shortcomings has a less promising future.

Stated simply, there must be a fundamental change in the culture of program evaluation that creates a safe environment for honest investigation and redefines what we mean by “positive” and “negative” findings. In a better world, a positive result would be defined as an insight or conclusion that advances our knowledge, not a finding that simply affirms what we are already doing. In a similar spirit, a negative result of an evaluation would be less about exposing failure and more about the disappointment of not learning anything new.

The primary responsibility for overcoming this political burden should not be placed on the backs of service providers and their research partners. These individuals are simply responding to the pragmatic pressures coming from their funders. The ultimate solution to this dilemma lies squarely in the laps of those who link core funding decisions explicitly to the production of positive evaluation results rather than rewarding serious self-criticism in the service of continuous improvement.

Creating a True Learning Environment
As with children, the healthy development of the early childhood field requires a safe and nurturing environment that provides opportunities for exploration, builds on previous experiences, promotes judicious risk taking, and learns from mistakes. This kind of environment would promote a spirit of collaboration and partnership among parents, service providers, evaluators, and funders. And it would lead to broad-based agreement about the need for unbiased and honest investigation to learn as much as possible about how to get the highest achievable return on the investment of finite resources in high quality programs that are well implemented by skilled providers.

Finally, it is essential that the dissemination of research be viewed as a science unto itself. Simply publishing findings is not sufficient to influence policymaking, service delivery, and the continuing growth of the early childhood field. New knowledge must be framed strategically to be accessible to its intended target audiences, particularly if the goal is to communicate with multiple and diverse stakeholders.

In 2003 a multidisciplinary group of leading scientists in neurobiology, early childhood development, and communications research established the National Scientific Council on the Developing Child to address this challenge. Its mission is to serve as a credible knowledge broker for multiple audiences by bringing sound and accurate science to bear on public decision making that affects the health and development of young children.

Shonkoff, J. P., & Phillips, D. A. (Eds.). (2000). From neurons to neighborhoods: The science of early childhood development. Washington, DC: National Academy Press.

¹ A program’s theory of change explicitly articulates how the activities provided lead to the desired results by linking service receipt to intermediate changes (such as changes in attitudes or in-program behaviors) to short- and longer-term outcomes in participants.

Jack P. Shonkoff, M.D.
The Heller School for Social Policy and Management
Brandeis University
P.O. Box 549110, MS 035
Waltham, MA 02454-9110
Tel: 781-736-3883


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project