


This special report offers commentaries from experts on the challenges and opportunities presented by the current federal policy’s emphasis on scientifically based research for the practice and evaluation of education reform.

As described on the U.S. Department of Education’s website, the No Child Left Behind Act (NCLB), signed into law by President Bush in January of 2002, signifies “a new era in education.”¹ One of the central tenets of NCLB is the use of scientifically based research (SBR) as a basis for targeting federal funds for education programs. Aspects of scientific quality prioritized in the legislation include: experimental control (or comparison) groups, replication of results through multiple studies, an ability to generalize results, rigorous standards especially via peer review, and convergence of results between studies.

The federal policy emphasis on SBR, and particularly on experimental studies, is a watershed in education research and evaluation. It is a policy development whose obstacles, opportunities, and implications for the field deserve careful consideration and open dialogue. It is in this spirit that HFRP sought commentary from seven experts in education research and evaluation to address the following question:

What are the critical challenges and opportunities of scientifically based research for those concerned with the practice and evaluation of education reform efforts?

Howard Gardner²
Professor, Human Development and Psychology
Harvard University Graduate School of Education
Cambridge, Massachusetts
Experts concur that American educational research is deficient; indeed, some imply that it bears the same tenuous relation to “real research” as “military justice” does to “real justice.” And, at least on the political front, a solution seems clear. Educational research ought to take its model from medical research—specifically, the vaunted National Institutes of Health (NIH) model. On this analysis, the more rapidly we can institute randomized trials—the so-called “gold standard” of research involving human subjects—the sooner we will be able to make genuine progress in our understanding of schooling and education.

Perhaps, but perhaps not. Minds are not the same as bodies; schools are not the same as home or workplace; children cannot legitimately be assigned to or shuttled from one “condition” to another the way that agricultural seeds are planted or transplanted in different soils. It is appropriate to step back, to determine whether educational research is needed at all, whether it should be distinguished in any way from other scholarly research, what questions it might address, what are the principal ways in which it has worked thus far, and how it might proceed more effectively in the future.

If I had average means, but flexibility in where I lived, I would send my infant to day care in France, my preschooler to the toddler centers in Reggio Emilia, Italy, my elementary schoolchild to class in Japan, my high schooler to gymnasium in Germany or Hungary, and my 18-year-old to college or university in the United States.

What is striking is that none of these good schools is based in any rigorous sense on educational research of the sort being called for by pundits. Rather, they are based on practices that have evolved over long periods of time. Often, these practices are finely honed by groups of teachers who have worked together for many years, trying out mini-experiments, reflecting on the results, critiquing one another, co-teaching, visiting other schools to observe, and the like. In the past—indeed, in the present—much of the best school practice has been based on such seat-of-the-pants observations, reflections, and informal experimentation. Perhaps we need to be doing more of this rather than less; perhaps, in fact, research dollars might be better spent on setting up teacher study groups or mini-sabbaticals, rather than on NIH-style field-initiated or targeted-grant competitions.

Barbara Schneider
Professor, Department of Sociology, University of Chicago
Chicago, Illinois
The recent passage of the No Child Left Behind Act places the science of research in education at center stage. Its emphasis on promoting a solid scientific research base for how education studies should be conducted may lead to stronger designs and dispel impressions that most educational research is of poor quality.

One of the directives in the current legislation is the call for more clinical trials that use random assignment. Yet the use of this methodology, often viewed as the gold standard of medical and social science research, has been challenged by some education scholars as impractical and inappropriate for the complex and transient environments of classrooms and schools. There is also the concern that education is not medicine, and educational research, no matter how it is designed, is unlikely to find the cure for all ailments.

If all education studies were required to follow the random assignment model, it would undoubtedly restrict possibilities for discovery and experimentation, as particular questions cannot be investigated within this paradigm. However, this is a critical period for educational research, as considerable resources have been expended on promising reforms. Many initiatives, such as Accelerated Schools and Success for All,³ incorporate the best, up-to-date evidence on what works in schools for helping students learn. While evaluations of these programs are limited and the designs to test their effectiveness are often flawed, one cannot view legislation that calls for SBR as a threat to the credibility of these and other programs. Instead, we should consider employing rigorous designs under stringent conditions, an effort that might result in findings that are meaningful and replicable.

The current research emphasis on random assignment offers educational researchers a leadership role in designing studies that deal with the complexity of sample selection and attrition, cooperation, contagion, and other issues. Educational research has a tradition of producing significant contributions to methodology and statistics. Solving the problems that plague studies in education and other disciplines would again position education at the forefront of methodological research. The current legislation provides opportunities for enhancing the quality of educational research and building capacity in the profession. It is an opportunity worth taking.

Robert Boruch
Professor, Psychology in Education Division
University of Pennsylvania Graduate School of Education
Philadelphia, Pennsylvania
The challenges facing evaluation of school-based reform are similar to those encountered in efforts to change other kinds of institutions and organizations, such as changing hospital practices or the delivery of health care in medical units, deploying programs throughout entire housing projects to improve residents’ capacity to get jobs, and revising approaches to reduce crime. All involve entities that have been the units of random assignment and analysis in what are called place randomized trials, cluster randomized trials, and group randomized trials. These trials are the scientific basis for estimating the relative effects of interventions deployed system-wide.

The common challenges across these settings include:

  • Identifying entities that are ready to change and to participate in a trial that produces evidence on the effects of the change. Readiness includes being educated about trials and assuring that there are incentives to participate in them. The task also requires patience because learning what works better takes time.
  • Determining how to deploy the intervention in multiple schools, hospitals, and villages because the intervention, though it must be uniform in some respects, often has to be tailored to suit the setting. It includes developing systems to train people, such as teachers, physicians and nurses, and beat cops and their superior officers, who need to know what to do and be reinforced in the learning.
  • Learning how to monitor the deployment of the intervention and assure its fidelity or integrity at a reasonable cost. In addition, combining numerical and narrative evaluation methods has been a major subchallenge on account of intellectual provincialism in the universities.
  • Trying to assure that randomized trials are used where appropriate to estimate relative effects, as well as to counter ignorant claims that randomized trials cannot be done or that nonrandomized approaches will produce unbiased estimates of effect. Such claims about nonrandomized approaches are sustainable only if one is willing to make often-heroic assumptions.

Partly as a prophylactic to naïve claims, the Campbell Collaboration has developed an electronic register of randomized and possibly randomized trials that is accessible to the public. (See box below.) There are about 11,000 references including about 200 on place-randomized trials.

Information on solutions to such challenges is also implicit in the experience of organizations that run such trials in the social and health sectors. The US Department of Education’s new Institute of Education Sciences (IES) has emphasized the use of randomized trials when the research aim is causal inference. Moreover, IES has assured through program announcements that resources are made available to support such trials.

Jennifer Greene
Professor, Department of Educational Psychology
University of Illinois at Urbana-Champaign
Champaign, Illinois
Education is among humankind’s highest and most demanding callings. It involves sharpening the analytic mind, cultivating the creative imagination, nourishing the developing persona, and comforting the social conscience. Education is a complex social practice that invokes scientific/technical, aesthetic/humanistic, and moral/ethical demands on our theories, our resources, and our capabilities. It is contextual, dynamic, and value-engaged. Given this complexity, no one particular lens on human endeavors can meaningfully capture and represent what is “good” or “high-quality” teaching and learning.

Yet, scientifically based research claims to do just that. It aims to restrict what counts as valid knowledge about educational matters to just one narrow lens, a lens that privileges a technical perspective on “effective” educational practices and disavows humanistic, aesthetic, and moral perspectives about good and meaningful education. In accordance with this technical perspective, important knowledge is limited to what kinds of educational materials, curricula, and teaching strategies cause “good” learning for the average student.

This is an important issue to consider. If there is a benefit to the contemporary demands for SBR, it is the intent to elevate the stature and importance of educational research and evaluation within the government and the society at large by offering a set of guiding research principles. These, in turn, advance the renewal of a strong scientific culture of open and constructive critique as the surest safeguard against bias and insufficient claims to know something about the educational world.

Yet, knowing what is “effective” for the average is but a small piece of the education puzzle. Left out are understandings of the quality of the learning experiences themselves, their potential for developing the human spirit as well as the human mind, and their connections to human pathos, community, and morality. In this pluralistic era, multiple frameworks for educational research and evaluation are well accepted and they are all vitally needed to refocus our national commitments on ensuring high-quality and equitable educational opportunities for all of our children. The real danger of “scientifically based research” is that its perceived rigor would still allow for the continued and sanctioned neglect of those who have been left out for generations.

William A. Morrill
Senior Fellow, Caliber Associates
Fairfax, Virginia
The emphasis on SBR in education represents a substantial opportunity with a number of challenges embedded in the road to its realization. This view is built on four points:

  1. National resources supporting educational research and its application are embarrassingly inadequate, but not fixed. Clarity of purpose, soundness of methodology, and a strategy with priorities can help to expand resources.
  2. The quality of the research is crucial and must start with real-world questions derived from educational settings. All questions require a clear understanding of what, in fact, is known and a strategy for getting at the additional knowledge that is needed. Different methods are appropriate for different questions, but for causal impact questions, high-confidence answers require the best we know how to do, and that, in turn, requires random trials or quasi-experimental studies in combination with high-quality implementation studies. In the lifecycle of research on particular interventions, there is a logical sequence from small-scale to full-scale causal inference studies, unless the preliminary work has been done. It is also important that the research portfolio include studies of how students learn particular curricula, when basic questions about such learning are yet to be answered.
  3. The involvement of educators in research is increasingly recognized as necessary to ground the research in operational realities and increase its relevance and utility. Learning communities are an example of techniques for achieving this goal, as they strengthen both the research and the capacity of educational institutions to internalize new knowledge. Furthermore, the value and use of research has been missing for too long from the education and training of teachers. If we are going to indeed leave no child behind, that gap must be closed to make the whole knowledge-building enterprise work.
  4. The inventory of random trials and sound quasi-experimental studies is insufficient and educators will need to make choices in the near term based on other information and personal experience. It is important that such evidence be openly recognized as less than the best possible, yet certainly the best available at the time. Only with such candor will support for more rigorous work grow. In addition, it will be important to hone our priorities to focus on the most important issues from a user perspective as well as from a research or policy perspective. An agenda with this dual focus should overcome the challenges and lead to realizing the opportunities.

Bill McKersie
Project Director
Ohio High School Transformation Initiative at Cleveland Heights High School
Cleveland, Ohio
Former Senior Officer, The Cleveland Foundation, Cleveland, Ohio

Peter Robertson
Chief Research and Information Officer
Cleveland Municipal School District, Cleveland, Ohio
We write from the front lines of urban education reform, where the knowledge and insight of researchers still is not well connected with the work of practitioners. Working within the district and through local foundations, we and others set out to help institutionalize these links in Cleveland four years ago. Since then the district’s data capacity has been overhauled, and the CATALYST: For Cleveland Schools newsmagazine has been established to independently monitor Cleveland school reform.

Unfortunately, organizing researchers and district leaders into a common unit has been a tough go. Only this year did the Northeast Ohio Research Alliance (NORA) emerge—a partnership between local universities and nonprofit agencies to support research for enhanced student learning in the Cleveland school district—and it faces tenuous support from universities buffeted by funding cutbacks and from busy school district leaders.

Our experience prompts us to be optimistic about the new federal emphasis on SBR. Our story, while underscoring the often repeated challenges of connecting researchers and practitioners, does suggest a way forward.

It is our hope that SBR mandates provide incentives for scholars to work with their local school districts in a sustained manner on a selected set of priority research questions. School districts will not be the standard-bearers of SBR, as too few districts have research or program evaluation capacity, and few people at the state or local levels will push school districts on SBR.

The key will be to channel federal research monies in ways that shape the incentives and structures in research universities, leveraging more entities such as the Consortium on Chicago School Research and NORA. These should begin as small, high-level, locally based, and focused entities, with ample doses of foundation funding and mutual agreements about how to address the often divergent interests of practitioners and researchers. “Interest-based research” must become the watch phrase, guiding local teams of researchers and practitioners to commit to long-term partnerships that tackle technically and politically tough questions. If federal agencies, in defining research priorities and budgets, can ferret out and fund such entities, they will become the catalysts for meaningful SBR.

Related Resources

Scientific Research in Education, the influential education report edited by Richard Shavelson and Lisa Towne in 2002, describes the nature of scientifically based education research and offers recommendations for how the federal government can best support high-quality scientific research in education. www.nap.

In Doing What Works: Scientifically Based Research in Education, a recent article in The Evaluation Exchange, Suzanne Bouffard defines SBR and discusses its implementation challenges and implications for research and evaluation. 

The Campbell Collaboration is an international effort designed to prepare and promote access to systematic reviews of studies on the effects of social and educational policies and practices. Systematic reviews provide high-quality evidence on “what works.” The Campbell Collaboration is also a partner in the development of the US Department of Education’s What Works Clearinghouse (see below).

The What Works Clearinghouse was established by the US Department of Education’s Institute of Education Sciences to provide educators, policymakers, and the public with a central and independent source of scientific evidence of what works in education and to summarize evidence of the effectiveness of different programs, products, and strategies intended to enhance academic achievement and other important educational outcomes.

Lisa Towne
Senior Program Officer, National Research Council
National Academy of Sciences, Washington, D.C.
Co-Editor of Scientific Research in Education
A strong belief in science as a tool to promote sound public policy led Congress to charter the National Academy of Sciences back in 1863. This quintessentially American ethos is as strong as ever and in recent years has manifested in nearly ubiquitous calls to transform education into an “evidence-based” field. Evaluators and reformers alike ought to stand up and cheer that the political elite are calling for research and evaluation to drive education reform.

Of course, rhetoric and reality do not always square up, and implementation invariably involves devilish details. Politically devised definitions of research and evaluation can be alarming, and there are other troubling early indicators. Expectations that SBR alone can and should find the ever-elusive silver bullet of reform set up the field for certain failure, underestimate the complexity of education, and degrade the wisdom of practice. The notion of narrowing the field to a short list of questions and an even shorter list of “approved” methodologies is an uncomfortable prospect.

In this context the National Research Council, the operating arm of the National Academy of Sciences, has assembled an interdisciplinary committee to articulate the nature of scientifically based research in education. The product of their deliberations, Scientific Research in Education (see box), strongly endorses a larger role for research in education reform, and clears up many of the dangerous misconceptions that crop up in implementing SBR. Science is an evolving enterprise that advances in fits and starts through the collective professional skepticism of the field; it is not a linear process that follows a strict algorithm. Science is enabled by the appropriate use of methods to address the question at hand; it is not, however, uniquely defined by methods, and certainly not by any one method.

We need clarity on these points as we go forward, and forward we should go as the opportunities definitely outweigh the challenges. Not since Lyndon Johnson’s Great Society of the 1960s have evaluators of social programs been so well positioned to do what they do best—harness the rigor of scientific inquiry for the service of the public good. As a profession that commingles the principles of science with the practical, yet often messy, purpose of improving programs, evaluation can help promote meaningful, systematic connections between research and reform. It is a long-term challenge to be sure, but one that must be met with all of the intellectual firepower and humility the profession can muster.

¹ Retrieved May 12, 2003, from

² This commentary has been excerpted with permission from the author and publisher from an article that first appeared in Education Week (Vol. 22, No. 1, pp. 72, 49) on September 4, 2002. The full article is available at:

³ See and

This special report was compiled by Holly Kreider, Project Manager at HFRP.


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project