

Kathleen Shaw, Senior Researcher at Harvard Family Research Project, summarizes a new HFRP work in progress, Systems Reform: Challenges for Evaluation Research.

The promise and risk embodied in systems reform have placed these efforts at the center of a spirited debate. Some policymakers believe that nothing short of restructuring existing systems will overcome the fragmentation and duplication of current services for children and families. However, a sizable number of critics contend that systems reform is unrealistic, unachievable, and a waste of taxpayers’ money.

Systems reform is indeed a risky endeavor, requiring many years of funding and support, and its success depends on the stability of a range of factors, including political support, community participation, and highly motivated service providers and agencies. Nonetheless, the potential for these efforts to significantly improve the lives of poor children and families remains high.

The high-risk, high-payoff nature of systems reform requires a body of evaluation that enables the field to learn from its successes and failures. Despite the rhetoric on both sides of the debate, the current state of the field leaves us unable to assess the effectiveness of these reforms. Why? First, the quantitative experimental design that dominates evaluation research is outmoded. Designed to assess demonstration programs with well-developed models and tightly controlled participation, this model is inadequate in the face of large-scale, dynamic systems reform efforts. Moreover, the traditional emphasis on outcomes ignores the desperate need for timely, formative descriptions of how these complex initiatives are being implemented. Finally, the conventional role of evaluator as “external expert” overlooks the need of a variety of stakeholders for accessible, accurate information.

In short, traditional evaluation models are proving inadequate on two major levels: 1) They do not accurately describe the intricacies of the systems reform efforts themselves; and 2) they are unable to attribute outcomes to these interventions. As a result, not only are we unable to determine whether these efforts work, but we often fail to document how they operate.

Defining Systems Reform

What, exactly, is meant by “systems reform”? A review of recent publications reveals that there is no standard definition. While systems reform efforts vary considerably, HFRP has developed a working definition that distinguishes them from discrete, categorical programs along the following dimensions:

  • These efforts are ultimately outcomes-focused: they aim to improve the lives of children and families by restructuring the existing services system.
  • Their service strategies go beyond coordinating existing services to include creating a new set of integrated and user-friendly services.
  • They require a very large and diverse group of stakeholders and an increased emphasis on public accountability to succeed.
  • Collaboration is required and often mandated through the creation of cross-agency, cross-sector governance entities.
  • They are usually large in scale, encompassing either an entire geographic area or an entire eligible population.
  • They are expected to sustain themselves past the initial round of funding and technical assistance.
  • They require major shifts in financing at the local, state, and sometimes federal levels.

The State of the Art

No single evaluation theory has emerged to fill the vacuum created by the inapplicability of experimental design to the evaluation of systems reform efforts. Instead, a panoply of new and innovative approaches to evaluation has developed. In fact, the field is now marked not so much by dichotomies as by the language of choice: multiple approaches, flexibility, and responsiveness. While none of these approaches to evaluation is a panacea, clusters of strategies are emerging that can teach us much.

Using our comprehensive database on the programmatic and evaluative aspects of over 60 comprehensive services and systems reform efforts around the country, we have identified four promising approaches to systems reform evaluation:

Mixed Methods. These research designs are perhaps the most common approach to evaluating systems reform efforts. They partially answer the calls for triangulation emerging from the literature, and they attempt to temper the pressure for accountability with a more realistic research design that tracks short- and long-term outcomes rather than effects.

Mixed methods evaluations usually combine some type of outcomes tracking with some type of process study that attempts to document implementation of the reform effort. Yet combining an array of methods has its drawbacks: these evaluations often suffer from trying to be all things to all people—a particularly acute pressure in systems reform evaluation—and some fail to fully achieve any of their objectives.

Knowledge Development/Self Evaluation. Evaluations explicitly designed to feed regular information back to the program perform a knowledge development function. These evaluations often focus on developing a working and reliable Management Information System (MIS) that is accessible to most program personnel. Ideally, the MIS serves two functions: to provide information to the evaluation throughout the life of the project and to establish the permanent capacity of program staff to gather and reflect upon this information. In short, this evaluation strategy enables programs to improve their performance through continual reflective practice.

Yet this approach faces a number of challenges, the most important of which is the need to provide substantial and long-term technical assistance to program staff, teaching them both the technical and the analytical knowledge necessary to utilize the vast amount of information they will be collecting.

Public Accountability. One of the major challenges faced by systems reform initiatives is the task of garnering and maintaining public support for the effort. Publicly available data are increasingly used to achieve this goal. Strategies focusing on public accountability often blur the line between evaluation and program development, as evaluators use information to inform the public both of outcomes and of the status of program implementation.

However, public accountability strategies generally provide very little information about the process of reform or about interim measures, such as shifts in funding or service delivery. They thus risk putting the initiative at a disadvantage if the specified outcomes do not respond quickly to the reform. It is therefore critical to identify outcomes with great care.

Participatory Evaluation. Unlike traditional top-down, external evaluation designs, participatory evaluation involves the input of various stakeholders in the design and implementation of the evaluation. While the locus of control remains with the evaluators, these evaluations benefit from the insight that only members of the community and direct service providers can give. However, they are difficult to implement. Individuals with varying degrees of expertise must be trained to think and act as evaluators, and the issues addressed in the evaluation may be of limited use to those stakeholders who have not participated in the evaluation.

Summary. While none of these four evaluation approaches meets all of the challenges posed by systems reform, taken together they suggest a new way of approaching the evaluation of systems reform.

Where Do We Go From Here?

Ideally, systems reform is a new way of doing business. Most of these efforts have been in existence for only a short while, and all are experimental in the sense that we do not know what a successful, fully implemented systems reform effort really looks like. Nor do we know the full potential of such efforts for improving the well-being of children and families.

Yet the enormous challenge of systems reform demands evaluation approaches that:

  • Consistently use information from the initial planning stages to the last report of outcome indicators.
  • Develop an internal capacity to collect, analyze, and disseminate this information to a wide array of stakeholders.
  • Learn from and build upon this information to continuously improve, and ultimately successfully implement, the reform effort itself.

These demands suggest a new model of evaluation that builds on the notion of organizational learning writ large. If the target area of reform itself, be it local or statewide, can be conceptualized as a loosely defined “organization,” then systems reform can be thought of as the process of organizational adjustment to new knowledge in a continuous attempt to achieve its goals. In the case of systems reform, the goals are improved outcomes for children and families. The information once collected by external evaluators now becomes part of the daily internal workings of the reform process. Thus, knowledge development and utilization become integral parts of the systems reform effort itself; they are no longer marginalized or externalized as “evaluation.”

A recent study of the effect of evaluation on organizational learning suggests that few individuals within organizations learn from evaluations at all, and that those who do learn relatively little (Forss et al., 1994). The challenge for the new knowledge development model of systems reform evaluation is to ensure that the systems reform “organization” does, in fact, utilize new knowledge broadly and effectively. Developing a detailed evaluation model that successfully addresses this issue will be the focus of the second paper in this series.

Further Reading

Argyris, C., & Schon, D. (1989). Participatory action research and action science compared. American Behavioral Scientist, 32(5), 612–623.

Chelimsky, E. (1991). On the social science contribution to governmental decision-making. Science, 254, 226–230.

Forss, K., Cracknell, B., & Samset, K. (1994). Can evaluation help an organization to learn? Evaluation Review, 18(5), 574–591.

Greene, J., & McClintock, C. (1991). The evolution of evaluation methodology. Theory Into Practice, 30(1), 13–21.

Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Beverly Hills, CA: Sage Publications.

Kathleen M. Shaw, Ph.D., Senior Researcher, HFRP

