



Julia Coffman of Harvard Family Research Project writes about using a logic model approach to evaluate a large and diverse foundation initiative.

Evaluators increasingly face the challenge of evaluating complex initiatives that are both multi-site and multi-level. These initiatives typically involve a number of different programs or organizations that have their own strategies and goals but are working toward a broader, common agenda. Private and public sector funders alike are investing in complex initiatives with the intent of achieving a more strategic impact than they could by funding individual programs. For example, complex initiatives can act as catalysts for connecting and integrating the different types of services or activities needed to achieve broad-based change. In general, complex initiatives are expected to achieve more than the sum of their individual parts.

In the last decade much has been learned about ways to evaluate complex initiatives. Theory-of-change and cluster evaluation have emerged as approaches that can aid evaluators facing the daunting task of designing and implementing evaluations of complex initiatives such as comprehensive community initiatives. These approaches have helped evaluators make conceptual leaps in understanding how to address specific design challenges.

However, there is still much to be learned and shared. Of particular importance is describing complex initiative evaluations that are both efficient and effective within reasonable time and resource boundaries. There is also a need to gather and share information on how to build management and reporting structures that complement these approaches.

In this article, we highlight two major lessons learned from theory-of-change and cluster evaluation about how to evaluate complex initiatives. In addition, within the context of our experiences evaluating the W.K. Kellogg Foundation’s (WKKF) Devolution Initiative,¹ we share tips on how we applied those lessons to meet our own evaluation challenges.

The Devolution Initiative (DI) was created in 1996 to respond to some of the information and governing challenges associated with welfare reform and health care devolution—the passing of responsibility for policy and service development and management from the national level to the state and local levels. Specifically, the WKKF saw a need to support states and local governments in their efforts to take on these new responsibilities. Consistent with WKKF’s mission to “help people help themselves,” the DI strives to help citizens learn what is and is not working in various states with respect to welfare reform and health care so they can participate in the development and implementation of more effective policies in their communities. The more than 25 national, state, and local grantee organizations involved in the DI work toward this goal by building a base of knowledge about the impacts of devolution, disseminating findings to diverse target audiences that include policymakers and citizens, and building the capacity of communities and their citizens to participate in and inform the policy process.

Lessons and Tips From Tested Approaches

Some clear lessons can be gleaned from the work that has been completed on evaluating complex initiatives so far. We present two of these lessons below. For each, we provide practical tips on how these lessons might be applied, based on our experience with the Devolution Initiative.

Lesson: Articulate the Complex Initiative’s Theory
It helps immensely to begin a complex initiative evaluation by articulating the broad theory (or theory of change) that weaves together the initiative’s many strands. This theory can serve as a framework for interpreting the initiative’s various layers and parts. The evaluation literature includes a wealth of information on ways to tease out program theory. The logic model is one of the most frequently used tools for this task, and many evaluation resources are available on how to develop logic models. The models developed with these resources share a common core: they identify initiative activities and show how those activities relate to short-, intermediate-, and long-term outcomes.
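For readers who find it helpful to see this shared skeleton laid out concretely, the sketch below shows one way it might be represented. It is a minimal illustration in Python, not part of any evaluation toolkit; the class names are our own invention, and the example entries are loosely paraphrased from the Devolution Initiative goals described above.

# Illustrative sketch only: the common core of a logic model, i.e., activities
# linked to short-, intermediate-, and long-term outcomes. Class and field
# names are invented for this example.
from dataclasses import dataclass, field
from typing import List

@dataclass
class OutcomeChain:
    short_term: List[str] = field(default_factory=list)
    intermediate: List[str] = field(default_factory=list)
    long_term: List[str] = field(default_factory=list)

@dataclass
class Activity:
    name: str
    outcomes: OutcomeChain = field(default_factory=OutcomeChain)

# A hypothetical fragment, loosely based on the DI's dissemination goal.
dissemination = Activity(
    name="Disseminate findings on devolution to policymakers and citizens",
    outcomes=OutcomeChain(
        short_term=["Target audiences receive the findings"],
        intermediate=["Audiences use the findings in policy discussions"],
        long_term=["More effective welfare and health care policies"],
    ),
)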

Tip: Begin with a Broad Conceptual Model of the Initiative’s Theory
We began our evaluation by attempting to develop a logic model that articulated the Devolution Initiative’s theory of change. We found, however, that working with stakeholders to fully articulate the short-, intermediate-, and long-term outcomes for each Initiative layer (individual, community, state, national) conflicted both with our time constraints (we needed to develop and begin implementing an evaluation design quickly) and with the early stage of the DI’s development. Because the DI was still developing, the Initiative’s stakeholders did not want to articulate too much strategic detail or convey a sense of false precision by making premature decisions about outcomes.

As a result, rather than enumerating all of the DI’s outcomes as a typical theory articulation process might, we began by developing a broad conceptual model of the DI, one that illustrated its goals (or activities, depending on how you look at them) and the general, noncausal relationships among them (see Figure 1).

Figure 1. Broad conceptual model of the Devolution Initiative

The advantages of using this type of model were substantial. First, its early timing and quick development were critical. Because the DI was in its beginning stages, the model served as an effective socialization tool, giving stakeholders an early shared understanding of what the Initiative was trying to achieve. Attempting instead to identify specific outcomes and reach consensus on them with a more traditional model at this early point would likely have been met with frustration and might have stalled the evaluation early on.

A second advantage of this model was its simplicity and broadness. Each grantee organization was able to understand how its individual activities fit within the Initiative’s broader context.

Another advantage of the model was its usefulness as a starting point for later theory articulation. As we describe below, while the model did not identify specific outcomes at first, we were able to use it later as the basis for identifying outcomes that could be attached directly to the model and further guide the evaluation.

Finally, probably the most important feature of the model was its sustainability. The DI evolved substantially over time, both strategically and in terms of the number of organizations involved. Despite these changes, the overall model changed only slightly, because Initiative changes could for the most part be accommodated beneath the level of this overarching structure. This was critical because, as we describe below, the model became the organizing structure for the evaluation’s design and management.

Lesson: Use the Initiative’s Theory as a Framework for Designing the Evaluation
Connecting the evaluation design to the initiative’s theory makes sense. The theory typically lays out what needs to be assessed in terms of both process and outcomes. Once you have that necessary focus, you can proceed through standard evaluation design steps, which may include identifying benchmarks or indicators connected to those outcomes and the methods needed to track them.

This lesson can be difficult to apply, but keeping certain principles in mind will help. For example, you want to make sure that if you focus on the theory’s parts (boxes in the logic model), you don’t lose sight of their connections (arrows in the logic model). In addition, you want to develop a design that is easy to manage and keeps data collection and reporting focused on what is being learned about the initiative’s theory as a whole and not only on its parts.

Tip: Develop Evaluation Objectives Linked Directly to the Model
Our first and most important step in the design process was to break the Devolution Initiative’s model into evaluation objectives in order to make the evaluation as a whole more manageable (see Box 1).

Box 1: Devolution Initiative Evaluation Objectives


Objective 1: Examine WKKF and Devolution grantee roles in information development and dissemination.
Objective 2: Examine links between information development, dissemination, and target audiences.
Objective 3: Examine capacity-building activities.
Objective 4: Examine the link between building capacity and increasing state and local participation in policymaking.
Objective 5: Examine the Initiative’s success in informing the policy agenda.

The objectives focused on distinct pieces of the model and their relationships to one another. As a result, they highlighted the importance of examining the links between the main model components. For example, Objective 2 involved several parts of the model. It was concerned with grantee information development and dissemination activities, that is, the types of information developed and the various mechanisms used to disseminate it. Objective 2 was also concerned with whether that information reached specific target audiences, which information and mechanisms were most effective in reaching those audiences, and what the audiences’ information needs were.

The evaluation objectives made it easier to complete key evaluation planning steps. Once we developed the objectives, we were able to proceed easily through standard evaluation planning procedures. Because the objectives were broad, they could be split out into a series of separate evaluation questions. Those evaluation questions then drove decisions on which methods were needed to answer the questions and therefore address the evaluation objectives.
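To make this planning sequence concrete (broad objectives split into evaluation questions, and questions driving method choices), here is a rough sketch in Python. The questions are paraphrased from Objectives 2 and 3 as described above; the data collection methods listed are hypothetical placeholders, not the DI evaluation’s actual methods.

# Illustrative sketch only: objectives broken into evaluation questions, with
# each question mapped to candidate data collection methods (all hypothetical).
evaluation_plan = {
    "Objective 2: Links between information development, dissemination, and audiences": {
        "What types of information do grantees develop, and how is it disseminated?": ["document review"],
        "Does the information reach target audiences, and through which mechanisms?": ["audience surveys", "interviews"],
        "What are the target audiences' information needs?": ["interviews", "focus groups"],
    },
    "Objective 3: Capacity-building activities": {
        "What capacity-building activities do grantees carry out?": ["grantee reports", "site visits", "interviews"],
    },
}

# Methods that appear under more than one objective flag where evaluation teams
# will need to coordinate data collection (see the team structure discussed below).
methods_to_objectives = {}
for objective, questions in evaluation_plan.items():
    for methods in questions.values():
        for method in methods:
            methods_to_objectives.setdefault(method, set()).add(objective)

for method, objectives in sorted(methods_to_objectives.items()):
    if len(objectives) > 1:
        print(f"{method}: used across {len(objectives)} objectives")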

For More Information


About the Devolution Initiative

See the WKKF website’s Devolution page at http://www.wkkf.org/default.aspx?tabid=75&CID=162&NID=61&LanguageID=0

About Evaluating Complex Initiatives

Connell, J., Kubisch, A., Schorr, L., & Weiss, C. (Eds.). (1995). New approaches to evaluating community initiatives: Concepts, methods, and contexts. Washington, DC: Aspen Institute.

Fulbright-Anderson, K., Kubisch, A., & Connell, J. (Eds.). (1998). New approaches to evaluating community initiatives: Theory, measurement, and analysis. Washington, DC: Aspen Institute.

W. K. Kellogg Foundation. (1998). Evaluation handbook. Battle Creek, MI: Author.

For some of the evaluation objectives, this approach made it possible to use methods that allowed us to examine causal links between parts of the Initiative. Because the objectives looked at the links between parts of the model, evaluation questions about causality surfaced. While it was not possible to set up experimental or quasi-experimental designs to assess causality and impact for the Initiative as a whole, it was possible to set up such designs to address more specific questions of causality.

In addition to being the organizing structure for the evaluation design, the model and evaluation objectives served later as an effective starting point for further articulating the Initiative’s theory of change. Similar to the process of developing the evaluation questions, it was possible to connect short-, intermediate-, and long-term outcomes directly to the evaluation objectives. Because the objectives were the overarching structure for the theory, as the DI evolved, outcomes could be added or modified without affecting the evaluation design’s basic structure.

Finally, the objectives became the essential management structure for the evaluation. Small teams of evaluation staff were set up for each objective. Organizing and managing by objective, rather than by methods or outcomes, kept the teams focused on what we were learning about the DI’s theory rather than on its specific outcomes in isolation. Because the methods used crossed team boundaries, this structure also facilitated essential cross-team coordination on data collection and interpretation.

¹ Note that this evaluation is still in progress.

Julia Coffman, Consultant, HFRP

