Beth Weitzman and Diana Silver from New York University’s Center for Health and Public Service Research offer their experience integrating a comparison group design into a theory of change approach.

To overcome some of the limitations of experimental and quasi-experimental designs, evaluators have employed a “theory of change” (TOC) approach to evaluate comprehensive community initiatives (CCIs).¹ This approach helps identify underlying assumptions, focuses on processes and systems within communities, clarifies desired outcomes, and embraces the complexity of comprehensive interventions. Yet some researchers question whether a TOC approach can adequately rule out rival hypotheses that might explain its findings.²

Our evaluation of the Robert Wood Johnson Foundation’s (RWJF) Urban Health Initiative (UHI), which shares many characteristics of CCIs, integrates the TOC approach with a quasi-experimental design to address the question: Did this initiative make a difference? We believe this integrated approach addresses the complexity of the initiative while also measuring its effect.

UHI, a 10-year effort to improve the health and safety of young people in five cities, is “non-prescriptive.” Cities were allowed to select the health focus, target age group, particular strategies, and leadership. RWJF did, however, provide guidelines. Cities were required to focus on changing systems, not expanding programs. Cities had to use data, “best practices,” and evaluation tools to select and manage their efforts. Sites needed to mobilize a variety of leaders, both in and outside of government. Finally, RWJF expected measurable improvements citywide in outcomes for youth.

As the evaluators, we faced two significant challenges. First, how would we define this non-prescriptive, multi-city intervention? Second, what credible evidence could we assemble to assess whether changes in the participating cities were due to the UHI intervention?

Defining the Intervention
We first developed a theory of change with the RWJF for UHI as a whole. We believed that the RWJF had, implicitly, a theory of change that was more than the sum of the sites and that defined the intervention. The theory encompasses the RWJF’s broad guidelines and assumptions about improving outcomes for urban youth. It focuses on the complex processes that UHI is to influence and the tools it is to use. Interim outcomes include the increased use of data for decision making, increased public expenditures on youth, and the development of prevention-focused public and private policies.

We then developed city-specific “theories of change,” regularly updated with key stakeholders, to help us compare the local experience with RWJF’s theory. Using a national TOC, our research speaks to the questions of whether a foundation can inspire new processes at the local level, whether these processes create meaningful changes in policies, and whether these changes result in better outcomes for youth. Our design embraces local variation because such variation is itself part of UHI’s intent.

Integrating a Comparison Group Approach
Having determined how to define the “intervention,” we still faced the problem of testing whether any changes we might observe could be credited to UHI. Prior TOC evaluations have compared program theory with program experience, as do we. Still, we believed that we could strengthen our approach by integrating a comparison group into the design. This would help us rule out other explanations for findings in both interim and final outcomes.

If we were to find that UHI cities more consistently used data and best practices over time, should we conclude that UHI activities were responsible? Key informant interviews in comparison cities might reveal similar changes occurring during this period, perhaps because of technological breakthroughs. Similarly, improvements in health outcomes, such as declining teen pregnancy rates, could result from national economic trends and heightened national attention, not from UHI. And, if rates of some problems were worsening in other cities but holding steady in UHI cities, that comparison would strengthen the argument that UHI had an impact.
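
The authors do not name a formal estimator here, but the trend comparison they describe maps onto a simple difference-in-differences calculation. The Python sketch below uses invented teen pregnancy rates purely to illustrate the logic; none of these numbers are UHI data.

```python
# Hypothetical illustration of the trend comparison described above:
# a simple difference-in-differences calculation. All rates are invented.

# Teen pregnancy rate per 1,000, before and after the initiative period
uhi_before, uhi_after = 60.0, 60.5                 # roughly holding steady
comparison_before, comparison_after = 59.0, 66.0   # worsening

uhi_change = uhi_after - uhi_before                        # +0.5
comparison_change = comparison_after - comparison_before   # +7.0

# The difference-in-differences estimate nets out trends shared by
# both groups (e.g., national economic conditions), leaving the gap
# that the initiative might plausibly account for.
did_estimate = uhi_change - comparison_change              # -6.5

print(f"Change in UHI cities:        {uhi_change:+.1f}")
print(f"Change in comparison cities: {comparison_change:+.1f}")
print(f"Difference-in-differences:   {did_estimate:+.1f}")
```

A steady rate in UHI cities set against a worsening rate elsewhere yields a negative estimate, that is, evidence consistent with an initiative effect even though the UHI rate itself barely moved.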

We needed a group of cities to which interim and final outcomes could be meaningfully compared. UHI participating cities were not randomly selected. They shared several distinctive features, including population loss, substantial concentrations of African Americans and people in poverty, and wealthy suburbs. These cities also shared—and were selected because of—high rates of health and safety problems for young people. What criteria should we use to select a group of comparison cities?

We selected comparison cities based on measures of their underlying economic and demographic conditions, not on health and safety indicators. We reasoned that these contextual features both explained and constrained the capacities of cities to change public and private systems. We gathered data on these conditions for the 100 largest cities in the U.S. and used cluster analysis to see which cities were most “like” the UHI cities. While the UHI cities looked most like each other, the analysis also yielded a group of cities that shared many key features, underscoring the notion that the lessons of the UHI intervention might be generalizable. Having selected 10 cities that resembled the UHI cities, we compared how these cities fared on several health and safety indicators and found them to be similar to the UHI cities.³
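
The article does not detail the clustering procedure; the sketch below shows one minimal way such a selection step might look in Python, assuming standardized contextual indicators and Ward hierarchical clustering. The data, indicator columns, and cluster count are all hypothetical; the actual variables and method are described in Weitzman, Silver, and Dillman (2002).³

```python
# Minimal sketch of a comparison-city selection step. Everything here
# is hypothetical: real inputs would be Census-derived indicators for
# the 100 largest U.S. cities.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore

rng = np.random.default_rng(0)

# Hypothetical matrix: 100 cities x 4 contextual indicators
# (e.g., population change, poverty rate, racial composition,
# suburban median income).
indicators = rng.normal(size=(100, 4))
uhi_rows = [0, 1, 2, 3, 4]  # row positions of the five UHI cities

# Standardize so no single indicator dominates the distance metric
X = zscore(indicators, axis=0)

# Ward hierarchical clustering, then cut the tree into groups
tree = linkage(X, method="ward")
labels = fcluster(tree, t=8, criterion="maxclust")

# Candidate comparison cities: non-UHI cities sharing a cluster
# with any UHI city
uhi_clusters = set(labels[uhi_rows])
candidates = [i for i in range(100)
              if labels[i] in uhi_clusters and i not in uhi_rows]
print(f"{len(candidates)} candidate comparison cities")
```

Standardizing first keeps any one variable from dominating the distance calculations; the resulting candidates would then be checked against health and safety indicators, as the authors did before settling on their 10 comparison cities.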

Our evaluation design uses multiple methods to test assumptions against both the program theory and the comparison group. We conduct key informant interviews in the UHI and comparison cities to investigate interim outcomes from the TOC concerning leadership, collaboration, and the use of data. Similarly, our national telephone household survey of parents and youth has samples in each of the UHI cities and in the group of comparison cities as a whole. Administrative data on health and safety indicators are collected and analyzed for the UHI cities, the comparison cities, and the rest of the top 100 cities.

Some intensive (and expensive) methods do not readily lend themselves to the comparison group approach. Neither annual site visits nor our public expenditure analyses can easily be done in both the UHI and comparison cities. Still, this integrated design gives us greater confidence that we can discern credible lessons for funders, practitioners, and evaluators about the ways in which this particular initiative did or did not lead to innovations in policies and programs for youth and to changes in health and safety outcomes attributable to those innovations.

For more information on the national evaluation of the UHI, go to www.nyu.edu/wagner/chpsr.

¹ A theory of change approach can be defined as “a systematic and cumulative study of the links between activities, outcomes, and contexts of the initiative.” From Connell, J. P., & Kubisch, A. C. (1998). Applying a theory of change approach to the evaluation of comprehensive community initiatives: Progress, prospects, and problems. In K. Fulbright-Anderson, A. C. Kubisch, & J. P. Connell (Eds.), New approaches to evaluating community initiatives, Volume 2: Theory, measurement, and analysis (pp. 15–44). Washington, DC: The Aspen Institute.
² Hollister, R. G., & Hill, J. (1995). Problems in the evaluation of community-wide initiatives. In J. Connell, A. Kubisch, L. Schorr, & C. Weiss (Eds.), New approaches to evaluating community initiatives, Volume 1: Concepts, methods, and contexts (pp. 127–171). Washington, DC: The Aspen Institute.
³ For more information see Weitzman, B. C., Silver, D., & Dillman, K. (2002). Integrating a comparison group design into a theory of change evaluation: The case of the Urban Health Initiative. American Journal of Evaluation, 23(4), 371–385.

Beth C. Weitzman
Associate Professor of Health and Public Policy
Email: beth.weitzman@nyu.edu

Diana Silver
Research Scientist
Email: diana.silver@nyu.edu

Center for Health and Public Service Research
Robert F. Wagner Graduate School of Public Service
New York University
726 Broadway, 5th Floor
New York, NY 10003-9580
Tel: 212-998-7470
