
Karen Horsch of Harvard Family Research Project describes the practices that nine evaluators of community-based initiatives have used and the lessons they have learned in addressing the challenges of this work.

Interest in and support for local solutions to the nation's problems have grown tremendously in recent years. As an alternative to fragmented, deficit-based, and top-down social and economic programs, community-based approaches offer the promise of more relevant, more integrated, and ultimately more sustainable programs.

The complexity, dynamism, and inclusiveness of community-based initiatives (CBIs) challenge both those who direct and staff them and those who study them.¹ Our Winter 1996 issue on CBIs discussed the challenges that executive directors face in meeting their own and funders' needs for information. The directors expressed concern about, among other things, identifying outcomes that reflect both the holistic approach of CBIs and the needs of multiple stakeholders; attributing results to initiative interventions; and building evaluation capacity so that they and their staff members have information that is accessible, reliable, and timely.

This article discusses practices that evaluators of CBIs have used and lessons they have learned in addressing these challenges. HFRP solicited these insights in a set of focus group discussions with nine CBI evaluators. Their comments show that while much work remains to be done, there is a foundation of knowledge and experience upon which to build.

Identifying Outcomes

What outcomes are reasonable to expect from CBIs?

The Evaluators


Dale Blyth: Director of Strategic Initiatives, Search Institute.

Tom Burns: Director, The OMG Center for Collaborative Learning.

David Chavis: President, Association for the Study and Development of Community.

Anne Kubisch: Director, Roundtable on Comprehensive Community Initiatives for Children and Families, Aspen Institute.

Beverly Anderson Parsons: Executive Director, InSites.

Stanley Schneider: Senior Research Associate and Executive Vice President, Metis Associates, Inc.

Cindy Sipe: Senior Policy Researcher, Public/Private Ventures.

Susan Stephens: Senior Policy Analyst, Center for Assessment and Policy Development.

Rafael Valdivieso: Vice President and Director of School and Community Services, Academy for Educational Development.

HFRP would like to thank those who contributed their experiences and insights.

  • There are different categories of outcomes
    Evaluators point out that while specific outcomes vary with the nature of the initiative, it is important to be clear and realistic about both the level of outcomes to be achieved and the time frame in which they are expected. Susan Stephens of the Center for Assessment and Policy Development noted three levels of outcomes: program outcomes, which are directly related to the activities of a particular program; initiative-level outcomes, which examine the extent to which the system is changing as a result of the initiative; and community-level outcomes, which are broader, take a long time to manifest themselves, and are often influenced by factors other than or in addition to the initiative. Stephens stressed that people frequently focus on community-level outcomes and are then frustrated by their inability to affect them directly; she suggests that to really understand what an initiative is doing, one needs to examine the initiative level.

What processes can be used to identify outcomes?

  • Facilitate a discussion of expectations
    An important first step in identifying outcomes, evaluators point out, is clarifying stakeholders' expectations for both the initiative and the evaluation. This might include identifying evaluation questions as well as the outcomes that are desired. The evaluator often plays both a technical role, briefing stakeholders about evaluation and providing guidance and structure to the study, and a facilitative role, helping others discuss their viewpoints and reach consensus about their shared questions and expectations.

  • Examine outcomes that may have already been identified
    Evaluators note that in some cases, outcomes may have been identified during the planning for the initiative. Tom Burns of the OMG Center for Collaborative Learning points out that what the evaluator works with really depends on the extent and quality of this early work—how clearly the initiative was defined and how clearly people examined and understood the assumptions underlying it. Where this has not been done, Burns points out, evaluators have to work with initiative designers and other stakeholders to develop an outcomes framework.

  • Start with expectations for community-level outcomes
    Evaluators observe that within a CBI there is often a generally shared view of the long-term, community-level outcomes. These are the outcomes that often resonate most with key stakeholders. While they may take time to clarify and prioritize, evaluators note the importance of having a broad framework of outcomes to garner interest and focus subsequent discussions about outcomes.

  • Use intermediate outcomes to link activities with long-term results
    Evaluators caution that since CBIs are complex and desired changes take a long time to manifest themselves, intermediate outcomes are needed to show progress toward ultimate outcomes and the more direct results of the initiative. Anne Kubisch, of the Aspen Institute, notes that while stakeholders can generally agree on the long-term results they want to see and can identify the immediate results of their activities, it is this intermediate level of outcome that is often the most difficult to articulate. Evaluators stress the importance of showing a relationship between intermediate results and longer-term expectations. The theory of change approach has emerged as a promising means of doing so (discussed below).

  • Be sure outcomes link to on-the-ground activities
    It is important to examine whether the outcomes—especially intermediate outcomes—can reasonably be achieved by the initiative's activities. Evaluators note that in some cases, initial outcomes have been guided by expectations about activities that differ from those actually funded and implemented. In these cases, evaluators have a responsibility to point out the incongruities and to work with stakeholders to identify outcomes that the programs actually in place can be expected to achieve.

  • Balance process and results
    Identifying outcomes is a time-consuming process. While the process can be used to build interest in and support for the evaluation, Dale Blyth, from the Search Institute, points out that it is important that the process not be so prolonged as to consume all the resources and energy of the community and the tolerance of funders. He suggests that evaluators not be shy about presenting their own ideas for the community to use as a starting point. David Chavis, of the Association for the Study and Development of Community, reinforces this point, noting that some program approaches have already been tested, and that evaluators and others need to be more willing to look at past experience and share their own experiences with one another.

Determining Attribution

  • Is attribution the right question?
    In discussing attribution, several evaluators questioned whether the open, broader community contexts of CBIs lend themselves to attribution in the traditional evaluation sense. They stress that for these relatively new endeavors, alternative ways of understanding whether CBIs “work” may be necessary.

What promise do experimental and quasi-experimental designs hold for CBI evaluation?

  • CBIs pose a challenge to these approaches
    CBIs, the evaluators noted, do not lend themselves easily to experimental or quasi-experimental designs, for several reasons. One of the greatest challenges is that the magnitude of change produced by CBI interventions is often very small. Another issue is the unit of analysis: designs for attribution are easier when data are collected at the individual level and more difficult at the community level. Additionally, the power of attributional designs derives from the number of cases, and when the unit is the community, that number is typically small (a rough numerical sketch of this point follows the next item). Evaluators also question whether there are truly comparable communities that can be randomly or nonrandomly assigned; many people think there are not. They stress, too, the often high cost of doing experiments and question whether it is right to withhold services from a specific population.

  • In some instances, these designs can be appropriate
    Evaluators acknowledge, however, that experimental and quasi-experimental designs are feasible, and may be required, in some cases. For example, since CBIs are multilevel phenomena, these designs may be able to examine changes at the individual level. Some argue that comparison groups of communities matched on similar variables can be created in some instances. Stan Schneider of Metis Associates points out that trend data can be useful in examining the history of a community's performance before an initiative begins and after it has been implemented.
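A rough numerical sketch may help make the sample-size point above concrete. It is purely illustrative and is not drawn from the focus group discussions: the group sizes and effect size below are hypothetical, and the calculation uses a simple two-sample comparison that ignores complications such as clustering and nonrandom assignment.

    # Hypothetical illustration: statistical power to detect a small effect when the
    # unit of analysis is the community versus the individual. Numbers are invented
    # for illustration only, not taken from any CBI evaluation.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    small_effect = 0.2  # a "small" standardized effect (Cohen's d)

    # Unit of analysis = communities: e.g., 8 initiative sites vs. 8 comparison sites
    power_communities = analysis.power(effect_size=small_effect, nobs1=8, alpha=0.05)

    # Unit of analysis = individuals: e.g., 400 residents in each group
    power_individuals = analysis.power(effect_size=small_effect, nobs1=400, alpha=0.05)

    print(f"Power with 8 communities per group:   {power_communities:.2f}")   # roughly 0.07
    print(f"Power with 400 individuals per group: {power_individuals:.2f}")   # roughly 0.81

With only a handful of community-level cases, even a well-designed comparison has little chance of detecting effects of the modest size CBIs typically produce, which is one reason evaluators look to the alternative strategies discussed below.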

What other techniques exist to address concerns about attribution?

  • Combine multiple sources of evidence for a confluence of results
    Evaluators point out the importance of using multiple sources of evidence and multiple tools to strengthen the case for attribution. This means melding qualitative and quantitative data and using different analytical techniques. While evaluators acknowledge that one can always find fault with evaluation designs and data collection instruments, the use of multiple valid and reliable techniques can strengthen arguments about program impacts.

  • Use a theory of change approach to show progress
    Theory of change is emerging as a promising approach for addressing concerns about showing program progress and perhaps for making attributional arguments. While a theory of change approach does not allow identification of a particular causal mechanism in any fine-grained sense, it can enable the evaluator and others to argue that the logic fell into place and that the initiative is “on track.” It is important, evaluators point out, that in using a theory of change approach one be careful to distinguish between incorrect implementation and faulty theory.

  • Systematically examine other factors that might influence outcomes
    We are only beginning to understand how CBIs work and to recognize that other factors may influence outcomes. Rafael Valdivieso, of the Academy for Educational Development, uses a logic model approach to identify and monitor factors in addition to the initiative that may affect an outcome.

  • Document early outcomes that lead to desired long-term results
    Evaluation plays an important role in helping people learn about and improve community-based program approaches. Anne Kubisch observed that there is a growing recognition of the complexity and risk inherent in CBIs and, as a result, many are investing earlier in evaluation. She notes that this investment can help in setting forth a theory of change for early initiatives and help document the processes and early outcomes that are expected to lead to desired long-term results.

Building Evaluation Capacity and Learning in a Community

What does local evaluation capacity mean?

  • Stakeholders need to learn to ask questions and gather data to answer them
    In discussing what is meant by building capacity for evaluation at the community level, Stan Schneider noted that evaluators often have very high expectations for the evaluation activities that a community can technically and fiscally undertake. He suggests that a more realistic expectation may be that residents and staff members learn to ask their own questions, make sure they obtain valid and reliable data to answer them, and then move on to the next set of questions. Cindy Sipe, of Public/Private Ventures, notes that, in her experience, staff members and residents need to be able to track implementation—what they are putting in place, who is being reached, and what resources are being expended—and to use that information to guide how they implement the initiative.

What is needed for capacity to be built?

  • “Consumers” need to be educated and interested
    In order for evaluation capacity to exist at the local level, evaluators stress, residents and staff members need to understand that evaluation is intended to be a tool that can help them to understand their initiatives better. They point out that within CBIs, there are people who are interested in evaluation and people who are not; it is important to locate and work with those who have an interest in it. Beverly Parsons, of InSites, has found “inquiry teams” to be useful in engaging stakeholders; these teams, composed of opinion leaders and others, have the responsibility to “inquire” about and reflect on the initiative.

  • Internal and external pressures are needed for evaluation information
    Evaluators note that pressures for information can also be the impetus for the development of local evaluation capacity. Susan Stephens identified three such pressures: demands for accountability from funders or state agencies; desire for information on the part of a champion or leader; and/or public demand for information. She noted that while the evaluator often cannot create these pressures, he/she can be aware of them and advise the community entities about how they can respond to these pressures in an ongoing fashion. Rafael Valdivieso agreed, stressing the importance of having outcomes become part of the community psyche, captured in the media and internalized by the public.

What can one do to help build capacity?

  • Link members to other evaluation resources
    The evaluator can help increase capacity by serving as a vehicle that links staff members and residents to community resources that can help with evaluation-related tasks. These resources include universities, advocacy groups, and individuals working with data in organizations such as the police department and the school system.

  • Focus on data use first and then on data generation
    Evaluators stress that community residents, especially leaders, first need to know how to use data to advocate for change and improve the initiative. Thus, the key is getting stakeholders to talk about the information and examine it for themselves. One approach, described by Beverly Parsons, is to orient a discussion around survey results. In her work with one group, Parsons asked stakeholders—before sharing the results with them—what they thought a survey might show. She then presented the actual data, which were not consistent with expectations, and used the discrepancy to prompt discussion and generate more interest in the data.

  • Encourage community participation in data collection and interpretation
    Evaluators note the importance of getting staff members and community residents involved in data collection to help build ownership of the evaluation and evaluation skills. In many CBIs, residents are trained to conduct community surveys, neighborhood inventories, or ethnographic research. Evaluators caution, however, that community involvement should not stop at the data collection phase. Residents and staff members also need to be engaged in framing the questions, interpreting the results, and figuring out what needs to be done on the basis of the information. Evaluators point out as well that evaluation expertise is needed to ensure data quality; it takes expertise, for example, to develop good questions for community surveys. Evaluators should also take on the tedious work—such as finalizing the questionnaire and entering data into the computer—while residents and staff members get involved in the more interesting aspects of data collection and analysis.

  • Produce timely and useful reports
    Reports that are immediately useful and user-friendly provide an incentive for communities to become actively engaged in the evaluation. Rather than presenting “final” reports with all the interpretations, Cindy Sipe urged that evaluators share data with stakeholders earlier, using short memos and oral presentations so the information can serve as the basis for discussion. Dale Blyth stressed that appealing to multiple learning styles is crucial: a two-page text format for people who like to read, graphs for the visually and numerically inclined, and stories to illustrate the points for those who learn best that way.

  • Build tools to help residents collect and use information
    Evaluators suggest that they can help build interest in evaluation, and develop evaluation capacity, by creating and disseminating tool kits. These might include, for example, tools for staff assessment and leadership training. From his work, David Chavis has learned the importance of high-quality meeting minutes for both program management and data collection purposes. He has developed a kit to help community members improve the quality of the meeting minutes they take.

  • Begin early and stay involved
    Essential to helping build local capacity, Tom Burns notes, is early and continuous engagement by the evaluator—from the point of identifying the questions, to collecting the data, to assisting stakeholders to interpret results. He notes that while such work is essential, it is also time consuming and can be costly, in financial and human resource terms, to both the evaluator and the stakeholders.

While these focus group discussions highlight the primary areas of concern in evaluating CBIs, they are only the beginning of an exploration of these topics. Knowledge will grow as more CBI evaluations are conducted, new approaches are tried, and practices and lessons are shared among those working in this field.

¹ Many terms are used to describe the community-based approaches discussed in this article. For purposes of this article, the term community-based initiatives (CBIs) is used. The term residents refers to people living in the community. The term staff is used here to mean people holding positions in the service or management infrastructure, many of whom are often residents.

Karen Horsch, Research Specialist, HFRP


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project