Volume II, Number 4, 1996
Issue Topic: Community-Based Initiatives
Theory & Practice
M. Elena Lopez and Cami Anderson of Harvard Family Research Project conducted focus group discussions with the executive directors of five complex community-based initiatives (CBIs) to learn about their evaluation and self-assessment efforts.
Executive directors of complex CBIs bridge the worlds of evaluation theory and practice. They enable theory-builders to test their models and methods; they pave the way for community-based programs to use information for decision making; they also can develop new evaluation tools based on an intimate knowledge of the communities that they serve. When they exercise these multiple functions, directors enrich the evaluation field and help create learning organizations and learning communities.
The roles of executive directors have not been fully recognized and discussed by evaluators, much less have their insights about evaluation issues been taken into consideration for new directions in evaluating complex community-building efforts. While the idea of bringing in directors as stakeholders in evaluation debates exists, there is often a gap between rhetoric and reality. For these reasons, we feature the voices of five directors of community-based organizations whose perspectives we found to be both profound and practical.
We selected our informants based on the breadth of activities of their initiatives, the diversity of communities that they serve, and their different stages of program development. (See textbox.) These informants included Howard Boutte (Executive Director, West Jackson Community Development Corporation), George Eberle (President and CEO, Grace Hill Neighborhood Services), Donna Johnson (Deputy Director of Programming, Warren/Conner Development Coalition), Maurice Lim Miller (Executive Director, Asian Neighborhood Design), and Beatriz Olvera Stotzer (Founder and President, New Economics for Women).
We conducted two telephone focus group discussions with the directors. What follows is our analysis of the key themes of these discussions.
The Five Executive Directors

Howard Boutte, Executive Director
Howard Boutte has over 20 years of experience in business administration, finance, and real estate. Established in 1991 to address the needs of a city that has undergone recent and rapid decline, the West Jackson CDC has targeted four main areas: housing development (building, rehabilitation, ownership assistance, family self-sufficiency plans); economic development (job training, enterprise development); education and youth development (summer and after-school programs); and community organizing and leadership development.

George Eberle, President and CEO
George Eberle is recognized by his peers as an innovator in comprehensive antipoverty strategies. Grace Hill operates a health initiative; services for the elderly; an economic development program (job placement and small business administration); a community wellness program (including a peer mothering program and a mentor dad program); shelter services; primary health care; extensive leadership training courses at community colleges; a work exchange program; and a barter system in which residents exchange deeds, services, and goods.

Donna Johnson, Deputy Director of Programming
Donna Johnson began at WCDC as program manager of the workforce and industrial development division, where she developed effective strategies for guiding families to self-sufficiency. Today, WCDC has four main program divisions: workforce and industrial development (integrating chronically jobless families into the labor market, job readiness and placement, advocacy, dinner support club); neighborhood development (grocery store awareness campaign, land use study group, leadership training, community newsletter); real estate development (commercial development, business support and lending); and youth development (after-school and summer activities, leadership development, academic support, school reform campaign).

Maurice Lim Miller, Executive Director
Maurice Lim Miller sits on many local, state, and national community development and foundation boards. Asian Neighborhood Design (AND) has five program areas: full architectural and urban planning services; a community resource and educational component; an employment training center for high-risk youth; an affordable housing component; and business development services, which include the operation of a furniture manufacturing venture.

Beatriz Olvera Stotzer, Founder and President
Beatriz Olvera Stotzer has over twenty years of management experience in government, public affairs, and community program development. She is also CEO of NEWCapital, which leverages funds for NEW's business initiatives. NEW, fully owned and operated by Latinas, has built three housing complexes in East Los Angeles, two for families and one for single teen mothers (a fourth is under construction). In addition to providing housing, NEW has been involved in educational, social, and business programs such as family case management; life skills classes; psychological counseling; youth programs; community organizing; small business technical assistance and training; and health fairs.
Learning From Evaluation
Organizations involved in community building have long recognized the value of developing evaluation capacity—that is, the ability to acquire and integrate information, assess performance, and respond to community needs and changing environments (World Bank, 1994). However, limited resources and funders' priorities have often made it difficult to become learning organizations—those that apply knowledge systematically to create their future. In response to the funders' emphasis on monitoring, directors have developed information systems to assess individual and family needs and track participation levels, resources, and selected outcomes.
The directors are at the forefront of efforts to create meaningful linkages among different sources of evaluative information, including formal, external evaluation. They feel the pressure to document outcomes in a climate that is skeptical about programs for the poor and that has cut resources for them, thus creating more competition among community organizations for scarce dollars. In their quest to create a coherent system, the directors share their evaluation challenges.
Challenges and Next Steps
Three features of traditional evaluation practices limit their usefulness in evaluating CBIs. First, evaluations have focused on single programs. They tend to be segmented vertically by program components rather than organized horizontally across multiple programs.
Programs struggle to balance categorical funding streams so that participants experience a holistic service delivery approach; similarly, directors grapple with multiple information sources to assess progress. Evaluations often fail to provide managers with global information about their comprehensive initiative and the linkages among different program components. Grace Hill Neighborhood Services, for example, has 112 separate programs funded by different sources and mandating different tracking and reporting requirements. The type and quality of evaluative information often vary widely, from anecdotal data to sophisticated university-conducted evaluations.
The fragmented nature of evaluation results in fragmented knowledge about what works and why. In the experience of Maurice Lim Miller, the success of the same set of families is attributed to competing single factors such as stabilized housing, access to a family resource center, or job training and placement. Each of the explanations reflects a funder's categorical program and the theory of change built around it.
Furthermore, evaluations tend to focus on a single organization and its services. Despite positive feedback from participants, directors are often unable to determine whether their services made the critical difference in participants' lives. Residents may be in contact with other community resources that are not part of the evaluation; limited resources require that evaluations sacrifice information about the broader community context of informal and formal supports.
Directors are taking steps to begin addressing fragmentation. Miller, for example, is piloting an assessment tool that tracks an individual's progress over time. The tool consists of seven elements related to poverty (income, education, housing, personal attributes, personal relationships, life management skills, and safe surroundings). It enables his staff to think about holistic changes in individual behaviors and situations rather than categorical programs. Donna Johnson is working with evaluators to establish five-year outcomes for every program area and to create a database to identify the full range of services that families receive.
Second, evaluations have emphasized program results rather than the building of evaluation capacity. Evaluations tend to be episodic, with a discrete time frame and a well-defined beginning and end. Often, when an evaluation ends, the evaluation capacity of the organization has not been strengthened. The directors, however, need accessible and reliable information to be able to respond flexibly to their communities. Johnson underscores that documentation helps ensure continuous program improvement. She says, “The point is to make sure that we're being effective. We need adequate feedback from consumers and we also need to know whether or not they are continuing to participate in their program areas.” George Eberle adds, “I don't think anybody pushes us to become a learning organization. Funders focus on programs.”
The directors would like to see a closer link between evaluation and program development. They would also like to see a balance between the demand for outcomes and the effort to build ongoing evaluation capacity. They recognize that evaluation and program development should be driving each other in a feedback loop. Some directors have created more sophisticated tools to link information on service delivery to resource allocation and budgeting processes, or to gather a comprehensive portrait of behavioral changes—rather than services received—at the family level. Others are working with external evaluators to determine goals and outcomes as part of a strategic planning process.
Beatriz Olvera Stotzer considers cost-benefit analysis to be a particularly critical evaluation method, one that can bridge evaluation and policy and program development. She says, “We get criticized that it is too costly to change people's lives on a short-term basis. We provide a continuum of holistic services to families but funders don't see that the dollars make sense. To make those numbers work for funders to continue our mission has been an uphill struggle.” Similarly, Howard Boutte points out that in his employment training program, allied supports nurture participants and help them cope with their situations. These supports also help them develop life skills and contribute to successful outcomes. There is a need to develop a tool that shows the value added of these components and the costs they entail in order to convince skeptical funders of their merit.
Third, evaluation has distanced evaluators and communities. The directors feel that evaluation primarily serves the interest of funders. What gets documented and who gets evaluated are primarily funders' decisions. Evaluation designs and data collection methods are established by professionals, and participatory approaches still remain on the margin of evaluation practice. The perceptions of individual evaluators can differ from those of the community. What appear to be problems for outsiders may not be perceived as such by the community. Evaluation activities need to begin with community definitions of problems and should encourage the community to use information to make decisions and create solutions. Equally important to directors is their responsibility to provide funders with a different perception of problems, priorities, and solutions. Funders have to be reminded about the structural nature of urban problems and the need for comprehensive solutions. (See Questions & Answers section.)
Differences also exist in definitions of what constitutes reliable information. Evaluators have professional standards to maintain, but such standards can come into conflict with staff and consumer perceptions. Johnson says, “There is no question that people are using our services, no question that they are getting jobs, no question that they are coming to our events, no question that they are the primary source of new referrals, but because our staff are not good at filling out fifteen-page forms with the right boxes checked, the evaluators are not sure whether or not our program is making a difference.” While evaluators deem these instruments “objective,” Johnson believes that outsider bias exists in the form of “the bias of unfamiliarity rather than the bias of familiarity.”
Directors would like to see evaluations that demonstrate a better fit between funders' information needs and those of local organizations and communities. It is not enough to create learning organizations; the goal has to be to nurture learning communities as well. (See Promising Methodologies section.)
Important steps are being taken in this direction. At the neighborhood level, for example, Eberle uses thirty networked computer systems that are accessible to residents. Residents can get information through data banks for job searches; police can find lists of licensed foster homes; and the Grace Hill staff can track service referrals and actual use. In the foreseeable future, it may be possible to empower residents to ask the right questions and use the computer system to generate the information to improve residents' decision making for larger community issues. Similarly, Miller is thinking of expanding the family assessment tool for use by groups of families. He is trying to move toward a system of peer evaluation in which clusters of families create goals, help each other accomplish those goals, and monitor and evaluate each other.
Implications for Evaluation
The evaluation of CBIs tends to focus on a few large demonstration projects. (See Evaluations to Watch section.) These evaluations are designed to explain what works and, to some extent, how and why. The information is necessary for policy development. However, another legitimate purpose of evaluation—one that is worthy of discussion and investment—is to build the capacity of managers to create and use ongoing evaluation information. This information is essential to develop highly effective and accountable organizations. If one goal of evaluation is to produce knowledge that can ultimately improve the lives of families and build healthy communities, then what are the key strategies to link evaluators more closely with managers of CBIs?
A related issue focuses on the implications of devolution and outcome accountability for CBIs. When these initiatives involve collaboration among multiple service providers, how can evaluations be designed to capture individual organizational accountability as well as collective accountability? Furthermore, smaller, less sophisticated organizations will need to gain access to technical assistance and tools for capacity building. What steps need to be taken to develop and widely disseminate low-cost and reliable evaluation tools?
Today's evaluations are often top-down and are based on perspectives of social issues that are not necessarily shared by community residents. Community involvement in evaluation is often a goal of CBIs. Residents need to make informed decisions appropriate for their needs. This calls for partnership and dialogue among program managers, evaluators, and residents. Some of the issues that will need to be addressed are: What are the best ways to train and equip communities to participate in evaluation? How can information technology be used to create and maintain learning organizations and learning communities?
Concerns related to the application of evaluation information, evaluation and accountability, and community participation will continue to shape the evaluation field. They deserve further reflection and discussion.
Reference
World Bank. (1994, November). Building evaluation capacity. Lessons & Practices, 4, 1–12.
M. Elena Lopez, Associate Director, HFRP
Cami Anderson, Research Assistant, HFRP