



Heather Weiss and Karen Horsch of Harvard Family Research Project highlight lessons from an evaluation approach that rethinks the traditional program–evaluator relationship and helps us learn what works and what does not.

In 1993, the Annie E. Casey Foundation launched its three-year Evaluation Grants Program. This program, which supported the evaluation activities of organizations working in the area of family support, differed from traditional foundation-funded evaluation in three fundamental ways. First, it established a partnership that involved shared power between an organization and an external evaluator: organization staff selected the evaluator and participated actively in the evaluation. Second, the Grants Program emphasized both process and outcome studies. Third, it fostered broad participation in the evaluation process by organization staff beyond the director level. In establishing this program, the Foundation sought to enable organization staff to learn more about their programs, to contribute to understanding in the broader field of family support, and to determine whether this new approach could be a model for future Foundation funding for evaluation.

Three organizations carried out evaluations as part of the Grants Program: the Center for Family Life in Sunset Park (Brooklyn, New York); Kaleidoscope (Chicago, Illinois); and the Center for Successful Child Development (Chicago, Illinois). While the three organizations differed in the specific programs they provided, they were similar in their overall family support focus and their commitment to evaluation.

To learn more about the Grants Program approach, the Foundation commissioned HFRP to discuss the program with the evaluators and organization staff and to document the results. This work took place in two stages. First, HFRP conducted two telephone focus groups, one each with the evaluators and the organization staff. The purpose of these focus groups was to obtain preliminary insights that would help shape an agenda for a roundtable discussion of the issues. In early February 1998, organization staff, evaluators, and staff from the Foundation and HFRP met to discuss and reflect on what had worked, what had not worked, and how the approach might look different in the future. A summary of that discussion follows.

A Model for Foundation-Funded Evaluation

Flexibility has been the greatest strength of the Grants Program. Organization staff and evaluators alike stressed this point. Organization staff expressed dissatisfaction with past evaluations because staff had not had an influence on the evaluation process, and thus the findings often lacked relevance for them. They appreciated that the Grants Program allowed them to select the evaluators and work closely with them to ensure that their questions about their programs were answered. Evaluators welcomed the opportunity to respond directly to the information needs of the organization staff and, in doing so, were able to innovate with new evaluation designs and instruments.

While flexibility has been vital to the success of this program, some guidance and structure can be helpful. Organizations funding this approach need to consider how much guidance and structure they should provide in managing issues of control and responsibilities between the two groups. It is important that evaluators and organization staff understand the purpose of the evaluation in the same way; this means the funding agency must clearly communicate its expectations for the evaluation.

Conducting simultaneous process and outcome evaluations has advantages and disadvantages. For organization staff, simultaneous process and outcome evaluations provided both immediate feedback valuable to their work and information on outcomes with which to respond to demands for accountability. The concern, however, is that it often takes time to demonstrate outcomes. From the evaluators' view, there is a need to spend time up front to become thoroughly familiar with the program, as well as to gain the trust of staff and build the relationships necessary to conduct a useful evaluation. These initial steps, some argue, make it more desirable to focus on process first and outcomes later. Moreover, within one organization, some programs may be conducive to a simultaneous study of process and outcomes, while others may require a greater understanding of the program before a study of outcomes is useful.

Sponsoring organizations might consider a two-stage approach to evaluation funding: fund a larger set of programs for process evaluations, then choose a subset of those for outcome evaluations. This approach has the added benefit of determining whether a “fit” exists between evaluator and program before moving on to a focus on outcomes.

Opportunities to share ideas and experiences are important to the evaluation process. Participants noted the value of being part of a network within which to share experiences, solve problems, and learn about each other's practices. These discussions need to begin early in the process. They can include targeted evaluation advice from outside experts but should also provide the opportunity for evaluators to share their insights, practices, and even instruments with one another. Forums for organization staff are also important; these would give staff the opportunity to discuss common concerns about using information to build reflective practice, developing new tools, and innovating in practice. Bringing together organization staff and evaluators on a regular basis, perhaps annually, would enable them all to engage in a much broader discussion of the evaluation process and its usefulness.

Some organizations may benefit more from this approach than others. Participants noted certain organizational characteristics that might make some organizations better suited than others to an approach like that of the Grants Program: a willingness and ability to use data for decision making; pride in where they are, combined with enough humility to undergo scrutiny and self-reflection; a willingness to take risks; experience with their program interventions; and buy-in and a desire to have research done, along with some understanding of what that means for the organization.

Conducting the Evaluation

Engaging staff in the evaluation is key. The Grants Program facilitated active engagement of organization staff in the evaluation process. Engagement occurred in different ways across the sites, but generally involved selection of the evaluator; formation of research questions; identification of needed data and data collection instruments; and interpretation of results.

Relationships have to be nurtured. While the grants approach changes the evaluator-organization relationship into a potentially more productive one, its success depends greatly on the strength of that relationship. Although the organization's ability to select the evaluator greatly enhances the likelihood of a mutually productive relationship, the relationship needs to be nurtured continually. This requires time and energy and an ongoing negotiation of expectations and roles.

Participants noted several steps that can help foster a strong evaluator-organization relationship. Organization managers need to involve their staff members in the evaluation and provide evaluators opportunities to get to know the program and the staff. Evaluators need to listen actively to, involve, and respect staff; be adaptable and willing to try new approaches; recognize that program experience with evaluation has often been negative and work to improve this; be clear about how much work evaluation involves (Deborah Daro, evaluator of the Center for Successful Child Development, noted that “ownership does not minimize the work”); and provide regular feedback to organization staff on the status of the evaluation and its preliminary findings. Both evaluators and staff need to be flexible and willing to listen and communicate. Participants also noted that relationships are strengthened when regular meetings are established and attempts are made to get to know individuals outside of their roles.

Participants pointed out that while relationships need to be built and fostered, they are partly a matter of luck, and sometimes there simply is not a workable match between the evaluator and the organization. When this occurs, it is important that both parties recognize it and move on. Likewise, the sponsor needs to have sufficient flexibility built into the process, and enough trust in the programs, to reconsider the arrangement.

Both qualitative and quantitative data are necessary. Organization staff noted that they found both the qualitative and the quantitative data provided by the evaluations to be helpful. The quantitative information has helped them respond to demands for accountability, while the qualitative data have provided the in-depth information most useful for program implementation. The inclusion of information about the context in which programs operate, which was provided in one evaluation, has made the evaluation information more meaningful, particularly to external constituents.

Building Learning Into Programs

Evaluation can provide program operators the opportunity to reflect on their practice. Program operators said the evaluation process contributed to an understanding of their practice and ways to improve it. Sister Mary Paul from the Center for Family Life noted, “it provides a way to articulate what is going on in a program, what is missing, and what is underdeveloped.”

Participants noted that the opportunities for reflection can take many forms. For example, an evaluator might use interviews and group meetings to engage staff in a discussion of their practice, or a doctoral student may observe a program and provide feedback to staff enabling them to understand their practice better.

Evaluators can help support and build the capacity for ongoing reflective practice. Participants noted that there are several things evaluators can do to help support reflective practice among program staff: engage staff in formulation of evaluation questions and instruments; provide feedback regularly to staff and administrators; engage staff in problem solving and examining why findings are what they are; meet with groups of staff around problem solving (without administrators); and provide training in aspects of evaluation.

Sustainability of evaluation work is a real concern. Three years after the Evaluation Grants Program started, organization staff find that they still have many more questions they would like answered. The evaluations have pointed to new areas for study beyond the evaluation grants period. Peg Hess, evaluator of the Center for Family Life, stated, “We're done…and yet, we're clearly not done.” Additionally, the fundamental and rapid changes in the policy landscape of recent years require that programs continue to reflect and learn. To some extent, organization staff have developed new skills to help them in their own evaluative work. However, there is more to be done, and it is not clear where the resources will come from.

For sponsoring organizations, this raises important questions: How much time is enough? What level of resources is needed? How should resources be allocated? While recognizing that, in many cases, programs will not be able to fund full-scale evaluations on their own, sponsors should help organization staff develop some capacity to identify what they can do with available resources. Some participants suggested that sponsoring organizations may want to consider continued support for about 6 to 12 months after the formal evaluations are completed to ensure that some capacity is left in place. Organization staff also pointed out that sponsors may want to consider support for some administrative overhead; currently, agencies extend themselves and their staff to make evaluation work. Finally, developing linkages with universities, as was done in several of the programs, has also resulted in productive evaluation work.

Speaking to a Broader Audience About Family Support Programs

Identifying the audiences and ways of reaching them. Participants noted that there are a number of different potential audiences for information about this work: program staff; federal, state, and local policymakers; direct service practitioners; educators; agency staff; clients; advocates (for example, the national parent organizations); and researchers. Harriet Meyer of the Ounce of Prevention Fund pointed out that the real way to change programs is to influence the regulations, which requires that people work closely with the staff of public agencies, who, she observes, welcome evaluation information.

Participants identified several ways in which they have sought to reach different audiences: presenting at conferences; training; writing articles; and providing technical assistance. While programs have found success with these approaches, Karl Dennis of Kaleidoscope noted that it is difficult to “be a prophet in your own land.”

A legitimate question when speaking about audience is whether different audiences can be addressed with the same study or set of studies. Process studies enable evaluators to give staff some sense of the kinds of things they can work with as they develop their practice. What convinces policymakers is less clear. Some note that policymakers continue to ask for randomized designs in evaluation, while others observe that policymakers are increasingly recognizing the importance of context and are asking if “it works here.” Participants noted that understanding of evaluation and its use in the public arena must begin in the university—in policy/public administration and journalism programs. People in these programs often have a strong interest in children and families but little knowledge about what evaluation means and how to use the findings.

Key to reaching any audience is being creative in presenting the message. This means thinking about combining qualitative and quantitative information (for example, presenting a case study of one family with quantitative data which show how many of the program's clients share similar characteristics), presenting family histories, and using visual approaches such as photographs or a video.

Speaking to the larger evaluation and research communities. One way to view the broader benefits of funding evaluations of diverse programs is to examine their contribution to the study of family support. Evaluators in the Grants Program have, in some cases, found traditional approaches and instruments to be lacking and have developed new ones. For example, Michael Epstein, evaluator of Kaleidoscope, noted that evaluators and program staff involved in the Kaleidoscope evaluation developed a prototype of a strength-based scale that is now being used around the country. They also developed a valid, reliable way to assess an agency's implementation of wraparound—again, a contribution to the field. Participants pointed out, however, that innovative approaches are not always welcomed by everyone; in some cases, the greatest resistance comes from the research community itself.

Informing the family support field. While the Grants Program allows for flexibility in evaluation and measurement design, the challenge is how the information gathered from three very different sites can come together and speak to the larger family support field. While cross-site comparisons would be useful, the uniqueness of each organization is well served by the Grants Program approach. Participants suggested that while all programs share common dimensions, specifying these dimensions can be difficult. These might include similarity of client populations or the particular program approach applied (i.e., prevention or treatment). Another way to examine commonality would be to look at how different organizations respond to the same environmental threat, such as welfare reform.

Conclusion

In advancing an innovative approach to funding evaluation, the Annie E. Casey Foundation sought to improve the relevance of evaluation for program staff. The experience of three programs implementing this approach suggests that the flexibility inherent in such an approach fosters more productive and supportive relationships between programs and evaluation staff, provides information of greater use to programs, and promotes reflective practice. This experience also shows that evaluation is never easy—relationships need to be nurtured, opportunities to share experiences must be created, and sustainability of evaluation work needs to be considered. Perhaps the greatest challenge to organizations funding such evaluation approaches is how to translate program-level findings into messages that influence a broader audience.

Those interested in learning more about the evaluations of each of these programs and/or receiving copies of evaluation reports should contact the following people: Sister Mary Paul, Director of Clinical Services, Center for Family Life, 345 43rd Street, Brooklyn, New York 11232, Tel: 718-788-3500; Karl Dennis, Kaleidoscope, 1279 N. Milwaukee Avenue, Chicago, IL 60622, Tel: 773-278-7200; Deborah Daro, Research Director, National Committee to Prevent Child Abuse, 200 S. Michigan Avenue, Suite 1700, Chicago, IL 60604, Tel: 312-663-3520.

Heather Weiss, Director, HFRP

Karen Horsch, Research Associate, HFRP

