Volume VIII, Number 1, Spring 2002
Issue Topic: Family Support
Promising Practices
Pablo Stansbery, Senior Research Associate at Harder+Company Community Research, describes the process of developing an evaluation design that addresses the unique challenges created by California’s Children and Families Act.
In November 1998, California voters approved the Children and Families Act, commonly known as Proposition 10, creating a new challenge for us as evaluators in California. The Act imposes a 50¢ surcharge tax on cigarettes and tobacco with the revenues earmarked for early childhood education programs. Proposition 10 funds are given to each of the 58 counties in California, based on the number of annual live births. Because funding is flexible, many new programs have emerged to meet the growing demand for early childhood services.
Proposition 10 requires each county, through a governing commission, to demonstrate results-based accountability. Counties must provide data indicating that the money spent to support programs has indeed had an impact on young children and their families. To document the change resulting from their funded programs, many county commissions have contracted with professional evaluation consultants for data collection and synthesis.
Designed in response to the growing literature documenting early brain development and the infant’s capacity to interpret and categorize the earliest experiences, Proposition 10 has generated considerable excitement for new programs focusing on expectant mothers and children up to age 5. Funded programs work to: 1) improve children’s health, 2) increase parent education and support services, 3) enhance child development and school readiness, and 4) improve systems that will support services for young children. Most counties have funded 10-30 diverse programs across these four program areas.
This diverse set of programs means that evaluation consultants confront the task of developing an evaluation plan that documents the impact of Proposition 10 funding at multiple levels, from individual funded programs to countywide results.
Evaluators are charged with developing an evaluation design that meets multiple evaluation protocols without being cumbersome or duplicative. Our approach has been to build the evaluation design from the ground up rather than from the top down: we work with each contracted agency to co-develop an evaluation design that responds to its unique program. This grassroots approach consists of the following components:
1. Ensure contractors fully understand how their project is linked to a common set of goals and objectives. Many agencies hire professional grant writers to develop their proposals and so have limited understanding of how the program connects to its stated goals and objectives. We spend considerable time working with contractor staff to increase their understanding of evaluation and to build their capacity to conduct future evaluations.
2. Provide contractors with an overview of plausible evaluation methods. This information allows them to select the evaluation method they feel will most likely result in valid data.
3. Assist contractors in selecting or designing evaluation instruments. We introduce agencies to standardized instruments and tools used in similar programs. In other instances, we work with contractors to develop an assessment instrument that they feel adequately measures the effectiveness of their program. In either case, contractors tend to assume ownership of the evaluation process, become proficient in implementing it, and enhance the integrity of the data they collect.
4. Build on contractors’ knowledge of their target population. Contractors typically have experience working within their cultural community, so we co-construct the evaluation design with them, drawing on their cultural competency and sensitivity. Many excellent programs have been developed to enhance the lives of young children but have met with limited success when replicated outside their original context. Much like strategic planning and program development, the evaluation component must reflect the ecological reality of the target community.
Building an evaluation design from disparate programs also challenges the evaluator to develop a common framework for county commissions. One approach is to retrofit the multiple contractor evaluations to a common scale: we reconfigure each contractor’s evaluation tools into a 10-point scale. For example, a child development assessment scored from 0 to 50 is mapped onto the 10-point scale, as is a parent questionnaire that is normally scored from 0 to 100. Table 1 illustrates how we collapse these divergent assessments into a common scoring system for a countywide assessment (a brief sketch of the rescaling follows the table). Although each program may employ different assessments with different scoring systems, this method enables us to look at the overall effect of all programs. It also allows us to group programs by target population (e.g., Latino/Hispanic community, first-time mothers), target geographic area (e.g., zip code), or target service delivery mechanism (e.g., home visitation programs).
Table 1: An example of how different measurements can be rescaled and compared

| Contractor Assessment | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Child Assessment (Max Score 50) | 0-4 | 5-9 | 10-14 | 15-19 | 20-24 | 25-29 | 30-34 | 35-39 | 40-44 | 45-50 |
| Parent Assessment (Max Score 100) | 0-9 | 10-19 | 20-29 | 30-39 | 40-49 | 50-59 | 60-69 | 70-79 | 80-89 | 90-100 |
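As a rough illustration of this retrofitting step, the sketch below maps raw instrument scores onto the common 10-point scale shown in Table 1 and then averages the rescaled scores within a grouping such as zip code. The linear binning rule, the function name, and the sample records are illustrative assumptions, not the actual scoring procedure or data from the county evaluations.

```python
from collections import defaultdict

def rescale(raw_score, max_score, scale=10):
    """Map a raw instrument score onto a common 1..scale value.

    Assumes simple linear binning (each bin covers max_score/scale points),
    which reproduces the ranges in Table 1: a child assessment score of
    0-4 (max 50) maps to 1 and 45-50 to 10; a parent questionnaire score
    of 0-9 (max 100) maps to 1 and 90-100 to 10.
    """
    if not 0 <= raw_score <= max_score:
        raise ValueError("raw score outside the instrument's range")
    bin_width = max_score / scale
    return min(scale, int(raw_score // bin_width) + 1)

# Hypothetical contractor records: (program, zip code, instrument max, raw score)
records = [
    ("Home Visitation A", "94107", 50, 38),    # child assessment
    ("Parent Education B", "94107", 100, 72),  # parent questionnaire
    ("Home Visitation C", "94110", 50, 22),
]

# Collapse rescaled scores by target geographic area (zip code)
by_zip = defaultdict(list)
for program, zip_code, max_score, raw in records:
    by_zip[zip_code].append(rescale(raw, max_score))

for zip_code, scores in sorted(by_zip.items()):
    print(zip_code, round(sum(scores) / len(scores), 1))
```

The same grouping step could key on target population or service delivery mechanism instead of zip code; the point is only that once every instrument has been rescaled to the shared 10-point metric, results from very different assessments can be aggregated for a countywide view.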
The purpose of a grassroots evaluation design is to include service providers and community members in the evaluation process. By incorporating these key players in the evaluation design, we raise evaluation capacity at the local contractor level, cultivate a new interest in evaluation by local contractors, and redefine evaluation from an obligation to a valued method for improving service delivery. Although this approach can be labor-intensive and time-consuming, it responds to the multiple levels of evaluation required in Proposition 10, and builds local evaluation capacity, which remains long after we depart.
Pablo Stansbery
Senior Research Associate
Harder+Company Community Research
444 De Haro Street, Suite 202
San Francisco, CA 94107
415-522-5400
pstansbery@harderco.com