
Donna Bryant and Karen Ponder describe the evaluation of Smart Start—North Carolina's nationally recognized early childhood initiative—and share some of what the evaluation team has learned during the past 10 years.

The Smart Start initiative aims to ensure that all children under 5 are healthy and prepared for school. Former North Carolina governor Jim Hunt launched Smart Start in 1993 with legislation that also established the North Carolina Partnership for Children, a nonprofit organization that provides technical assistance and oversight for Smart Start statewide. Beginning with 18 counties, Smart Start has expanded each year and now encompasses all 100 counties in the state. Currently, 81 local or multicounty partnerships participate in Smart Start.

Smart Start's success is based on a few key elements: local decision making, community planning and collaboration, and a comprehensive approach to reach all children. Each local partnership develops a comprehensive plan that targets the community's greatest needs across three core service areas: child care (quality, affordability, and availability), health, and family support. Each part of this plan must connect to measurable outcomes, and services must not duplicate existing statewide or local efforts.

From the beginning, the Department of Health and Human Services (DHHS) funded a statewide evaluation team consisting of evaluators and researchers from the Frank Porter Graham Child Development Institute (FPG) and other schools and departments at the University of North Carolina at Chapel Hill. The evaluation's goal was to answer some key questions:

  • How was the process working?
  • Did Smart Start–funded efforts improve child care quality?
  • Was Smart Start good for children?

Between 1993 and 2003, over 35 studies were conducted. Following is a discussion of some of the findings and lessons learned from those studies.

Related Resources


Bryant, D., Maxwell, K., & Burchinal, M. (1999). Effects of a community initiative on the quality of child care. Early Childhood Research Quarterly, 14, 449-464.

Buysse, V., Wesley, P. W., Bryant, D., & Gardner, D. (1999). Quality of early childhood programs in inclusive and noninclusive settings. Exceptional Children, 65, 301-314.

The Smart Start evaluation website. www.fpg.unc.edu/smartstart

The Value of a Logic Model¹
The first 12 Smart Start partnerships funded 220 different activities at the local level in the first year. Evaluators were thus challenged to define what Smart Start really was and how “it” should be evaluated. The team developed a post-hoc logic model that organized and simplified the multitude of activities, based mainly on their delivery process and intended outcome. A tally of where the bulk of Smart Start dollars was being spent—on child care quality and school readiness, families, medical care, and the collaboration process itself—guided the evaluation, which then focused on these areas.

Documenting Implementation
Each quarter, partnerships reported their service counts (the number of children served, the number of child care teachers participating, etc.) and the amount of private contributions, in both cash and volunteer time. Tracking was initially done on paper, then on diskette, and is now done via a web-entry system. FPG summarized the data quarterly and annually for the state partnership and provided ongoing technical assistance to counties to increase accuracy. These monitoring data have been coupled with other evaluation information to help tell the story of the initiative, particularly in the early years before impacts could be shown.

Objectivity, Expectations, and Outcomes That Matter
Four large samples of preschool classrooms were observed in 1994, 1996, 1999, and 2001 to document quality over time and to link increases in quality to the level of centers' participation in improvement activities funded by Smart Start. The steadily increasing classroom quality scores were influential with policymakers, although what seemed to matter most to them were findings involving children's outcomes:

  • Children attending centers that participated in Smart Start technical assistance (TA) were rated by teachers as more ready to succeed at kindergarten entry.
  • Children from participating centers were rated by teachers as having 50% fewer language delays and behavior problems.
  • Observations in 120 classrooms and assessments of 512 children showed significant relationships between Smart Start TA, classroom quality, and children's abilities and knowledge as they entered kindergarten.

Some studies have assessed the quality of family child care, linking it to providers' participation in Smart Start. Others have described the range of families participating in Smart Start–funded programs, documented higher rates of immunization and of having a primary health care provider, and shown a 50% increase in the number of centers serving children with disabilities.

None of the quantitative studies described above could be considered “causal,” as counties were not randomly assigned to receive Smart Start, nor were centers randomly assigned to receive technical assistance. (Random assignment is considered the best way to show that one intervention is better than another, because it helps ensure that the groups being compared are equivalent at the outset on the characteristics that matter.) Because Smart Start did not use random assignment, its approach to understanding effectiveness was to steadily build evidence of associations between program factors and outcomes that could logically be interpreted as effects of Smart Start.

Funders and policymakers had to be shown that no single study would answer all their questions. Evaluators had to resist the temptation to overstate the results; rather, they had to make very clear to decision makers what the results did and did not say. An evaluation's value to any initiative is diminished if policymakers perceive the evaluators as cheerleaders rather than as objective reporters.

Throughout its 10 years of evaluating Smart Start, the team also conducted several qualitative studies that focused on the following:

  • Needs of local partnerships
  • Partnerships' decision-making processes
  • The challenges of involving parents and business partners
  • The nature of the public-private partnership

Information from these studies has helped state leaders justify the financial commitment, add appropriate technical assistance, revise or recommend policies and procedures to improve the program, and explain how results were achieved. For example, a qualitative study using in-depth interviews with dozens of leaders in partnerships where quality had increased most significantly yielded better insights into which strategies and activities must be implemented to truly have an effect.

Rigorous Methods
Amid the increasing push for accountability, policymakers have become more aware of the ideal of the “gold standard” randomized experiment. As noted above, Smart Start studies could not meet this standard. Whenever possible, though, random selection was used to enroll child care centers, teachers, family providers, families, and children into the studies. Data collectors were trained to standards of reliability and conducted observations in the field. When studies involved pre- and post-visits, two different data collectors were assigned. Data entry was double-keyed, and sophisticated analyses were conducted. Rigorous research methods in both qualitative and quantitative studies lent credibility to the results.

Translating Data to Change Practices
The evaluation team was fortunate to enjoy good relationships with both DHHS funders and the state Smart Start partnership staff. Both groups read all draft reports and added policy perspectives that the evaluators might have overlooked. As a result, findings could be understood and acted on by those most able to effect change. For example, findings from one child care study showed that certain types of quality improvement TA were more likely than others to be related to positive child outcomes. County consultants used these results to recommend activities with the most impact and to decline funding for services with limited impact.

Supporting Local Evaluators
The FPG team helped train and support local evaluators so that local partnerships could compare outcomes for different local initiatives or determine whether partnerships were meeting their community goals. Approximately 50% of the partnerships employed a local evaluator either on staff or via contract. FPG support to partnerships included writing job descriptions, serving as a resource for assessment tools and reports, including local evaluators in training sessions for FPG data collectors, conducting quarterly meetings of the 50–60 local evaluators across the state, and hosting an annual evaluation conference where outside speakers would share useful information.

As a result of difficult budget times, the legislature discontinued funding for the statewide evaluation in 2002. The team took pride, however, in the cadre of capable local evaluators who continue to work on behalf of Smart Start all across the state, evidence that evaluation has indeed been built into the core of the initiative.

Evaluation has been used both to help improve Smart Start and to show its worth. Process studies conducted over the years helped describe and clarify the nature of the program, and provided food for thought and action until quality improvement could be observed and related to children's outcomes.

¹ A logic model illustrates how the initiative's activities connect to the outcomes it is trying to achieve.

Donna Bryant
Associate Director
Frank Porter Graham Child Development Institute
Campus Box 8180
The University of North Carolina
Chapel Hill, NC 27599-8180
Tel: 919-966-4523
Email: bryant@unc.edu

Karen Ponder
President
North Carolina Partnership for Children
1100 Wake Forest Road
Raleigh, NC 27604
Tel: 919-821-7999
Email: kponder@smartstart-nc.org

