
Program Description

Overview The North Carolina Quality Enhancement Initiative (NC QEI) was a pilot project designed to prepare selected school-age child care programs in North Carolina for the process of accreditation by the National School-Age Care Alliance (NSACA; now known as the National Afterschool Association), with an emphasis on program improvement and awareness of quality standards for school-age care.
Start Date September 1997; completed May 1998
Scope state (three target communities in North Carolina: Charlotte, Greensboro, and Raleigh)
Type after school, before school
Location urban, suburban
Setting public school, religious institution, private facility, recreation center
Participants kindergarten through middle school students (K–8)
Number of Sites/Grantees 30 school-age child care programs in three communities
Number Served 1,565 in 1997–1998
Components NC QEI funders selected programs for participation in the initiative from applicants in the target communities; priority was given to those that served children of employees at the three funding companies. Based on data from a program administrator questionnaire designed to measure a program’s readiness for improvement, programs were placed in one of two groups: (a) First Steps, a slower paced track focusing on targeted program improvements during the 1st year, with the goal of applying for NSACA accreditation in 1999; or (b) Team Works, a faster paced track designed to have programs ready to apply for NSACA accreditation in spring 1998.

The initiative began with a 2-day training planned and conducted by two National Institute on Out-of-School Time (NIOST) training associates. Each program was also assigned an advisor, who provided on-site consultation, telephone consultation, and/or resource development for approximately 9 months (September 1997–May 1998). The nine advisors (three from each community) were trained for 2 days in May 1997 and were allotted a maximum of 10 hours of technical assistance per program, plus 10 hours of bimonthly peer support meetings. Each participating program also received two resources to assist with program improvement: the NSACA Pilot Standards (Sisson, 1995) and the Assessing School-Age Quality Kit (O’Connor, Gannett, Heenen, & Mattenson, 1996).

References:
O’Connor, S., Gannett, E., Heenen, C., & Mattenson, P. T. (1996). Assessing school-age child care quality. Wellesley, MA: School-Age Child Care Project.

Sisson, L. (1995). Pilot standards for quality school-age child care. Wellesley, MA: National School-Age Care Alliance.
Funding Level unknown
Funding Sources IBM, AT&T, and GE Capital, through the American Business Collaboration for Quality Dependent Care (ABC), a collaboration of U.S. companies partnering to ensure that their employees have access to quality dependent care programs and services; the initiative was administered by Work/Family Directions.

Evaluation

Overview An evaluation was conducted to examine whether NC QEI helped to improve the quality of participating programs and of staff–child interactions.
Evaluator Alice Henderson Hall, Georgia Southern University

Deborah J. Cassidy, University of North Carolina at Greensboro
Evaluations Profiled An Assessment of the North Carolina School-Age Child Care Accreditation Initiative
Evaluations Planned none
Report Availability Hall, A. H., & Cassidy, D. J. (2002). An assessment of the North Carolina School-Age Child Care Accreditation Initiative. Journal of Research in Childhood Education, 17, 84–96.


Contacts

Evaluation Alice H. Hall, Ph.D.
Department of Hospitality, Tourism, and Family and Consumer Sciences
Georgia Southern University
P.O. Box 8021
Statesboro, GA 30460
Tel: 912-681-5720
Fax: 912-681-7087
Email: alicehall@georgiasouthern.edu
Program Elizabeth D. Joye
Training Associate
National Institute on Out-of-School Time
106 Central Street
Wellesley, MA 02481
Tel: 781-283-2547
Fax: 781-283-3657
Email: lizjoye@aol.com
Profile Updated September 20, 2005

Evaluation: An Assessment of the North Carolina School-Age Child Care Accreditation Initiative



Evaluation Description

Evaluation Purpose To assess whether programs’ participation in the initiative improved the quality of the school-age child care environment and staff–child interactions.
Evaluation Design Quasi-Experimental: Pretest observation and survey data were collected on the 28 programs that agreed to participate in the evaluation (93% of the 30 total programs). Posttest observation data were collected on the 26 pretest programs still participating in the initiative; four programs, two of which had been included in the evaluation, dropped out of the initiative over the course of the project.
Data Collection Methods Observation: Pretest and posttest observations were conducted at each program using structured observation assessments. Visits lasted approximately 3 hours; observers arrived about 45 minutes before the children, met with the site director, received a program tour, and observed until most children were picked up.

Surveys/Questionnaires: Program directors completed surveys that requested information about directors’ education level (high school, some college, or 4 years of college or more) and salary ($10,000–$20,000 or over $20,000), program size (small [under 30 children], medium [31–70 children], or large [over 70 children]), staff/child ratio (low [1:8–1:12], average [1:14–1:15], or high [1:18–1:24]), auspice (for-profit or nonprofit), and whether the program possessed a state license.

Test/Assessments: Three assessments were used during each program observation. The School-Age Care Environment Rating Scale (SACERS; Harms, Jacobs, & White, 1996) was used to assess the program environment, while the Caregiver Interaction Scale (CIS; Arnett, 1989) and the Human Relationships (HR) Keys of Quality from the NSACA pilot accreditation standards Assessing School-Age Quality (O’Connor, Gannett, Heenen, & Mattenson, 1996) were used to measure staff–child interaction quality.

The SACERS measures programs’ developmental appropriateness, focusing on 43 items from six subscales: (a) space and furnishings, (b) health and safety, (c) activities, (d) interactions, (e) program structure, and (f) staff development. Each item is rated on a 7-point scale, with a score of 1 signifying inadequate, 3 minimal, 5 good, and 7 excellent. An average score on the 43 items is then calculated. There is also a seventh subscale of six items for programs that include children with special needs.
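The SACERS scoring described above is simple averaging; the Python sketch below illustrates it. The item count and rating range come from the profile, but the individual item ratings are hypothetical.

```python
# Minimal sketch of SACERS scoring as described in the profile: each of
# the 43 core items is rated from 1 (inadequate) to 7 (excellent), and
# the program's overall score is the average of the item ratings.
item_ratings = [4, 5, 3, 6, 5, 4] + [5] * 37  # 43 hypothetical item ratings

assert len(item_ratings) == 43 and all(1 <= r <= 7 for r in item_ratings)

sacers_score = sum(item_ratings) / len(item_ratings)
print(f"SACERS score: {sacers_score:.2f}")  # 4.93 on the 1-7 scale
```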

The CIS is a 26-item scale used to rate a single staff member, using such statements as “speaks warmly to the children” and “doesn’t supervise the children very closely.” A score of 1 indicates that a given behavior is never true, while a score of 4 indicates that a behavior is often observed.
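The sketch below illustrates CIS-style item rating in Python. The 1–4 rating anchors and the two quoted items come from the profile; the third item, and the assumption that negatively worded items are reverse-coded before averaging, are illustrative only, since the profile does not describe how item scores are combined.

```python
# Sketch of rating one staff member on CIS-style items, scored
# 1 ("never true") to 4 ("often observed") per the profile.
# Reverse-coding of negatively worded items and the third item
# below are assumptions for illustration only.
items = [
    ("speaks warmly to the children", 4, False),
    ("doesn't supervise the children very closely", 1, True),  # negatively worded
    ("listens attentively when children speak", 3, False),     # hypothetical item
]

scores = [(5 - score) if reverse else score for _, score, reverse in items]
cis_score = sum(scores) / len(scores)
print(f"CIS score: {cis_score:.2f}")  # (4 + 4 + 3) / 3 = 3.67
```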

The HR Keys of Quality consists of nine “keys,” each with four standards specific to that key, for a total of 36 items. Keys are statements such as “staff relate to children in positive ways” and “staff use positive techniques to guide children’s behavior.” For example, the standards for relating to children in positive ways assess how staff members make children feel welcome, listen to what children say, and engage with children, while the positive guidance standards assess whether staff members set appropriate limits and encourage children to solve their own problems. Each standard is scored on a 0–3 scale, where a score of 0 indicates no evidence or not met and a score of 3 indicates fully met. To achieve accreditation, NSACA guidelines state that a program must score at least 10 on each HR key and at least 2 on each standard.
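The accreditation thresholds above amount to a simple rule over the 36 standard scores; the Python sketch below encodes it. The key and standard counts, score ranges, and cutoffs come from the profile, while the example scores are hypothetical.

```python
# Sketch of the NSACA threshold rule described in the profile: each of
# the 9 HR keys has 4 standards scored 0-3, a key's score is the sum of
# its standards (max 12), and accreditation requires every key to total
# at least 10 AND every standard to score at least 2.

def meets_hr_thresholds(keys: list[list[int]]) -> bool:
    """keys: 9 keys, each a list of 4 standard scores in 0-3."""
    assert len(keys) == 9 and all(len(k) == 4 for k in keys)
    return all(sum(k) >= 10 and min(k) >= 2 for k in keys)

passing = [[3, 3, 2, 2]] * 9                    # hypothetical scores
failing = [[3, 3, 3, 1]] + [[3, 3, 2, 2]] * 8   # one standard scores 1
print(meets_hr_thresholds(passing))  # True: every key totals 10, no standard below 2
print(meets_hr_thresholds(failing))  # False: the key totals 10, but a standard is below 2
```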

The 26 posttest programs were grouped into two sets of performance clusters. One set of clusters was defined by pretest scores, and the other was defined by changes from pretest to posttest, based on averages of the three observation measures. For the pretest clusters, Cluster 1 was defined as those programs that were below the average of all participating programs on all measures except the CIS. Cluster 2 was defined as programs at about the overall mean of programs on all measures, and Cluster 3 was defined as programs with higher than average mean scores on all measures. For clusters based on changes in scores from pretest to posttest, Cluster 1 was defined as programs with negative change scores, Cluster 2 was defined as programs with small positive change scores, and Cluster 3 was defined as programs with large positive change scores.
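As a rough illustration of the change-score grouping described above, the Python sketch below assigns a program to a cluster from its pretest and posttest averages. The profile does not report the cut point separating “small” from “large” positive change, so the 0.5 threshold here is purely an assumption.

```python
# Sketch of the change-score clustering described in the profile: a
# program's change score is its posttest average minus its pretest
# average across the three observation measures. The 0.5 cutoff between
# small and large positive change is an assumed value for illustration.

def change_cluster(pretest_avg: float, posttest_avg: float,
                   large_gain_cutoff: float = 0.5) -> int:
    """Assign a change-score cluster from averaged observation scores."""
    change = posttest_avg - pretest_avg
    if change < 0:
        return 1  # Cluster 1: negative change
    if change < large_gain_cutoff:
        return 2  # Cluster 2: small positive change
    return 3      # Cluster 3: large positive change

print(change_cluster(3.2, 2.9))  # 1 (declined)
print(change_cluster(3.2, 3.5))  # 2 (gain of 0.3, below the assumed cutoff)
print(change_cluster(3.2, 4.1))  # 3 (gain of 0.9, at or above the assumed cutoff)
```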

References:
Arnett, J. (1989). Caregivers in day-care centers: Does training matter? Journal of Applied Developmental Psychology, 10, 541–552.

Harms, T., Jacobs, E. V., & White, D. R. (1996). School-age care environment rating scale. New York: Teachers College Press.

O’Connor, S., Gannett, E., Heenen, C., & Mattenson, P. T. (1996). Assessing school-age child care quality. Wellesley, MA: School-Age Child Care Project.
Data Collection Timeframe Data were collected in fall 1997 (pretest) and May 1998 (posttest).


Findings:
Formative/Process Findings

Program Context/Infrastructure At pretest, 7 programs were classified as Cluster 1 programs (programs below the average on all measures except the CIS), 5 programs were classified as Cluster 2 programs (programs at about the overall mean on all measures), and 14 programs were classified as Cluster 3 programs (programs with higher than average mean scores on all measures).

For the pretest clusters, the average pretest School-Age Care Environment Rating Scale score was 2.87 out of 7 for Cluster 1, 3.23 for Cluster 2, and 4.01 for Cluster 3.

At pretest, higher quality clusters were more likely to have a state license; among pretest score clusters, all Cluster 3 programs were licensed, compared to 80% of Cluster 2 programs and only 29% of Cluster 1 programs.

Smaller programs tended to fall in the higher quality pretest clusters. Of the pretest score clusters, Cluster 3 had a higher percentage of small programs (38%) than did Cluster 2 (20%) or Cluster 1 (14%), although Cluster 3 also had a higher percentage of large programs (29%) than did Cluster 1 (14%) or Cluster 2 (0%).

Nonprofit status was not associated with the higher quality pretest clusters.

Of posttest programs, 19 had a state license, and 5 did not. (Two were missing these data.)

Of posttest programs, 7 were classified as small, 14 were medium-sized, and 5 were large.

Seventeen of the posttest programs were nonprofits, while the remaining 9 were for-profit.
Recruitment/Participation The average total enrollment among the 28 pretest programs was 54.5 children, with a range from 10–109 children.
Staffing/Training Among the 23 program directors for whom data were available at pretest, the average annual salary was $22,473 (range: $5,000–$39,000).

Lower staff/child ratios, higher director education, and higher director salary were not associated with the higher quality clusters at pretest.

The majority of the 28 pretest programs provided benefits to staff, including sick leave (86%), annual leave (82%), health insurance (79%), and retirement (61%).

The average pretest Caregiver Interaction Scale score for pretest clusters was 2.85 out of 4 for Cluster 1, 2.25 for Cluster 2, and 3.18 for Cluster 3.

For pretest clusters, the average pretest Human Relationships Keys of Quality score was 3.57 out of 12 for Cluster 1, 5.20 for Cluster 2, and 8.57 for Cluster 3.

Posttest program directors’ education levels were as follows: high school (n = 2), some college (n = 5), and 4 years of college or more (n = 19). Of the 24 directors who attended at least some college, 9 majored in education, 4 in psychology, 4 in child development, and 7 in fields unrelated to their roles working with children.

Ten of the posttest programs had low staff/child ratios, 11 had average ratios, and 5 had high ratios.


Summative/Outcome Findings

Systemic Based on pretest to posttest changes, nine programs were classified as Cluster 1 programs (programs with negative change scores), nine were classified as Cluster 2 programs (programs with small positive change scores), and eight were classified as Cluster 3 programs (programs with large positive change scores).

On average, significant increases were seen from pretest to posttest on the School-Age Care Environment Rating Scale (mean score gain = 0.68, p < .001), with 21 programs showing improvements in their scores. For the change score clusters, the average change from pretest to posttest was -0.08 for Cluster 1, 0.58 for Cluster 2, and 1.18 for Cluster 3.

Caregiver Interaction Scale scores showed marginally significant improvement on average from pretest to posttest (mean score gain = 0.17, p = .05), with 18 programs showing improvements. For the change score clusters, the average change from pretest to posttest was -0.21 for Cluster 1, 0.14 for Cluster 2, and 0.65 for Cluster 3.

On average, significant gains were seen in Human Relationships Keys of Quality scores from pretest to posttest (mean score gain = 1.66, p < .01), with 17 programs showing improvements. For the change score clusters, the average change from pretest to posttest was -1.14 for Cluster 1, 1.80 for Cluster 2, and 4.64 for Cluster 3.

Among the change score clusters, the cluster with the fewest licensed programs showed the greatest improvement on all three measures: only 50% of Cluster 3 programs were licensed, compared with 89% of Cluster 2 programs and 83% of Cluster 1 programs.

Large programs showed the least improvement on the three measures and were more likely to decline. For the change score clusters, Cluster 1 had a higher percentage of large programs (44%) than did Cluster 2 (12%) or Cluster 3 (0%). Conversely, Cluster 3 had a higher percentage of small programs (38%) than did Cluster 1 (11%) or Cluster 2 (33%).

Programs that improved the most on the three measures had directors with lower levels of education. For the change score clusters, the majority of Cluster 1 and 2 directors had 4 years or more of higher education (78% and 89%, respectively), compared to only 50% of directors in Cluster 3. Additionally, 25% of Cluster 3 directors had only a high school education, while none of the Cluster 1 and 2 directors had this level of education.

Nonprofit program status, director salary, and staff–child ratios were not associated with program quality improvement as measured by average score changes on the three observation measures from pretest to posttest.
