

This project was supported by generous grants from Kraft Foods, Inc. The contents of this publication are solely the responsibility of Harvard Family Research Project.

Increasingly, the traditional organizational and political boundaries between education and social services are giving way to more integrated approaches that help children and their families. School-linked services are one program approach that seeks to address concerns about fragmented and duplicative services by offering a single point of entry—the school site. These efforts take many forms and the goals of such programs are numerous. Most display a holistic approach to children, joint planning, shared service delivery, and collaboration and/or coordination.¹

Determining how school-linked services programs work, what their impact is, and whether they should be expanded, however, is difficult. We asked nine evaluators of school-linked services programs to identify considerations and best practices related to evaluating outcomes, sustainability, and collaboration. We also asked them to provide insights into evaluation design and data collection methods.

These nine evaluators are assessing a wide range of programs using a variety of approaches. The programs they have assessed include:²

  • California Healthy Start School-Linked Services Initiative: Begun in 1992, Healthy Start is a statewide initiative involving 65 grantees and 200 schools. Funded by the California Department of Education.
  • Polk Bros. Foundation's Full Service Schools Initiative: Begun in 1997, the Polk Bros. Foundation's Full Service Schools Initiative is located in three Chicago schools. Funded by the Polk Bros. Foundation.
  • Delaware Academy Student Health Program: Begun in 1991, a school-based health care program operated by Delaware Academy and Central School in Delhi, New York, and Mary Imogene Bassett Hospital in Cooperstown, New York.
  • Gardner Extended Services School: Begun in 1997, located in the Gardner elementary school in a neighborhood of Boston. Funded by the DeWitt-Wallace Reader's Digest Foundation.
  • Iowa School-Based Youth Services Program: Begun in 1991, a statewide initiative that involves 18 sites in over 30 communities. Funded by the Iowa Department of Education.
  • Kentucky Family Resource and Youth Services Center Program: Begun in 1990, a statewide program funded by the Kentucky legislature as part of the Kentucky Education Reform Act.
  • Missouri Caring Communities: Begun in 1989, a school-based program located in six schools. Funded through the collaboration of the Missouri departments of Health, Mental Health, Social Services, and Elementary and Secondary Education.
  • New Jersey School-Based Youth Services Program: Begun in 1987, located in 30 sites statewide. Funded by the New Jersey Department of Human Services.
  • School of the Future: Begun in 1990, the School of the Future has been implemented in four Texas cities. Funded by the Hogg Foundation for Mental Health.

Some evaluations are in their final stages or completed; others are just beginning. What follows is a summary of their responses.

Evaluating Outcomes

In this era of increased scrutiny of and calls for accountability in public programs, demonstrating the outcomes of school-linked services initiatives is vital. Funders want to know that their money is being spent wisely and effectively. Program staff members want to know if their interventions are having an impact and how they can be improved. Community members want to know how school-linked programs affect their lives.

(A) Decide Which Long-Term Outcomes to Measure

Because school-linked services address the needs of children and families more comprehensively than traditional programs, deciding which long-term outcomes are salient for analysis is difficult. The results of integrated programs often take many years to manifest themselves, and disaggregating which outcomes derive from program interventions is often impossible. At the same time, evaluators and program staff want to be sure that the outcomes that are identified are those for which a program can reasonably be responsible.

Include educational as well as social service outcomes

One fundamental question about the outcomes of relevance for school-linked services is whether outcomes related to educational results should be considered. Educational outcomes are critical to policymakers, school leaders, and parents, and many evaluators of school-linked services programs do examine them. For example, evaluators of the Iowa School-Based Youth Services Program are looking at GPA, high school drop-out rates, high school reenrollment, and self-perception of school attendance and performance, among other measures. California Healthy Start evaluators focused on educational measures such as school attendance, performance, and completion rates. Evaluators of the Gardner Extended Services School are examining reading, English, and math test scores as well as student knowledge about different careers and the opportunities open to them.

Evaluators caution, however, that while educational indicators such as achievement scores may be part of any outcome evaluation of school-linked services, these services may influence educational results only indirectly; process measures that are or could be related to school success should therefore also be used (see discussion below).

Examine a variety of outcomes

Evaluations of school-linked services programs cannot be limited to a single set of either educational or social service outcomes; instead, a variety of outcomes must be studied to obtain a true picture of program impacts. The evaluators of the nine programs discussed here offer a range of ideas about the types and nature of school-linked services outcomes that should be measured.

Evaluators of the New Jersey School-Based Youth Services Program are examining four areas of student outcomes: process outcomes, focusing on student access to needed services in an age-appropriate and culturally sensitive manner; treatment outcomes, focusing on program components that address and ameliorate existing problems; prevention outcomes, focusing on program components that help youth avoid common problems of adolescence (such as substance abuse, adolescent pregnancy, and violent behavior); and developmental outcomes, focusing on components that help youth make a healthy and successful transition from adolescence to adulthood through activities such as nutrition counseling, exposure to educational and cultural resources, recreation, social skills development workshops, and opportunities for service, decision making, and leadership. The evaluators will use the data on these outcomes to answer questions about the program's impact on access to needed services, its impact on specific problems, and its general impact. They also suggest that the selection of relevant outcomes include the development of strengths and personal and behavioral assets, as well as the problems and risk behaviors more traditionally measured in evaluations.

Evaluators of the Iowa SBYS program addressed impact by measuring the productivity of students who had dropped out of school, returned, and graduated. Productivity was measured via eight indicators: income generated by employment; post-secondary education; volunteer activity; participation in the political process; homemaking and childrearing; talents and skills not used in job/leisure activities; public assistance; and penal system involvement. The first six indicators are considered to be positively associated with productivity; the last two are considered to be negatively associated with it. Productivity, the evaluators point out, is a goal common to all support service agencies and may serve as a shared goal in the future. These measures of the productivity of students involved with SBYS are being considered for future research.
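
To illustrate how such a composite might be computed (this is a hypothetical sketch, not the Iowa evaluators' actual instrument or weighting), the example below scores the eight indicators on a common scale, adds the six positive indicators, and subtracts the two negative ones; all variable names and values are invented.

```python
# Hypothetical sketch of a composite "productivity" score built from the eight
# Iowa SBYS indicators described above. The indicator names, the common 0-1
# scale, and the equal weighting are illustrative assumptions, not the
# evaluators' method.

POSITIVE_INDICATORS = [
    "employment_income",        # income generated by employment
    "postsecondary_education",  # post-secondary education
    "volunteer_activity",       # volunteer activity
    "political_participation",  # participation in the political process
    "homemaking_childrearing",  # homemaking and childrearing
    "talents_and_skills",       # talents and skills indicator (see text)
]
NEGATIVE_INDICATORS = [
    "public_assistance",        # public assistance
    "penal_involvement",        # penal system involvement
]

def productivity_score(indicators):
    """Combine indicators scored 0-1 into one index: positives add, negatives subtract."""
    plus = sum(indicators.get(name, 0.0) for name in POSITIVE_INDICATORS)
    minus = sum(indicators.get(name, 0.0) for name in NEGATIVE_INDICATORS)
    return plus - minus

# Example: a returning graduate who is employed and volunteers, with no
# public assistance or penal system involvement recorded.
example = {"employment_income": 1.0, "volunteer_activity": 1.0}
print(productivity_score(example))  # -> 2.0
```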

The evaluators of Kentucky's family and youth resource centers focused on four broad categories of change: classroom performance (e.g., task, work completion, compliance), social interaction (e.g., friendship, peer interaction), global change (e.g., achievement, attendance), and perceived risk (e.g., dropout, learning).

Ground outcomes in actual services provided

Those responsible for selecting outcomes to be evaluated must work to ensure that the outcomes are related, even if indirectly, to the services actually provided by the program. Pressure for “significant” outcomes can result in the identification of long-term outcomes on which a given set of interventions has no influence. Evaluators, thus, must ensure that the outcomes which are selected for examination are reasonable, given the nature of the intervention. Identifying intermediate outcomes and processes, and using a theory of change approach can help to examine and demonstrate linkages between interventions and the long-term results expected (see discussion below).

Identify and understand the context for outcomes

Given the newness of multi-service initiatives such as those linked with schools, as well as the pressure to demonstrate results “early and often,” evaluators should take care to provide important information about the context for outcomes. Findings on outcomes should be carefully framed within a description of the program's actual operating context (including fiscal and philosophical support) and should take into account some measure of the actual level of services each participant received. Evaluators of the School of the Future (Texas) point out that, in some cases, indicators may get worse before they get better. For example, a successful program which retains students with the lowest academic standing in school may result in lower test scores school-wide, at least in the short term. Careful documentation of such phenomena is important in assisting audiences to understand why certain results are achieved.

(B) Link Process and Intermediate- and Long-term Outcome Measures

Many social services programs are promoted for their potential to make positive impacts in people's lives. The reality is not only that effects are interrelated and complex but that they frequently take a long time to manifest themselves. The evaluators stress the need to link long-term outcomes to intermediate outcomes and an understanding of processes.

Identify intermediate outcomes

Most desired changes in social and educational well-being will take many years to achieve. Thus, evaluators point out the importance of identifying and measuring important shorter-term outcomes. These outcomes may reduce some of the pressure for programs to show long-term impact in a short period of time. For example, evaluators of the Kentucky FRYSC found that by moving from long-term social indicators to more proximal classroom indicators and linking these, they were able to relieve some of the demands on the program to demonstrate reduction of dropout and substance abuse rates and enhanced school attendance and achievement in the short-term.

Recognizing that dramatic changes in outcomes will not be apparent within the three years of the project, the evaluators of the Polk Bros. Foundation's Full Service Schools Initiative (FSSI) in Chicago are using a social capital theory approach to understand short- to mid-term change. Social capital theory implies that the creation and enrichment of social and structural resources for children and families (e.g., mentorships, after school programs, emergency hotlines) are a necessary prelude to changes in the lives of children and families. Social capital theory can help an evaluator conceptualize links between interim and more long-term outcomes, while suggesting a range of authentic indicators of short-term progress. The evaluators of this program try to interject social capital thinking into the design of most of the measures and surveys they use. Student, parent, and teacher surveys focus on the perception that resources and relationships relevant to school achievement and child/family well-being are becoming more accessible within the community, especially through the agency of the FSS. Focus groups will follow up on these same themes. The evaluators are also tracking the emergence and extent of organizational linkages between each FSS and other community institutions, across a range of organizational categories.

Examine and document program processes

Implicit in the discussion of more immediate outcomes is the importance of understanding and documenting program processes. Most evaluators of the programs included here examined issues related to access and delivery of services, including service quality. For example, evaluators of the Healthy Start program examined the services that were provided (type/combinations, number of services, number of clients served/not served, and nature—child or family-focused) as well as the service delivery (the comprehensiveness, the level of integration, accessibility, cultural appropriateness, and service quality).

Linking social service and health processes to educational process outcomes is especially important. The evaluators of the New Jersey SBYS program point out that since school-linked services also often work to improve the school climate and the manner in which students with difficulties are treated, some of these outcomes could be included in evaluations. Measures related to these outcomes might include the visibility of the intervention within the school, the incorporation of the program into school practices as a way to assist and manage students, increased knowledge about and sensitivity to youth issues among school staff, and an increased sense of support for staff working with troubled youth.

Use a theory of change approach to elucidate complex relationships and processes

A theory of change approach requires programs to specify clearly their intended activities and the expected short- and longer-term outcomes of these activities. This approach is one way of beginning to clarify the “black box” of integrated service interventions. It enables evaluators and stakeholders to relate long-, intermediate- and short-term goals with activities and processes and helps to clarify the expected linkages among them.³

Evaluators of the Polk Bros. Foundation's FSSI in Chicago have found using a theory of change approach particularly helpful in working with the sponsoring foundation. They began by analyzing the Foundation's implicit models of change as expressed in its RFP and goals. The evaluators then compared this model with their own understanding of child development and institutional change to derive a working model of FSS change. They are using the Foundation's four explicit objectives to help frame a discussion about measurable goals, especially between the Foundation and the three sites, and to identify indices of change that can be tracked and included on a commonly-held benchmarking document and process (which is still under development).

A theory of change approach is best employed early in the process—ideally in the planning stage of the intervention—and evaluators employing such an approach may need to play the role of facilitator to help others to articulate the program's theory of change.
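
In practice, a theory of change can be written down as a simple mapping from program activities to the short-, intermediate-, and long-term outcomes they are expected to influence, which also makes it easy to check that every promised outcome traces back to a funded activity. The sketch below is one minimal, hypothetical way to represent such a model; the activities and outcomes are invented for illustration and do not describe any of the programs discussed here.

```python
# Minimal, hypothetical representation of a program's theory of change.
# The activities and outcomes are invented examples, not any program's actual model.

theory_of_change = {
    "after-school tutoring": {
        "short_term": ["homework completion rises"],
        "intermediate": ["course grades improve"],
        "long_term": ["on-time graduation"],
    },
    "on-site health clinic": {
        "short_term": ["students seen sooner for health problems"],
        "intermediate": ["fewer days absent for illness"],
        "long_term": ["on-time graduation"],
    },
}

def unsupported_outcomes(model, claimed_long_term):
    """Return claimed long-term outcomes that no activity in the model leads to."""
    covered = {outcome for spec in model.values() for outcome in spec["long_term"]}
    return [outcome for outcome in claimed_long_term if outcome not in covered]

# Example check: is every outcome promised to funders linked to at least one activity?
print(unsupported_outcomes(theory_of_change,
                           ["on-time graduation", "reduced substance abuse"]))
# -> ['reduced substance abuse'] (a promised outcome with no supporting activity)
```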

Evaluating Collaboration

By their nature, school-linked services require collaboration among the different entities providing services and between these entities and the school. Governance structures take many different forms—from formal agreements outlining specific roles and responsibilities to ad-hoc partnerships. While collaboration among different entities serving children and families is one of the most important factors in the success of integrating social and educational services (and is often a goal of these programs), it is also one of the most difficult things to evaluate.

(A) Decide Which Aspects of Collaboration to Evaluate

Decisions about the aspects of collaboration to evaluate influence the questions that the evaluation asks and the evaluative evidence that should be collected and analyzed.

Examine the structure, nature, and image of collaboration

The structure of collaboration: This includes an examination of the contractual arrangements and financing of service provision.

Indicators related to this might include:

  • The services provided, by whom, and when
  • The extent to which program staff and services are housed in school facilities
  • The availability of other resources (personnel, expertise, in-kind support) for programs
  • The degree to which school and program budgets are integrated
  • The number of agencies that collaborate with the project over time
  • The degree to which program and school staff engage in collaborative strategic planning
  • The extent to which job descriptions are coordinated among agencies and between agencies and schools.

The nature of collaboration: This includes an examination of how smoothly the different entities work together and the extent to which cultures, goals, and interests are congruent.

Indicators related to this might include:

  • The extent of participation of program staff in school-related committees, activities, and functions
  • The extent of participation of school staff in program committees, activities, and functions
  • The extent to which program services are incorporated into the school's planned response to particular school problems (for example, anger management and crisis response)
  • The extent to which parents and students participate in program committees, activities, and functions
  • The extent to which private sector entities participate in program committees, activities, and functions
  • The extent to which program staff serve as a resource in school classes
  • The extent to which staff development programs both include multiple agency participation and focus on issues related to interdisciplinary work
  • Specific knowledge and support of the program and its services on the part of key administrators and student support personnel, and general knowledge and support of the program and its services by teachers
  • The number of agencies that cosponsor grant proposals with the project
  • The number of agencies that communicate with the project in letters and memos
  • The degree to which the membership shares a vision or purpose.

Evaluators of the Iowa SBYS program point out that collaboration ultimately relates to the ability of the organizations to deliver services better. To assess this, the evaluators are examining: number of cancellations and increased follow-through; parent consent approvals that apply to all agencies; waiting time for services; number of staff; number of self-referrals; number of referrals; number of referrals by peers; number of service completions/positive terminations; number of youth most at risk and resistant to the services offered; ways of overcoming transportation problems in delivering services; increased communication among agencies; and reduced duplication of services.

The image of collaboration: This is an examination of the positive or negative image stakeholders (including students, parents, program staff, teachers, school administration) have about the program.

Indicators related to this might include:

  • Program inclusion in the automatic phone menu or in the student handbook
  • Ready visibility of the program's brochures in the administrative and guidance offices
  • The incorporation of information about agency services into the school curriculum.

Evaluators are also using surveys of program beneficiaries (students and parents) as well as other important stakeholders to glean information about whether the program and its operation are viewed positively or negatively. Such information can also help evaluators to determine the extent to which school-linked services might be sustainable.

Investigate parent and family involvement

While collaboration is often conceived as the relationship among formal agencies, parent and family involvement in the school-linked services effort is also an important aspect of collaboration. Evaluators of the Iowa SBYS program are evaluating family and parent collaboration via staff assessments which consider communication systems and opportunities for families to be involved in the programs offered. Communication systems include letters, surveys, phone contacts, home visits, newsletters, in-school conferences, parent workshop meetings, and reports of student progress. Opportunities for family involvement include individual program planning, parent/family counseling, decision-making committee participation, program evaluation, classes for parents, youth opportunity fairs, and other activities to help families understand support services.

Examine private sector involvement, where appropriate

In some school-linked services programs, such as that in Iowa, the private sector is expected to be part of school-based services. To assess the degree of this collaboration, the evaluators of this program are using staff reviews of aspects such as business funding of financial aid for the training of students and staff, job training provisions, paid work experience, scholarship funds for assisting at-risk youth to enter and complete post-secondary training, equipment donation for training, sponsorship of job fairs, long-term mentoring through business partnership or individual volunteers, willingness of businesses to have employees volunteer to help youth learn about careers, volunteering in classrooms, and providing the opportunity for job shadowing.

Establish a baseline for collaboration

Where collaboration is an explicit goal of a program, it is important to identify and document the conditions that existed before the program was implemented. Through interviews and document reviews conducted prior to the program, one should try to determine the nature of the relationships among the various entities.

(B) Use the Evaluation of Collaboration for Self-Assessment and Improvement

Evaluators of the Missouri Caring Communities Initiative point out that the process of assessing the health of a collaborative can itself be useful to sites. If all partners to a collaboration are invited to complete an assessment instrument, points of agreement and disagreement can be fed back to the site, and important issues can be surfaced.

The evaluators suggest that assessing collaboration might require two instruments: one for the collaborative coordinators, including items that are factual rather than attitudinal or perceptual, and one for collaborative members, to facilitate the kind of analysis described. Dimensions to be measured on both include: the degree to which the membership shares a vision or purpose; the content of this vision; how well the governance structure is working; barriers to the work of the collaborative; strengths of the collaborative; interaction styles and issues between the school and the service site; views of communication styles and success; financial needs; service delivery content, barriers, and successes; perceptions about clients; views of the collaborative's future; and equity and diversity issues in the collaborative.

On the coordinator's questionnaire, there should also be items about the collaborative's history, membership size, attendance at meetings, sources of funding, official governance structure, employees, number and type of clients served in a given timeframe, composition of staff, and services delivered.

Responses to these instruments are generally quite complete when they are collected anonymously at a meeting of the collaborative and when respondents know that their answers will be seen only by outside researchers. A follow-up mailing can reach those not present at the designated meeting. When gathered across sites, such data can be used to compare collaborative characteristics with outcomes. At a given site, members of the collaborative can see, for example, how much they agree on their biggest barriers and their greatest strengths.
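
Where member responses are gathered this way, points of agreement and disagreement can be summarized very simply before being fed back to the site. The sketch below is a hypothetical example (the barrier names and responses are invented) that tallies how many members named each barrier, so a site can see at a glance where its members agree.

```python
# Hypothetical sketch: summarize anonymous member responses to a
# "biggest barriers" item so a site can see where members agree.
from collections import Counter

# Each list is one member's (anonymous) answers; the items are invented examples.
member_responses = [
    ["unstable funding", "staff turnover"],
    ["unstable funding", "unclear governance"],
    ["staff turnover"],
    ["unstable funding"],
]

counts = Counter(barrier for answers in member_responses for barrier in answers)
n_members = len(member_responses)

for barrier, count in counts.most_common():
    print(f"{barrier}: named by {count} of {n_members} members")
# unstable funding: named by 3 of 4 members
# staff turnover: named by 2 of 4 members
# unclear governance: named by 1 of 4 members
```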

Evaluating Sustainability

School-linked services often begin as experimental programs; their duration beyond initial funding, as well as their possible replication and scale-up, are therefore important evaluation considerations.

(A) Decide Which Aspects of Sustainability to Measure

Evaluators identify four aspects related to sustainability that evaluation should address.

Examine institutional and individual relationships

Evaluators note that an examination of the multiple and reciprocal institutional and individual relationships at a number of different levels is an important aspect of evaluating the sustainability of programs. These include relationships between:

The program and the host school/other organizations or agencies: Considerations include the extent and kinds of formal support offered by the local educational leaders. Indicators that one might want to examine are similar to those used to assess collaboration: the degree to which program staff and services are housed in school facilities; the extent to which school and program budgets are integrated; the extent to which program and school staff engage in collaborative strategic planning; specific knowledge and support of the program and its services on the part of key administrators and student support personnel; general knowledge and support of the program and its services by teachers; the extent of participation of program staff in school-related committees, activities, and functions (and vice versa); the extent to which program services are incorporated into the school's planned response to particular school problems; and the extent to which program staff serve as a resource in school classes.

The program and constituencies: The sustainability of a program also depends on consistent and visible community support. Therefore, it is important to assess the extent and degree of engagement by parents, students, community agencies, and churches in the school-linked services programs.

The evaluator of the Delaware Academy Student Health (DASH) program points out that the ongoing support from constituents depends on at least four factors: the extent to which program beneficiaries feel that their own needs are being met; the extent to which the program is viewed by key constituencies as making a “difference” and as crucial to the success of the school's children across a range of indicators; the degree of controversy related to the program; and whether stakeholders feel that the use of scarce resources for this purpose is justified relative to other potential uses.

Examine the stability and adequacy of funding sources

Evaluation of financial sustainability requires the identification of all funding sources and their relative importance, and an assessment of their long-term potential. It is also important to examine how programs have applied or acted on their knowledge about potential funding sources. Questions might include: the variety and extent of financial resources (including federal, state, local, and private funding); the anticipated duration of funding; and the extent to which a long-term perspective is emerging in the program and the actions taken to ensure future viability.

In Iowa, each SBYSP site is being given technical assistance to develop an ongoing written plan identifying specific activities to accomplish continuation. Evaluators of the program are examining several factors considered essential to continuation, including: cost per student and cost per contact service; determination of the essential services that need to be continued; future planning of school facilities to accommodate the collocation of multiple service providers; analysis of existing jobs in all service agencies to determine which roles necessary to continue services can be filled with existing staff; the level of administrative planning compared to the service level; planned development of common goals and ownership; sharing of resources between agencies to accomplish common goals; and linkage of funds from multiple sources along with the development of a common monitoring system implemented across agencies.

Evaluators of the School of the Future note that while follow-up on projects is not very often done in evaluations, it is very important. Where feasible, evaluation should include following a project beyond the initial funding period to explore its survival and determine the reasons for its survival (or demise). This aspect of a process evaluation can be the most valuable for others interested in implementing school-based service projects.

Examine the sustainability of outcomes as well

While program durability is an important aspect for evaluation, evaluators also suggest that evaluation examine the sustainability of the outcomes achieved. The evaluators of the New Jersey SBYS recommend that the durability of outcomes be examined at the individual and at the school and community levels. At the individual level, to evaluate the sustainability of student outcomes, evaluators should not only look at the persistence of positive outcomes or reduction of negative outcomes over time, ideally following students through adolescence, but also examine evidence of developmental outcomes, identifying program-developed strengths and assets that the student can apply broadly and use to address other issues in his or her life. At the school and community level, evaluators should look at the establishment and maintenance of a stable infrastructure of peer and adult support for positive youth development and the institutionalization of a coherent and well-integrated system of linked services to address a range of youth needs.

Consider the extent to which institutions become effective “learning organizations”

Evaluators of the Polk Bros. Foundation's FSSI in Chicago have found that examining the extent to which the program is becoming an effective “learning organization” is an important consideration for long-term sustainability. The evaluators have begun to track the three sites to assess how effectively the three planning groups “learn to learn,” that is, learn to create, conserve, analyze, and disseminate information.

Evaluation Design

Underlying the implementation of any evaluation are issues related to evaluation design. This includes deciding on an evaluation approach and considering the different ways to collect data.

(A) Decide on an Evaluation Approach

Use a participatory approach

Evaluators of school-linked services programs note that a participatory approach to the evaluation of these programs is important. The incorporation of perspectives from multiple stakeholders ensures the development of instruments and strategies sensitive to and reflective of stakeholder understandings of the programs and brings their perspectives to understanding and using the data collected. Evaluators note that participation of multiple stakeholders is particularly important in the identification of the important outcomes to be evaluated.

Begin early

Evaluators suggest that the evaluative endeavor should begin as early as possible. This allows time to build trust, understanding, and buy-in among participants, as well as time to understand the basic parameters for on-site research (including potential constraints on data collection methods) and to network. Evaluators caution, however, that early evaluation work should focus on collecting information about the process of program implementation; evaluation of outcomes and impact should wait until the program is reasonably mature.

Incorporate multiple methods

Evaluators note that the evaluation design for school-linked services programs should employ both quantitative and qualitative methods. These methods can reinforce and inform one another. Information from multiple methods additionally helps evaluators to communicate findings to a wide variety of audiences. For example, interviews, focus groups, and observational data can be used to develop questions for survey instruments. Likewise, findings from surveys and reviews of program data can be illuminated through individual interview and case study data.

Maintain evaluation standards

To ensure that the evaluation design is carried out soundly and fairly, the evaluators of the Gardner School program made it a requirement that each evaluation question correspond to the desirable attributes of sound evaluations delineated in the Joint Committee on Standards for Educational Evaluation's Program Evaluation Standards (1994). The standards for sound and fair evaluation practices are categorized into four groups: utility, feasibility, propriety, and accuracy. Each of these categories can be used, when applicable, to describe the evaluation questions being pursued in each phase of the evaluation design.

(B) Consider Comparison Groups to Make the Case for Attribution

One of the most difficult design issues in evaluating school-linked services is attribution. Classical random assignment designs are difficult since, for logistical or ethical reasons, evaluators often cannot randomly assign families. Identical schools do not exist in any case.

Evaluators of the Missouri Caring Communities Initiative point out, however, that there are ways in which attribution may be approximated using comparison standards:

Another school: While no two schools are exactly alike, it may be possible to identify a nearby school, with a demographically similar population, which can be compared to the site that has school-linked services. If annual surveys are used at each school, it would be possible to compare families on some important outcomes such as the percentage who have their basic needs met, school involvement, access to needed services, satisfaction with the school, and so on. Attribution here is, of course, still arguable. However, families can also be asked to what extent it was the school-linked site that helped them with given issues, strengthening the case for attribution at least slightly.

Clients served in traditional ways: If an agency out-stations a worker at a school-linked site, the same agency may have staff members serving clients in traditional settings. For example, in one case, evaluators examined differing results for workers located in one central office and two workers from the same office who were out-stationed at a school-based site. The evaluation design tested whether families at the school were less likely to have their benefits interrupted because of the greater ease of reaching their workers with needed monthly installments and whether more of the clients at the school-based site moved to employment, since staff at this site were able to see clients more often and work their cases more intensively.

Comparisons with families in the same zip code: In some cases, families who reside in the same zip code area as a school-linked service site never get to that site for services, while others do. Outcomes for those who become clients can be compared with those of people who do not become clients.

Comparisons of students or families in the same school: Even in a school with linked services, some students and families do not receive those services. It is possible to compare outcomes among families and children in the same school by service levels.

The theory of change approach: A theory of change approach can be used to make the argument that services are, in fact, provided, and if the shorter- and longer-term outcomes do occur as predicted, it is likely that these services contribute to the observed outcomes (see discussion above).

None of these strategies is a classical random assignment experiment. It is possible to argue that families who get to a service site are certainly different from other local families who do not get to the site, or that families who get the most intensive services are clearly different from those who do not receive services. It is also possible to argue that the theory of change was, in fact, wrong, and that the occurrence of predicted outcomes was fortuitous. But in trying to decide if a school-linked site is the “cause” of any change made by families, some comparison standard is helpful.

Evaluators of the New Jersey SBYS program point out that the design should, where feasible, include comparison data that are sensitive to critical factors beyond demographics, such as family stress and prior school performance. Gross comparisons (e.g., use/nonuse, race, or gender) should be avoided. Replications across sites and over years should be studied to determine the relationships among outcomes, the individual site context, and youth at different age levels. These evaluators are attempting to identify comparison nonusers within the school, based on similar characteristics identified through surveys of all students (users and nonusers).
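
One simple way to operationalize such a comparison is to match each service user to the most similar nonuser in the same school on survey-derived characteristics and then compare outcomes within the matched pairs. The sketch below is a highly simplified, hypothetical illustration; the field names and data are invented, and a real design would use many more covariates and appropriate statistical tests.

```python
# Hypothetical sketch: match each program user to the most similar nonuser
# in the same school on survey-derived characteristics, then compare an outcome.
# Field names and data are invented; real evaluations would use richer
# covariates (e.g., family stress, prior performance) and formal tests.

students = [
    # (id, uses_program, prior_gpa, family_stress_scale, attendance_rate)
    ("s1", True,  2.1, 4, 0.88),
    ("s2", True,  3.0, 2, 0.95),
    ("s3", False, 2.2, 4, 0.81),
    ("s4", False, 3.1, 1, 0.97),
    ("s5", False, 2.8, 3, 0.90),
]

def distance(a, b):
    """Similarity on the matching covariates only (prior GPA and family stress)."""
    return abs(a[2] - b[2]) + abs(a[3] - b[3])

users = [s for s in students if s[1]]
nonusers = [s for s in students if not s[1]]

# Pair each user with the closest nonuser on the matching covariates.
pairs = [(u, min(nonusers, key=lambda n: distance(u, n))) for u in users]

# Compare the outcome (attendance rate) within matched pairs.
diffs = [u[4] - n[4] for u, n in pairs]
print(f"Mean attendance difference (user - matched nonuser): {sum(diffs)/len(diffs):+.3f}")
```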

(C) Build Evaluation Capacity

Maintain ongoing information exchange

Evaluation should be seen as part of organizational learning—as integral to program design, implementation, and success. Therefore, evaluators of school-linked services programs suggest that adequate feedback loops be established to ensure timely reporting of findings to program staff and other stakeholders, including parents and communities. These might include meetings, presentations, or reports. This helps to develop buy-in into the evaluation while at the same time assuring that the evaluators have obtained an accurate picture of reality.

Maintain information exchange within the research team

Evaluators of the Polk Bros. Foundation's FSSI in Chicago note that an ongoing information exchange network also needs to be established within the evaluation team. They acknowledge that this is often much easier said than done, especially when data collection intensifies. However, consistent information exchange among researchers helps to reinforce shared reflection on the study and the program. The program's small evaluation staff circulate timely electronic summaries of all project-related interviews and observations to one another and meet weekly to discuss progress. One of the researchers archives the reports in a qualitative database (NUD*IST) that can also accommodate scanned documents and other project records.

Consider the use of evaluation coaches

The evaluators of the California Healthy Start Initiative suggest that “evaluation coaches” can be used in a multi-site evaluation to enhance the capacity of local sites both to provide and use evaluation information. Once coaches cultivate a relationship with sites, they can work with local staff to address the data needs of the statewide evaluation and to customize the evaluation to local circumstances. This customizing might include developing a local strategy for collecting the required service and outcome data (i.e., what forms should be used, who should complete forms and when, and how clients can be tracked over time), as well as identifying ways to collect information that sites want for their own evaluation purposes. Once a local strategy is established, coaches can provide training to local staff and monitor data quality.

The choice of coaches is important. Successful coaches are flexible, nonthreatening, interested, respectful, motivated, experienced with related programs, not overcommitted to other professional activities, and knowledgeable about the statewide evaluation requirements. Coaches also need to maintain consistent communication with, and even receive ongoing training from, those responsible for the overall evaluation. Regular feedback from sites about coaches can also increase their effectiveness. Finally, coaches need to invest substantial time and energy to ensure that sites can carry out evaluation responsibilities and maintain an evaluation focus.

(D) Collect Data

Consider data collection at the onset of the project

Ideally, data collection should be considered at the beginning of the project. Evaluation itself should be considered before a program begins, to ensure that adequate thought and attention have been given to the effort and so that methods and data collection can be designed to take full advantage of the opportunity to learn from it.

Often, however, baseline data for an evaluation have not been collected. The evaluator of the DASH program notes that where baseline data are not available, evaluators must use indirect methods to get at the evaluative measures of interest, including, in some cases, using the perceptions of knowledgeable informants as measures. For example, in the DASH case, the evaluator used interviews with teachers and parents to help determine the extent to which the program made a difference in students' attendance and attentiveness in class. Faculty were surveyed about whether they felt students were present in class more, returned to class sooner after experiencing a health problem, and paid more attention in class since DASH had become available to handle their health concerns. Teacher perceptions were thus used to quantify the difference the program made to students' ability to benefit from their education. In another example, surveys of parents allowed the DASH evaluation to uncover the fact that many children were receiving health care from the program that they otherwise would not have received. To quantify this, parents were surveyed about the number of times their children would not have received health care without DASH because of various listed constraints (work demands, cost, etc.). The results supported an approximation of the difference the program made to health care access and of the relative influence of different access barriers.

Involve key stakeholders in data collection

Evaluators point out that it is important to get input from stakeholders such as site representatives when considering how to collect these data. Since site personnel participate in the project on a daily basis, they can assist in collecting much of the information. If needed, they should be trained to collect data using methods that are easy to use and not time-consuming. One way of getting site personnel involved is to demonstrate to them how the data being collected will benefit their project.

Balance data requirements and data collection costs

Evaluators suggest that a balance must be struck between the costs of the data collection effort and the utility of the data. While negotiating this balance can be difficult, it is necessary. The evaluators also point out that the resources devoted to collecting quantitative data must be negotiated so that adequate resources remain for collecting qualitative data. The development and implementation of surveys, for example, can be very expensive, and in some cases cheaper alternatives may need to be used.

Use appropriate instruments

Evaluators note that the choice of instruments is important. Evaluators of the School of the Future state that instruments should be sensitive enough to pick up gradual changes and the behaviors being measured should realistically be able to change in the time specified. Evaluators of the Polk Bros. Foundation's FSSI in Chicago point out that pre-validated instruments should be used where possible, especially if these allow for comparison with national databases.

Develop a data system

Evaluators point out that, where possible, a database system should be put in place to collect and monitor data. This can be costly, and the cost must be considered in planning the evaluation. Computerized databases make the job of collecting and collating outcome data easier. For example, in Iowa, evaluators seeking to document long-term impact on families and children have developed a computerized database (entitled “Efficient Accounting of Services to Youth,” or EASY) to assist in longitudinal monitoring and follow-up. This data collection is complemented by an intake process that uses an instrument designed to identify the people most at risk. Data collection also includes staff questionnaires, case studies, and student/parent surveys. Data can then be aggregated to determine whether the most and least at-risk students and family members are being helped, and local programs benefit from being able to analyze and discuss outcome data for the most and least at-risk people within their community.
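
For a rough sense of what a minimal service-tracking system might look like (this is a hypothetical sketch using Python's built-in sqlite3 module, not the design of the EASY database), the example below records service contacts alongside an intake risk level and then aggregates contacts by risk level to see whether the most at-risk participants are being reached.

```python
# Hypothetical sketch of a minimal service-tracking database, loosely inspired
# by the idea behind Iowa's EASY system but not its actual design. Uses only
# Python's built-in sqlite3 module; all tables, fields, and records are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE participants (
    id INTEGER PRIMARY KEY,
    name TEXT,
    intake_risk_level TEXT      -- e.g., 'high', 'moderate', 'low' from the intake instrument
);
CREATE TABLE service_contacts (
    id INTEGER PRIMARY KEY,
    participant_id INTEGER REFERENCES participants(id),
    service_type TEXT,          -- e.g., 'counseling', 'health', 'employment'
    contact_date TEXT
);
""")

# Invented example records.
conn.executemany("INSERT INTO participants (id, name, intake_risk_level) VALUES (?, ?, ?)",
                 [(1, "Student A", "high"), (2, "Student B", "low")])
conn.executemany(
    "INSERT INTO service_contacts (participant_id, service_type, contact_date) VALUES (?, ?, ?)",
    [(1, "counseling", "1998-09-14"), (1, "health", "1998-10-02"), (2, "health", "1998-09-20")])

# Aggregate contacts by intake risk level to see whether the most at-risk
# participants are actually being reached.
for risk, contacts in conn.execute("""
    SELECT p.intake_risk_level, COUNT(c.id)
    FROM participants p LEFT JOIN service_contacts c ON c.participant_id = p.id
    GROUP BY p.intake_risk_level
"""):
    print(f"{risk}: {contacts} service contacts")
```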

Conclusion

School-linked services are emerging as a promising approach to address the increasingly complex problems facing children and families. Our understanding of these programs is far from complete, however. We do not yet understand which school-linked service interventions work and which do not. We know that these programs must be accountable, but defining to whom and for what they are to be accountable is difficult. We know that these programs have substantial governance implications but are unsure as to whether new collaborative arrangements can work. We seek to expand these interventions if they prove successful, but are unsure about the conditions necessary to ensure long-term viability. Evaluation plays an important role in helping us to understand whether and how these approaches work. The insights of the fourteen people cited in this paper provide valuable information to help begin to address some of these challenges and identify promising evaluation practices.


¹ Harvard Family Research Project. (1996). Challenges in evaluating school-linked services: Toward a more comprehensive evaluation framework (p. 9). Cambridge, MA: Author.

² For a full description of the programs as well as their evaluations, see the Appendix below.

³ The terms “logic model” and “evaluability assessment” are also used to describe similar approaches.


Acknowledgments
Harvard Family Research Project would like to thank everyone who participated in this effort: Mary Geisz of Cornell University; Robert Illback of R.E.A.C.H.; Scott Keir of the Hogg Foundation for Mental Health; Ray Morley of the Iowa Department of Education and James Veale, Statistical Research Consultant and Educator; Susan and William Philliber of Philliber Research Associates; Mary Wagner and Shari Golan of SRI International; Mary Walsh and George Madaus of Boston College; Constancia Warren and Anita Baker of the Academy for Educational Development; and Sam Whalen of Northwestern University.


Summary of Evaluation Considerations

Evaluating Outcomes

(A) Decide Which Long-term Outcomes to Measure

Include educational as well as social service outcomes
Examine a variety of outcomes
Ground outcomes in actual services provided
Identify and understand the context for outcomes

(B) Link Process and Intermediate- and Long-term Outcome Measures

Identify intermediate outcomes
Examine and document program processes
Use a theory of change approach to elucidate complex relationships and processes

Evaluating Collaboration

(A) Decide Which Aspects of Collaboration to Evaluate

Examine the structure, nature and image of collaboration
Investigate parent and family involvement
Examine private sector involvement, where appropriate
Establish a baseline for collaboration

(B) Use the Evaluation of Collaboration for Self-assessment and Improvement

Evaluating Sustainability

(A) Decide Which Aspects of Sustainability to Measure

Examine institutional and individual relationships
Examine the stability and adequacy of funding sources
Examine the sustainability of outcomes as well
Consider the extent to which institutions become effective “learning organizations”

Evaluation Design

(A) Decide on an Evaluation Approach

Use a participatory approach
Begin early
Incorporate multiple methods
Maintain evaluation standards

(B) Consider Comparison Groups to Make the Case for Attribution

(C) Build Evaluation Capacity

Maintain ongoing information exchange
Maintain information exchange within the research team
Consider the use of evaluation coaches

(D) Collect Data

Consider data collection at the onset of the project
Involve key stakeholders in data collection
Balance data requirements and data collection costs
Use appropriate instruments
Develop a data system


Appendix: Evaluators of School-Linked Services

Program
California Healthy Start School-Linked Services Initiative (Healthy Start)

Description
Statewide initiative providing planning and operational grants to support the development of local collaborative structures and processes and specific service strategies for meeting community needs. Integration of health, mental health, social, and other support services for children and families. Involves 65 grantees (200 schools). Funded by the California Department of Education. Initiative begun in 1992.

Evaluation Description
Longitudinal study comparing baseline with changes in processes and results over time. Data sources included surveys, service records, intake/follow-up forms, school records, and California Basic Education Data System. Evaluation conducted 1992–1996. Evaluation funded by the Foundation Consortium for School-Linked Services.

Evaluator
Mary Wagner, Shari Golan, SRI International


Program
Polk Bros. Foundation's Full Service Schools Initiative (FSSI)

Description
Objectives include: improved access to a range of recreational, educational, social, and health-related services; increased participation by school and community representatives; improved relationships between parents and school staff; and creation of patterns of mutual support between classroom and social support services at each school. Located in three Chicago schools. Each school was required to partner with a nonprofit organization in the community, to constitute a broadly inclusive oversight and planning committee, and to hire a Resource Coordinator specifically to facilitate the development of the effort. Funded by the Polk Bros. Foundation for three years (1997–1999).

Evaluation Description
Evaluation of outcomes and processes. A mix of qualitative and quantitative methods to study changes in the lives of four primary constituents of each FSS: students, parents, teachers, and associated service providers. Methods include key informant interviews with planners and participants, surveys of school and neighborhood conditions, focus groups, collection of academic outcome information, collection of program attendance and program evaluations, tracking of linkages between schools and external organizations, direct observation of planning meetings, FSS events, and FSS programs. Evaluation to be conducted from January 1997 through December 1999. Evaluation funded by the Polk Bros. Foundation.

Evaluator
Sam Whalen, The Center for Talent Development, Northwestern University


Program
Delaware Academy Student Health Program (DASH)

Description
Range of primary health care services at two elementary schools and one combined middle and high school in a very rural county in upstate New York. School-based health care program operated by Delaware Academy and Central School in Delhi, New York, and Mary Imogene Bassett Hospital in Cooperstown, New York. In operation since 1991.

Evaluation Description
Mixed methods evaluation approach. Qualitative data collected through interviews, clinical observation, and document review. Quantitative methods included surveys and statistical analyses of clinic utilization data. Data collected were integrated during interpretation, reporting, and presentation. Evaluation conducted from January–December 1996.

Evaluator
Mary Geisz, Cornell University


Program
Gardner Extended Services School (GESS)

Description
Program extends the school day, links the school with community agencies, and responds to the needs of two generations. Program located in an elementary school in a neighborhood of Boston. Program begun in 1997. Funded by the DeWitt-Wallace Reader's Digest Foundation.

Evaluation Description
Four-phase effectiveness evaluation will assess longitudinally the inputs, processes, and outcomes of the project for four target groups: children, parents, teachers, and community. First phase (early planning phase) will gather needs assessment/planning information. Initial project implementation phase will answer formative questions about early initiatives/plans/processes. Full-scale implementation phase will collect ongoing quantitative and qualitative data on four target groups. Impact assessment phase will provide information on intended and unintended outcomes. Effectiveness evaluation will be conducted for five years. (Additional process evaluation will be conducted by the Foundation). Evaluation funded by the DeWitt-Wallace Foundation.

Evaluator
George Madaus, Mary Walsh, Boston College School of Education


Program
Iowa School-Based Youth Services Program (SBYSP)

Description
Statewide initiative links local schools with other community resources to provide services for children. At a minimum, each school involved provides case management, primary health care services, human services, mental health services, employment and training, recreation opportunities, and life-skills training. Involves 18 sites in over 30 communities. Funded by the Iowa Department of Education. Initiative began in 1991.

Evaluation Description
Evaluation of implementation and outcomes. Information on a number of measures including long-term program impact gathered through a computerized database management system as well as staff questionnaires, case studies, and student/parent surveys.

Evaluator
Ray Morley, State of Iowa Department of Education
James Veale, Statistical Research Consultant and Educator


Program
Kentucky Family Resource and Youth Services Center Program (FRYSC)

Description
Family Resource Centers offer preschool and after school care, parent training, and health services and referrals. Youth Services Centers provide health services, substance abuse counseling, and family crisis or mental health counseling. Funded by the Kentucky legislature as part of the 1990 Kentucky Education Reform Act (KERA).

Evaluation Description
Evaluation examined implementation and outcomes. Mix of qualitative and quantitative approaches. Quantitative data were used to learn about who is being served, the services provided, and outcomes. Qualitative data were used to glean descriptive data about salient program characteristics, resource utilization, implementation problems and successes, unmet needs, and important issues and themes that were central to the program's mission. Evaluation funded by the Annie E. Casey Foundation.

Evaluator
Robert Illback, R.E.A.C.H.


Program
Missouri Caring Communities

Description
Caring Communities is an approach for schools, neighborhoods, and public agencies to link services and supports to achieve better results for children and families. Caring Communities tailors an array of services to the specific needs of the neighborhood families and children served. Initiative begun in 1996 and includes collaboration of Missouri Departments of Health, Mental Health, Social Services, Labor and Industrial Relations, and Elementary and Secondary Education.

Evaluation Description
Longitudinal study monitors 18 benchmarks measuring changes in six core areas. Data are recorded at the community (ZIP code), school, and core client level. Comparisons exist with similar communities and individuals not participating in Caring Communities. Data sources include records of schools, state agencies, and participating providers. Evaluation funded by the collaborating agencies of the State of Missouri.

Evaluator
William Philliber, Rogéair Purnell, Philliber Research Associates


Program
New Jersey School-Based Youth Services Program (SBYSP)

Description
Core school-based services include individual and family counseling; primary and preventive health services; employment counseling, training, and placement; drug and alcohol abuse counseling; and recreation. Local programs have flexibility in how they provide services and activities. Initiative begun in 1987. Each project is managed by a lead agency, and a 25% local resource match is needed to secure funding. Funded by the New Jersey Department of Human Services.

Evaluation Description
Two-phase evaluation. First phase is a policy context study used to analyze SBYSP's evolving policy context, articulate a theory of action in each program context-area, and design an outcome study. The second phase is an outcome study involving an examination of outcomes for individual youth. Evaluation employs a mix of quantitative and qualitative data. The first phase relied on site visits and interviews. The second phase entails: periodic surveys; review of participation, utilization, and school data; and case studies. Evaluation conducted from 1995–1998. Evaluation funded by the Annie E. Casey Foundation.

Evaluator
Constancia Warren, Anita Baker, Academy for Educational Development


Program
School of the Future (SoF)

Description
School of the Future provides an integrated array of both treatment and preventive services at the school site. Implemented in four Texas cities: Austin, Dallas, Houston, and San Antonio. Funded by the Hogg Foundation for Mental Health, Texas for five years (1990–1995).

Evaluation Description
Mix of quantitative and qualitative methods. Quantitative aspect collected annual student-level information on emotional well-being, self-esteem, perceptions of school climate, and academic achievement, through school records, educational tests, and surveys. Qualitative aspect included collection of information on the process of implementing the program through family surveys and key informant survey of site representatives. Quantitative portion of evaluation conducted through the five years of funding. Qualitative information was gathered throughout the funding period and two years beyond. Evaluation funded by the Hogg Foundation for Mental Health.

Evaluator
Scott Keir, Hogg Foundation for Mental Health


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project