
The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.



Herbert Turner, Chad Nye, and Jamie Schwartz explain the Campbell Collaboration’s application of its systematic review process to parent involvement interventions.

The Campbell Collaboration (C2), an international volunteer network of researchers, practitioners, policymakers, and consumers, produces systematic reviews that assess the effects of interventions on the individuals or groups they target. A hallmark of a C2 systematic review is the use of the highest standards of scientific inquiry, with an emphasis on randomized controlled trial (RCT) studies.1 C2 systematic reviews are designed to identify, organize, analyze, and summarize data from existing research in order to provide guidance to stakeholders. The development and dissemination of a C2 systematic review follows a transparent process that, when feasible, includes a meta-analysis to statistically reconcile study results. To illustrate the procedural, integrative, and interpretative components of a C2 systematic review, we present below a step-by-step synopsis of the review we are currently completing in the area of parent involvement.2

A Systematic Review of Parent Involvement
A C2 systematic review involves five steps:

  1. Formulating the problem and criteria for study inclusion. In this case we began with the question, What is the effect of parent involvement on the academic achievement of elementary school children in Grades K–5? We decided to include only studies in which (a) parents provided educational enrichment activities outside of formal schooling, (b) academic achievement was measured as an outcome, and (c) random assignment was used to create treatment and control groups.
  2. Locating studies. We searched 20 electronic databases and emailed over 1,500 policymakers, researchers, practitioners, and consumers to request referrals either to studies or to people who knew of studies relevant to our review. To date, we have located 20 RCT studies that meet the inclusion criteria.
  3. Coding the studies. The 20 studies were independently coded for design characteristics (e.g., random assignment method), intervention characteristics (e.g., parent as reading tutor), outcome measures (e.g., reading achievement), and target population (e.g., fifth graders).
  4. Computing effect sizes.3 For each study, we took the difference between the treatment and control group means on an outcome measure, such as reading achievement, and divided that difference by the pooled standard deviation of the two groups.
  5. Interpreting results. Individual evaluations can produce quite different results, with parent involvement reaching statistical significance in one study but not in another. Using meta-analysis, that is, grouping the studies and computing an overall d index,4 what may at first appear to be contradictory findings can be quantitatively summarized and reconciled to give a useful estimate of the magnitude of the intervention effect. Based on a bare-bones meta-analysis5 of a subset of four of the studies in our review (d = 0.64), we concluded that children in the parent involvement group scored approximately two thirds of a standard deviation above the average academic achievement score of children in the control group, an effect that is statistically significant at the 95% confidence level.
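Steps 4 and 5 can be sketched in code. The snippet below is an illustrative Python sketch, not the authors' actual analysis: it computes a standardized mean difference (the d index) from group summary statistics, then combines several per-study d values with a fixed-effect, inverse-variance weighted average, the simplest form of the "bare-bones" meta-analysis described above. The four study entries are hypothetical numbers chosen only to demonstrate the arithmetic.

```python
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Step 4: difference between group means divided by the
    pooled standard deviation of the two groups."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

def d_variance(d, n_t, n_c):
    """Approximate large-sample variance of the d index."""
    return (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))

def fixed_effect_meta(studies):
    """Step 5 (bare-bones): inverse-variance weighted mean of the
    per-study d indexes, with a 95% confidence interval."""
    weights = [1 / d_variance(d, n_t, n_c) for d, n_t, n_c in studies]
    ds = [d for d, _, _ in studies]
    d_bar = sum(w * d for w, d in zip(weights, ds)) / sum(weights)
    se = math.sqrt(1 / sum(weights))          # standard error of d_bar
    return d_bar, (d_bar - 1.96 * se, d_bar + 1.96 * se)

# Hypothetical studies: (d, n_treatment, n_control) -- illustrative only.
studies = [(0.55, 30, 30), (0.72, 25, 25), (0.60, 40, 40), (0.70, 20, 20)]
d_bar, (lo, hi) = fixed_effect_meta(studies)
print(f"overall d = {d_bar:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Because the confidence interval for the combined d excludes zero, the overall effect would be judged statistically significant even if some individual studies were not, which is the reconciling role meta-analysis plays in step 5.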

Potential Contributions of C2 Systematic Reviews
An important characteristic of a C2 systematic review is transparency. Transparency allows stakeholders to interpret the validity of the review, which helps them to distinguish among interventions that are effective, ineffective, and even harmful. Practitioners can use the review as a guide to implement effective interventions, while policymakers can use the review to formulate policy or fund new or existing programs.

Related Resources


Cooper, H. (1998). Synthesizing research: A guide for literature reviews. Thousand Oaks, CA: Sage.

Hunt, M. (1997). How science takes stock: The story of meta-analysis. New York: Russell Sage Foundation.

Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.

1 A randomized controlled trial randomly assigns individuals to a treatment group or to a control group. When certain assumptions are met, group differences on outcomes can be interpreted as caused by the intervention.
2 This review is monitored, supported, and disseminated under C2’s Education Coordinating Group.
3 An effect size quantifies the magnitude of the difference between two or more groups, independent of sample size.
4 The d index estimates the magnitude of the differences among groups in standard deviation units. A d index of .20 is small, .50 moderate, and .80 large. For more information on the d index, see McCartney, K., & Dearing, E. (2002). Evaluating effect sizes in the policy arena. The Evaluation Exchange, 8(1), 4, 7.
5 A bare-bones meta-analysis examines effect sizes prior to additional analytical steps, such as moderator analysis.

Herbert Turner, Ph.D.
Scientific Research Project Director
The Campbell Collaboration at Penn
3701 Walnut Street
Philadelphia, PA 19104
Tel: 215-794-9849
Email: hmturner@campbellcollaboration.org
Website: www.campbellcollaboration.org

Chad Nye, Ph.D.
Executive Director
Center for Autism and Related Disabilities
Email: cnye@mail.ucf.edu

Jamie Schwartz, Ph.D.
Assistant Professor
Communicative Disorders
Email: jschwart@mail.ucf.edu

University of Central Florida
Center for Autism and Related Disabilities
12001 Science Drive, Suite 145
Orlando, FL 32826


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project