Volume X, Number 4, Winter 2004/2005
Issue Topic: Evaluating Family Involvement Programs
Promising Practices
Herbert Turner, Chad Nye, and Jamie Schwartz explain the Campbell Collaboration’s application of its systematic review process to parent involvement interventions.
The Campbell Collaboration (C2), an international volunteer network of researchers, practitioners, policymakers, and consumers, produces systematic reviews that assess the effect of interventions on the individuals or groups targeted. A hallmark of a C2 systematic review is the use of the highest standards of scientific inquiry, with an emphasis on randomized controlled trial (RCT) studies.1 C2 systematic reviews are designed to identify, organize, analyze, and summarize data from existing research to provide guidance to stakeholders. The development and dissemination of a C2 systematic review follows a transparent process which, when feasible, includes a meta-analysis to statistically reconcile study results. To illustrate the procedural, integrative, and interpretative components of a C2 systematic review, we present below a step-by-step synopsis of the review we are currently completing in the area of parent involvement.2
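To give a sense of what "statistically reconcile study results" means in practice, the sketch below shows the simplest form of meta-analytic pooling: a fixed-effect, inverse-variance weighted average of study effect sizes. The study numbers are hypothetical and the function is illustrative only, not C2's actual analysis pipeline.

```python
def pooled_effect(effects, variances):
    """Fixed-effect meta-analysis: average study effect sizes,
    weighting each study by the inverse of its variance so that
    larger, more precise studies count for more."""
    weights = [1.0 / v for v in variances]
    total = sum(w * e for w, e in zip(weights, effects))
    return total / sum(weights)

# Three hypothetical parent involvement studies
effects = [0.30, 0.50, 0.20]      # standardized mean differences
variances = [0.02, 0.05, 0.01]    # sampling variance of each estimate

overall = pooled_effect(effects, variances)
```

Because the third (most precise) study reported the smallest effect, the pooled estimate is pulled toward it, which is exactly the reconciliation a meta-analysis is meant to provide.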
A Systematic Review of Parent Involvement
A C2 systematic review involves five steps:
Potential Contributions of C2 Systematic Reviews
An important characteristic of a C2 systematic review is transparency. Transparency allows stakeholders to interpret the validity of the review, which helps them to distinguish among interventions that are effective, ineffective, and even harmful. Practitioners can use the review as a guide to implement effective interventions, while policymakers can use the review to formulate policy or fund new or existing programs.
Related Resources
Cooper, H. (1998). Synthesizing research: A guide for literature reviews. Thousand Oaks, CA: Sage.
Hunt, M. (1997). How science takes stock: The story of meta-analysis. New York: Russell Sage Foundation.
Lipsey, M. W., & Wilson, D. B. (2001). Practical meta-analysis. Thousand Oaks, CA: Sage.
1 A randomized controlled trial study randomly assigns individuals to a treatment group or to a control group. When certain assumptions are met, group differences in outcomes can be attributed to the intervention.
2 This review is monitored, supported, and disseminated under C2’s Education Coordinating Group.
3 An effect size quantifies the magnitude of the difference between two or more groups.
4 The d index estimates the magnitude of the differences among groups in standard deviation units. A d index of .20 is small, .50 moderate, and .80 large. For more information on the d index, see McCartney, K., & Dearing, E. (2002). Evaluating effect sizes in the policy arena. The Evaluation Exchange, 8(1), 4, 7.
5 A bare bones meta-analysis examines effect sizes prior to additional analytical steps, such as moderator analysis.
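The d index described in footnote 4 can be computed directly from group summary statistics. The sketch below uses the standard pooled-standard-deviation formula for the standardized mean difference; the treatment and control numbers are hypothetical, chosen only to illustrate the small/moderate/large benchmarks.

```python
import math

def d_index(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """d index: difference between treatment and control means,
    expressed in pooled standard deviation units."""
    pooled_sd = math.sqrt(
        ((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)
    )
    return (mean_t - mean_c) / pooled_sd

# Hypothetical study: treatment mean 105, control mean 100,
# both groups with SD 10 and 50 participants each
d = d_index(105, 100, 10, 10, 50, 50)
# d = 5 / 10 = 0.50, a "moderate" effect by the benchmarks in footnote 4
```

By the rule of thumb cited above, a d of .20 would be small, .50 moderate, and .80 large.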
Herbert Turner, Ph.D.
Scientific Research Project Director
The Campbell Collaboration at Penn
3701 Walnut Street
Philadelphia, PA 19104
Tel: 215-794-9849
Email: hmturner@campbellcollaboration.org
Website: www.campbellcollaboration.org
Chad Nye, Ph.D.
Executive Director
Center for Autism and Related Disabilities
Email: cnye@mail.ucf.edu
Jamie Schwartz, Ph.D.
Assistant Professor
Communicative Disorders
Email: jschwart@mail.ucf.edu
University of Central Florida
Center for Autism and Related Disabilities
12001 Science Drive, Suite 145
Orlando, FL 32826