The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.
All Publications & Resources
Elisabeth Jacobs discusses mixed-methods research in a policy context, highlighting the demonstration program Moving to Opportunity.
Stephen Bagnato, Robert Grom, and Leon Haynes describe an evaluation design that provides scientific rigor in a community setting.
Marielle Bohan-Baker describes the instructive and collaborative approach to planning and evaluation of six community partners in Long Beach, California.
This Snapshot examines the range and scope of activities being implemented in current out-of-school time programs to set a context for understanding the links between program activities and positive outcomes for youth.
This paper examines how communication campaigns with different purposes (individual behavior change and policy change) have been evaluated. It discusses theories of change that can guide evaluation planning, along with five case studies of completed campaign evaluations. Each case study includes lessons from the evaluation, and the paper concludes with a set of cross-case lessons gleaned from these and other evaluations.
This brief offers expert commentary on the implications of the first-year report of the national evaluation of the 21st Century Community Learning Centers program for future evaluation and research. It includes a methodological critique of that study, written by Deborah Vandell.
Marjorie Weschler of SRI and Jane David of the Bay Area Research Group describe the importance of flexibility and feedback in conducting formative evaluation.
This section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Evaluating Education Reform.
María Elena Torre and Michelle Fine describe the process and potential of participatory action research with youth researchers to investigate race, ethnicity, class, and opportunity gaps in education.
Megan Beckett, Sandy Berry, and Kristin Leuschner of RAND Corporation describe a framework approach for transforming research findings into a practical tool for policymakers, parents, and practitioners.
This special report offers commentaries from experts on the challenges and opportunities presented by the current federal policy’s emphasis on scientifically based research for the practice and evaluation of education reform.
Julia Coffman of HFRP describes one approach OST programs can take to develop a logic model.
Maria Elena Figueroa from the Johns Hopkins University Center for Communication Programs reveals the Center’s methods for evaluating communication campaigns and offers five examples of their evaluations in progress.
This resource provides definitions of evaluation terminology frequently used in the out-of-school time field. It also provides answers to frequently asked evaluation questions.
This paper offers ideas for the roles evaluation can play in helping ensure that a discussion about sustainability begins early and is maintained throughout an initiative. The ideas in this paper draw on Harvard Family Research Project's two decades of experience with large-scale initiatives.
A collaboration with the Finance Project, this brief provides practitioners of local out-of-school time programs with techniques, tools, and strategies for improving their programs and tracking effectiveness over time.
This issue of The Evaluation Exchange examines the use of evaluation for continuous improvement. It incorporates advice from well-known experts, such as Paul Light, Rosalie Torres, and Joe Wholey, outlines innovative evaluation practices, and provides insights into the evaluations of a wide range of initiatives.
Rosalie T. Torres, Ph.D., is Director of Research, Evaluation, and Organizational Learning at the Developmental Studies Center in Oakland, California. Her 24-year career in evaluation has focused on researching, teaching, writing about, and practicing a learning approach to evaluation.
Amy Coates Madsen describes how, by setting best practices for nonprofits, the Standards for Excellence program both helps nonprofits to improve and increases public confidence in them.
Sharon Edwards and Ira Cutler of Cornerstone Consulting Group explain how organizations can use reflective assessments to gauge their progress and consider the choices ahead.
Kim Sabo of Sabo Consulting and Dana Fusco from York College, CUNY illustrate how they conducted a participatory evaluation of an after school literacy initiative to support its continuous improvement.
Philip Harris and Lori Grubstein of the Crime and Justice Research Center describe the “bottom-up” development of ProDES, an outcome-based information system that tracks youth in the juvenile justice system.
The following are excerpts from an evaluation panel at the conference “Nurturing Strong Full Service Schools: Building Bridges with Communities,” held on May 20, 2002. It was the fifth in a series of national conferences about full service schools organized by Margot Welch and the Collaborative for Integrated School Services at the Harvard Graduate School of Education. Panelists shared their evaluation findings and lessons learned.
An introduction to the issue on Continuous Improvement by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
Charlie Schlegel of Citizen Schools explains how their evaluation strategy successfully balances the need to determine program impact with the need for continuous improvement.