Arnold Love and Betty Muggah describe how the Hamilton Community Foundation applied democratic evaluation principles to transform challenged neighborhoods into vibrant communities.
Seema Shah, a researcher at the Institute for Education and Social Policy, shares her experience of engaging community organizing groups to develop a logic model on how community organizing leads to better student outcomes.
Katrina Bledsoe of the College of New Jersey writes about the inclusion of student voices in the evaluation of an obesity prevention program.
Saville Kushner of the Centre for Research in Education and Democracy at the University of the West of England suggests ways that an evaluation's participants can make evaluations more democratic.
Sally Leiderman, President of the Center for Assessment and Policy Development, explains how evaluation can be a tool to help communities and their partners advance racial equity work.
An introduction to the issue on Democratic Evaluation by HFRP's Founder & Director, Heather B. Weiss, Ed.D.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
Katherine Ryan, Associate Professor of Educational Psychology at the University of Illinois, describes three approaches to democratic evaluation and argues that they can provide field-tested methods for addressing equity and inclusion issues in evaluations of programs for children, youth, and families.
This web-only version of the New & Noteworthy section features an expanded annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Democratic Evaluation.
This 2-day meeting brought together the perspectives of diverse stakeholders to inspire new ideas and foster stronger links between research, practice, and policy. Participants discussed issues of access, quality, professional development, the role of evaluation research, and systems-building efforts.
Robert Penna and William Phillips from the Rensselaerville Institute’s Center for Outcomes describe eight models for applying outcome-based thinking.
John Bare of the Arthur M. Blank Family Foundation explains how nonprofits can learn about setting evaluation priorities based on storytelling and “sacred bundles.”
Abby Weiss from HFRP describes the tool that the Marguerite Casey Foundation offers its nonprofit grantees to help them assess their organizational capacity.
John A. Healy, Director of Strategic Learning and Evaluation at The Atlantic Philanthropies, shares ways to position learning as an organizational priority.
Robert Boruch, a founder of the Campbell Collaboration and professor of education and statistics at the University of Pennsylvania, discusses how the Campbell Collaboration and randomized trials contribute to evidence-based policy.
This issue of The Evaluation Exchange periodical focuses on evaluation methodology, covering topics in contemporary evaluation thinking, techniques, and tools. Mel Mark, president-elect of the American Evaluation Association, kicks off the issue with a discussion about the role that evaluation theory plays in our methodological choices. Other voices in the issue include Georgia State University evaluator Gary Henry, who makes the case for a paradigm shift in how we think about evaluation use and influence, and Robert Boruch, a Campbell Collaboration founder, who discusses the role of randomized trials in defining “what works.” Other contributors to the issue respond to various “how to” questions, such as how to foster strategic learning, how to find tools that assess nonprofit organizational capacity, how to select and use various outcome models, how to increase the number of evaluators of color, how to enhance multicultural competency in evaluation, and how to measure what we value so others value what we measure. Finally, the issue explores theory of change, cluster evaluation, and retrospective pretests—methodological approaches currently generating much interest and dialogue.
Andrea Anderson is a research associate at the Aspen Institute Roundtable on Community Change, where she focuses on work related to planning and evaluating community initiatives.
Gary Henry makes the case for a paradigm shift in how we think about evaluation use and influence.
Patricia Rogers of the Royal Melbourne Institute of Technology describes how a theory of change can provide coherence in evaluating national initiatives that are both complicated and complex.
The John S. and James L. Knight Foundation and Wellsys Corporation describe how they plan to aggregate lessons learned across a "thematic cluster" of youth development investments.
Teresa Boyd Cowles of the Connecticut Department of Education offers self-reflective strategies evaluators can use to enhance their multicultural competency.
Mehmet Öztürk discusses findings from a review of evaluations of programs at selective colleges and universities designed to improve undergraduate academic outcomes for underrepresented minority and disadvantaged students.
Rodney Hopson and Prisca Collins of Duquesne University describe a new graduate internship program designed to develop leaders in the evaluation field and improve evaluators' capacity to work responsively in diverse racial and ethnic communities.
Theodore Lamb, of the Center for Research and Evaluation at Biological Sciences Curriculum Study, discusses retrospective pretests and their strengths and weaknesses.
The New & Noteworthy section features an annotated list of papers, organizations, initiatives, and other resources related to the issue's theme of Evaluation Methodology.