
The Harvard Family Research Project separated from the Harvard Graduate School of Education to become the Global Family Research Project as of January 1, 2017. It is no longer affiliated with Harvard University.


On February 9, 1995, Harvard Family Research Project hosted Jennifer C. Greene as the fourth speaker in its Evaluation Seminar Series. Dr. Greene is Associate Professor in the Department of Human Service Studies at Cornell University. She is a pioneer in innovative strategies for combining qualitative and quantitative research methods in program evaluation and policy analysis.

Professor Jennifer C. Greene of Cornell University advocates using mixed methods to evaluate human service programs. She believes that this approach provides richer data—and thus better answers for evaluators and program managers—than does any single framework used alone. Greene attempts to bridge the gap between “theory and practice, concepts and fieldwork” by grounding her approach to evaluation in, as she says, “what is problematic in the field.”

Greene illustrated her mixed-method approach by discussing how an evaluator might assess a program designed to respond to persistent inner-city poverty among families by offering a wide range of job training, education, and support services. Evaluators and program managers of the hypothetical program, the Family Place, would, Greene explained, want to ask a variety of questions in order to gather information useful for program improvement. Their primary questions might be:

  • Is the Family Place effective in responding to persistent urban poverty among families?
  • What difference does the Family Place make in the lives of the participants?
  • Does the Family Place work as intended? Has it been implemented as planned?

Rather than choose one approach to answer these questions, Greene would suggest using several methods and involving both staff and participants as co-inquirers. She might suggest combining case studies and questionnaires or self-reports in order to include different types of data from a variety of perspectives. Since “social phenomena are extremely complex,” she points out that we need diverse tools and different kinds of methods in order to understand them more completely.

Further Reading

Greene, J. C. (1994). Qualitative program evaluation: Practice and promise. In N. K. Denzin & Y. S. Lincoln (Eds.), The handbook of qualitative research (pp. 530-544). Thousand Oaks, CA: Sage Publications.

Caracelli, V. J., & Greene, J. C. (1993). Data analysis strategies for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 15, 195-207.

Greene, J. C. (1990). Technical quality vs. user responsiveness in evaluation practice. Evaluation and Program Planning, 13, 267-274.

Greene, J. C., Caracelli, V. J., & Graham, W. F. (1989). Toward a conceptual framework for mixed-method evaluation designs. Educational Evaluation and Policy Analysis, 11, 255-274.

Despite Greene's strong belief in the value of using mixed methods, she recognizes that combining techniques does not automatically result in good social science. In fact, she argues that the quality and defensibility of mixed-method practice are assured only if an inquiry is “planful”—that is, guided by a theory that takes into account the type of information needed to make decisions, the suitability of different kinds of methods for learning about different kinds of phenomena, and the technical limitations and biases of various methods. Greene contends that this approach underscores the key role of purpose in mixed-method evaluation: different purposes call for different designed mixes of methods.

Greene concluded her presentation by stressing that a mixed-method framework for evaluation invites the “uncertainty and openness needed for evaluators to use their findings for action and change.” That is, the ability to examine multiple questions about a program offers “exciting and potentially meaningful opportunities for connectedness and solidarity” for evaluators, program managers, and program participants. Such connectedness is particularly important when the demand for outcomes and accountability requires that multiple groups with competing interests work together to keep a program alive. As Greene illustrated, the “practical possibilities” of mixing inquiry methodologies contribute to, and reflect, the pluralistic nature of modern society.

In addition to this presentation, HFRP has hosted the following discussions in our ongoing Evaluation Seminar Series: self-evaluation practices (Dr. Lynn Usher), culturally sensitive evaluations (Drs. Barbara Clinton, James Davis, Dee Spencer), and developmental evaluation (Dr. Michael Patton). We welcome suggestions for future topics and speakers. Contact Scott Balderson at HFRP.

Lori Rutter, Research Assistant, HFRP

© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project