




Ricardo Millett

Ricardo Millett is a veteran philanthropist and evaluator and president of the Woods Fund of Chicago. The foundation is devoted to increasing opportunities for less advantaged people and communities in the Chicago metropolitan area—including their opportunities to contribute to decisions that affect their lives. Prior to joining the Woods Fund, Dr. Millett was director of evaluation at the W. K. Kellogg Foundation. He has also held management positions at the United Way of Massachusetts Bay and the Massachusetts Department of Social Services. Dr. Millett has been a consistent and prominent voice on issues of diversity in evaluation and has been instrumental in developing solutions that better enable evaluators to address and build around diversity and multiculturalism.

Q: How should we be thinking about diversity as it applies to evaluation?

A: Evaluators are in the business of interpreting reality through the kind of information we capture. If we are good at what we do, our work is seen as legitimate and we influence programs and policies. My concern is that much evaluation is not accurately capturing the experiences of those who are affected by the programs and policies we inform. Conventional program evaluation often misses the kinds of data and experiences that can help to frame effective programs and policies, and this problem relates directly to how we approach diversity and multiculturalism in our profession.

Jennifer Greene, Rodney Hopson, and I recently wrote a paper about the need for evaluation to generate authentic knowledge about social programs and issues.¹ This is knowledge that captures and authentically represents the experiences and perspectives of people affected by these programs or issues—often individuals or communities of color. Generating authentic knowledge is about finding a way to make sure that evaluation is participatory and grounded, and collects and interprets data within real settings. It is not about capturing whether participants work well for a program, but whether a program works well for participants.

Consider the issue of public housing. Many cities have developed efforts to transfer low-income residents out of public housing or high-rise projects into affordable mixed-residential developments. Sounds like a good idea, right? But once we had these programs in place, we suddenly realized that there were problems with this approach. We were moving people out of low-income housing faster than we could find alternative housing. Some individuals had deeply entrenched problems that made them hard to place. Some programs shut males out of the transition program because they didn’t take into account nontraditional conceptions of family dynamics and structure. And the support services that were previously available suddenly were not available in new neighborhoods.

So families had better housing, but now they had all sorts of new problems. That suggests to me that the planning and evaluation that helped to design these programs did not capture and relate the authentic experiences of those who actually experienced them, and did not use those experiences to craft effective transition programs. That kind of shortsightedness is the difference between a conventional approach to evaluation and what I call a multicultural approach that respects and captures authentic knowledge and experience as part of the evaluation process.

If we are going to get better at capturing authentic experience, we need to look more carefully at who is doing the evaluation and at the approach being used. We must ask who—in terms of ethnicity, background, training, and experience—is doing the evaluation and how they are doing it.

I am not suggesting that capturing authentic experience necessarily requires an evaluator to be of the same class and ethnicity as the individuals in the program being evaluated, though those considerations are critical. But I am suggesting that evaluators have to possess the sensitivities, abilities, and capacity to see experiences within their context. If we don’t, then we are likely to do damage by helping to sustain ineffective policies or strategies. If we understand these contexts well enough and are willing to dig into these experiences with our evaluation approach, then we are more likely to capture authentic experience.

If not, we risk helping to legitimize the negative characterization of people in poverty and the programs or policies that keep them poor. Capturing authentic experience requires a little humility and an understanding that a lot of our work is more art and sociology than hard science.

Q: How would you characterize the evaluation field’s current stance on issues of diversity?

A: Several years ago, when I was director of evaluation at the W. K. Kellogg Foundation, a number of colleagues of color and I started a diversity project that originated from the questions many foundation program officers had about why so few evaluators of color were receiving evaluation contracts.

We developed a directory to identify evaluators of color across the nation. But then we realized that if we wanted to address this issue seriously, we needed to do more than a directory. There simply were not enough evaluators of color available, or not enough evaluators with the capacity to work across cultural settings.

As a result, the American Evaluation Association (AEA) and the Duquesne University School of Education have been engaged in a joint effort to increase racial and ethnic diversity and capacity in the evaluation profession. The project is developing a “pipeline” that will offer evaluation training and internship opportunities (in foundations, nonprofits, and the government) for students of color from various social science disciplines.

Initially, success for this pipeline project will mean that we no longer have to scour the world to find evaluators of color. If the project is further funded and expanded, long-term success will mean that the courses and tools that have been developed will be institutionalized into the broader realm of evaluation training and professional development and made available to all evaluators, not just those of color. Eventually, approaches that help us capture authentic experience will become a legitimate part of the way the evaluation profession does business.

In the beginning this idea met with some resistance and defensiveness in the broader evaluation community. Questions about eligibility for internship participation and even the need for such an approach surfaced, along with the feeling that the notion of multicultural evaluation was not something that should be legitimated. This resistance has diminished over time, but it is something that the field as a whole must continue to struggle with. Now we are having more open dialogue about these issues, spurred in large part by the very active and vocal efforts of the Multiethnic Issues in Evaluation topical interest group within AEA.

Q: What has improved over the past decade?

A: Ten years ago, we—meaning evaluators of color—were isolated and frustrated that these issues about diversity in evaluation were not on anyone’s radar. Ten years ago there weren’t enough evaluators of color in leadership positions, and there weren’t enough academicians and practitioners for whom this issue resonated.

Ten years later, we have not just evaluators of color pushing this issue; we have a range of evaluators in leadership positions supporting it. The articulation of these concerns has become sharp, coherent, and effective in getting the attention of major stakeholders in the funding world and at academic institutions. The response has been much greater, and more foundations are willing and ready to take these issues on and build the capacity that the evaluation profession needs.

Related Resources

Mertens, D. M. (1997). Research methods in education and psychology: Integrating diversity with quantitative and qualitative approaches. Thousand Oaks, CA: Sage.

Part of the American Evaluation Association, the Multiethnic Issues in Evaluation topical interest group’s mission is to (1) raise the level of discourse on the role of people of color in the improvement of the theory, practice, and methods of evaluation, and (2) increase the participation of members of racial and ethnic minority groups in the evaluation profession. www.obsidcomm.com/aea/

Q: What should we be doing to make real and sustainable change on issues of diversity in evaluation?

A: In addition to raising the profile of these issues, offering more education on approaches for capturing authentic experience, and increasing the number of evaluators of color, we should be paying attention to what evaluators in other countries are doing. The kind of evaluation that is participatory and captures authentic experience is almost standard in the third world. We have been slow in this country to learn and adapt.

Also, more often than not we compromise the principles of truth for a contract. We offer and accept evaluation dollars that are less than what we need to get good and authentic information. We accept forced definitions of problems, and we don’t push what we know to be true.

As evaluators we need to live up to our responsibility to generate data that is as true as possible to the reality of the people being served, and not to legitimate the programs and policies that keep them from having a voice in and improving their own conditions.

¹ Greene, J. C., Millett, R., & Hopson, R. (in press). Evaluation as a democratizing practice. In M. Braverman, N. Constantine, & J. K. Slater (Eds.), Putting evaluation to work for foundations. San Francisco, CA: Jossey-Bass.

Julia Coffman, Consultant, HFRP
Email: julia_coffman@msn.com


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project