


Molly Engle and James Altschuld describe results from their research into recent trends in university-based evaluation training.

Preparing evaluators is an ongoing process, one that engages many individuals in universities, colleges, government agencies, and professional organizations. No two paths to the evaluation profession are the same. Reviewing the current opportunities for preparing evaluators allows us to see progress, to identify where growth can occur and is occurring, and to preserve the profession’s history.

Early in 2000, the American Evaluation Association (AEA) endorsed a project funded by the National Science Foundation (NSF) to update what we know about university-based evaluation training. In addition, the evaluation team was charged with determining what kind of professional development training was being offered. Assisted by Westat, a leading research corporation, we conducted two surveys, one for each type of training in question. Both surveys were international in scope. This article presents preliminary findings of the university-based survey.

First, a little history. In 1993, we conducted a similar survey¹ in which we defined a “program” as a curricular offering of two or more courses in sequence, specifically: “A program consists of multiple courses, seminars, practicum offerings, etc., designed to teach what the respondent considered to be evaluation principles and concepts.”² This definition made it possible to interpret “program” in a variety of ways, but it clearly excluded single-course offerings.

At that time we identified a total of 49 programs. Thirty-eight were based in the United States, a decrease from the previous study, conducted in 1986,³ which found 44. We also identified 11 programs internationally, all in Canada or Australia. Three programs were housed in government agencies, and there was one nontraditional program, a type that did not exist in 1986. It is important to note that of these 49 programs, only about half (25) had the word “evaluation” in their official title, limiting the visibility of the others.

The process we used for the current survey was similar to that used in 1993. We distributed a call for nominations through various listservs, including Evaltalk, Govteval, and XCeval, as well as through personal communication and general solicitation at AEA’s annual meeting. We developed a sampling frame of 85 university-based programs and 57 professional development offerings (not discussed here). One aspect that distinguished the current survey from its predecessors was that NSF requested that we examine whether any training programs focused on science, technology, math, or engineering. In addition, the AEA Building Diversity Initiative, an ad hoc committee, requested that we develop a mechanism to determine the extent of training programs in minority-serving institutions such as the Historically Black Colleges and Universities (often the 1890 land-grant institutions), Hispanic Serving Institutions, and Tribal Institutions (the 1994 land-grant institutions). The sampling frame for minority-serving institutions was developed separately and returned a list of 10 schools offering individual courses.

The preliminary results from the current study show that the face of university-based evaluation training has once again changed. The total number of programs has decreased from 49 in 1993 to 36: 26 in the United States and 10 international. One reason for this decrease could be that senior evaluation leaders are retiring from academic life. Often these programs were the passion of a single individual who built a collaborative, interdisciplinary program. We have not yet begun to see the next generation of university-based programs led by passionate young faculty.

Of those 36 institutions responding, 22 (61%) have “evaluation” in their formal title. The lack of a recognizable program title remains problematic for the future of the profession. If individuals are unable to quickly locate training opportunities in evaluation, they will be more likely to choose a different course of study. This could lead to a further reduction of university-based programs due to low enrollments, to an increase in alternative training opportunities, or to some hybrid approach to entry into the profession.

¹ Altschuld, J. W., & Engle, M. (Eds.). (1994). The preparation of professional evaluators: Issues, perspectives, and programs. New Directions for Program Evaluation, 62.
² Altschuld, J. W., Engle, M., Cullen, C., Kim, I., & Macce, B. R. (1994). The directory of evaluation training programs. New Directions for Program Evaluation, 62, 72.
³ May, R. M., Fleischer, M., Schreier, C. J., & Cox, G. B. (1986). Directory of evaluation training programs. New Directions for Program Evaluation, 29, 71–98.

Molly Engle, Ph.D.
Associate Professor of Public Health
College of Health and Human Sciences
Oregon State University
307 Ballard Extension Hall
Corvallis, OR 97331
Tel: 541-737-4126
Email: molly.engle@oregonstate.edu

James W. Altschuld, Ph.D.
Professor of Education
The Ohio State University
310B Ramseyer
29 W. Woodruff Avenue
Columbus, OH 43210
Tel: 614-292-7741
Email: altschuld.1@osu.edu

