Joellen Killion from the National Staff Development Council outlines an eight-step process for measuring the impact of professional development.

For many years, the only evaluation of professional development was the traditional end-of-program “smiley” sheet on which participants reported their degree of satisfaction with the program, presenter, and facilities. Policy and decision makers who wanted to know if professional development produced any results had few options.

That changed in the mid-1990s, when sweeping changes in federal policy required those spending federal funds to evaluate the effectiveness of professional development. Many professional development specialists recognized the weakness of their evaluation attempts, and some argued that it was impossible to link professional development with student achievement because of the large number of intervening variables. Others wondered whether the evaluation field provided the necessary tools and processes to measure the impact of professional development.

In 1999, with a grant from the Edna McConnell Clark Foundation, we at the National Staff Development Council (NSDC) launched a 2-year initiative to find ways to measure the impact of professional learning on teacher behavior and student learning. With a team of experts in evaluation and professional development, we discovered that the major problem with evaluating professional development lay not in evaluation but in the design of professional development. Educators wrongly believed that one-shot professional development sessions would transform not only teacher classroom behavior but also student learning. Confronting this fallacy presented a new challenge for professional development leaders and providers: If one-shot sessions do not work, what does it take to change teacher classroom behavior and student learning?

The answer was that it almost always takes more than just a single session. Ongoing sessions of learning, collaboration, and application, accompanied by school- and classroom-based support, over an ample time period are necessary to incorporate new behaviors fully into a teacher's repertoire. If the design of professional development is sufficiently strong and long enough to promote deep changes, it will be possible to measure the impact of professional development on student learning.

Using a theory of change¹ evaluation model and building on logic models² that define the transformation process, we developed an eight-step evaluation process that encourages evaluators to build pathways with evidence to measure the impact of professional development on teacher classroom behavior and student learning.

An Eight-Step Process for Measuring Impact

1. Assess evaluability. Evaluators examine the design of the professional development program to determine its likelihood of producing the intended results; scrutinize the program's goals, objectives, standards of success, indicators of success, theory of change, and logic model; and ask about the program's clarity, feasibility, strength, and worth. If, after that analysis, the program is deemed evaluable, the evaluator moves ahead to Step 2. If the program is deemed not evaluable, the evaluator encourages the program's designer(s) to revise the program.

2. Formulate evaluation questions. Evaluators design the formative³ and summative⁴ questions, which focus on the initial and intermediate outcomes and the program's goals and objectives. By asking questions about results (e.g., Did teachers use the strategies? Did student work demonstrate evidence of teachers' application of the strategies?) rather than about services, evaluators can measure impact rather than program delivery.

3. Construct the evaluation framework. Evaluators determine what evidence to collect, from whom or what sources to collect the evidence, how to collect the evidence, and how to analyze the evidence.

4. Collect data. Evaluators use the data collection methods determined in Step 3 to collect evidence to answer the evaluation questions.

5. Organize and analyze data. Evaluators organize and analyze collected data and display analyzed data in multiple formats to use in Step 6.

6. Interpret data. Working together, stakeholders and evaluators interpret the data to make sense of it, draw conclusions, assign meaning, and formulate recommendations. Including stakeholders in this process is essential because their participation expands and enhances the meaning of the data.

7. Report findings. Evaluators report findings and make recommendations in formats sensitive to the needs of multiple audiences. Rather than a single technical report, evaluators prepare multiple reports of varied lengths, levels of sophistication, and formats.

8. Evaluate the evaluation. The evaluator analyzes his or her own evaluation methodology, processes, resources, skills, and so forth. As a reflective practitioner, the evaluator looks back at the work done and identifies its strengths and areas for continued refinement and growth.

Related Resource


Killion, J. (2002). Assessing impact: Evaluating staff development. Oxford, OH: National Staff Development Council.

In addition to using this eight-step process, it is essential that evaluators believe that the professional development program has the potential to produce the intended results. Lack of belief in professional development's potential—not evaluation—has been the greatest challenge in evaluating professional development.

1 A theory of change identifies the sequence of actions a program intends to take to accomplish its goals and the assumptions upon which those actions rest.
2 A logic model is a tool that links a program's actions to its intended results, so that evaluators and program managers can monitor progress and results.
3 Formative evaluations are conducted during program implementation in order to provide information that will strengthen or improve the program being studied.
4 Summative evaluations are conducted either during or at the end of a program's implementation. They determine whether a program's intended outcomes have been achieved.

Joellen Killion
Director of Special Projects
National Staff Development Council
10931 W. 71st Place
Arvada, CO 80004-1337
Tel: 303-432-0958
Fax: 303-432-0959
Email: joellen.killion@nsdc.org


