
Geneva Haertel and Barbara Means of SRI International describe how evaluators and policymakers can work together to produce “usable knowledge” of technology’s effects on learning.

Evaluating technology’s effect on learning is more complicated than it appears at first blush. Even defining what is to be studied is often problematic. Educational technology funds support an ever-increasing array of hardware, software, and network configurations that often are just one aspect of a complex intervention with many components unrelated to technology. Since it is the teaching and learning mediated by technology that produces desired results rather than the technology itself, evaluation should examine the potential influence of teachers, students, and schools on learning.

Understandably, policymakers tend to leave the evaluation of educational technology to evaluation professionals. But we believe policymakers and evaluators should work in tandem—by collaborating they can avoid the intellectual stumbling blocks common in this field. While evaluators bring specialized knowledge and experience, they are not in the best position to set priorities among competing questions. This is the realm of policymakers, who need to think carefully about the kinds of evidence that support their decisions.

In a recent volume (Means & Haertel, 2004), we identify six steps evaluators and policymakers can take to produce more useful evaluations of learning technologies.

  1. Clarify evaluation questions. The language of the No Child Left Behind Act is often construed to mean that the only relevant question is technology’s impact on achievement—an important question, but not the only one local policymakers care about. In some cases, implementation of a technology (say, Internet access for high schools) is a foregone conclusion, and policymakers may instead need to address an issue such as how best to integrate the technology with existing courses.

  2. Describe technology-supported intervention. Evaluators, policymakers, and other stakeholders should work together to develop a thorough description of the particular technology-supported intervention in question. A theory of change (TOC) approach would specify both the outcomes the intervention is expected to produce and the necessary conditions for attaining them.

  3. Specify context and degree of implementation. Evaluators and policymakers should identify both those served by the intervention and those participating in the evaluation. At this point, they should also specify the degree to which the intervention has been implemented. They can pose questions such as (1) What degree of implementation has occurred at the various sites? and (2) Have teachers had access to the training they need to use the technology successfully?

    Answers will enable evaluators to advise policymakers on whether to conduct a summative evaluation or an implementation evaluation. Some informal field observations of the technology can also be helpful at this point. This is the stage where the original purpose of the evaluation is confirmed or disconfirmed.

  4. Review student outcomes. The outcomes measured will be those targeted in the TOC. Evaluators can generate options for the specific methods and instruments for measuring outcomes. Some technologies aim to promote mastery of the kinds of discrete skills tapped by most state achievement tests; others support problem-solving skills rarely addressed by achievement tests. A mismatch between the learning supported by an intervention and that measured as an outcome can lead to erroneous conclusions of “no effect.”

    Evaluators and policymakers will need to prioritize outcomes, picking those that are most valued and for which information can be collected at a reasonable cost.

  5. Select evaluation design. The choice of evaluation design requires both the expertise of evaluators and policymaker buy-in. True (random-assignment) experiments, quasi-experiments, and case studies are all appropriate for some research questions. While federal legislation promotes the use of true experiments, it is easier to conduct experiments on shorter-term, well-defined interventions than on longer-term or more open-ended ones.

  6. Stipulate reporting formats and schedule. Policymakers and evaluators should agree in advance of data collection on the nature, frequency, and schedule of evaluation reports. Reporting formats should make sense to a policy audience and provide data in time to inform key decisions.

To produce “usable knowledge,” or professional knowledge that can be applied in practice (Lagemann, 2002), we call for (1) evaluations that address the questions that policymakers and practitioners care about, (2) integration of local understandings produced by evaluator-policymaker partnerships with disciplinary knowledge, and (3) use of evaluation findings to transform practice.

References and Related Resources
Haertel, G. D., & Means, B. (2003). Evaluating educational technology: Effective research designs for improving learning. New York: Teachers College Press.

Lagemann, E. C. (2002). Usable knowledge in education. Chicago: Spencer Foundation. www.spencer.org/publications/index.htm

Means, B., & Haertel, G. D. (2004). Using technology evaluation to enhance student learning. New York: Teachers College Press.

Geneva D. Haertel, Ph.D.
Senior Educational Researcher
Tel: 650-859-5504
Email: geneva.haertel@sri.com

Barbara Means, Ph.D.
Center Director
Tel: 650-859-4004
Email: barbara.means@sri.com

Center for Technology in Learning
SRI International
333 Ravenswood Ave., BN354
Menlo Park, CA 94025

