Steven Harvey and Gregory Wood describe how they created a methodology to capture data across a series of parenting workshops.

Traditionally, evaluations of a program’s impact on participants rely on pretest–posttest survey methodology, whereby a sample of participants completes surveys before and after their participation in the program. EPIC (Every Person Influences Children) offers programs that help parents and other adults raise academically successful and responsible children. For years, EPIC used “pre–post” surveys to measure our impact on the parents who attend our multisession parenting workshops.1 However, while the small sizes of our individual workshops—10 to 15 parents—help us connect with parents, they also yield small sample sizes, which make it difficult to measure the extent of our impact.

Participant absenteeism further complicated the sample size issue. The sample eroded quickly when a handful of participants missed the first workshop session, at which the pretest survey was administered, and another handful missed the final session and thus the posttest survey. In this way, a program in which 20 parents participated could easily yield a report based on the matched surveys of just 7 or 8.
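As a rough illustration of how quickly a matched pre–post sample shrinks, consider the following minimal simulation sketch. The per-session absence rate here is an assumption chosen for illustration, not a figure from EPIC’s records:

```python
import random

def matched_pre_post(n_participants=20, p_miss=0.35, seed=42):
    """Count participants who attend BOTH the first session (pretest)
    and the last session (posttest). p_miss is an illustrative
    per-session absence rate, not a figure from EPIC's records."""
    rng = random.Random(seed)
    matched = 0
    for _ in range(n_participants):
        took_pretest = rng.random() > p_miss   # present at first session
        took_posttest = rng.random() > p_miss  # present at final session
        if took_pretest and took_posttest:
            matched += 1
    return matched

# With 20 enrollees and a 35% chance of missing any one session, the
# expected matched sample is 20 * 0.65**2, i.e., roughly 8 participants.
print(matched_pre_post())
```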

Facilitator error compounded the problem. Most parent workshops are facilitated by volunteers trained by EPIC. While these volunteers were highly motivated to deliver the program curriculum, many failed to appreciate the importance of administering evaluation surveys and sometimes forgot to administer the posttest survey. Even after we improved facilitator training, sent reminders to facilitators, and mailed surveys to participants, the problem persisted.

Five Factors That Impact Parental Effectiveness

  1. Knowledge about parenting skills
  2. Attitudes toward implementing parenting skills
  3. Perceived effectiveness of the parenting workshops
  4. Feelings of parental isolation
  5. Perceived confidence with regard to implementing parenting skills

Small sample sizes created additional headaches for program evaluators. First, they led to low statistical power when testing for pre–post differences in key variables. As a result, in many cases EPIC could not demonstrate statistically significant changes in our clients, even though we were confident of program effectiveness based on data from studies with larger samples. Second, small survey samples created the impression that EPIC had failed to recruit participants adequately. Clearly, EPIC needed to adjust its survey approach to better represent our work to the organizations that had invested in our mission and programming.
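A short simulation sketch makes the power problem concrete. The effect size (a “medium” effect of half a standard deviation) and the sample sizes below are assumptions for illustration, not EPIC’s data:

```python
import numpy as np
from scipy import stats

def paired_ttest_power(n, effect_size=0.5, alpha=0.05, n_sims=10_000, seed=0):
    """Estimate power by simulation: draw n pre-post difference scores
    from a normal distribution whose mean equals the effect size in
    standard-deviation units, then run a one-sample t-test against 0."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        diffs = rng.normal(loc=effect_size, scale=1.0, size=n)
        _, p_value = stats.ttest_1samp(diffs, 0.0)
        if p_value < alpha:
            rejections += 1
    return rejections / n_sims

for n in (8, 15, 60):
    print(f"n={n}: power = {paired_ttest_power(n):.2f}")
# A matched sample of 7-8 detects a medium effect well under half the
# time; dozens of matched pairs are needed for conventional 80% power.
```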

To address these issues, we developed a new approach to program evaluation. We began by defining a logic model describing the conceptual basis for the parenting workshops. The model specified that EPIC programs impacted five specific factors that years of research show, in turn, impact parental effectiveness (see sidebar).

Next, we developed evaluation booklets, which allow us to collect data from participants in an ongoing way. Each “chapter” of the booklet consists of a program evaluation survey to be administered at the end of the session. If a participant attends all sessions, she completes all the surveys; if she attends three out of five sessions, she completes only those three surveys. With this approach, EPIC is able to establish a posttest for almost all participants. It also allows us to assess the impact of the program across time, not just from the first to the last session. Finally, this new methodology allows EPIC to report on participant attendance over the series of workshops, thereby giving funding agencies a more realistic picture of program activity.

An independent research firm analyzes the data and reports the findings with respect to outcome accountability and continuous program improvement. The report contains data on average group performance on the five key variables, the sample size associated with each session, and evidence of change over the course of the workshop series.
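To make the booklet approach and the resulting report concrete, here is a minimal sketch of how such per-session data might be organized and summarized. Everything in it is hypothetical: the column names, the two stand-in factors, and the scores are illustrative, not EPIC’s instrument or data:

```python
import pandas as pd

# Hypothetical booklet data: one row per survey a participant actually
# completed, so a missed session is a missing row, not a lost case.
# "knowledge" and "confidence" stand in for the five sidebar factors.
responses = pd.DataFrame([
    {"participant": "P01", "session": 1, "knowledge": 3.1, "confidence": 2.7},
    {"participant": "P01", "session": 2, "knowledge": 3.4, "confidence": 3.0},
    {"participant": "P02", "session": 2, "knowledge": 2.8, "confidence": 2.6},
    {"participant": "P02", "session": 3, "knowledge": 3.2, "confidence": 2.9},
])

# The report's per-session figures reduce to one group-by: the sample
# size behind each session plus average group performance on each factor.
report = responses.groupby("session").agg(
    n=("participant", "nunique"),
    mean_knowledge=("knowledge", "mean"),
    mean_confidence=("confidence", "mean"),
)
print(report)  # change over the series = reading down each column
```

Because every completed survey contributes a row, the attendance reporting that funders ask for falls out of the same table with no extra data collection.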

While the new approach does not evaluate changes for individual participants, it does provide meaningful descriptive evidence of client performance on key outcome variables across workshop sessions. In addition, volunteer facilitators are more likely to administer the surveys because it is now part of the standard practice of each session and not a special activity at the beginning and end of a series. Finally, we are able to provide funding agencies with meaningful information that corresponds to the logic model upon which the program curriculum is based. This new approach has improved the meaningfulness of evaluation reports for EPIC’s multiworkshop programs, and funding agencies have responded favorably to the new report formats.

1 A typical workshop series consists of anywhere from 6 to 16 individual workshop sessions.

Steven J. Harvey, Ph.D.
Director of Research and National Program Coordination
EPIC – Every Person Influences Children
1000 Main Street, Buffalo, NY 14202
Tel: 716-332-4126
Email: harveysj@epicforchildren.org

Gregory R. Wood, Ph.D.
Chair
Management/Marketing Department
Richard J. Wehle School of Business
Canisius College
2001 Main Street, Buffalo, NY 14208
Tel: 716-888-2645
Email: gwood@canisius.edu

