Julia Coffman of HFRP describes four ways evaluators may need to adjust their approaches when evaluating advocacy and policy change.

One of the most common questions about the evaluation of advocacy and policy change is whether there is anything different about it compared to, say, evaluating programs or direct services. My unequivocal response is, “Yes and no.”

First, let's address what's not different. To be sure, there are universal principles of evaluation design and practice that apply to advocacy and policy change evaluation just as they do to other evaluations.1 For example, all evaluators conduct systematic and data-based inquiries. Those inquiries can be quantitative or qualitative and typically use a core set of methods such as interviews and surveys. Evaluators also have tools—like logic models or theories of change—that are helpful in most, if not all, evaluations.

In addition, all evaluations share some similarities in purpose. Evaluators aim to provide high-quality information that has significance or value for whom or what they are evaluating. While evaluators have choices in the kinds of data they produce and how they position that data for use, those choices are similar across evaluations. Evaluation can be used to inform strategy and decision making, build the capacity of evaluation stakeholders, or catalyze programmatic or societal change.

In terms of these core evaluation principles, then, evaluations of advocacy and policy are not different from all other evaluations. They can serve similar purposes and draw upon the same basic evaluation designs, models, and methods.

Now, let's address what is different. This requires us to think about how advocacy work differs from programs or direct services. The most important difference is that advocacy strategy typically evolves over time, and activities and desired outcomes can shift quickly. Also, the policy process itself is unique. While programs and direct services can be affected by unpredictable and contextual variables, the policy process takes that possibility to a whole new level. Finally, as author Allison Fine makes clear in her article, most advocacy organizations are small, both in staff size and in their capacity to manage evaluation.

All of these distinctions have implications for how we approach evaluation to ensure that our work is relevant and gets used. Below are four recommendations for evaluators who work in the advocacy and policy change field.

1. Get real about real-time feedback.
The term “real time” is used most often in the computing world, where it refers to a timeframe so short that it is imperceptible, making feedback seem immediate. Computers that process information in real time read the information as it comes in and return results to users instantly.

In recent years, the term “real time” has infiltrated the evaluation world, and many evaluators now use it to describe their reporting approach. We use “real time” to mean that we report regularly and not only at the evaluation's conclusion. The purpose of real-time reporting is to position the evaluation to inform ongoing decisions and strategy.

But how many of us really follow through on our promises to report in real time? Think about what doing so actually means. True real-time reporting requires more than providing feedback at regular intervals. It means giving feedback quickly after a significant event or action occurs. While scheduling regular reporting (e.g., every 6 months) can be useful and good evaluation practice, its success in informing strategy can be hit or miss. Even when data are provided frequently, their timing may be off, and they may arrive too late to inform the decisions at hand.
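By way of analogy, here is a minimal sketch in Python of the difference between calendar-driven and event-triggered feedback. Every name, date, and event in it is hypothetical and drawn from no particular evaluation toolkit; the point is simply that event-triggered reporting keys off what happens in the advocacy effort rather than off the evaluator's reporting calendar.

```python
# Illustrative sketch only: calendar-driven vs. event-driven evaluation feedback.
# All names, dates, and events below are hypothetical.
from dataclasses import dataclass
from datetime import date, timedelta


@dataclass
class AdvocacyEvent:
    when: date
    description: str


def scheduled_reports(start: date, end: date, interval_days: int = 180):
    """Fixed-interval reporting: feedback follows the evaluator's calendar."""
    current = start + timedelta(days=interval_days)
    while current <= end:
        yield f"{current}: scheduled report covering the previous {interval_days} days"
        current += timedelta(days=interval_days)


def event_triggered_reports(events):
    """Real-time reporting: a rapid debrief follows each significant event."""
    for event in sorted(events, key=lambda e: e.when):
        yield f"{event.when}: rapid-response debrief on '{event.description}'"


if __name__ == "__main__":
    events = [
        AdvocacyEvent(date(2007, 2, 12), "key committee vote lost"),
        AdvocacyEvent(date(2007, 5, 1), "statewide media campaign launched"),
    ]
    for line in scheduled_reports(date(2007, 1, 1), date(2007, 12, 31)):
        print(line)
    for line in event_triggered_reports(events):
        print(line)
```

The scheduled reports arrive on a fixed cycle whether or not anything strategically important has happened; the event-triggered debriefs arrive precisely when strategy is most likely to change.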

All of this is important because, perhaps even more than evaluations of programs and direct services, evaluations of advocacy and policy change can benefit from real-time reporting. Most advocates' strategies for achieving their policy visions evolve without a predictable script. Consequently, advocates regularly adapt their strategies in response to changing variables and conditions. To make informed decisions, advocates need timely answers to the strategic questions they regularly face.

Evaluators who provide real-time feedback need to stay on top of advocacy strategies and focus less on their own predetermined reporting timelines and more on the timelines of who or what is being evaluated (in this case advocacy and policy change efforts). Their evaluations, at least in part, need to build in flexibility, so that when a strategy changes or a critical event occurs, the evaluation can adjust with it. The Innovation Network provides an example of this kind of flexibility with its “intense-period debrief” described in this article. Another example comes from evaluators who very literally expect the unexpected and reserve part of their evaluation design for “rapid-response research.” These methodologies are not planned up front but are designed and implemented as needed to address emerging strategy-related questions (e.g., how is media outreach working? how engaged are coalition members after a policy defeat?).

2. Give “interim” outcomes the respect they deserve.
Remember the “Million Mom March” in Washington, DC, a few years back? It happened on Mother's Day in 2000, organized to protest the country's weak gun laws. I remember seeing newspaper photos the next day of the marchers walking down the National Mall; estimates put the number of marchers as high as 750,000. While impressed with the turnout, I also remember wondering what impact the march would have. Would it actually change policy?

From what I can surmise, the march itself did not change gun policy, at least not right away. Looking at its policy impact alone, the Million Mom March might be judged a disappointment. But that's not the only way to look at the march's impact.

A few months after the march, I learned something that changed my perspective. I have a friend who marched that day and who, prior to participating, was not at all active politically. In the months and years that followed the march, she grew into a full-fledged advocate and continued to be involved in the Million Mom March effort. In fact, after the march occurred, a national network of 75 Million Mom March Chapters formed across the U.S. to advocate regularly on state and federal gun policy; my friend became a key figure in her chapter.

From that experience, I learned that it is important to assess advocacy for more than just its impact on policy. In addition to informing policy, much advocacy work has a larger set of outcomes in mind as advocates try to sustain their influence in the larger policy process. For example, in addition to interacting directly with policymakers, advocates might build coalitions with other organizations or develop relationships with journalists and editorial boards. Or they might aim to develop a network of community-based advocates who become active spokespersons.

It is fairly standard practice for evaluators who use logic models or theories of change (and many of us do) to identify interim or intermediate outcomes that set the stage for longer-term outcomes. With advocacy, it is important not to assign second-class status to outcomes other than policy change. While policy change is usually the goal, other outcomes related to the broader advocacy strategy—such as whether new advocates like my friend emerge—can be as important as the policy change itself.

Another reason that it is important to look at multiple outcomes is that sometimes the desired policy change does not occur, perhaps for reasons unrelated to the quality of the advocacy effort. Assessing a range of outcomes ensures that the evaluation does not unfairly conclude that the whole advocacy effort was a failure if the policy was not achieved.

3. Design evaluations that advocates can do and actually want to do.
Last year, the verb “google” was added to both the Merriam-Webster and Oxford English Dictionaries. Officially, it means to use the Google search engine to obtain information on the Internet, as in “She googled her date to see what she could learn about him.” The fact that the term has moved from trademarked product name to part of our common lexicon—like Xerox, Kleenex, and FedEx—is one indicator of Google's success in cornering much of the search engine market.

There are many theories about the secrets to Google's success. At least one focuses on its clean and simple interface. By staying uncluttered, Google gives users what they want (accurate search results) when they want it, rather than everything they could conceivably want (search results plus news headlines, the weather, sports scores, entertainment news, etc.) whether they want it or not.2

Evaluators of advocacy and policy change efforts can learn something from Google's approach to interface design. We need to think about advocates as evaluation users and find ways to give them what they want when they want it. We've already addressed the “when they want it” part in the discussion about real-time reporting. In tackling “what they want,” we need to consider how advocacy organizations look and operate.

Many advocacy organizations (like many nonprofits in general) are small operations with few staff and resources for evaluation. Most cannot afford external or highly involved evaluations and may instead find that, with evaluation, a little goes a long way. As Marcia Egbert and Susan Hoechstetter advise in their article, under these circumstances evaluators are wise to “keep it simple” when identifying both what to evaluate and how.

Rather than putting together complex evaluation plans that require extensive technical expertise and offer only single point-in-time assessments (which can quickly become outdated), we might instead help advocates identify which parts of their strategies to evaluate, rather than assuming they should or want to evaluate everything, and then work out simple but useful ways of tracking data internally to inform their work. For example, we might use our logic models to help advocates step back from their strategies and determine where evaluation can be most useful. Maybe they feel their coalitions and media outreach are already functioning well, but their new public education campaign could benefit from assessment. We can facilitate those choices.

4. Be creative and forward looking.
A couple of years ago an article in The Washington Post caught my eye. Titled “On Capitol Hill, the Inboxes are Overflowing,” it reported that, while we may feel sorry for ourselves about the number of emails we get in our inboxes every day, we actually should pity the poor Congress! Members receive an estimated 200 million constituent messages annually, most of them electronic. With 535 members of Congress, that's a yearly average of almost 375,000 emails per member, or more than 1,000 emails per day.3
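A quick check of those figures (assuming roughly 200 million messages, 535 members, and a 365-day year) bears the averages out:

$$\frac{200{,}000{,}000}{535} \approx 374{,}000 \text{ messages per member per year}, \qquad \frac{374{,}000}{365} \approx 1{,}024 \text{ per day.}$$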

Innovation Network's Advocacy Evaluation Project


With support from The Atlantic Philanthropies, the JEHT Foundation, and the Annie E. Casey Foundation, the Innovation Network's Advocacy Evaluation Project is creating a dynamic exchange of knowledge and ideas about evaluating advocacy. It serves funders, evaluators, and practitioners facing the unique evaluation challenges that advocacy poses. The Advocacy Evaluation Project aims to move the field of advocacy evaluation beyond assessing policy change alone toward an approach that also considers the fundamental components of advocacy efforts: capacity building, network formation, relationship building, communication, issue framing, leadership development, and more.

The Project has two main components:

• The online clearinghouse offers a wide array of annotated resources on evaluating advocacy efforts, including reports, articles, tools, and frameworks. Many resources are drawn from other notable organizations also engaged in advocacy evaluation, such as The California Endowment, Alliance for Justice, Women's Funding Network, Just Associates, and the Communications Consortium Media Center. New resources are added each week. Materials are categorized by primary audience (funder, evaluator, or practitioner), by region (domestic or international), and by topic (general advocacy evaluation, network evaluation, communication evaluation, etc.).

• The e-newsletter focuses on the challenges of evaluating policy advocacy initiatives. It is helping to build the advocacy evaluation field and conversation through articles, interviews with practitioners, resources, and references. The Advocacy Evaluation Project team is soliciting input from the advocacy and evaluation fields on ideas and articles that explore their experiences and lessons learned. The inaugural issue is currently in development.

To learn more about the Advocacy Evaluation Project, review clearinghouse materials, suggest additional resources, or sign up for the e-newsletter, visit www.innonet.org.

A good portion of this volume results from savvy electronic advocacy efforts. Advocates set up websites that allow like-minded supporters to quickly fill out forms that then send emails to lawmakers expressing their position on an issue. Voila! Democracy from our desktops.

But here's the kicker. The email volume is growing so large and so fast that Congress is finding ways to thwart it by putting up roadblocks on the information superhighway. For example, some lawmakers' email programs ask senders to solve a basic math problem before the email goes through, to prove that the senders are real people and not spamming machines. Others require senders to reveal their contact information before the email is delivered. Consequently, a source in the article was quoted as saying, “Unfortunately there is strong evidence that much of the electronic mail that citizens assume is reaching Congress is ending up in an electronic trash can.”

The relevance of all this for evaluators is at least twofold. First, we need to keep in mind that advocacy tactics are constantly changing and expanding. For example, advocates are becoming more sophisticated in their use of electronic advocacy through email, blogging, smart phone messaging, and other rapidly evolving techniques. We need to monitor the advocacy field constantly to stay current on such techniques so that we know how to evaluate them.

Second, as advocacy tactics evolve, we need to make sure that the measures we use to assess them are meaningful. With email advocacy, for example, an obvious and common measure is the number of emails actually sent after a call to action is issued. On one hand, there is a question of how to judge that number—what is a good response rate? One percent? Sixty percent? (The article by Karen Matheson helps answer that question.) On the other hand, we have to question whether the measure itself has evaluative worth. We now know that not all of the emails sent will get through, and even when they do, there are questions about whether lawmakers and their staff actually pay attention to them. Consequently, the number of emails sent may say very little about an advocacy strategy's success. We need to make sure the measures we use and create actually have interpretive value.
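For readers who want the measure spelled out, one common way to define it is shown below; the 10,000 and 600 are purely hypothetical numbers chosen for illustration:

$$\text{response rate} = \frac{\text{emails sent after the call to action}}{\text{supporters who received the call}} = \frac{600}{10{,}000} = 6\%.$$

Even a respectable-looking percentage, of course, says nothing about whether those emails were ever delivered or read.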

Because the nature of advocacy work often differs in important ways from direct services and other programs, we need to examine how evaluation can be most useful in this context. This does not mean inventing a whole new way of doing evaluation; it means adjusting our approaches in ways that make evaluation relevant and useful within the advocacy and policy context.

1 See, for example, the American Evaluation Association's Guiding Principles for Evaluators at www.eval.org.
2 Hurst, M. (2002, October 15). Interview: Marissa Mayer, Product Manager, Google. Retrieved on January 3, 2007, from http://www.goodexperience.com/blog/archives/000066.php
3 Birnbaum, J. (2005, July 11). On Capitol Hill, the inboxes are overflowing. The Washington Post; Birnbaum, J. (2006, October 2). Study finds missed messages on Capitol Hill. The Washington Post.

Julia Coffman, Senior Consultant, HFRP

