Representatives from four foundations discuss their expectations and approaches for assessing their advocacy and public policy grantmaking.

The use of advocacy to inform public policy or systems change is an important grantmaking strategy for many foundations dedicated to achieving sustainable social change. However, as many articles in this issue attest, advocacy grants are not easily assessed using traditional program evaluation techniques. Foundations are eager for evaluation tools and approaches that help them make informed funding decisions and help advocacy grantees to assess their progress and effectiveness.

Until recently, few resources existed to guide evaluation in this area. In the last year, however, several foundations have supported the development of guiding principles and practical tools—many of which are featured in this issue—that are helping to push the field forward, grounding it in useful frameworks and a common language. In addition, several have started an informal “collaborative” to share funding ideas and coordinate their efforts.

This article features interviews with staff at four foundations—The California Endowment, The Atlantic Philanthropies, the Annie E. Casey Foundation, and the W. K. Kellogg Foundation—that are helping to build the advocacy evaluation field. We asked each foundation the following questions about their advocacy and public policy grantmaking and evaluation:

  1. What role does advocacy play in your grantmaking?
  2. What do you want to know from evaluation about your advocacy investments?
  3. How are you supporting grantees on advocacy evaluation?
  4. How are you helping to build the larger advocacy evaluation field?

The California Endowment
Astrid Hendricks-Smith, Director of Evaluation, and Barbara Masters, Director of Public Policy

What role does advocacy play in your grantmaking?

The California Endowment's mission is to make quality health care more accessible and affordable for underserved individuals and communities and to make fundamental improvements in the health status of all Californians. We recognize that doing this in a significant and sustainable way requires policy and systems change. As a result, public policy work cuts across all of our grantmaking. In addition to funding advocacy directly, we encourage our direct service grantees to consider how they can contribute to the policy process.

We fund advocacy at the local, state, and national levels and support a variety of activities to inform policy, including research, community organizing, coalition building, and communications. Our funding also connects advocates, grassroots organizations, and researchers so that, together, they can achieve the kinds of policy and systems changes we're seeking.

What do you want to know from evaluation about your advocacy investments?

Our evaluation interests lie at multiple levels. On one level, we want to know how and where our grantees are having an impact in the policy process. On another, we want to know which of the advocacy strategies we fund are more or less successful. Finally, we want to know which organizations are most effective and why so that we can learn how to help all our grantees become better advocates.

Methodologically, we fund a spectrum of evaluation approaches. We expect all evaluation to credibly and defensibly assess what grantees have accomplished, but in some respects, our expectations differ across grants. For example, we expect the external environment to factor in differently for our advocacy and direct service grants. With direct services, we examine start-up and scale-up and consider the factors that affect our grantees' operating environments. With advocacy, we know that advocates can't control everything within the policy process. Consequently, we sort out what they can control and monitor that to see if they have been effective.

How are you supporting grantees on advocacy evaluation?

A major takeaway from the work we've done so far on advocacy evaluation is that evaluation is a tool and should be integral to the overall advocacy strategy. We don't want grantees to do evaluation after the fact and rely on memory to assess impact, and we want to get away from the notion that evaluation is punitive. Rather, we want evaluation to be seen as a means to help grantees reflect, in real time, on their advocacy strategies and assess whether they're working. Whether they are part of a multigrant initiative or working as individual organizations, we want grantees to develop evaluative skills and build evaluation into their day-to-day work. We are now trying to take this idea from theory to reality.

We also recognize that we have to be partners with grantees to help them develop the needed skills to achieve policy and systems change. To that end, through the Center for Healthy Communities, we have developed a Health Exchange Academy, which offers training modules on advocacy, communications, and evaluation.

Similarly, we are helping our evaluators and program officers develop and utilize the tools to support advocates and understand the policy world. We are giving them a better sense of how the policy process works and instruction on how to help organizations use evaluation for monitoring and management purposes in unpredictable environments.

Lastly, in conjunction with our policy advocacy grantees that receive general operating grants, we are carrying out an evaluation to determine the benefits and downsides of this grantmaking tool. This evaluation will help us learn what kinds of capacities organizations need in order to be good advocates. The Endowment provides general support to policy and advocacy organizations very selectively, primarily to those organizations that are best able to utilize the flexibility such grants provide.

How are you helping to build the larger advocacy evaluation field?

We are excited to be involved in this field and are doing several things to help shape it. Internally, our evaluation and public policy departments do this work collaboratively, which has enabled us to bring our respective expertise and perspectives to the project.

We're funding the development of research and tools that advocates, evaluators, and funders can use. For example, we supported the prospective evaluation framework that Blueprint Research and Design developed for advocacy and policy change evaluation (featured in this issue). We're also funding the development of case studies because we tend to learn and want to try new things when we see other people doing them. Recently we published a case study about the evaluation of our obesity prevention efforts that resulted in state laws banning junk food and soda sales in the state's public schools.

Also, it is important that the field have cross-sector conversations about what we're learning. We need evaluators, policy people, advocates, and funders engaged in this dialogue. We all speak different languages, and only through these conversations can we break down barriers and develop evaluation that is acceptable for all stakeholders. Last year, we held a meeting that involved all of these groups, and we plan to continue this cross-sector dialogue whenever possible.

Finally, we want to support other foundations that are not yet funding advocacy. The California Endowment already is committed to public policy work and sees it as an effective vehicle for social change. But, for other foundations that are not yet convinced, we need evaluations that enable funders to understand how progress is measured and to see its value to achieving the foundation's goals.

The Atlantic Philanthropies
Jackie Williams Kaye, Strategic Learning and Evaluation Executive

What role does advocacy play in your grantmaking?

The Atlantic Philanthropies are dedicated to making lasting changes in the lives of disadvantaged and vulnerable people. We focus on critical social problems related to aging, disadvantaged children and youth, population health, and reconciliation and human rights. Improving the lives of intended beneficiaries in these areas requires enhancing their currently marginalized voices. We also need to increase their access to high-quality services. So, to us, advocacy is important on both counts.

Also, in keeping with the philanthropic philosophy of our founder, Atlantic believes that committing resources over a limited period will maximize impact and plans to complete active grantmaking by 2016. Therefore, we seek changes that will endure beyond the foundation. Policy change is a strategic component for achieving that objective.

Atlantic supports several kinds of advocacy for both policy change and increased access to effective service delivery models. We support judicial advocacy through strategic litigation and campaigns, legislative advocacy to enact and implement policy, targeted advocacy campaigns to reach specific decision makers, and broader awareness and education campaigns.

What do you want to know from evaluation about your advocacy investments?

We want to help our grantees improve their work and to increase Atlantic's effectiveness as we spend down. This translates into wanting practical knowledge that can be applied. Our approach is action oriented rather than academic (although we believe that action should be research based). We want data that can be used quickly so people can make decisions and shift strategies as appropriate.

When we explore possible grants with organizations, we don't expect them necessarily to have a strong evaluation system in place because we understand that there are resource issues involved. Instead, we look for a commitment to evaluation and an ability to articulate the questions they'd like answered. When we find that mindset, we support evaluation that helps answer their questions.

Evaluation of advocacy is interesting because the end goal usually is clear and easy to measure. Less clear is what happens along the way and the lessons for advocates working toward different policy outcomes. We want advocacy and advocacy evaluation to have a clear rationale and theory of change, but we also recognize that the most useful learning comes from understanding how advocacy campaigns can be flexible in their operations and tactics. Now we are seeing theories of change that reflect contingencies about how policy change might occur. People are thinking about what could happen and the various pathways they might take to achieve intended policy outcomes. Evaluation brings a mindset and ability to think about those strategic issues; it elicits a “what if” mentality.

How are you supporting grantees on advocacy evaluation?

My personal desire is to eventually have evaluation integrated into nonprofit work, so that on a grantee's logic model, for example, the activities column includes evaluation as a core organizational activity that supports all others.

Integration to me means more than developing internal evaluation capacity to replace external evaluations. Grantees can integrate evaluation by partnering with a good external evaluator, but they should have the internal systems and skills to access their own data when they need it, rather than relying on that evaluator to do it for them. I also believe that, too often, we expect organizations to take on more than they are able. There are reasons that evaluators have specialized skills; their education and training often make it useful to have external evaluators step in. We want Atlantic program executives and grantees to have the evaluation skills and knowledge that help them decide when to do things themselves and when more expertise is needed.

How are you helping to build the larger advocacy evaluation field?

I think our approach to grantmaking and evaluation is helping to build this field with other funders. There are three elements I think funders should consider.

First is time frame. We need fewer funders asking for multiyear outcomes with single-year grants. Funders should either provide support over multiple years or align expectations with the grant's time frame. We want grantees' evaluation plans to be realistic. We understand the long-term nature of policy change, and we give grantees permission to focus on intermediate outcomes. We also think this focus helps build the advocacy field, because how advocates achieve their intermediate outcomes is often where the most transferable learning lies.

Second, Atlantic provides capacity-building support, not just project support. It is hard for grantees to integrate evaluation if we fund only project-based evaluation. Project evaluations give no incentive to invest in evaluation systems that are useful in the long term.

Third, we provide direct evaluation support. Overall in philanthropy, very few dollars go directly for evaluation. Certainly there are project evaluations, but funders often fail to help organizations really commit to evaluation. For example, Atlantic supports information technology systems, which few people think of as evaluation related. If we want grantees to use data to make decisions, they need the systems that enable them to do that.

The Annie E. Casey Foundation
Thomas Kelly, Evaluation Manager

What role does advocacy play in your grantmaking?

The Annie E. Casey Foundation believes that policy and systems change are avenues for achieving large-scale results for vulnerable children and families. For that reason, advocacy to achieve such change is a central part of our grantmaking.

Many of our initiatives, place-based grants, and individual grants support community, state, or national advocacy. For example, our major initiative, KIDS COUNT, is a network of state child advocates in all 50 states, the District of Columbia, Puerto Rico, and the Virgin Islands. Our grantmaking funds a range of advocacy activities, including community organizing, outreach campaigns, targeted issue advocacy (e.g., child health insurance, predatory mortgage lending), and research.

What do you want to know from evaluation about your advocacy investments?

Our desired outcomes foundation-wide are in three areas—impact, influence, and leverage. Our challenge is to more clearly define what these areas mean and how to measure them.

Currently, we are looking at impact, influence, and leverage across all of our major investment areas, including advocacy. We know we can't be clear about our expectations for our advocacy grantees until we're clear about our expectations for ourselves. We want evaluation to help us be more transparent about our work and to instruct us on how best to invest our limited resources. For example, we want to know which outreach strategies not only raise public awareness but also generate the public will that helps move issues forward in the policy process.

Because we know that advocacy is complex to measure, our measurement expectations for advocates have been fairly low. We're in the process now of applying the same rigor that we use for our more traditional service delivery work to advocacy. Rigor doesn't mean prescribing a specific approach; it means getting clearer about evaluation outcomes, measures, methods, audiences, and uses.

How are you supporting grantees on advocacy evaluation?

Our evaluation conversations are tailored to the different kinds of advocates we fund. Many grantees are not traditional advocacy organizations but are neighborhood service providers who can be politically influential.

The more traditional advocates, like KIDS COUNT grantees, already are having conversations about getting better at measuring progress (see the article with Kay Monaco, a New Mexico KIDS COUNT grantee). Because they don't want cookie-cutter approaches that feel like directives, they are very engaged in helping us figure out our approach. They generate evaluation ideas and provide feedback on what we're developing, such as our new tool, A Guide to Measuring Advocacy and Policy (featured in this issue). As we get clearer about our approach to advocacy evaluation, this group will be the first to test it.

For the less traditional advocates, our conversations begin in a different place. We use evaluative questioning to help them define their advocacy approaches. This gets them focused on, for example, not only what they are advocating for, but who their target audiences are and who they need to work with. Then we move on to how to measure what they are doing.

In the future, we'd like to use evaluation to help shape our training and technical assistance for grantees new to advocacy. Knowing more about effective tactics and strategies will help us know what knowledge and skills to transfer. To date, we've been highly reliant on smart advocates who know how to do this intuitively. Now, we want to identify more systematically how and why they are effective in a way that is teachable.

How are you helping to build the larger advocacy evaluation field?

As a field, we shouldn't be handcuffed by the fact that we don't yet have the perfect advocacy evaluation approaches and measures. We need to start somewhere and test our ideas. That's where we are as a foundation.

We also are communicating and collaborating with other funders on what we're doing and learning. One thing these discussions should do, but do not yet, is challenge the assumption that all advocacy is good. Advocacy is accountable for achieving policy-related outcomes. But should it also be accountable to the communities and constituencies it serves? Should it strengthen participatory democracy, for example, so that even if a policy isn't achieved, more residents will have been involved in the political process? We need to examine advocacy's accountability to audiences beyond funders.

A foundation colleague raised another question that I think is important for the field. How can advocacy evaluation help people make more strategic choices around what to advocate for and not just how to do it? For example, is advocating for new child care service funding every year the best way to support working families, or should we look at expanding child care tax credits to reach even larger numbers of families? The field currently is focused on using evaluation to do advocacy better, and rightly so. But whether and how evaluation can help us make strategic choices about advocacy positions are field-level questions that deserve a placeholder for future deliberation.

W. K. Kellogg Foundation
Sheri Brady, Director of Public Policy

What role does advocacy play in your grantmaking?

Public policy and advocacy play an important role at the Kellogg Foundation because many of the initiatives across our main program areas (health, food systems and rural development, youth and education, and philanthropy and volunteerism) are working toward systems change, which often requires changes in policy. We fund grantees' efforts to realign public and private systems in ways that benefit the communities they represent and serve.

We support advocacy that leads to long-lasting changes, leverages resources, strengthens the voices of communities, and helps the Foundation to achieve its mission and grantees to reach their goals. As a part of this support, we educate grantees on how to comply with the rules and regulations that govern public policy activities.

What do you want to know from evaluation about your advocacy investments?

Evaluation has always been important to the Kellogg Foundation and is expected for all of our grants, including those involving advocacy. Different types of work, however, call for different ways of looking at evaluation.

With advocacy, we have to be careful. We can't count how many people were fed or how many kids got shots. Ultimately, we want to know whether an appropriation increased, if an existing policy position was sustained, or whether a new policy was enacted. But getting there sometimes takes a long while. For that reason, we need to look at what happens during the life of the project, capturing markers or indicators that tell us if we are on the right track.

Many people are critical of the indicators advocates often track, saying they are too output focused and not meaningful. For example, they often pick on things like the number of newspaper mentions or number of legislators at a briefing. I don't like the term “meaningful indicators.” Whether indicators are meaningful depends on the organization doing the advocacy, how difficult their issue is given the current policy climate, and their strategy. For example, say an advocate had 10 legislators attend a briefing. For some issues, this number might be low. For others, especially issues not currently at the top of the policy agenda, it might be a major win. It might mean that a new issue is gaining momentum and that the grantee is a recognized expert in that area. Judging an indicator without context is dangerous. We shouldn't necessarily assume that there are measures that are meaningful across all advocacy.

The Kellogg Foundation also expects advocacy to involve the people being affected by the policies in question. One of our core values is that all people have the ability to effect change in their lives and their communities. We want to know whether and how advocacy efforts make that happen.

Related to this, we are interested in whether communities are better off when our grants end. We not only want to know whether grantees informed policy; we want to know the actual or likely effects of those policies on people and communities.

How are you supporting grantees on advocacy evaluation?

First, we make sure that all of our advocacy and public policy work is within the guidelines of what the IRS allows. We educate both staff and grantees about available advocacy options.

We also support evaluation. On one level this means with dollars; we can't expect people to do evaluation if we don't pay for it. On another level this means giving grantees the tools and technical assistance to do it. This includes making sure they see a clear reason for doing evaluation in the first place.

Most advocates won't advocate for evaluation. I was recently on a panel talking about this topic. By the time the person introducing the panel said “evaluation,” people already were asleep.

We don't want to put more and more pressure on grantees to always do more. But it's revealing that evaluation typically is considered a burden while other kinds of “asks,” like communications or collaboration, generally are not. We need to help advocates understand evaluation's benefits and how it will help them figure out what they are doing right and where to adjust. Right now, either we're not making that case effectively, or many advocates aren't hearing it.

How are you helping to build the larger advocacy evaluation field?

We try to demonstrate with our grantmaking that doing policy work and systems change requires longer-term investments. Funders tend to make decisions based on their own grantmaking cycles rather than the needs of the fields they're investing in. With policy and systems work, we can't just say arbitrarily, “Okay, it's been 3 years, and it's time to move on.”

Also, the Kellogg Foundation has developed a number of tools on evaluation generally that I think are useful in this arena. Moving forward, we are interested in being involved in field-level discussions about how to adapt evaluation to the unique needs of advocates, and about the roles funders can play in continuing to build the field.

HFRP Staff
Email: hfrp_pubs@gse.harvard.edu

