This brief is based on: Watson, S. (2000). Using results to improve the lives of children and families: A guide for public-private child care partnerships. Child Care Partnership Project. This publication is available at nccic.org/ccpartnerships/results.pdf [Acrobat file].

Introduction

What is a Logic Model?

A logic model is a concise way to show how a program is designed and how it will make a difference for its participants and community.

On one sheet of paper, a logic model summarizes the key elements of your program, reveals the rationale behind your approach, articulates your intended outcomes and how they can be measured, and shows the cause-and-effect relationships between your program and its intended outcomes.

Why Develop One?

Logic models have numerous uses and benefits. A logic model can be used for:

Strategic and Program Planning - Developing a logic model is a form of strategic planning. The process forces you to identify your vision, the rationale behind your program, and how your program will work. This process is also a good way to get a variety of program stakeholders involved in program planning and to build consensus on the program's design and operations.

Effective Communications - Logic models allow you to provide a snapshot view of your program and intended outcomes to funders, staff, policymakers, the media, or other colleagues. They are particularly useful for funding proposals as a way to show that what you are doing is strategic, and that you have a plan for being accountable.

Evaluation Planning - A logic model provides the basic framework for an evaluation. It identifies the outcomes you are aiming for—based on your program's design—and puts those outcomes in measurable terms.

Continuous Learning and Improvement - A completed logic model provides a point of reference against which progress toward desired outcomes can be measured on an ongoing basis.

What Does a Logic Model Look Like?

There is no one “right” way to construct a logic model. There are many approaches, and a logic model can take many forms. One possible approach is presented on the following two pages. The first page [85KB Acrobat file] offers a generic logic model and explains its components. The logic model on the second page [95KB Acrobat file] illustrates what some of those components might look like for out-of-school time (OST) programs.

Logic Model Elements and Development

Developing the logic model essentially means filling out its elements or boxes with details that are based on your OST program. While there is no “correct” order in which to do this, it is suggested that you start with the left side or column of the logic model, and move counterclockwise, as represented in this figure [50KB Acrobat file].

Step One: Describe the OST Program

The program side of the logic model has four elements. Use organizational documents you already have to help you. Useful materials may include strategic planning documents, mission statements, grant proposals, work plans, recruitment announcements, brochures, or training materials.

1. Desired Results
The logic model starts in the upper left-hand box with the program's desired results or vision. Ask yourself: What is my long-term vision or goal for children, adults, or families in my community, or for my community as a whole? Use your mission statement as an aid. State the answer in one or two sentences. Keep in mind that an OST program alone usually cannot accomplish the desired results, but it should contribute to them (e.g., improve children's academic development and performance).

2. Motivating Conditions and Causes
Next, think about the reasons your OST program was created. Ask yourself: Why and how do I know my community needs an OST program? What are the factors, issues, or problems that my program is trying to improve or eliminate? Community needs assessments, data and research on the issues your program addresses, and lessons learned about what works may be helpful aids (e.g., children with unstructured and unsupervised time in the after-school hours, low academic performance among low-income children).

3. Program Strategies
Strategies are the broad approaches that your OST program uses to affect the conditions or causes behind your program's existence. They are the general methods or processes you use to achieve your desired results or vision. Ask yourself: What are the broad categories of services or approaches that my program provides? Strategies are higher-level categories than activities, which are described below. Grant proposals may be useful aids for identifying strategies (e.g., youth development and leadership, academic enrichment).

4. Activities
Activities are the individual services or interventions your program uses to implement your strategies. Ask yourself: On a day-to-day basis, what do staff in my organization do? What services do we provide? Work plans may be useful for identifying this list (e.g., homework help and tutoring, mentoring, rap sessions).
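
To make the relationships among these four elements concrete, here is a minimal sketch in Python that represents the program side of the logic model as a simple data structure. The class and field names are our own shorthand for the elements above, not part of any standard logic model tool, and the values are the illustrative OST examples used in this brief:

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ProgramDescription:
        """Step One: the four program-side elements of the logic model."""
        desired_results: str              # 1. Long-term vision or goal
        conditions_and_causes: List[str]  # 2. Why the program was created
        strategies: List[str]             # 3. Broad approaches
        activities: List[str]             # 4. Day-to-day services

    ost_program = ProgramDescription(
        desired_results="Improve children's academic development and performance.",
        conditions_and_causes=[
            "Children with unstructured, unsupervised after-school time",
            "Low academic performance among low-income children",
        ],
        strategies=["Youth development and leadership", "Academic enrichment"],
        activities=["Homework help and tutoring", "Mentoring", "Rap sessions"],
    )

Note how each activity should implement one of the strategies, and each strategy should address one of the conditions: reading the structure from bottom to top retraces the program's rationale.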

Step Two: Identify the Outcomes

Once you have described your program, the next step is to specify the intended outcomes your program is striving for. Outcomes are defined here as the measurable results of your program. This part of the logic model will force you not only to identify your program's results but also to determine how you will measure them. Use the program-description elements of the logic model that you have just completed as a reference as you go through this process. Remember that what you are doing in your program should drive how you assess it.

5. Performance Measures
Performance measures assess your program's progress on the implementation of your strategies and activities. They assess the results of your OST program's service delivery. Ask yourself: In the work that my program does, what do we hope to directly affect? What results are we willing to be directly accountable for producing? What can we realistically accomplish?

There are two types of performance measures:

Measures of Effort - Also commonly known as outputs, these are measures of the products and services generated by program strategies and activities. Ask yourself: What does my program generate (e.g., publications, training materials), what levels of activity do we produce (e.g., the number of children served or products developed), and what will measure the quality of our services (e.g., customer satisfaction)? Measures of effort assess how much you did, but say little about how well you did it or how well your program ultimately worked for your target population. They are the easiest of all the evaluation measures to identify and track (e.g., number of children served in the OST program and participant demographics, number of classes/sessions/trainings held).

Measures of Effect - These are changes in knowledge, skills, attitudes, or behaviors in your target population. Ask yourself: How will I know that the children or families I work with in my OST program are better off? What changes do I expect to result from the strategies and activities my program provides? Remember that measures of effect reflect changes that your program acting alone expects to produce (e.g., increased social competence, higher self-esteem and confidence, improved study habits).
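
As a rough illustration of the difference between the two types, the sketch below computes one measure of effort (a simple count of children served) and one measure of effect (the average change in a pre/post score). The record format and the 1-10 study-habits scale are invented for this example; real programs would use their own instruments:

    # Hypothetical records: (participant, pre-program score, post-program score)
    # on an invented 1-10 study-habits scale.
    records = [
        ("child_a", 4, 7),
        ("child_b", 5, 6),
        ("child_c", 3, 5),
    ]

    # Measure of effort (output): how much the program did.
    children_served = len(records)

    # Measure of effect: how participants changed.
    average_gain = sum(post - pre for _, pre, post in records) / len(records)

    print(f"Children served: {children_served}")         # -> 3
    print(f"Average study-habits gain: {average_gain}")  # -> 2.0

Counting rows is easy, which is why measures of effort are the easiest to track; the measure of effect requires collecting the same instrument before and after the program.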

6. Indicators
Indicators are measurable elements of the OST program's desired results or vision that reflect substantial changes in people, policies, or systems across an entire community. The OST program acting alone usually cannot achieve changes in indicators. Usually they require efforts from other programs or institutions that are also working toward similar results.

The distinction between indicators and performance measures is important. Remember that moving an indicator takes a whole community, not just the OST program. This distinction helps to lay out what is realistic given the resources programs have and the limited time they have available with children or families.

For example, academic outcomes are a hot-button issue for OST programs. The logic model allows you to make academic outcomes one of the indicators to be tracked. At the same time, doing so makes the point that while your OST program is expected to have an impact on academic achievement, the relationship is an indirect one, since academic achievement is influenced by a number of factors, programs, and individuals, not just your program. It makes you accountable only for what you can reasonably expect to affect.

Indicators may change in the short term or take many years to change. There are two types of indicators:

Interim Indicators
These are measures of short-term community-wide progress toward your program's desired results. They reflect the status of community-wide populations in the short term. Ask yourself: If my program is successful, what changes do I expect to see in my community in the next few years (e.g., improved test scores in reading, math, or science, reduced rates of antisocial behaviors or behavior problems, decreased student suspensions)?

Ultimate Indicators
These are measures of long-term community-wide progress toward your program's desired results. They usually require significant resource investments to affect. Ask yourself: In the long term, how will I know if my program's desired results have been achieved? Acting in concert with schools, parents, and other organizations, what do we expect to achieve in our community? The performance measures and interim indicators you have already identified should contribute to movement on the ultimate indicators (e.g., reduced substance use rates among teens, reduced dropout rates, reduced teen pregnancy rates).

Keep in mind that not all indicators are created equal. While you can likely generate a long list of possible indicators, some of them will make more sense to track than others. For example, some will require fewer resources. Consider these questions as you choose your indicators.¹ (A simple screening sketch follows the list.)

  • Is the indicator relevant—does it enable you to know about the expected result?
  • Is the indicator defined and data collected in the same way over time?
  • Are data available?
  • Will the indicator provide sufficient information about a condition or result to convince both supporters and skeptics?
  • Is the indicator quantitative?
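
One lightweight way to apply these five questions is to record a yes/no answer for each candidate indicator and keep only the candidates that pass every criterion. The sketch below is a hypothetical illustration of that screening, not a prescribed scoring method; the candidate names and answers are invented:

    criteria = ["relevant", "consistently_defined", "data_available",
                "convincing", "quantitative"]

    # Hypothetical answers to the five questions for two candidate indicators.
    candidates = {
        "3rd-grade reading test scores": {
            "relevant": True, "consistently_defined": True,
            "data_available": True, "convincing": True, "quantitative": True,
        },
        "community optimism": {
            "relevant": True, "consistently_defined": False,
            "data_available": False, "convincing": False, "quantitative": False,
        },
    }

    # Keep only the indicators that satisfy every criterion.
    chosen = [name for name, answers in candidates.items()
              if all(answers[c] for c in criteria)]
    print(chosen)  # -> ['3rd-grade reading test scores']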

Step Three: Plan to Evaluate and Learn From the Data

The primary purpose of this brief is to develop a logic model that helps you describe your OST program and identify outcomes and measures for assessing your progress and results (i.e., steps one and two). This third step is described briefly to make the point that the next step is to move forward with the evaluation: put plans in place to collect data on the measures you have identified, and use those data and the logic model for learning. This figure [60KB Acrobat file] identifies four additional elements, described below, that can be added to the logic model toward that end.

7. Data Sources and Methods
The sources for the data needed to track indicators and performance measures. Ask yourself: Now that I have identified my measures, how will I get the data needed in the most resource-efficient way? If you used the criterion that data should already be available for the indicators you have chosen, then you should already know their data sources and how often they are available. However, you also need to determine how often to report that information, how it will be shared, and who will receive it. Performance measures will likely require additional data collection that either you or your evaluator conducts. Some of that information, such as the measures of effort, you can probably track on your own. However, you may need an external evaluator to collect data on the measures of effect (e.g., sources: standardized testing, state or local government databases; methods: surveys, focus groups, interviews).
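
In practice, the answers to these questions can be captured in a simple data-collection plan: one row per measure, mapping it to its source, method, reporting frequency, and audience. The sketch below uses invented entries built from the examples in this element:

    # Hypothetical data-collection plan; each row covers one measure.
    data_plan = [
        {"measure": "Number of children served (measure of effort)",
         "source": "program attendance logs",
         "method": "internal tracking",
         "frequency": "monthly",
         "reported_to": "staff and board"},
        {"measure": "Improved study habits (measure of effect)",
         "source": "participant pre/post surveys",
         "method": "external evaluator",
         "frequency": "twice yearly",
         "reported_to": "funders and board"},
        {"measure": "Reading test scores (interim indicator)",
         "source": "state education database",
         "method": "existing records",
         "frequency": "annually",
         "reported_to": "all stakeholders"},
    ]

    for row in data_plan:
        print(f"{row['measure']}: {row['source']}, {row['frequency']}")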

8. Evaluation Questions
The questions you want the data to answer, or the decisions you want to make based on the data. You should be able to make decisions based on your indicators and performance measures. Ask yourself: What strategic decisions can I make based on the information that will be generated? What consequences should be tied to good or bad performance? (E.g., are the indicators moving and, if not, does that mean the OST program needs to be modified?)

9. Stakeholders
The individuals who have a vested interest in the OST program and need to know the answers to the evaluation questions and to be involved in learning from the data being collected. Ask yourself: Who is interested in or will benefit from knowing my program's progress on its indicators and performance measures (e.g., board members, funders, collaborators, program participants, community members, and other individuals or organizations)?

10. Mechanisms for Learning
The periodic or regular opportunities that exist for pulling together and reporting out the data being collected, and bringing together stakeholders to learn from and make decisions based on the data. Ask yourself: What opportunities exist or need to be created to focus on and learn from the evaluation (e.g., staff, stakeholder, or board meetings, regular evaluation reports, strategic retreats)?

Glossary

Activities
What has to happen or what you have to do to run your program. The specific set of actions, interventions, or services your program is undertaking.

Outcomes
A program's measurable results.

Outputs
Also referred to as “Measures of Effort,” they are the measurable products of a program that point to what and how much a program accomplishes. They can include anything that can be counted such as people, activities, materials, time, etc. Outputs measure quantity, but not quality.

Indicators
Measures for which data are available, which help quantify the achievement of the desired results for community-wide populations. Indicators can be short-term (interim) or long-term (ultimate).

Logic Model
A framework that shows the relationship between the program's ultimate aim (its results) and the strategies and activities it is using to get there, along with how it will measure progress along the way. The logic model summarizes the key elements of your program, reveals the rationale behind your approach, articulates your intended outcomes and how they can be measured, and shows the cause-and-effect relationships between your program and its intended outcomes.

Performance Measures
Measures connected directly to your program on the level of activity, efficiency, capacity, or quality of the services or interventions being offered. A program acting alone can affect performance measures. Measures of effort are the direct outputs of program strategies and activities. Measures of effect are changes in your target population that come about as a result of program strategies and activities.

Results
The overall long-term vision or goal for your community as a whole or for the children, adults, and families living in your community. Results usually cannot be achieved by one program alone, but are produced by many factors, individuals, and organizations working toward the same general ends.

Stakeholders
The board members, program participants, funders, collaborators, community members, and other individuals or organizations with a vested interest in your program and performance.

Strategies
The broad approaches that the program will use to affect the conditions or causes that are the reason behind the program's creation and that are needed in order to reach the desired results.

Download Logic Model Worksheet [85KB Acrobat file]

¹ Horsch, K. (1997). Indicators: Definition and use in a results-based accountability system. Cambridge, MA: Harvard Family Research Project.


© 2016 Presidents and Fellows of Harvard College
Published by Harvard Family Research Project