Learning From Logic Models: An Example of a Family/School Partnership Program
Julia Coffman | January 1999
About This Series
These short reports are designed to frame and contribute to the public debate on evaluation, accountability, and organizational learning.
A logic model provides the basic framework for an evaluation. It is a graphic that describes a program or organization in evaluation terms. It illustrates a program’s theory of change, showing how day-to-day activities connect to the results or outcomes the program is trying to achieve. Like a flowchart, it lays out program activities and outcomes in boxes and uses arrows to connect the boxes, showing how activities and outcomes relate to one another.
Developing a logic model should be one of the first steps in an evaluation. Once the model is completed, the evaluation can be designed to determine whether the program is working as shown in the logic model. The logic model can also become a tool for learning when evaluation data are applied directly to the model.
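To make the boxes-and-arrows structure concrete, the sketch below represents a logic model as a small data structure, with components as the boxes and links as the arrows. The class, method names, and example labels are illustrative assumptions only; a drawing on paper or a diagramming tool serves the same purpose.

```python
# Minimal sketch: a logic model as components (boxes) and links (arrows).
# The class name and example labels below are illustrative only.
from dataclasses import dataclass, field

@dataclass
class LogicModel:
    components: dict = field(default_factory=dict)  # name -> kind of component
    links: list = field(default_factory=list)       # (from, to) pairs, i.e. the arrows

    def add(self, name: str, kind: str) -> None:
        """Add a box: kind is 'input', 'activity', 'short-term outcome', etc."""
        self.components[name] = kind

    def link(self, source: str, target: str) -> None:
        """Add an arrow: source is expected to lead to target."""
        self.links.append((source, target))

model = LogicModel()
model.add("Conduct training workshops", "activity")
model.add("Participants gain new knowledge", "short-term outcome")
model.add("Participants change their practice", "long-term outcome")
model.link("Conduct training workshops", "Participants gain new knowledge")
model.link("Participants gain new knowledge", "Participants change their practice")
```

The evaluation then asks, for each arrow, whether the expected connection actually holds in practice.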
A logic model can be created by anyone with knowledge of the program or organization that is to be evaluated. It is helpful for program personnel and evaluators to work together to create a logic model because program personnel can offer the expertise needed to describe the program and its intended results accurately, and evaluators can help to translate this knowledge into evaluation terms.
The advantages of graphically displaying a program’s activities and outcomes using a logic model, as opposed to simply defining and listing them, are at least threefold. First and foremost, there is power in visual representation. Visual displays or graphics are proven and effective learning instruments. Second, a logic model ensures that a program’s process is not overlooked in an evaluation. The model makes the evaluator accountable for looking at both program process and outcomes. Third, a logic model can enhance the process of learning through evaluation. As data are collected, the logic model can be used to put the data in perspective, examine the theory that underlies the program, and make midcourse program corrections if needed.
This brief offers a step-by-step approach for developing and using a logic model as a framework for a program or organization’s evaluation. Its purpose is to provide a tool to guide evaluation processes and to facilitate practitioner and evaluator partnerships. The brief is written primarily for program practitioners, but is also relevant and easily applied for evaluators.
The example of a hypothetical program, the Family Involvement Project (FIP), is used throughout the text to provide a realistic context for and help clarify each of the steps described. In addition, a completed logic model for the program is presented, from which examples throughout the text are taken.
Before you begin the brief, examine the full FIP logic model below carefully. Note whether the model adds to your understanding of the program and whether it helps you to understand how the program’s process connects to its outcomes. This example should not only give you an idea of what a completed model looks like, but also illustrate the value of using the logic model format.
Schools and community-based organizations are implementing family involvement programs throughout the country. Sponsors of these programs, as well as the implementing organizations themselves, share an interest in understanding and learning from the outcomes of these efforts so they can identify the most effective approaches or program designs. Implementing organizations are especially interested in seeking ways to improve their program strategies or activities so they can increase the likelihood of achieving their desired outcomes.
The Family Involvement Project (FIP) is a national organization with a vision of improving outcomes for children by increasing the involvement of family members, primarily parents, in their children’s education. To achieve this, FIP recruits and trains parent leaders in communities using a series of workshops. The workshops teach parent leaders how to get involved in their children’s education and how to train other parents in their community to get involved. FIP then supports ongoing training in communities by providing parent leaders with technical assistance on training and by disseminating materials on family involvement to all parents who go through the workshops.
FIP also works within communities to build a system of family involvement and training that is sustainable over time. FIP builds relationships with schools to ensure that family involvement is welcomed and supported. In addition, FIP builds coalitions of organizations locally that are interested in sustaining and building a family involvement agenda within their community.
See the full FIP logic model [30KB Acrobat file]
The first step in logic model construction is to determine the appropriate scope for your model. Decide whether your logic model should focus on a specific component of your work or broadly cover the entire program or organization. Your answer should be driven by your evaluation or information needs.
For example, consider an issue that FIP, our hypothetical organization, may be experiencing. Program staff could find that parent leaders are not recruiting and training as many parents in their communities as originally estimated. This could stem from a number of issues, including a problem with the content of FIP’s training for parent leaders, a problem with the recruitment criteria for parent leaders, or unrealistic expectations among FIP staff about parents’ capacity and availability for recruitment and training. To pinpoint the source of the problem, staff can construct a logic model like the one below that is smaller in scope than the one above and lays out the parent training component in detail. This model will help them design a system for collecting data to determine the source of their specific problem and identify possible solutions.
See the smaller FIP logic model [10KB Acrobat file]
Working within your chosen scope, begin to construct your model’s main components: the information that will go in the boxes on your model. This is the most time-intensive part of the process, but, as shown below, it can be done in stages by starting with basic components and adding more detail later.
Use organizational documents you already have to help you construct your components. Refer to strategic planning documents, mission statements, grant proposals, work plans, recruitment announcements, marketing/public relations materials, training materials, or publications. Any document that describes the work you do will be helpful.
Start at a basic level by identifying your model’s core components and their relationships. Starting with the four components described below—inputs, activities, short-term outcomes, and long-term outcomes—will help you clarify the overall structure for your model. Their corresponding parts from the FIP logic model serve as examples.
See FIP's logic model components [30KB Acrobat file]
Choose whatever order for identifying your model’s components works best for you. You can start with your inputs and move toward your outcomes, or vice versa. For example, if you have gone through a process like strategic planning in which you have already identified your long-term outcomes, you may want to start there and work back toward your activities and inputs.
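As a rough illustration of this first pass, the sketch below drafts the four core components for the hypothetical FIP program. The input items are assumptions added for the example; the activities and outcomes paraphrase the program description above.

```python
# A first-pass skeleton of the four core components for the hypothetical FIP
# program. The inputs listed are assumptions for illustration; the activities
# and outcomes paraphrase the program description in the brief.
fip_components = {
    "inputs": [
        "FIP staff",                                   # assumed
        "Workshop curriculum and training materials",  # assumed
        "Funding",                                     # assumed
    ],
    "activities": [
        "Recruit and train parent leaders through workshops",
        "Provide technical assistance to parent leaders",
        "Disseminate family involvement materials",
        "Build relationships with schools and local coalitions",
    ],
    "short-term outcomes": [
        "Parent leaders train other parents in their communities",
        "Parents know how to get involved in their children's education",
    ],
    "long-term outcomes": [
        "Increased parent involvement in children's education",
        "Improved outcomes for children",
    ],
}

for kind, items in fip_components.items():
    print(kind.upper())
    for item in items:
        print("  -", item)
```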
Now that you have defined the model’s basic components, consider adding more detail. If you are already satisfied that you have captured everything you need, move on to the next step. If you feel you need more detail to explain the program better, consider adding any of the following components to your model. Although these components are not included on the full FIP model above, the diagram below illustrates what they might be if added.
See the additional FIP components [20KB Acrobat file]
Once you have identified the components of your model, the hard part is over. The next step is to put what you have done into graphic form: draw boxes around the components and connect them with arrows to show the relationships between them.
Consider these criteria about the “look” of your logic model as you develop it:1
When you have finished a draft of the model, ask others to review it for accuracy and readability. Refine and revise it until both you and others who have provided feedback are satisfied.
Now that you have your logic model, you can begin to use it as a framework for your evaluation. Because the focus of this brief is on logic models and not on evaluation principles and techniques,2 the implementation of these next steps is described only briefly below. While typically an evaluator will lead the implementation of these steps, this should be a participatory approach with program personnel providing input to guide the evaluator’s direction.
Develop indicators for your logic model components. Indicators are measures used to determine if the boxes, or components, in your logic model have been achieved. When applied and interpreted together, they will help you determine whether your program is operating as shown in your logic model.
Develop indicators for each component on the logic model.3 You will need multiple indicators for each component of your model in order to understand fully whether the component has or has not been achieved. Multiple indicators will also strengthen an argument you may need to make later that your program is working as shown in the logic model.
Use process indicators to measure your activities. For example, to look at the activity on the FIP logic model of providing technical assistance to parent leaders, these indicators can be used:
Use outcome indicators to measure your short- or long-term outcomes. For example, to measure the FIP long-term outcome of increasing involvement of parents associated with FIP in their children’s education, the following indicators can be used:
In addition to process and outcome indicators, you may need to develop indicators to track the relationships between the components on your logic model. These will help you determine if the arrows you drew are accurate and meaningful. For example, indicators to determine whether increasing parent knowledge on how to become involved in their children’s education can lead to increased involvement may include the following:
Keep in mind that not all indicators are created equal. While you can likely generate a long list of possible indicators for each component on your logic model, some of them will make more sense for you to track than others. For example, some will require fewer resources. Or you might be able to use a single indicator for multiple components on your model.
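One simple way to keep several candidate indicators organized against the components they measure is sketched below. The indicator wording is illustrative and is not taken from the brief’s own example indicators.

```python
# Sketch: attach several candidate indicators to each logic model component,
# tagging each as a process or outcome indicator. All wording is illustrative.
indicators = {
    "Provide technical assistance to parent leaders": [
        ("process", "Number of technical assistance contacts per parent leader"),
        ("process", "Parent leaders' satisfaction with the assistance received"),
    ],
    "Increased parent involvement in children's education": [
        ("outcome", "Number of school events attended by participating parents"),
        ("outcome", "Frequency with which parents report helping with homework"),
    ],
}

for component, measures in indicators.items():
    print(component)
    for kind, description in measures:
        print(f"  [{kind}] {description}")
```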
Consider these questions as you choose your indicators.4
Track your indicators. Once you have identified your indicators, you are ready to determine the data sources and methods for tracking them and collecting the information you need. Common data sources include program records, program participants, community members, or trained observers. Common methods include document/record review, questionnaires, surveys, interviews, focus groups, and checklists.
Remember that you might already be collecting some of the data needed to track your indicators. For example, a review of FIP program records would likely reveal that staff are already collecting useful data, such as the number of parents trained by each parent trainer or measures of satisfaction from parents regarding participation in the trainings.
At regular intervals in your data collection process, apply your indicator data to your logic model. Lay out the data directly onto the model so you can get a complete picture of whether your program is working as intended. Determine which parts of the model are working well and which are not. Determine whether you need to make programming changes or whether your model needs to be revised to portray your program more accurately.
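As one way of laying indicator data onto the model, the sketch below compares each component’s indicator against a target and flags its status. The targets, figures, and status labels are invented for illustration only.

```python
# Sketch: lay indicator data onto the model by comparing each component's
# indicator against a target. The targets, figures, and status labels are
# invented for illustration only.
indicator_results = {
    "Recruit and train parent leaders through workshops": {"target": 40, "actual": 42},
    "Parent leaders train other parents in their communities": {"target": 400, "actual": 180},
}

def status(result: dict) -> str:
    # A real review would weigh several indicators per component, not just one.
    return "on track" if result["actual"] >= result["target"] else "needs attention"

for component, result in indicator_results.items():
    print(f"{component}: {status(result)} "
          f"(target {result['target']}, actual {result['actual']})")
```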
In addition, periodically use your data to revisit and reexamine your overall theory of change. You may find that you need to modify some of the assumptions on which you based your original model and that, as a result, the model needs to be revised.
For example, consider again the FIP logic model. As described in Step 1, the program could be experiencing the problem that parent leaders are not recruiting and training as many family members or other parents in their communities as they originally estimated. The evaluation may uncover the finding that parent leaders need support in their outreach efforts beyond the technical assistance that FIP staff members provide. In response to this finding, FIP staff may feel that they need to add to their activities the development of peer networks in communities. The addition of this component will change the overall program theory and, therefore, the logic model.
The important point is to use the logic model as a learning tool throughout the evaluation. Do not set it aside once the model is completed and the evaluation designed. When your data are applied directly to your model, you will find the data easier to interpret and the findings easier to apply and use. The goal is to use the logic model as a feedback and learning tool, with the model initially informing the data and then the data ultimately informing the model.
Additional Resources

Developing and Using Logic Models/Theories of Change for Evaluation
Connell, J., & Kubisch, A. (1996). Applying a theories of change approach to the evaluation of comprehensive community initiatives: Progress, prospects and problems. New York: The Aspen Institute, Roundtable on Comprehensive Community Initiatives for Children and Families.
Julian, D. (1997). The utilization of the logic model as a system level planning and evaluation device. Evaluation and Program Planning, 20(3), 251–257.
Milligan, S., Coulton, C., York, P., & Register, R. (1996). Implementing a theories of change evaluation in the Cleveland Community Building Initiative. Cleveland, OH: Center on Urban Poverty and Social Change.
Evaluation Techniques
United Way of America. (1996). Measuring program outcomes: A practical approach. Alexandria, VA: Author. [To order, call Sales Service/America at 800-772-0008.]
W. K. Kellogg Foundation. (1998). W. K. Kellogg Foundation evaluation handbook. Battle Creek, MI: Author. [To order, contact Collateral Management Company, 1255 Hill Brady Road, Battle Creek, MI 49015, 616-964-0700. Ask for item number 1203.]
1 Adapted from Tufte, E. (1983). The visual display of quantitative information. Cheshire, CT: Graphics Press.
2 For a more in-depth discussion of evaluation techniques see references cited under Additional Resources.
3 For a more in-depth discussion of the types of indicators, see another brief in the Reaching Results series: Horsch, K. (1997). Indicators: Definition and use in a results-based accountability system. Cambridge, MA: Harvard Family Research Project.
4 Horsch, K. (1997). Indicators: Definition and use in a results-based accountability system. Cambridge, MA: Harvard Family Research Project.