
Theory of change vs logic model: not rival frameworks. One scaffolds causal reasoning, the other operational design. Which does your nonprofit need, or both? A logic model describes your program. A theory of change argues for it. Side-by-side comparison, decision framework, and when to use both.
A foundation program officer asks for a one-page Logic Model by Friday. Your M&E consultant has spent two months facilitating a 14-page Theory of Change. Your board chair says they're the same thing with different diagrams. Your grants manager says you need both but can't say why. The team builds a Logic Model, and three months later the same funder asks how you know your program actually caused the outcomes on it — a question no Logic Model is built to answer.
Last updated: April 2026
This is The Scaffold Confusion: treating a Theory of Change and a Logic Model as interchangeable frameworks when they scaffold different layers of the same program. A Logic Model scaffolds the operational layer — inputs, activities, outputs. A Theory of Change scaffolds the causal layer — mechanisms, assumptions, and outcomes. Pick one, and you've left half the building unsupported. This guide shows nonprofits exactly which layer each framework covers, how to sequence them, and how to connect both to living participant data rather than leaving them as static diagrams in a slide deck.
The difference between a theory of change and a logic model is what each scaffolds. A logic model scaffolds the operational layer of a program — inputs, activities, outputs, and outcomes — in a compact left-to-right diagram. A theory of change scaffolds the causal layer — the mechanisms that make activities produce outcomes and the assumptions those mechanisms depend on. A logic model describes what a program does. A theory of change explains why that should produce the change claimed.
The W.K. Kellogg Foundation framing puts the distinction plainly: a logic model is a program management and accountability tool designed to communicate program structure to funders and reviewers. A theory of change is an evaluation design tool that surfaces assumptions and enables programs to test whether their reasoning holds. The frameworks sit at different altitudes of planning — and nonprofits that use them as if they were interchangeable end up with a program description on one altitude and nothing on the other.
A theory of change is a causal framework that explains how and why a specific program is expected to produce specific outcomes for specific participants under specific conditions. Every arrow in a theory of change carries a named mechanism — the reason the link should hold — and every level of outcome carries at least one stated assumption that must remain true for the chain to work. A theory of change is testable: each causal claim can be confirmed or disconfirmed with participant-level data over time.
For a full definitional treatment of the framework, its components, and sector examples, see the theory of change hub guide. What matters for the comparison here is what a theory of change includes that a logic model does not: the mechanism behind each arrow, the assumption behind each outcome, and the evidence plan that connects both to data. A theory of change that ends at a diagram is incomplete — the framework is only useful when the assumptions can be monitored and the mechanisms can be observed through actual participant data.
A logic model is a compact, one-page diagram that maps a program's resources to the results it expects to produce, reading left to right: Inputs → Activities → Outputs → Short-Term Outcomes → Medium-Term Outcomes → Long-Term Outcomes. Some variants add a Situation column at the left and an Impact column at the right. The format was standardized by the W.K. Kellogg Foundation and is the one most U.S. federal funders, state agencies, and private foundations require in grant applications.
A logic model is a communication and compliance instrument. It answers three questions: what are we committing, what are we doing, and what do we expect to produce? It does not answer why the activities should produce the outcomes, nor does it name the assumptions that must hold. A logic model labeled "Job training → Employment" is descriptively correct and causally silent — it shows the link without arguing for it. That silence is the feature, not the bug: logic models are designed to be skimmed by reviewers who read forty applications a month. Their compactness is their value, and their value ends where causal reasoning begins.
Theory of change vs logic model is not a rivalry — it is a layering question. Most nonprofit programs need both, sequenced correctly, and connected to the same participant data. The common failure mode is building the logic model first because a funder asked for it, then retrofitting a theory of change to match the boxes already drawn. This produces a theory of change with the shape of a logic model and the reasoning of a summary — useless for evaluation design and redundant for funder communication.
The correct sequence is the opposite. Build the causal argument first: what mechanism connects your activities to your outcomes, what assumptions that mechanism depends on, and what evidence would disconfirm it. Then compress that argument into the logic model your funder wants. The logic model is a reporting output, not a design input. Nonprofits that follow this sequence find that the same causal framework also drives their monitoring and evaluation design and their longitudinal data architecture — one framework doing four jobs.
Theory of change vs logic model for nonprofits is the most-searched version of this question because the nonprofit sector sits at the exact intersection where the confusion is most consequential. Funders ask for logic models because they standardize grant review. Evaluators push for theories of change because they enable learning. Boards want whichever is simpler to read. Program staff want whichever is faster to build. And none of those four audiences are asking the same question, so producing a single document that satisfies all of them is impossible — yet nonprofits try anyway, and The Scaffold Confusion is the result.
The nonprofits that solve this stop treating the two frameworks as deliverables and start treating them as layers of a single program architecture. The theory of change is the internal working document that the M&E team, program team, and leadership use to design data collection, monitor assumptions, and interpret outcomes. The logic model is the external communication tool generated from that architecture, updated as the causal reasoning evolves. The two documents stay consistent because one is derived from the other — not because they were built independently to describe the same program from different angles and happen to align.
Build the theory of change before the logic model. This is the single sequencing decision that determines whether the two frameworks scaffold a coherent program or describe two parallel programs that happen to share participants. The theory of change captures the argument — the mechanism, the assumptions, the testable predictions. Everything downstream, including the logic model itself, derives from that argument. A theory of change built after a logic model is retrofitted to fit boxes that were drawn without reasoning behind them.
The causal layer has three components that a logic model will not hold: named mechanisms on each arrow, explicit assumptions at each level, and monitoring questions that connect each assumption to data. Name the mechanism — not "training leads to employment" but "portfolio-based technical skills combined with employer network access produces hiring because our employer partners have committed to portfolio review." State the assumption — "employer partners continue to prioritize portfolio review over credential screening." Specify the monitoring question — "do our employer partners report portfolio review as a primary hiring criterion in quarterly check-ins?" This is what a theory of change holds that a logic model cannot.
The logic model is a compression of the theory of change into the standardized format funders require. Done correctly, this compression takes an afternoon — not a three-week process — because the causal argument already exists and the logic model is a summary of it. The inputs and activities come from the program design inside the theory of change. The outputs are the immediate, countable results of those activities. The outcomes are the first and second layers of the theory of change's outcome chain. The assumptions — all of them — compress into the "external factors" or "context" box at the right of most logic model templates.
What does not compress is the mechanism. A logic model has no column for the reason a causal link holds, which is why the mechanism remains in the theory of change and the logic model reads as a program description. This is acceptable and expected: the logic model serves funder communication, which does not require the mechanism to appear. The theory of change serves evaluation design, which does. Keep the documents consistent by deriving one from the other — not by writing two separate documents that describe the same program and hoping they align.
[embed: comparison-table]
A theory of change and a logic model are both diagrams. Neither generates evidence. The document that closes the gap between frameworks and evidence is the participant record — a longitudinal, unique-ID record that connects every data point from intake through long-term follow-up into a single traceable stream. Without that record, a theory of change is a claim you cannot test and a logic model is a description you cannot verify. Most nonprofit programs have both frameworks and neither record, which is why funders eventually ask the question no framework alone can answer: how do you know it worked?
The participant record is what Sopact Sense is built to produce. A unique stakeholder ID is assigned at first contact — application, enrollment, or intake, whichever comes first — and every subsequent data point attaches to that ID rather than to a date, a form, or a spreadsheet tab. The theory of change tells you what to measure. The logic model tells you how to report what you measured. The participant record is what you actually measure. Pair the three and the cycle closes: causal claim → operational description → longitudinal evidence → evidence-informed revision of the causal claim.
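A minimal sketch makes the participant record concrete. The names and fields below are illustrative assumptions, not Sopact Sense's actual schema; the point is the structure: one ID assigned at first contact, every later observation attached to that ID, and a single ordered timeline per participant.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DataPoint:
    collected_on: date
    instrument: str              # e.g. "intake-survey", "6-month-follow-up"
    measures: dict               # whatever that instrument captured

@dataclass
class ParticipantRecord:
    stakeholder_id: str          # assigned once, at first contact
    data_points: list = field(default_factory=list)

    def add(self, point: DataPoint) -> None:
        # every subsequent data point attaches to the same ID,
        # not to a date, a form, or a spreadsheet tab
        self.data_points.append(point)

    def timeline(self) -> list:
        # the longitudinal stream: all observations, in collection order
        return sorted(self.data_points, key=lambda p: p.collected_on)

record = ParticipantRecord("STK-00417")
record.add(DataPoint(date(2025, 1, 10), "intake-survey", {"employed": False}))
record.add(DataPoint(date(2025, 7, 15), "6-month-follow-up", {"employed": True}))
print([p.instrument for p in record.timeline()])
# → ['intake-survey', '6-month-follow-up']
```

Because every observation carries the same `stakeholder_id`, outcome change for one person is a query over one record rather than a merge across spreadsheet tabs.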
Most nonprofit monitoring systems track outputs — how many people were trained, how many sessions were delivered, how many meals were served. Monitoring outputs tells you whether the activity happened. It does not tell you whether the assumption behind the activity still holds. Assumption monitoring is the work that closes the loop between theory of change and logic model, and it is the work most programs skip because their logic model does not demand it and their theory of change ended at a diagram.
Every assumption in a theory of change needs three things: a monitoring question, a data source, and a threshold. Monitoring question: "do employer partners continue to prioritize portfolio review?" Data source: quarterly employer check-in embedded in the Sopact Sense participant record. Threshold: if fewer than 70% of partner employers confirm portfolio review as a primary criterion, the assumption has weakened and the theory of change needs revision. A logic model cannot hold this structure because it has no slot for assumptions linked to live data. A theory of change holds it only if the implementation plan connects each assumption to a specific, repeatable measurement. This is the work that turns frameworks into evidence.
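The threshold logic above is simple enough to express in a few lines. This is a sketch of the check, not a Sopact Sense feature: the 70% threshold and the employer check-in data are taken from the example in the text, and the function name is hypothetical.

```python
def assumption_holds(confirmations: list, threshold: float = 0.70) -> bool:
    """Return True if the share of partners confirming the assumption
    (here: portfolio review as a primary hiring criterion) meets the threshold."""
    if not confirmations:
        return False  # no data means the assumption is unmonitored, not confirmed
    share = sum(confirmations) / len(confirmations)
    return share >= threshold

# Quarterly check-in: 8 of 10 partner employers confirm portfolio review.
print(assumption_holds([True] * 8 + [False] * 2))   # → True  (0.80 >= 0.70)
# Only 6 of 10 confirm: the assumption has weakened; revise the theory of change.
print(assumption_holds([True] * 6 + [False] * 4))   # → False (0.60 < 0.70)
```

The value of writing the threshold down, even this crudely, is that "the assumption weakened" becomes a fact the data can assert rather than a judgment call made after the fact.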
The most common mistake is building the logic model first because a funder asked for it, and then trying to write a theory of change that fits the same structure. This collapses the causal layer into the operational layer and produces a theory of change that is just a logic model with longer text boxes. Build the causal layer first, then derive the summary.
The second most common mistake is writing a theory of change without assumptions. A diagram that lists inputs, activities, mechanisms, and outcomes but skips assumptions is not testable — it is a claim, not a theory. Name every assumption explicitly, connect each one to a monitoring question, and specify the threshold at which the assumption weakens.
The third common mistake is compressing a 12-page theory of change into a one-page logic model and losing the causal specificity that made it valuable. The two documents are meant to serve different audiences. Keep them separate. Compress the argument into the logic model funders require. Preserve the full argument in the theory of change your M&E system uses.
The fourth mistake is treating both frameworks as one-time deliverables. A theory of change is a living document that updates as evidence accumulates and assumptions are tested. A logic model follows. If the theory of change in your drawer has not been updated in two years, it is not a theory of change — it is a description of what the program looked like two years ago. For the right cadence of updates and the connection to evaluation practice, see the theory of change in monitoring and evaluation guide.
A logic model describes what a program does — inputs, activities, outputs, outcomes — in a compact one-page format. A theory of change explains why those activities should produce those outcomes, naming the causal mechanism and stating the assumptions that must hold. A logic model answers "what"; a theory of change answers "why." Nonprofits typically need both, with the theory of change built first and the logic model derived from it.
Most mature nonprofit programs need both. The logic model is usually required by funders for grant applications and reporting. The theory of change is required for evaluation design, assumption monitoring, and defensible impact claims. Building the theory of change first and deriving the logic model from it is faster and more coherent than building them independently and trying to reconcile them afterward.
Build the theory of change first. It contains the causal argument — the mechanism and the assumptions — from which the logic model is a compressed summary. Building the logic model first produces a program description without reasoning behind it, and any theory of change built afterward has to be retrofitted to fit boxes that were drawn without the argument in mind.
The Scaffold Confusion is treating a theory of change and a logic model as interchangeable frameworks when they scaffold different layers of the same program. A logic model scaffolds the operational layer — activities, outputs, short-term outcomes. A theory of change scaffolds the causal layer — mechanisms, assumptions, testable predictions. Nonprofits that pick one end up blind at the other altitude; those that use both, correctly sequenced, produce frameworks that connect directly to participant-level evidence.
No. A logic model describes program structure and cannot carry the causal mechanism or the named assumptions that a theory of change requires. A logic model can summarize a theory of change for funder communication, but it cannot substitute for one in evaluation design, assumption monitoring, or defensible impact claims. Funders who ask how a program knows it works are asking a theory-of-change question, not a logic-model question.
A theory of change is as long as the causal argument requires — typically 4 to 12 pages for a single-program nonprofit, longer for multi-program organizations. Compressing it to fit a one-page format defeats the purpose. If a funder requests a one-page summary, derive a logic model from the theory of change rather than truncating the theory of change itself. The two documents serve different audiences and have different length expectations.
A theory of change should be updated whenever evidence significantly confirms, weakens, or disconfirms a core assumption — typically once per program year at a minimum, more often during early-stage programs where assumptions are still being tested. A theory of change that has not been updated in two years is not a theory of change but a historical description. The logic model updates downstream as the theory of change evolves.
Most nonprofits build theories of change and logic models in documents, slides, or diagramming tools — static artifacts that do not connect to participant data. Sopact Sense is an AI-native data collection platform designed around the theory-of-change structure: outcome stages map to collection instruments, assumptions map to monitoring questions, and persistent participant IDs connect every data point into a longitudinal record. The logic model is produced as a reporting output from the same architecture.
Yes — and this is the point of building the theory of change first. When data collection is structured around the theory of change, the same participant records that validate causal claims also populate the logic model's output and outcome columns. Without a unified data system, nonprofits end up maintaining two reporting processes, one for theory-of-change evaluation and one for logic-model compliance, duplicating effort and risking inconsistency between the two documents.
A theory of change is the primary design input for a nonprofit's monitoring and evaluation framework. Each outcome level in the theory of change maps to an indicator, each assumption maps to a monitoring question, and each monitoring question maps to a data instrument administered at a specific point in the participant journey. This is what turns an evaluation plan from a list of indicators into an architecture. The theory of change in monitoring and evaluation guide covers the full mapping.
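The outcome-to-indicator and assumption-to-monitoring-question mapping can be sketched as plain data. Every name below is illustrative, assumed for the workforce-training example used throughout this article; the structure, not the content, is the point: each outcome level resolves to an indicator and an instrument, and each assumption resolves to a monitoring question, an instrument, and a threshold.

```python
# Illustrative mapping from a theory of change to an M&E design.
evaluation_design = {
    "outcomes": {
        "short_term": {
            "outcome": "Technical skill gain",
            "indicator": "Pre/post skills assessment score",
            "instrument": "skills-assessment",
            "timing": "program exit",
        },
        "medium_term": {
            "outcome": "Employment in field",
            "indicator": "Employment status at 6 months",
            "instrument": "6-month-follow-up",
            "timing": "month 6",
        },
    },
    "assumptions": [
        {
            "assumption": "Employer partners prioritize portfolio review",
            "monitoring_question": "Do partners report portfolio review "
                                   "as a primary hiring criterion?",
            "instrument": "quarterly-employer-check-in",
            "threshold": 0.70,
        },
    ],
}

# Every assumption resolves to a concrete, repeatable instrument.
for item in evaluation_design["assumptions"]:
    print(item["monitoring_question"], "->", item["instrument"])
```

A mapping like this is what turns the theory of change from a diagram into a data-collection specification: nothing in it is measured "someday", everything is attached to a named instrument at a named point in the participant journey.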
Purpose-built nonprofit impact measurement platforms typically range from a few hundred dollars per month for basic survey tools to several thousand dollars per month for full longitudinal data architectures with AI-native analysis. Sopact Sense is positioned in the mid-tier at approximately $1,000 per month and is designed specifically for nonprofits that need to connect theory of change, logic model, and participant data in one system. Cost varies with program count, participant volume, and integration needs.
The W.K. Kellogg Foundation logic model is the most widely referenced template in the nonprofit sector, published in the Kellogg Foundation's Logic Model Development Guide. It organizes program design into five columns: Resources/Inputs → Activities → Outputs → Outcomes → Impact. Most U.S. funders, whether private foundations or federal agencies, accept logic models in this format or close variants. The Kellogg guide is a program description standard — it is not a theory of change framework, and the two should not be confused.
A logic model and a logframe (logical framework) are close relatives but not identical. A logic model is simpler and more visual, typically used in U.S. nonprofit and foundation contexts. A logframe is more structured and includes verifiable indicators, means of verification, and risks in a four-by-four matrix — used more widely in international development and by agencies like USAID and the UN system. See our logframe guide for the full comparison.
[embed: cta]