Logic Model: Framework, Components, Examples & How to Make It Work
A program director pulls the logic model out of a three-year-old grant folder for a board meeting. The diagram is clean — five columns, arrows between them, color-coded inputs through impact. The board chair asks the obvious question: "How do you know the inputs actually led to those outcomes?" The room goes quiet. Nobody in the organization has tested a single arrow on that diagram since the day it was drawn. That silence has a name: The Arrow Debt — the accumulated debt of untested causal claims in a logic model, where every arrow between columns is a hypothesis the program never pays down.
Last updated: April 2026
This guide treats the logic model the way it was originally meant to be treated — as a testable hypothesis, not a deliverable. You will learn what a logic model framework is, what each of its five components does, how to read and build examples, and — more importantly — how to connect every arrow to evidence so the model actually works when a funder or board asks whether it does.
Logic Model Framework · Programs
A logic model is a testable hypothesis — not a deliverable
Every arrow between inputs, activities, outputs, outcomes, and impact is a causal claim. Most programs draw the arrows and file the diagram. The evidence that would test them never arrives. That is The Arrow Debt.
Tested arrow — evidence collected · Arrow Debt — hypothesis never tested
Ownable concept · this article
The Arrow Debt
Every arrow between columns in a logic model is a causal claim — a hypothesis. The debt is the accumulation of untested arrows across program cycles. Old arrows stay unpaid while new ones are added, and by year three a program can have three years of unpaid arrows and no way to prove any of them held.
5 — CORE COMPONENTS — inputs, activities, outputs, outcomes, impact
11–14 — DAYS PER REPORT — average nonprofit time reconciling logic model data
4 — ARROWS TO PAY — causal claims every logic model must test with evidence
Six principles · Logic model practice
What separates a living logic model from a filed diagram
Not every logic model accumulates Arrow Debt. The six disciplines below are how practitioner-tested programs keep every arrow connected to evidence — cycle after cycle.
01
Design
Design backwards from impact
Start with the long-term change the program exists to create. Then outcomes. Then activities. Then outputs. Inputs last. Designing forward from activities traps the model in describing what staff already do.
△
If you start with a workshop and work forward, every workshop survives the design process — including the ones nothing depends on.
02
Test
Treat every arrow as a hypothesis
The arrow between mentorship and confidence is a claim. The arrow between confidence and job placement is another. Each one needs an instrument — a baseline question, an exit question, a follow-up measure. Untested arrows are debt.
△
A logic model with five arrows and zero instruments is a flowchart, not an evaluation framework.
03
Schema
Attach a data field to every box
Inputs become resource tags. Activities become milestone events. Outputs become automatic rollups. Outcomes become baseline-to-endline comparisons. Impact becomes longitudinal follow-up. The logic model is the schema — not documentation alongside it.
△
A box with no corresponding data field is a promise the program cannot keep.
04
Distinguish
Outputs are not outcomes
"We trained 25 people" is an output. "18 gained job-ready skills and 12 secured employment" is an outcome. Outputs confirm delivery. Outcomes confirm change. Funders want the second — most programs only instrument the first.
△
If the outcomes column reads like a list of activities, the program will fail its next funder review.
05
Assumptions
Surface every assumption
Participants must have reliable internet. Employers must value bootcamp credentials. Mentorship must address confidence, not just skill. Every assumption is an invisible arrow. Write them down; watch for when they break.
△
Programs discover broken assumptions 12 months late — in a final evaluation report, when it's too late to adjust.
06
Longitudinal
Test outcomes across time
Persistent participant IDs across intake, exit, 90-day, and 180-day follow-up. The same ID across every instrument. Without that, short-term outcomes cannot be linked to long-term ones — and the last two arrows of the model stay untested forever.
△
SurveyMonkey and Google Forms capture data points but cannot test arrows. Each survey is an isolated event, not a linked observation.
What is a logic model?
A logic model is a visual framework that maps the causal pathway from what a program invests (inputs) to what it does (activities), what it produces (outputs), what changes for participants (outcomes), and the long-term transformation it contributes to (impact). Popularized by the W.K. Kellogg Foundation as a program planning and evaluation tool, it is now among the most widely required frameworks in nonprofit grant applications and government funding.
Unlike a mission statement or strategic plan, a logic model is structured around testable claims. Each arrow between columns is a hypothesis: this input enables this activity, this activity produces this output, this output contributes to this outcome. A logic model that names those arrows without collecting evidence to test them is the Arrow Debt in visual form — a diagram that looks rigorous but proves nothing.
Sopact Sense treats the logic model as the data schema, not the deliverable. Every outcome in the model maps to a specific question, instrument, and participant ID from the day the program opens — which is the only way the arrows ever get tested.
What is a logic model framework?
A logic model framework is the structured five-column format — Inputs, Activities, Outputs, Outcomes, Impact — that nonprofits, foundations, and public programs use to articulate program theory. The framework was formalized in the W.K. Kellogg Foundation Logic Model Development Guide and has been adopted by the CDC, USAID, United Way, and most major foundations.
The framework is not a template to fill in. It is a discipline: every box must justify its existence by connecting to a measurable change. Generic templates in Word documents or PowerPoint slides encourage treating the framework as a compliance artifact — boxes filled, arrows drawn, model saved as PDF. A logic model template built for measurement instead treats the framework as a data schema, where each column is connected to a question in the instrument and a field in the participant record before program enrollment opens.
Platforms like SurveyMonkey and Google Forms treat the logic model framework as a document that lives outside the data. The framework and the measurement system live in different tools — which is how the Arrow Debt accumulates.
What are the 5 components of a logic model?
A logic model has five components: Inputs (resources invested), Activities (what the program does with those resources), Outputs (the direct, countable products of activities), Outcomes (the changes in knowledge, skills, behavior, or conditions for participants), and Impact (the long-term systemic change the program contributes to). Some versions add a Situation or Problem column at the far left; the W.K. Kellogg framework treats these five as core.
The critical distinction sits between outputs and outcomes — and it is where most logic models fail. "We trained 25 people" is an output. "18 participants gained job-ready skills and 12 secured employment within six months" is an outcome. Funders want outcomes. Most organizations track outputs because their data systems only capture what was delivered, not what changed. The Arrow Debt between outputs and outcomes is the single largest debt in the logic model — and the one most programs never pay.
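The distinction is easy to see in code. Here is a minimal sketch using made-up participant records and field names (not Sopact's API): an output counts delivery events, while an outcome measures change per participant.

```python
# Illustration of outputs vs outcomes on hypothetical participant records.
# An output counts what was delivered; an outcome measures what changed.

records = [
    {"id": "P01", "completed": True,  "baseline_conf": 2, "exit_conf": 4,    "employed_6mo": True},
    {"id": "P02", "completed": True,  "baseline_conf": 3, "exit_conf": 3,    "employed_6mo": False},
    {"id": "P03", "completed": False, "baseline_conf": 2, "exit_conf": None, "employed_6mo": False},
]

# Output: confirms delivery happened.
trained = sum(r["completed"] for r in records)

# Outcomes: confirm change per participant, which requires baseline AND exit data.
gained_skills = sum(
    1 for r in records
    if r["exit_conf"] is not None and r["exit_conf"] > r["baseline_conf"]
)
employed = sum(r["employed_6mo"] for r in records)

print(f"Output:  {trained} trained")
print(f"Outcome: {gained_skills} gained confidence, {employed} employed at 6 months")
```

Note that the outcome calculation fails silently for P03, who has no exit measurement: a data system that only captures delivery can always report the output but can never report the outcome.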
In Sopact Sense, every component of the logic model maps to structured data at the point of collection: inputs become resource tags, activities become milestone events linked to participant IDs, outputs become automatic rollups, outcomes become baseline-to-endline comparisons across disaggregation variables, and impact becomes longitudinal follow-up tied to the same persistent IDs.
Logic model examples: workforce, health, education, social work
The fastest way to understand a logic model is to see one. Below are four sector examples showing the five components in context. Each example uses the same backward-design logic: impact first, then outcomes, then activities, then outputs, then inputs.
Workforce development example. Inputs: $180K budget, 3 FTE staff, curriculum licenses, employer partnerships, Sopact Sense platform. Activities: 12-week coding bootcamp with mentorship and portfolio workshops. Outputs: 120 enrolled, 85% completion rate, 48 mock interviews conducted. Outcomes: Confidence score improvement from 2.1 to 4.3, 12 participants employed within six months at median wage $52K, professional network size increase measurable via LinkedIn connection growth. Impact: Economic mobility in underserved communities through living-wage tech employment.
Public health example (smoking cessation). Inputs: Public health nurse FTE, counseling curriculum, nicotine replacement therapy funding, referral partnerships with primary care clinics. Activities: Eight weekly group counseling sessions, individual coaching calls, NRT distribution. Outputs: 60 participants enrolled, 45 completing the eight-week program, 320 coaching calls logged. Outcomes: Quit attempts at 90 days, verified abstinence at six months, reduced cigarettes-per-day among non-quitters. Impact: Reduced tobacco-related morbidity in the target population over 5–10 years.
Education example (literacy program). Inputs: Reading specialists, evidence-based curriculum, classroom space, assessment tools. Activities: Small-group tutoring four times per week, parent engagement workshops, progress monitoring. Outputs: 200 students enrolled, 3,200 tutoring hours delivered, 95% attendance rate. Outcomes: Grade-level reading proficiency gains, reduced reading gap for English learners, teacher-reported classroom reading behavior changes. Impact: Third-grade reading proficiency as a predictor of long-term academic and economic outcomes.
Social work example. Inputs: Licensed social workers, case management software, community partnerships. Activities: Intake assessment, service coordination, safety planning, referral management. Outcomes: Housing stability at 90 days, employment engagement, reduced crisis service utilization, self-reported safety and wellbeing measures. Impact: Breaking intergenerational cycles of housing insecurity. Social work logic models differ from training-based models because outcomes are often non-linear — stabilization is itself an outcome, not a stepping stone to it. A nonprofit program evaluation system needs to capture that non-linearity rather than forcing social work into a workforce-style outcome chain.
Step 1: Design your logic model backwards from impact
Most logic model failures begin before the first arrow is drawn, because teams design left to right — starting with activities ("We run workshops") and then trying to connect those activities to outcomes. That order traps the program in describing what it already does instead of building evidence for what it should produce.
Backward design reverses the order. Start with impact: what long-term systemic change does the program exist to create? Then identify the outcomes required for that impact to occur. Then design the activities that produce those outcomes. Then define the outputs that prove activities happened. Then, finally, list the inputs needed. Every component in the model now justifies its existence by pointing forward to the next — and by the time inputs are listed, the causal chain is already in place.
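Backward design can be enforced as a validation rule rather than left as a habit. The sketch below (illustrative structure and names only, not a Sopact feature) encodes a model as data and flags any activity that no outcome depends on, the exact situation the workshop example above describes.

```python
# Backward design as a validation rule: every activity must point forward to at
# least one outcome. Model structure and names are illustrative, not an API.

model = {
    "impact": "Economic mobility through living-wage tech employment",
    "outcomes": {
        "job_ready_skills": {"feeds": "job_placement"},
        "job_placement": {"feeds": "impact"},
    },
    "activities": {
        "bootcamp": {"produces": ["job_ready_skills"]},
        "mentorship": {"produces": ["job_placement"]},
        "legacy_workshop": {"produces": []},  # survives by habit; nothing depends on it
    },
}

def orphan_activities(m):
    """Activities that justify no outcome — candidates for redesign or removal."""
    return [name for name, act in m["activities"].items() if not act["produces"]]

print(orphan_activities(model))
```

Running the check surfaces `legacy_workshop` as the one activity accumulating Arrow Debt; the design-time decision is then explicit: connect it to a measurable outcome or cut it.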
Three nonprofit shapes · one failure mode
Whichever way your nonprofit is shaped — Arrow Debt forms the same way
Three common archetypes, three different operational realities, one shared structural failure: the last two arrows of the model — outputs→outcomes and outcomes→impact — are where the evidence stops arriving.
A multi-program nonprofit runs workforce, youth, and community health under one roof. Each program has its own logic model — often drawn by a different team, in a different template, with incompatible outcome language. The result: three logic models, three survey tools, three participant ID schemas, and no way to roll up outcomes to the organization level. The Arrow Debt in this archetype compounds horizontally — across programs — as well as over time.
01
Program design
Each program drafts a logic model in a separate Word template
02
Collection
Separate survey tools, separate spreadsheets per program
03
Organization report
CEO asks for org-wide outcomes — 3 weeks of reconciliation
Traditional stack
×Each program builds its logic model in isolation, with different outcome language
×Participant IDs reset per program — no cross-program view of shared beneficiaries
×Outcomes reconciled manually at year-end for the annual report
×Board asks cross-program questions — program directors cannot answer
With Sopact Sense
✓Shared outcome taxonomy across all programs — inputs, activities, outputs, outcomes mapped once
✓Organization-wide participant IDs — one person, one ID across programs they touch
✓Board-ready rollups generated continuously — zero reconciliation step
✓Every arrow in every logic model tested with its corresponding instrument
A partner-delivered network — headquarters defines the logic model, implementing partners deliver the program locally. The HQ logic model is elegant. The partners' spreadsheets are incompatible. Partner A tracks employment at 30 days; Partner B tracks it at 90 days; Partner C doesn't track it at all. The logic model has four clean arrows. The data pipeline has fifteen broken ones. Arrow Debt is paid by HQ but accumulates in the network.
01
HQ designs
One logic model, shared outcome indicators across all partners
02
Partners collect
Each partner uses their own tool, own cadence, own definitions
03
HQ reports
Outcomes do not aggregate — every partner's data needs a translator
Traditional stack
×HQ logic model exists as a PDF; partners build their own local spreadsheets
×Partners collect outcomes at different cadences and with different definitions
×HQ M&E team spends 60% of time translating partner data, 40% analyzing it
×Funder-facing impact report lags the partner cohort by 6–12 months
With Sopact Sense
✓Single shared instrument — partners collect into the same schema as HQ
✓Network-wide participant IDs — HQ sees every beneficiary across every partner
✓Partner-level views preserved; HQ rollups automatic — no translation step
✓Every arrow in the HQ logic model is testable across every partner simultaneously
A single-program nonprofit — one bootcamp, one cohort, one logic model — is where every arrow of debt hurts the most. The organization has no "other programs" to hide behind. If outputs-to-outcomes cannot be proven, the whole case for the program collapses. The good news: single-program nonprofits have the clearest path to a living logic model, because there is only one model to instrument properly. The Arrow Debt is visible in full — and payable within one cycle.
01
Design
One model, built backwards from the program's singular impact claim
02
Run
Every cohort tests every arrow — baseline, exit, follow-up
03
Iterate
Evidence revises the model before the next cohort opens
Traditional stack
×Logic model lives in a Word doc; surveys live in Google Forms; attendance in Excel
×Outputs tracked, outcomes estimated; follow-up rarely happens at all
×Funder asks "did the program work?" — director points to attendance stats
×Next cohort runs unchanged because no evidence came back to revise the model
With Sopact Sense
✓Logic model, instruments, and tracking live in one connected data architecture
✓Every cohort's arrows tested with the same instruments — comparable across cycles
✓Funder sees outcome evidence with provenance — not estimates from attendance
✓Next cohort design informed by last cohort's evidence — the model actually learns
The Arrow Debt begins the moment an activity enters the model that does not connect to a specific outcome. If a program offers a workshop series because "we have always run workshops," and no outcome in the model depends on that workshop, the workshop is accumulating debt. Either redesign it so it produces a measurable outcome, or remove it from the model. Generic templates in Word or PowerPoint encourage keeping every activity — there is no cost to leaving a box in the diagram. A logic model template designed for measurement forces every box to connect to a data field, which is the only discipline that prevents Arrow Debt from forming.
Step 2: Every arrow is a hypothesis — design the data to test it
A logic model is not a flowchart. It is a set of causal hypotheses, written in visual form. The arrow between mentorship pairing and professional confidence is a claim. The arrow between professional confidence and job placement is a claim. If the program runs for a year without collecting data that could confirm or disconfirm those claims, the Arrow Debt doubles — the program now has both an untested hypothesis and a year of activity that cannot be evaluated.
Every arrow in a strong logic model should map to at least one survey question, milestone event, or outcome measurement. A question like "How confident are you in your technical interview skills?" asked at intake and again at exit is the instrument that tests the activity-to-confidence arrow. A field logging each mentor session is the instrument that tests the dosage-to-outcome arrow. In Sopact Sense, the logic model is the data schema — when a program manager adds an outcome to the model, the platform generates the question and attaches it to the instrument before the first participant enrolls. There is no separate "build the survey" step.
The alternative is the common pattern: logic model written in Word, survey built in Google Forms, participant tracking in Excel, analysis in Tableau. Four tools, four participant ID schemas, zero way to trace an arrow back to evidence. This is the structural reason nonprofits spend 11–14 days reconciling data per reporting cycle instead of analyzing it.
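The arrow-to-instrument mapping can itself be written down and audited. A minimal sketch, with hypothetical instrument names: list the model's arrows, record which instruments test each one, and the untested remainder is the Arrow Debt.

```python
# The logic model as data: each arrow between columns is a hypothesis, and each
# hypothesis needs at least one instrument. Instrument names are illustrative.

arrows = [
    ("inputs", "activities"),
    ("activities", "outputs"),
    ("outputs", "outcomes"),
    ("outcomes", "impact"),
]

instruments = {
    ("inputs", "activities"):  ["budget and milestone log"],
    ("activities", "outputs"): ["attendance rollup"],
    ("outputs", "outcomes"):   ["confidence question at intake and exit"],
    ("outcomes", "impact"):    [],  # no longitudinal follow-up instrument yet
}

# Arrow Debt: every arrow with no instrument attached.
arrow_debt = [a for a in arrows if not instruments.get(a)]
print("Untested arrows:", arrow_debt)
```

An audit like this takes minutes and turns the abstract "pay down the Arrow Debt" into a concrete to-do list: here, the outcomes-to-impact arrow needs a follow-up instrument before the next cohort opens.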
Step 3: Choose logic model software that treats the model as data, not documentation
A search for "logic model software" returns a mix of diagramming tools (Lucidchart, Visio, Miro, Canva), document platforms (Word templates, Google Docs), and methodology tools (MissionMet, DoView). All of them share a common gap: they help draw the diagram but do nothing to collect, link, or analyze the data that tests it. The output is a file, not a measurement system.
Sopact Sense is not a diagramming tool. It is a data collection platform where the logic model structure is the schema. Every outcome defined in the model becomes a question in the instrument, a column in the participant record, a variable in the disaggregation analysis, and a row in the funder report — automatically. The logic model and the measurement system are the same system, not two documents that need reconciliation at reporting time.
Tool comparison · Logic model software
Diagramming tools draw the model. Measurement architecture tests it.
Most "logic model software" is diagramming software with program-planning templates. The arrows look elegant and the diagram exports beautifully — but none of these tools collect the evidence that would test whether the arrows actually hold.
Risk 01
Disconnected tool stack
Logic model lives in Word. Surveys live in Google Forms. Participant tracking lives in Excel. Analysis lives in Tableau. Four tools, zero link between them.
The arrow-to-evidence chain can never be drawn.
Risk 02
No persistent IDs
Intake responses cannot be matched to exit responses, and exit responses cannot be matched to follow-up data. Each survey is an isolated event.
Outcomes→impact arrow is untestable in principle.
Risk 03
Retrofit at reporting time
Team spends 80% of a reporting cycle cleaning and merging spreadsheets, 20% analyzing. The logic model gets rebuilt each cycle to match what data happens to exist.
The model becomes a rationalization, not a hypothesis.
Risk 04
No longitudinal view
Short-term outcomes are captured at exit. 180-day outcomes are never captured. The long-term outcomes column is theoretical in perpetuity.
The full Arrow Debt is never paid down.
Logic model tooling · head-to-head
Diagramming tools vs. measurement architecture
Capability · Diagramming tool (Lucidchart · Visio · Miro · Word) · Sopact Sense (measurement-first architecture)

Framework structure

Draw the 5-column diagram — Inputs → Activities → Outputs → Outcomes → Impact.
Diagramming tool: Yes — core capability. Templates, shapes, arrows, color coding.
Sopact Sense: Yes — and it is the schema. The five columns become the data structure.

Multiple templates (Kellogg, CDC, USAID) — pre-built formats for common funder requirements.
Diagramming tool: Yes. Template libraries available.
Sopact Sense: Yes — all outputs render to funder formats. Data structure is independent of output format.

Export to PDF / PowerPoint — share the diagram with funders and boards.
Diagramming tool: Yes — primary output. The export is the deliverable.
Sopact Sense: Yes — plus live dashboards. Static export and live view from one source.

Data architecture (where Arrow Debt forms)

Attach a data field to every box — inputs become tags; outputs become rollups.
Diagramming tool: Not supported. Diagramming tools only hold shapes, not data.
Sopact Sense: Built in. Every box connects to a field, question, or rollup.

Collect data against the model — intake forms, surveys, milestone tracking.
Diagramming tool: Not the tool's job. Requires a separate survey platform.
Sopact Sense: Native collection. Forms generated from the logic model — not built separately.

Persistent participant IDs — same person, same ID across every instrument.
Diagramming tool: Not applicable. No participant data exists in diagramming tools.
Sopact Sense: Assigned at first contact. IDs persist across intake, exit, follow-up.

Longitudinal follow-up — 90-day, 180-day, one-year outcome capture.
Diagramming tool: Not applicable. Diagram is static; no data pipeline exists.
Sopact Sense: Automated. Follow-up surveys sent and linked to the same ID.

Analysis — testing the arrows

Baseline-to-endline comparison — quantify change for the outputs→outcomes arrow.
Diagramming tool: Manual — in a separate tool. Export data, build comparison in Excel or Tableau.
Sopact Sense: Automatic. Pre/post view per outcome, per cohort, per segment.

Disaggregation by segment — outcomes by gender, age, program type, geography.
Diagramming tool: Requires data export. Must pivot data manually in another tool.
Sopact Sense: Built in. Disaggregation structured at collection, not retrofit.

Qualitative evidence synthesis — open-ended responses, interview notes, narrative.
Diagramming tool: Not the tool's job. Qualitative data lives elsewhere and is rarely analyzed.
Sopact Sense: AI-assisted theming. Themes form across responses as data arrives.

Reporting — the arrows in action

Time from program end to funder report — how long does reconciliation add to the cycle?
Diagramming tool: 11–14 days average. Most time spent matching data across tools.
Sopact Sense: Report-ready at program close. No reconciliation step — data already linked.

Answer "did the program work?" on demand — board member or funder asks in the middle of a cohort.
Diagramming tool: Requires a new analysis project. Spreadsheet pull, manual merge, days to answer.
Sopact Sense: Answer in the same conversation. Live dashboard, every arrow tested against current data.
Every row above is an arrow on your logic model diagram. The question is which ones get tested with evidence and which ones accumulate as Arrow Debt.
The choice is not which tool draws the better diagram. The choice is whether the diagram connects to evidence. A diagramming tool plus a survey tool plus a spreadsheet will never test an arrow the same way a single data architecture will.
This is the difference between diagramming software and measurement architecture. Diagramming software helps produce the deliverable. Measurement architecture ensures the deliverable is backed by evidence. Most programs pay for the first and skip the second — which is how the Arrow Debt becomes institutional.
Step 4: Test outcomes longitudinally — a logic model is a living hypothesis
A static logic model made once and filed is worthless for learning. A living logic model is tested at each program cycle, updated when evidence breaks an arrow, and extended with longitudinal follow-up to catch outcomes that only emerge months or years after program exit.
Longitudinal testing requires persistent participant IDs. The same ID at intake, exit, 90-day follow-up, 180-day follow-up, and one-year follow-up. Without persistent IDs, follow-up outcomes cannot be linked back to intake data — and the arrow between short-term outcomes and long-term outcomes cannot be tested. This is the specific failure mode of SurveyMonkey, Qualtrics, and Google Forms at the logic model level: each survey is an isolated event, not a linked observation in a longitudinal record. They can capture a data point but cannot test a causal arrow.
In Sopact Sense, unique stakeholder IDs are assigned at first contact and persist across every instrument, every cycle, and every follow-up. When a board asks whether short-term confidence gains predicted long-term employment, the data is already linked — no reconciliation step, no manual matching of spreadsheets. The arrow between short-term and long-term outcomes can actually be tested because the ID is the same across both measurements.
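The mechanics behind that claim fit in a few lines. In the sketch below (hypothetical records, standard library only), intake, exit, and 180-day data are keyed by the same participant ID; only the rows where all three link can test the short-term-to-long-term arrow.

```python
# Why persistent IDs matter: only records linked by one ID across instruments
# can test the "short-term gain → long-term outcome" arrow. Data is hypothetical.

intake  = {"P01": {"confidence": 2}, "P02": {"confidence": 3}}
at_exit = {"P01": {"confidence": 4}, "P02": {"confidence": 4}}
day_180 = {"P01": {"employed": True}}  # P02 not yet reached for follow-up

linked = []
for pid in intake:
    linked.append({
        "id": pid,
        "gain": at_exit[pid]["confidence"] - intake[pid]["confidence"],
        "employed_180d": day_180.get(pid, {}).get("employed"),
    })

# The long-term arrow is testable only where follow-up linked back to intake.
testable = [r for r in linked if r["employed_180d"] is not None]
print(len(testable), "of", len(linked), "records can test the long-term arrow")
```

With disconnected survey tools there is no shared key, so the join above is impossible in principle: every response is an anonymous row, and the last two arrows of the model stay untested no matter how much data is collected.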
Step 5: Common logic model mistakes — and how to avoid them
Five failure patterns account for most Arrow Debt.

1. Activities listed as outcomes. "Delivered 25 workshops" appears in the outcomes column. This is an output. Outcomes measure change in participants, not actions by staff.
2. Missing assumptions. The model omits the conditions that must hold for the arrows to work — employer partner engagement, labor market stability, participant commitment capacity. When an assumption breaks, the program does not know which arrow failed.
3. Retrofit outcomes. Outcomes are written after the program is already running, to match data the team happens to have. The model becomes a rationalization, not a hypothesis.
4. Disaggregation added at reporting time. The intake form never captured gender, age, or geography, so outcomes by subgroup cannot be reconstructed. The equity analysis the funder asked for cannot be produced.
5. No follow-up. The model's "long-term outcomes" column has no data because no one collected it. The 180-day column is theoretical in perpetuity.
Each of these failures is an unpaid arrow. Each unpaid arrow compounds — a model with five unpaid arrows in cycle one becomes a model with fifteen unpaid arrows by cycle three, because old debt never gets paid and new debt accumulates with every program iteration. Strong program evaluation practices pay arrow debt continuously — every outcome is tested at the end of each cycle, every assumption is checked against current conditions, and the model is revised based on what the evidence shows.
Logic model vs theory of change — which do you need?
A logic model describes a program. A theory of change explains why the program should work. They are not synonyms. Most funders require a logic model in grant applications because it summarizes program design in a compact visual. Program staff and M&E practitioners need a theory of change because it makes the causal reasoning explicit — not just which activities produce which outcomes, but why they should, and what must be true for the arrows to hold.
Organizations that treat the two tools as interchangeable end up with logic models that are too verbose for funder communication and theories of change that are too shallow for program learning. Use the logic model to communicate design. Use the theory of change to design measurement. Build both from the same data architecture so the arrows in the logic model are the same arrows being tested in the theory of change.
Frequently Asked Questions
What is a logic model?
A logic model is a visual framework that maps the causal pathway from program inputs through activities, outputs, outcomes, and impact. Popularized by the W.K. Kellogg Foundation, it is among the most widely required program evaluation frameworks in nonprofit and public-sector grant applications. Sopact Sense treats the logic model as a data schema — every outcome in the model maps to a structured question and participant ID from day one.
What is a logic model framework?
A logic model framework is the five-column structure — Inputs, Activities, Outputs, Outcomes, Impact — used to articulate program theory in nonprofits, foundations, and public programs. The framework was formalized by the W.K. Kellogg Foundation and adopted by the CDC, USAID, and most major funders. It is a discipline, not a template: every box must connect to a measurable change for the framework to hold.
What are the 5 components of a logic model?
The five components of a logic model are Inputs (resources invested), Activities (what the program does), Outputs (direct countable products like sessions delivered or participants enrolled), Outcomes (changes in knowledge, skills, behavior, or conditions), and Impact (long-term systemic change). The distinction between outputs (what the program did) and outcomes (what changed for participants) is the most critical — and most commonly missed.
What is a logic model example?
A workforce logic model example: Inputs — $180K budget, 3 staff, curriculum. Activities — 12-week coding bootcamp with mentorship. Outputs — 120 enrolled, 85% completion. Outcomes — confidence score improvement from 2.1 to 4.3, 12 participants employed within six months. Impact — economic mobility through living-wage tech employment. Every arrow between columns is a hypothesis the program must test with data.
What is a logic model in social work?
A logic model in social work organizes case-based intervention programs into the same five-column structure — inputs, activities, outputs, outcomes, impact — but outcomes are often non-linear because stabilization is itself an outcome, not a step toward another outcome. Housing stability at 90 days, employment engagement, reduced crisis service utilization, and self-reported safety are common social work outcome categories.
What is the difference between outputs and outcomes in a logic model?
Outputs are the direct countable products of program activities — sessions delivered, participants served, hours of instruction. Outcomes are the changes in knowledge, skills, behavior, or conditions that result from those activities. "We trained 25 people" is an output. "18 participants gained job-ready skills and 12 secured employment" is an outcome. Funders want outcomes; most programs only track outputs because their data systems capture delivery, not change.
What is The Arrow Debt?
The Arrow Debt is the accumulated debt of untested causal claims in a logic model — where every arrow between columns (inputs → activities → outputs → outcomes → impact) is a hypothesis the program drew but never tested against evidence. The debt compounds across program cycles: unpaid arrows from year one are still unpaid when year two's arrows are added. Sopact Sense prevents Arrow Debt by connecting every outcome in the model to a data field at the point of collection.
What is a logic model template?
A logic model template is a pre-structured form — usually in Word, PowerPoint, or PDF — with the five columns already drawn, ready to be filled in. Most templates are compliance artifacts rather than measurement tools. A nonprofit logic model template built for measurement connects each column to a data field so the template becomes a schema instead of a deliverable.
What is logic model software?
"Logic model software" typically refers to diagramming tools like Lucidchart, Visio, Miro, or Canva, or methodology platforms like MissionMet and DoView. These tools help draw the logic model diagram but do not collect, link, or analyze the data that tests it. Sopact Sense is a different category — a data collection platform where the logic model is the schema, not documentation produced alongside a separate measurement system.
What is the difference between a logic model and a theory of change?
A logic model describes the program — what it uses, what it does, what it produces. A theory of change explains why the program should work — the causal mechanisms, assumptions, and contextual conditions that must hold for activities to produce outcomes. Logic models serve funder communication; theories of change serve internal learning. Strong programs use both, built from the same data architecture.
How do you create a logic model?
Design backwards from impact. Start with the long-term change the program exists to create, then identify required outcomes, then design activities that produce those outcomes, then define outputs that prove the activities happened, then list inputs needed to support the activities. Connect every component to a data field before program enrollment opens — this is the discipline that prevents Arrow Debt. A program evaluation system built on this principle produces models that actually generate evidence.
How much does logic model software cost?
Diagramming tools range from free (Google Docs, Canva free tier) to $10–$50 per user per month (Lucidchart, Visio, Miro). Methodology platforms like MissionMet and DoView run $50–$500 per user annually. These prices buy the diagram, not the measurement system. Sopact Sense, which replaces the fragmented stack of diagramming tool + survey tool + tracking spreadsheet + reporting tool, starts at $1,000 per month for unlimited users and includes the data architecture that tests the model's arrows.
What are logic model assumptions?
Logic model assumptions are the conditions that must hold for the arrows between columns to work — employer partners must engage, labor market conditions must be stable, participants must commit the required time, curriculum must be culturally appropriate. Most programs list assumptions as a side note and never test them. When evidence contradicts an assumption, strong programs update the model. Weak programs discover the broken assumption 12 months later in a final evaluation report.
Ready when you are
Stop filing logic models. Start testing them.
Sopact Sense is the data collection platform where the logic model is the schema — not documentation sitting alongside a separate measurement system. Every arrow gets an instrument. Every instrument links to a persistent participant ID. Every cycle pays down the Arrow Debt instead of adding to it.
Persistent stakeholder IDs assigned at first contact — intake through follow-up linked automatically
Every outcome in your model maps to a structured question before enrollment opens
Longitudinal follow-up (30/90/180-day) tied to the same ID — long-term arrows finally testable