
Theory of Change: Model, Components & Training

Build a theory of change model that drives decisions. Learn the components, create diagrams, and compare with logic models. A five-step framework with real examples.

Updated May 1, 2026

A theory of change explains how a program creates change.

It maps the steps from a program's activities to the outcomes those activities are meant to produce, and names what has to be true for the steps to connect.

This guide explains the framework in plain terms: what each of the six components measures, why the named assumptions are the part that matters most, and how to recognize whether your theory of change holds when data starts arriving. Worked examples come from workforce training programs, education initiatives, and impact funds. No prior background needed.

Example: a workforce training program
A claim
"Our training helps people get jobs."

A statement. No structure. Nothing to test.

A pathway
Training → Skills → Job placement

Steps named. But why each step leads to the next is left unsaid.

A theory of change
Training → Skills → Job placement
Because: employers recognize the credential, and we can check whether they do.

Each level adds one layer of structure. Only the third names why each step leads to the next, in a form data can confirm or refute.

The pathway

The causal pathway: problem to impact

Every theory of change diagram, regardless of sector or framework tradition, threads the same components in the same order: problem, inputs, activities, outputs, outcomes, impact. Underneath them runs a separate band of assumptions, each tied to one of the arrows. The diagram below shows the structure once; the definitions follow in the next section.

Causal pathway
Problem: who is affected, and why
01 Inputs: what you commit
02 Activities: what you deliver
03 Outputs: direct countable products
04 Outcomes: change in stakeholders
05 Impact: long-term systemic change

Assumption layer (one assumption per arrow): the need is real · resources arrive · activities are delivered · outputs are taken up · change persists · conditions hold

Each assumption is anchored to a monitoring question that can fail it. A theory of change becomes a theory only when these can be tested.

The same six components appear across most published frameworks (Center for Theory of Change, Better Evaluation, NPC). What separates a working framework from a documented one is the band underneath: each assumption tied to a monitoring question that can fail it. The five definitions in the next section unpack how the term itself is used, one variant at a time.

Definitions

Five definitions, one stack

The phrase theory of change is used in adjacent ways across sectors. Five definitions follow, each addressing a head-term variant: the concept, the meaning, the model, the framework, and the version used in monitoring and evaluation.

01 · CONCEPT

What is a theory of change?

A theory of change is a written explanation of how and why a program is expected to produce change in the people it serves. It names the problem the program addresses, the activities meant to address it, the outcomes those activities should produce, and the assumptions linking each step to the next.

Carol Weiss coined the term in the 1990s while working with the Aspen Institute's Roundtable on Comprehensive Community Initiatives. She framed it as a tool for making beliefs explicit enough that data could confirm or disconfirm them. Without that testable form, a theory of change is a narrative, not a theory.

The operational test for whether a framework qualifies: name three causal links and, for each, the specific condition under which the link would fail. If you cannot, the framework is decoration. If you can, the conditions become monitoring questions and the framework becomes something data can test.

02 · MEANING

What does theory of change mean?

Theory of change means a documented hypothesis about cause and effect inside a program. The word theory is used in its scientific sense: a structured account of why something happens, written in a form data can support or refute.

The phrase distinguishes it from a list of activities, a mission statement, or a logic model. Each of those describes what a program does. A theory of change explains why doing it produces the change. The if-then chain is the form. The because clause is the substance.

03 · MODEL

What is a theory of change model?

The theory of change model is the standard structure that organizes the explanation: inputs, activities, outputs, outcomes, impact, and the assumptions that connect them. Some versions include a problem statement at the front. Others split outcomes into short, medium, and long term.

The model itself is shared across sectors. Workforce training programs, early childhood literacy initiatives, and impact funds all use the same six-component structure. What varies is the content placed inside each component and the rigor with which each assumption is named.

The model becomes operational when each component is connected to a measurement instrument. Without that connection, the model is a diagram. With it, the model becomes a framework.

04 · FRAMEWORK

What is a theory of change framework?

A theory of change framework is the operational version of the model: the diagram plus the indicators that measure each component, the instruments that collect those indicators, and the monitoring questions that test each named assumption. The model is the picture. The framework is the picture plus everything that makes it testable.

A complete framework names a measurable indicator for each of the six components, attaches a monitoring question to each assumption, and connects every instrument to persistent stakeholder identifiers that link baseline, program, and follow-up records. The framework is operational when each part can be tested. It is decoration when the indicators are vague, the assumptions are missing, or the instruments are designed after the framework is signed off.

This is the difference between a theory of change framework and a theory of change diagram. The diagram is a picture. The framework is the picture plus the data layer that makes the picture testable.
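The diagram-versus-framework distinction can be made concrete in code. The sketch below is a hypothetical illustration, not a Sopact Sense API: all field names (indicator, instrument_question, monitoring_question) are invented. The point it demonstrates is the operational test above: a framework qualifies only when every component and every assumption carries a non-empty data hook.

```python
# Hypothetical sketch: a theory-of-change framework as data.
# A "diagram" names components and assumptions; a "framework" also
# wires each component to an indicator and an instrument question,
# and each assumption to a monitoring question that could fail it.

framework = {
    "components": [
        {"name": "activities", "indicator": "sessions delivered",
         "instrument_question": "How many sessions did you attend?"},
        {"name": "outcomes", "indicator": "job offer within 6 months",
         "instrument_question": "Have you received a job offer?"},
    ],
    "assumptions": [
        {"text": "Employers recognize the credential",
         "monitoring_question": "Did the employer ask about the credential?"},
    ],
}

def is_testable(fw):
    """Operational only if every component has an indicator wired to
    an instrument question, and every assumption has a monitoring
    question. Anything missing means the framework is decoration."""
    comps_ok = all(c.get("indicator") and c.get("instrument_question")
                   for c in fw["components"])
    asmp_ok = all(a.get("monitoring_question") for a in fw["assumptions"])
    return comps_ok and asmp_ok

print(is_testable(framework))  # True: every part has a data hook
```

A diagram without the wiring fails the same check: strip the instrument questions and `is_testable` returns False.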

05 · M&E

What is theory of change in monitoring and evaluation?

In monitoring and evaluation, the theory of change is the bridge that connects program design to the indicators and instruments. Each outcome stage in the theory becomes a measurable indicator. Each indicator becomes a question on a baseline, midline, or endline survey. Each named assumption becomes a monitoring question embedded in a mid-cycle check-in.

Without this connection, monitoring and evaluation produces aggregate counts that cannot test the theory. Output-heavy reports are increasingly discounted by funders for exactly this reason: the count tells the funder what you did, not whether the activity produced the change the theory predicted. With the connection in place, every cycle produces evidence the theory can be revised against.

The full mechanics of sequencing baseline, midline, and endline instruments against a theory of change are covered in the theory of change in monitoring and evaluation guide. The architecture for tracking the same individuals across waves is covered in the pre and post surveys guide.

Adjacent terms

Four related-but-different frameworks

A theory of change is often confused with terms that look similar on a whiteboard. Four short distinctions follow.

01 · LOGIC MODEL
Logic model

A left-to-right matrix describing what a program does: inputs, activities, outputs, outcomes. A logic model is descriptive. A theory of change is causal. The logic model says the cohort will receive twelve weeks of training. The theory of change says the training will produce a credential employers value, assuming employers continue to recognize it.

02 · LOGFRAME
Logical framework (logframe)

A formal four-by-four matrix used in international development, originating with USAID in the 1970s. The logframe formalizes a logic model into goal, purpose, outputs, activities, with verification means and assumptions written into separate columns. A theory of change can feed into a logframe; the two serve different reporting audiences.

03 · RESULTS FRAMEWORK
Results framework

USAID terminology for a hierarchical diagram of strategic objectives, intermediate results, and sub-results. A results framework focuses on the destination structure (what we want to achieve, broken into sub-targets). A theory of change focuses on the causal mechanism (why this combination of activities is expected to produce that result). The two are usually paired.

04 · IMPACT FRAMEWORK
Impact framework (IMP, IRIS+)

Standardized reporting structures used by impact investors and foundations. The Impact Management Project's Five Dimensions (Who, What, How Much, Contribution, Risk) and the IRIS+ catalog of indicators are alignment layers, not the theory itself. A theory of change is mapped to them at the reporting stage. The theory is internal; the framework is external.

Most watched · Fundamentals

An introduction to theory of change

A plain-English walk-through of what a theory of change is, what each of the six components actually measures, and how to tell a working framework from one that exists only to satisfy a grant application.

Presented by Unmesh Sheth.

Design principles

Six principles that decide whether the theory holds

The model is shared across sectors. The discipline is in how each component is built. Six principles, applicable to any framework regardless of sector, funder, or template tradition.

01 · STARTING POINT

Begin with the problem, not the activity

The activity is a hypothesis about how to address the problem.

Frameworks that begin with what the program does invert the causal logic. The problem statement, named precisely (who is affected, what causal conditions produce the problem, why existing approaches have not solved it), is the evidence base for why the program exists. The activity is one possible response, not the starting point.

Why it matters: programs that begin with the activity have no fallback if the activity does not produce the change.

02 · CATEGORY DISCIPLINE

Distinguish outputs from outcomes

If stopping the program makes the metric disappear, it is an output.

Outputs are direct countable products of activities (sessions held, materials distributed, certificates earned). Outcomes are observable changes in stakeholders that persist beyond the activity (confidence gained, behavior changed, status improved). The categories look adjacent on a diagram. They measure entirely different things.

Why it matters: output-heavy reports are increasingly discounted by funders who want to see change, not activity volume.

03 · ASSUMPTIONS NAMED

Name every assumption explicitly

Every arrow in the diagram carries an assumption.

Skills lead to confidence. Employers value the credential. Participants have transportation to the training site. Some of these will break. A framework that never names its assumptions cannot be improved when they fail. Each assumption gets a sentence in the document and a monitoring question in the data plan.

Why it matters: a broken unnamed assumption stays in the framework year after year because no one built the question that would catch it.

04 · INSTRUMENT ANCHORED

Anchor each component to an instrument

Indicators without instruments are not measurable.

Each of the six components needs a measurable indicator, and each indicator needs a question on an actual instrument: a baseline survey, a midline check-in, an endline interview, a follow-up call. The arrows in the diagram correspond to data flows, not just causal claims. If the instrument is not designed, the indicator is rhetoric.

Why it matters: frameworks signed off without instruments produce indicators no one can answer at year-end.

05 · EVIDENCE CADENCE

Update the theory with evidence, not on a calendar

A working theory is revised the moment evidence contradicts it.

The annual review cycle was an artifact of paper-based monitoring. With continuous instruments, assumptions can be tested as data arrives, and revisions belong at the cycle that produced the contradicting evidence, not on the calendar. A framework that only changes once a year accumulates dead assumptions.

Why it matters: the calendar update preserves the framework. The evidence update preserves the theory.

06 · DIAGRAM IS INDEX

Treat the diagram as the index, not the artifact

The poster is a navigation tool. The framework is everything underneath it.

The diagram on the wall is what most people see. The framework is the indicators, instruments, monitoring questions, and longitudinal data that sit underneath. Teams that polish the diagram and skip the layer underneath have a poster, not a theory. Teams that build the layer first can regenerate the diagram in minutes.

Why it matters: the diagram alone shows what you believe. The framework proves whether the belief holds.

Playlist opener · Five-step series

Designing a theory of change against data, not before it

The opener of a five-video playlist on building a theory of change while the program is running, rather than ahead of it. Walks through the operational sequence: collect under persistent identifiers from day one, let the framework take shape against arriving evidence, revise as assumptions are tested.

For fund managers, accelerator directors, and program evaluators who have used the front-loaded workshop pattern and want a different sequence.

Method choices

Six choices that decide whether the framework holds

A theory of change is a sequence of decisions. The same six rows decide whether the framework can be tested, regardless of sector or template. The first column names the choice. The middle two name what most teams do versus what works. The last names the consequence.

The choice
Broken way
Working way
What this decides

Where the framework starts

Activity-led versus problem-led drafting

Broken

Open with what the program does. Build the diagram backward from the activities. Treat the problem statement as a paragraph to write later. The result is a description of the program, not a theory of why it produces change.

Working

Open with the problem and the population. Describe the causal conditions producing the problem and what existing approaches have failed at. Only then introduce the program as one possible response, with named assumptions about why it would work.

Decides whether the framework has a fallback if the activity does not produce the change. The activity-led version has none.

How outcomes are defined

Aggregate count versus per-stakeholder change

Broken

Define outcomes as totals. Hours of training delivered, certificates issued, participants reached. Report the totals at year-end. The numbers are large, but whether the same people changed across the program goes unanswered.

Working

Define outcomes as observable changes in the same individuals across baseline, midline, and endline. Track each one under a persistent identifier. Aggregate at the cohort level only after individual-level change has been measured.

Decides whether you can answer "did these specific people change" versus "did the cohort have nice averages."
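The per-stakeholder approach can be sketched in a few lines. This is a hypothetical illustration (the IDs and scores are invented): records share a persistent identifier across waves, so change is computed per individual before anything is aggregated.

```python
# Hypothetical sketch: per-stakeholder change versus aggregate counts.
# Each record carries a persistent participant ID across waves.

baseline = {"p01": 2, "p02": 4, "p03": 3}   # e.g. confidence score at intake
endline  = {"p01": 4, "p02": 4, "p03": 5}   # same scale at exit

def individual_change(base, end):
    """Join waves on the persistent identifier; only IDs present in
    both waves can show change."""
    return {pid: end[pid] - base[pid] for pid in base if pid in end}

deltas = individual_change(baseline, endline)
improved = sum(1 for d in deltas.values() if d > 0)
print(deltas)     # {'p01': 2, 'p02': 0, 'p03': 2}
print(improved)   # 2: two of three participants improved
```

The cohort average moved either way; only the joined records say which individuals drove it, and that join is impossible if each wave used a different form with no shared identifier.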

How assumptions are tracked

Buried in narrative versus monitoring questions

Broken

Mention assumptions in the narrative section of the framework document. Do not list them. Do not connect them to instruments. When an assumption breaks, the framework looks fine; the program just somehow stops producing the outcome.

Working

List each assumption as a sentence. Tie each to a monitoring question on a mid-cycle check-in or follow-up call. When data shows the assumption breaking, the framework is revised at that cycle, not retrospectively.

Decides whether the framework can be improved when something breaks, or just keeps reporting against assumptions that no longer hold.
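The working pattern above amounts to pairing each assumption with a failure condition that data can trip. A minimal hypothetical sketch (the 30 percent threshold, field names, and responses are all invented for illustration):

```python
# Hypothetical sketch: an assumption as a monitoring question with an
# explicit failure condition, checked at each cycle, not at year-end.

assumption = {
    "text": "Employers recognize the credential as sufficient",
    "question": "Did the employer request evidence beyond the credential?",
    "fails_if": lambda yes_rate: yes_rate > 0.30,  # illustrative threshold
}

# Mid-cycle check-in responses (True = employer asked for more evidence)
responses = [True, True, False, True, False, True, True, False]
yes_rate = sum(responses) / len(responses)

if assumption["fails_if"](yes_rate):
    print("Assumption broken: revise this cycle")
else:
    print("Assumption holding")
```

Buried-in-narrative assumptions have no `fails_if`; nothing in the data plan can ever contradict them, which is exactly why they survive year after year.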

When the framework is built

Pre-program workshop versus iterative refinement

Broken

Run a multi-day workshop before the program starts. Sign off on the framework. Build data collection separately afterward. By the time the framework is finalized, the data architecture cannot answer its questions, and the framework cannot be revised without redoing the workshop.

Working

Sketch a draft framework. Build instruments alongside it. Refine the framework at each cycle as evidence arrives. Treat the framework as a working hypothesis rather than a deliverable, with cycle-level revisions documented as the program runs.

Decides whether the framework matches the data the program is collecting or sits parallel to it.

How indicators connect to data

Mapped at year-end versus wired at design

Broken

Define indicators in the framework. Hope the survey questions can be matched to them later. At year-end, build a spreadsheet that joins indicator names to whatever questions seem closest. Some indicators have no matching data; some have several with different wordings.

Working

Wire each indicator to a specific question on a specific instrument at design time. The instrument carries the indicator code. The data flows directly into the framework's reporting structure without retrospective matching. Year-end is read-only.

Decides whether your year-end is weeks of reconciliation or hours of analysis.
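Wiring at design time means the indicator code travels with the instrument question, so responses arrive pre-tagged. A hypothetical sketch (codes and questions are invented, not from any real catalog):

```python
# Hypothetical sketch: each instrument question carries its indicator
# code at design time, so year-end reporting is a lookup rather than
# a retrospective matching exercise.

instrument = [
    {"code": "OUT-01", "question": "Did you earn the credential?"},
    {"code": "OUT-02", "question": "Have you received a job offer?"},
]

# Responses arrive already tagged with the indicator code.
responses = [
    {"code": "OUT-01", "value": True},
    {"code": "OUT-02", "value": False},
    {"code": "OUT-01", "value": True},
]

def report(resps):
    """Year-end is read-only: group tagged responses by indicator code."""
    out = {}
    for r in resps:
        out.setdefault(r["code"], []).append(r["value"])
    return out

print(report(responses))  # {'OUT-01': [True, True], 'OUT-02': [False]}
```

In the broken pattern, the `code` field does not exist on the response; the join has to be reconstructed months later from question wordings that have drifted.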

Update cadence

Annual review versus continuous learning

Broken

Review the framework once a year. Document any changes in the year-end report. Carry the same framework through the next twelve months even when mid-cycle data has already shown that an assumption no longer holds. The calendar wins; the evidence waits.

Working

Review the framework when evidence contradicts it. Document the change at the cycle that produced the evidence. Treat the framework as a living artifact, with version notes that show what changed, when, and which data prompted the change.

Decides whether the framework reflects the program as it is or the program as it was at the last workshop.

Compounding effect

These choices compound in order. Pick the activity-first opening, and outputs disguise themselves as outcomes. Pick aggregate counts, and assumptions stay buried. Each broken choice closes off the visibility the next choice would need to be made well. The first row in this matrix is the one that controls all the others.

Worked example

A workforce training program, mid-cohort

A worked example shows how the principles play out in one program. Below, a workforce training program at the mid-cohort point: framework already drafted, instruments running, an assumption beginning to break.

Our theory of change said employer-recognized credentials would lead to placement within six months. We are at week eight of cohort three. Placement data from cohorts one and two shows participants getting interviews but not offers. The credential is recognized; employers are asking for portfolios that we never built into the curriculum. The framework is not wrong. The assumption underneath one of the arrows is. We need to know which one and what to do this cycle.

Workforce training program lead, mid-cohort cycle, urban Midwest US.

Quantitative axis: activity output
Sessions delivered, certificates issued, attendance rates. The activity layer of the framework, measured in counts.

Qualitative axis: outcome attribution
Why a participant got an offer or did not, in their own words. The mechanism layer of the framework, measured in interview transcripts.

Both axes are bound at collection.

What Sopact Sense produces here
  • Persistent identifier per participant: same record at intake, week-eight check-in, exit, and six-month follow-up. Change is measured at the individual level.
  • Assumption named in the framework: the assumption that credentials are sufficient is documented as a sentence and tied to a specific monitoring question.
  • Theme extraction from open responses: the phrase "portfolio" surfaces across employer and participant transcripts before week eight, flagged as a recurring theme.
  • Mid-cycle revision documented: the framework is updated at week nine of cohort three, not at year-end. Cohort four launches with portfolio sessions added.
Why most setups miss this
  • No persistent identifier across waves: intake is one form, exit is another, follow-up is a third. Matching is by typed name and email, which drift between waves.
  • Assumption is implicit: the framework lists outcomes but never names the assumption that credentials produce them. When the assumption breaks, no monitoring question catches it.
  • Open responses read at year-end: employer and participant transcripts are filed for later analysis. The portfolio theme surfaces in February, when the year-end report is being drafted, not in week eight, when it could still change cohort design.
  • Framework revised once a year: cohorts three and four run with the same broken assumption because the calendar review cycle has not arrived yet.
The architectural difference

The integration is structural, not procedural

In Sopact Sense, the persistent identifier, the assumption layer, and the theme extraction are wired into the data model from intake. The mid-cycle revision is not a separate exercise; it is a query against records that share an identity. In a stack of disconnected forms and exports, the same revision requires a manual reconciliation project that is rarely done in time to change anything.

The framework does not get smarter because the team is more diligent. It gets smarter because the data underneath it is structured to test it.

Across program shapes

The same architecture, three program shapes

The six-component model applies across sectors. What changes is where the cycles run, what counts as an outcome, and which assumptions break first. Three program contexts, each with its typical shape and what the architecture has to support.

EXAMPLE 01

Workforce training program

Cohort cycle, four to twelve waves

A workforce training program runs cohort after cohort, each twelve to twenty-four weeks long. The theory of change ties classroom instruction plus internships to placement outcomes at six and twelve months. Indicators include credential earned, interview rate, offer rate, retention at twelve months, wage at placement.

Without the architecture, intake is one form, exit is another, follow-up is a third. The cohort closes before anyone matches the records. By the time year-end analysis runs, two more cohorts have launched with the same curriculum. Mid-cycle correction is not impossible; it is just not built into the workflow, so it does not happen.

With the architecture, the same identifier carries each participant from intake through twelve-month follow-up. Open responses are read against named assumptions as they arrive. Curriculum changes for the next cohort are based on what cohort one and two showed, not on a year-end summary that arrives after cohort five has already started.

A specific shape

A 14-week welder upskilling program with a 90-participant cohort. Mid-cycle assumption review at week eight surfaces a recurring portfolio gap. Cohort four adds two portfolio sessions; offer rate moves from roughly 40 percent to roughly 65 percent across the next two cycles.

EXAMPLE 02

Education or nonprofit program

Multi-year, multi-site

An early childhood literacy program runs across thirty schools and three years. The theory of change ties teacher training plus parent engagement plus age-appropriate materials to reading proficiency at the end of grade three. Outcomes are measured against assessment scores, teacher fidelity logs, and parent participation rates.

Without the architecture, each school's data lives in its own spreadsheet. Some schools track parent engagement, others do not. Comparing across sites at year-end requires assembling thirty different files, each with different column conventions. Differences between sites get lost in the assembly rather than visible in the analysis.

With the architecture, every school operates on the same instrument structure. Site-level identifiers carry across years. The framework can be tested at the site level (does this school's pattern match the theory) and at the program level (does the cohort overall show the predicted gain). Assumption breaks become visible at the school they originated in.

A specific shape

A literacy initiative across 30 schools, three cohorts. Site-level analysis surfaces that schools with stable teacher rosters show roughly two-thirds of the cohort gain; schools with high turnover show one-third. The framework's assumption about teacher continuity moves from implicit to explicit, with teacher retention added as a monitoring indicator.

EXAMPLE 03

Impact fund or accelerator

Portfolio, fifteen to twenty-five investees

An impact fund holds a portfolio of fifteen to twenty-five investees across two or three thematic areas. Each investee has its own theory of change. The fund needs to roll those up into a portfolio-level narrative for limited partners, while preserving theme-specific drill-down for internal review.

Without the architecture, each investee reports in its own format. The fund team rebuilds a portfolio rollup spreadsheet every quarter, manually mapping each investee's indicators to IRIS+ and IMP categories. The rollup arrives just in time for the LP report and is obsolete by the next quarter, so the manual work compounds.

With the architecture, each investee operates on instruments structured against the same indicator catalog. Cross-portfolio rollup is automatic against IRIS+ codes; theme-specific drill-down is preserved beneath. The full architecture is in the pre and post surveys guide for cohort-level mechanics, and in the partner intelligence pillar for fund-level rollup.

A specific shape

A 22-investee impact fund with three sector themes. Cross-portfolio "stakeholders reached" indicator rolled up automatically against IRIS+ OI.6398; theme-specific drill-down preserved beneath. Quarterly LP report assembly time dropped from weeks to under three days in cycle two.
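The rollup mechanic is simple once every investee reports against the same indicator catalog. A hypothetical sketch (investee names, values, and the CUSTOM-01 code are invented; only the structure matters):

```python
# Hypothetical sketch: cross-portfolio rollup against a shared
# indicator catalog. Drill-down is the untouched per-investee rows.

investee_reports = [
    {"investee": "A", "code": "OI.6398", "value": 1200},  # stakeholders reached
    {"investee": "B", "code": "OI.6398", "value": 800},
    {"investee": "C", "code": "OI.6398", "value": 450},
    {"investee": "A", "code": "CUSTOM-01", "value": 35},  # fund's own indicator
]

def rollup(reports):
    """Sum each indicator code across the portfolio."""
    totals = {}
    for r in reports:
        totals[r["code"]] = totals.get(r["code"], 0) + r["value"]
    return totals

print(rollup(investee_reports))  # {'OI.6398': 2450, 'CUSTOM-01': 35}
```

When each investee reports in its own format, this one-line grouping becomes the quarterly spreadsheet-rebuilding exercise described above, because the shared `code` key does not exist until someone manually creates it.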

A short note on tools and resources

Where Sopact fits, and where it does not

The published resources from the Center for Theory of Change, Better Evaluation, and NPC are well-maintained and free. They cover the model thoroughly. They are the right starting point if you are writing a framework on paper. They are not measurement tools; they do not collect data, retain identifiers, or test assumptions.

Sopact Sense is the data layer underneath the framework. Persistent identifiers from intake through follow-up. Indicators wired to instruments at design time. Open responses scanned against named assumptions as they arrive. The framework gets smarter because the data underneath it is structured to test it. If you are at the stage of drafting the framework, the resources above are sufficient. If you are at the stage of testing it, the architecture is what matters.

FAQ

Theory of change questions, answered

Thirteen of the most-searched questions about theory of change, answered in plain language. Every answer mirrors the page's JSON-LD schema, so AI Overview and SERP rich results draw from the same source.

Q.01

What is a theory of change?

A theory of change is a written explanation of how and why a program is expected to produce change in the people it serves. It names the problem, the activities meant to address it, the outcomes those activities should produce, and the assumptions linking each step. Carol Weiss coined the term in the 1990s, framing it as a tool for making beliefs explicit enough that data could confirm or disconfirm them. Without that testable form, a theory of change is a narrative, not a theory.

Q.02

What is a theory of change model?

The theory of change model is the standard structure that organizes the explanation: inputs, activities, outputs, outcomes, impact, and the assumptions that connect them. Some versions include a problem statement at the front. Others split outcomes into short, medium, and long term. The model itself is shared across sectors. What varies is the content placed inside each component and the rigor with which each assumption is named.

Q.03

What is a theory of change framework?

A theory of change framework is the operational version of the model: the diagram plus the indicators that measure each component, the instruments that collect those indicators, and the monitoring questions that test each named assumption. The model is the picture. The framework is the picture plus everything that makes it testable. A framework without indicators or instruments is decoration; the data layer underneath is what makes it work.

Q.04

What does theory of change mean?

Theory of change means a documented hypothesis about cause and effect inside a program. The word theory is used in its scientific sense: a structured account of why something happens, written in a form that data can support or refute. The phrase distinguishes it from a list of activities, a mission statement, or a logic model. Each of those describes what a program does. A theory of change explains why doing it produces the change.

Q.05

What are the six components of a theory of change?

The six components are inputs, activities, outputs, outcomes, impact, and assumptions. Inputs are what you commit before activities begin. Activities are the designed interventions. Outputs are the direct countable products. Outcomes are observable changes in stakeholders. Impact is the long-term systemic change you contribute to. Assumptions are the conditions that must hold for one stage to lead to the next, and they are the component most often missing from a written framework.

Q.06

What is the difference between a theory of change and a logic model?

A logic model describes what a program does in a left-to-right matrix: inputs, activities, outputs, outcomes. A theory of change adds the causal explanation and the assumption layer underneath. The logic model says the cohort will receive twelve weeks of training. The theory of change says the training will produce a credential that employers value, assuming employers continue to recognize that credential and assuming participants can travel to the training site. The two are usually paired.

Q.07

What is theory of change in monitoring and evaluation?

In monitoring and evaluation, the theory of change is the bridge that connects program design to indicators and instruments. Each outcome stage becomes a measurable indicator. Each indicator becomes a question on a baseline, midline, or endline survey. Each named assumption becomes a monitoring question embedded in mid-cycle check-ins. Without that connection, monitoring produces aggregate counts that cannot test the theory. With it, every cycle produces evidence the theory can be revised against.

Q.08

Can you give a theory of change example?

A workforce training program example: inputs are funding, instructors, and a curriculum partner. Activities are twelve weeks of classroom instruction plus an employer-matched internship. Outputs are completed modules and earned credentials. Outcomes are participants placed in roles paying above the local living wage within six months of completion, and retained at twelve months. Impact is reduced reliance on public assistance across the cohort. Assumptions include employer recognition of the credential, participant transportation, and stable housing through the program.

Q.09

How is a theory of change diagram structured?

A theory of change diagram is read left to right. Problem and inputs sit on the left, activities and outputs in the middle, outcomes and impact on the right. Arrows mark the causal direction. The assumption layer runs underneath as a separate band, with each assumption tied to the arrow it supports. Some diagrams use vertical layout for poster format, but the left-to-right horizontal version is the most widely recognized and the one funders typically expect.

Q.10

How do you write a theory of change statement?

A theory of change statement is a single sentence that names the program, the population, the change expected, and the mechanism. The standard form: if we deliver this activity to this population, then this change will occur, because this mechanism is in place. The because clause is the part most teams skip. Without it, the statement describes activity, not theory. Writing this sentence first surfaces every assumption the longer document then has to defend.

Q.11

What is a theory of change template?

A theory of change template is a pre-structured grid or canvas with labeled boxes for each component. Templates from the Center for Theory of Change, NPC, and Better Evaluation are widely used and free to adapt. The template gets a team to a draft quickly. The template is not the same as the framework: the template provides the structure; the team has to supply the indicators, instruments, and monitoring questions that turn the structure into something data can test.

Q.12

How is theory of change used in education?

In education, a theory of change typically maps an instructional intervention to learner-level outcomes. Inputs are curriculum design, teacher time, and assessment instruments. Activities are sessions or modules. Outputs are completion and assessment scores. Outcomes are observed gains in skills, behavior, or self-reported confidence. Impact is sustained academic or career trajectory change. Assumptions cluster around teacher fidelity, learner attendance, home support, and the relevance of the assessment to the outcome being measured.

Q.13

Can I use Google Forms or SurveyMonkey to test a theory of change?

These tools collect responses well. They do not retain a persistent identifier across baseline, midline, and endline waves, so when you try to measure change at the individual level, the records have to be matched manually after the fact. That post-hoc matching is where most theory of change measurement breaks down. The forms do their job. The architectural gap is identity continuity across waves, which has to be solved upstream of the form layer.

Walk through your theory

Bring a diagram. Leave with a tested version.

A working session, not a demo. We sit with your current theory of change, name the assumptions you have not yet tested, and sketch the instrument that would test each one. The output is a revised diagram with data sources attached to outcomes.

Format

60-minute video call with Unmesh Sheth, founder of Sopact and author of this guide.

What to bring

Your current theory of change diagram, logframe, or results framework. A draft is fine.

What you leave with

A revised diagram with named assumptions and a sketch of the instrument that would test each one.

Training Series Theory of Change — Full Video Training
🎓 Nonprofit & Foundation Teams · ⏱ Self-paced · Free
Theory of Change Training Series — Sopact
Ready to build your own Theory of Change? Sopact Sense turns every outcome statement into a live measurement instrument — no spreadsheets, no manual reconciliation.
Watch Full Playlist