
Impact Survey Questions: 60+ Questions by Framework

Impact survey questions organized by Theory of Change, Logic Model, and IRIS+. 60+ examples covering outputs, outcomes, and long-term impact for nonprofit programs.

Updated May 9, 2026
USE CASE · IMPACT SURVEYS

Impact survey questions show whether a program changed the people it was meant to change.

Most surveys ask whether the experience was good. An impact survey asks whether something downstream is different now: knowledge, behavior, confidence, income, opportunity. Different question, different design, different answers.

This guide walks through 60+ impact survey questions organized by framework (Theory of Change, Logic Model, IRIS+) and by outcome level (output, short-term outcome, long-term outcome). Examples come from workforce training, education, health, and social enterprise programs. Use it as a starter library or as a check on a survey you already wrote.

  • 01 · Why impact surveys differ from satisfaction surveys
  • 02 · 60+ questions organized by framework
  • 03 · Pre-program vs post-program pairs
  • 04 · Likert scales, open-ended prompts, behavioral items
  • 05 · Six design principles
  • 06 · How Sopact analyzes open-ended responses
OUTCOME LEVELS

Every impact question lives at one of three outcome levels.

Confusing the levels is the most common impact-survey design failure. Output questions count what was delivered. Short-term outcome questions measure what changed during or right after the program. Long-term outcome questions measure whether that change held and translated into different lives. Each level needs a different question shape.

01 · Output: what the program delivered
02 · Short-term outcome: what changed for the participant
03 · Long-term outcome: whether the change held

Between the levels sits an assumption layer: the logic that must hold for one level to produce the next.

A working impact survey holds the same person across all three levels and lets the data show whether outputs translated into outcomes.

Three levels, three question shapes. Output questions count. Outcome questions ask what is now different. Long-term questions ask whether it lasted. Source: Theory of Change practice (Weiss 1995, Kellogg 2004), IRIS+ outcome taxonomy.

DEFINITIONS

Impact Survey Questions: terms and meaning

What is an impact survey?

An impact survey is a structured questionnaire that measures whether a program changed the people who went through it. The questions ask about knowledge, behavior, confidence, skill, opportunity, or income: not about whether the experience felt good.

Impact surveys differ from customer satisfaction surveys in three ways. They ask about change rather than experience. They are administered at multiple points in time so the change can be observed. And they connect to a program theory that names what change is expected and why.

Impact survey meaning

An impact survey is the measurement instrument for a program's theory of change. The theory of change names the outcome the program is supposed to produce; the impact survey asks whether that outcome happened.

Reporters sometimes use impact survey to mean any survey conducted by an impact organization. In rigorous practice the term has a narrower meaning: a survey designed to detect change in the participant, deployed at baseline and again later, with question wording stable across administrations.

What does an impact measurement survey include?

An impact measurement survey covers four ingredients. A demographic and identity block so the same person can be tracked across waves. Outcome questions tied to specific theory of change steps (knowledge, attitude, behavior, condition). Open-ended prompts that surface stories and unexpected results. And, optionally, validated scales (PHQ-9, self-efficacy, financial well-being) if the outcome maps to a known measure.

Length matters. Field-tested impact surveys run twelve to twenty closed items plus three to five open prompts. Anything longer loses respondents at follow-up.

What is a social impact survey?

A social impact survey is an impact survey deployed by a social enterprise, foundation, or community-based program. The defining feature is the audience: the survey reaches participants whose well-being the program is trying to improve, and asks whether the program contributed to that improvement.

Social impact surveys often include questions about systemic conditions (housing stability, food security, employment quality) alongside questions about the individual change the program targets. Context matters because individual change is hard to observe without it.

Impact survey vs satisfaction survey

Satisfaction asks about experience. Impact asks whether something downstream is different now.

Impact survey vs needs assessment

Needs assessment runs before the program to identify the gap. Impact survey runs during and after to measure whether the gap closed.

Impact survey vs outcome survey

Outcome survey is one type of impact survey, scoped to short-term changes. Impact surveys can also reach long-term outcomes and contribution to broader social conditions.

Impact survey vs evaluation

Survey is one method inside an evaluation. A full evaluation often combines surveys, administrative data, and qualitative interviews.

DESIGN PRINCIPLES

Six principles for impact survey questions

01 · WORDING

Ask about the change, not the experience.

Impact in plain language

An impact question always has a before-and-after shape, even when only one wave has been collected. "How confident are you handling a difficult conversation?" works. "Was the workshop helpful?" is satisfaction.

Why it matters: Avoids capturing program love instead of program effect.

02 · ANCHORING

Pre and post wording must match exactly.

Wording stability

Run the same question with the same scale and the same anchors. Even small wording changes ("confident" vs "comfortable") destroy comparability across waves.

Why it matters: Lets simple paired-difference statistics reveal real change.
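
A minimal sketch of the paired-difference arithmetic that stable wording enables, assuming long-format data with hypothetical column names (participant_id, wave, confidence):

```python
import pandas as pd

# Hypothetical long-format responses: one row per participant per wave.
# Column names are illustrative, not a Sopact schema.
responses = pd.DataFrame({
    "participant_id": ["p01", "p01", "p02", "p02", "p03", "p03"],
    "wave": ["baseline", "exit"] * 3,
    "confidence": [2, 4, 3, 4, 3, 3],  # same 5-point item, same wording, both waves
})

# One row per participant, one column per wave; the paired difference
# is only meaningful because wording and scale were held constant.
wide = responses.pivot(index="participant_id", columns="wave", values="confidence")
wide["delta"] = wide["exit"] - wide["baseline"]

print(wide["delta"].mean())        # average within-person change
print((wide["delta"] > 0).mean())  # share of participants who improved
```

The pivot-then-subtract step is only valid because the item ran identically in both waves; change the wording between waves and the two columns measure different things.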

03 · SCALES

Use 5-point Likert with named anchors.

Calibrated scales

Five points with both ends labeled ("Not at all confident" / "Extremely confident") gives enough resolution without forcing false precision. Avoid 10-point scales unless the outcome literature uses them.

Why it matters: Reduces middle-pile clustering and clarifies what the numbers mean.
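
As a data structure, a named-anchor item can be as small as this sketch; the field names are illustrative, not a Sopact schema:

```python
# Illustrative 5-point item with both ends labeled. Midpoints stay
# unlabeled so respondents interpolate between the named anchors.
confidence_item = {
    "id": "q_confidence_difficult_conversation",
    "text": "How confident are you handling a difficult conversation?",
    "scale": [1, 2, 3, 4, 5],
    "anchors": {1: "Not at all confident", 5: "Extremely confident"},
}
```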

04 · OPEN-ENDED

Pair every closed item with one open prompt.

Closed plus open

A Likert score tells you the direction. The open prompt next to it tells you why and surfaces outcomes you did not anticipate. "Tell us about a time you used what you learned" produces evidence numbers cannot.

Why it matters: Open-ended responses become the citation in funder reports.

05 · TIMING

Administer at baseline, exit, and at least one follow-up.

Three waves minimum

Pre and post alone misses whether change held. A 90-day follow-up shows whether the change survived contact with the rest of life. Build the follow-up into the program design, not as an afterthought.

Why it matters: Short-term gains are common. Long-term outcomes are rare and worth measuring.

06 · IDENTITY

Bind every response to the same person across waves.

Persistent ID

Match by hand and you lose 30% of the sample to typos and email changes. Use a stable identifier: participant ID generated at intake and carried through every form.

Why it matters: Without it, change at the individual level is impossible to measure.
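
A minimal sketch of what the persistent ID buys at analysis time; the DataFrames and column names are hypothetical stand-ins for per-wave exports:

```python
import pandas as pd

# Hypothetical wave exports; in practice these come from the collection
# tool, already keyed by the participant ID generated at intake.
baseline = pd.DataFrame({"participant_id": ["p01", "p02", "p03"],
                         "confidence": [2, 3, 3]})
followup = pd.DataFrame({"participant_id": ["p01", "p03"],  # p02 unreachable
                         "confidence": [4, 3]})

# An inner join on the stable intake ID pairs each person with themselves.
# Matching on email or name instead is where the ~30% loss comes from.
paired = baseline.merge(followup, on="participant_id", suffixes=("_t0", "_t90"))

print(f"Follow-up retention: {len(paired) / len(baseline):.0%}")  # 67%
```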

DESIGN CHOICES

The choices that decide whether impact survey questions produce useful data

Each row teaches one design principle. The broken way is the workflow most programs fall into; the working way is what mature impact teams move to. The compounding effect at the bottom is why the first decision controls all the others.

Question type
Broken way: Open-ended only.
Working way: Closed scales paired with open prompts.
What this decides: Open-only buries decisions in qualitative review. Mixed methods give a number for the dashboard and the story for the funder.

Administration cadence
Broken way: Once at exit.
Working way: Baseline at intake, exit at completion, follow-up at 60-90 days.
What this decides: Exit-only measures state, not change. Three waves show the slope.

Sample frame
Broken way: Open link, anyone can submit.
Working way: Authenticated link tied to participant ID.
What this decides: Open links lose the ability to detect change in the same person. Authenticated links preserve identity.

Question count
Broken way: 30-50 items.
Working way: 12-20 items plus 3-5 open prompts.
What this decides: Long surveys lose respondents at follow-up. Twelve well-chosen items cover the theory of change and still complete in eight minutes.

Scale design
Broken way: Mixed scales (3-point, 5-point, 7-point) across questions.
Working way: Consistent 5-point scale across the survey.
What this decides: Inconsistent scales confuse respondents and break dashboards. Pick one and hold it.

Open-ended analysis
Broken way: Read at the end of the year.
Working way: Coded continuously as responses arrive.
What this decides: End-of-year reviews surface themes too late to act on. Continuous coding turns qualitative data into a live signal.

Demographic block
Broken way: Optional, varies by wave.
Working way: Required at baseline, optional and confirmed at follow-up.
What this decides: Demographics changing across waves is a wave-matching failure. Lock at baseline, verify at follow-up.

COMPOUNDING EFFECT

These six choices compound. Picking the right administration cadence is wasted if the sample frame loses identity across waves. Picking the right question type is wasted if the open-ended responses sit unread for months.

WORKED EXAMPLE

A workforce training program asks whether participants used what they learned.

"We trained 240 people across four cohorts. We had baseline surveys, exit surveys, and a 90-day follow-up. The numbers looked great at exit: 4.6 out of 5 on every confidence item. At the 90-day mark, the same items dropped to 3.4. Without the follow-up wave we would have published the 4.6 and missed the real story: most people lost confidence in the first eight weeks back at work, and our follow-up coaching was the difference."

Workforce training program lead, post-cohort review
QUANTITATIVE AXIS

Confidence rating on a 5-point scale, asked verbatim at baseline, exit, and 90 days, with identity bound at collection. Aggregated to a per-cohort change score and segmented by participant type.
QUALITATIVE AXIS

Open prompt at follow-up: "Tell us about a time you tried to use what you learned and it did or did not work." Coded for context, support, and barrier.

Sopact Sense produces

  • Wave-matched change scores per participant. Each person's baseline-to-90-day delta calculated automatically. Cohort summary updates as data arrives.
  • Open-ended responses coded continuously. Themes (manager support, peer pressure, lack of time) tagged as responses come in. No end-of-year review needed.
  • Funder report drafts itself from the data. Outcome rollups, representative quotes, and demographic breakdowns assemble without manual cleanup.
  • 90-day signal triggers coaching. Participants whose confidence drops below threshold get follow-up outreach automatically. A minimal sketch of the flagging logic follows this list.
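
A minimal sketch of that flagging step, assuming a wave-matched pandas frame with hypothetical column names (exit, day90) and an illustrative threshold:

```python
import pandas as pd

# Hypothetical wave-matched frame: one row per participant, waves
# already joined on participant_id upstream.
scores = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03"],
    "exit": [5, 4, 5],
    "day90": [4, 2, 3],
})
THRESHOLD = 3  # illustrative cutoff, not a Sopact default

# Flag anyone whose 90-day confidence fell below the threshold; in a
# live system this list would feed the coaching outreach queue.
flagged = scores.loc[scores["day90"] < THRESHOLD, "participant_id"]
for pid in flagged:
    print(f"queue coaching outreach for {pid}")  # flags p02
```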

Why traditional tools fail

  • Spreadsheets matched by hand. Exit and follow-up files in separate Google Sheets. Three days of typo correction before any analysis.
  • Open responses read at year-end. By the time themes surface, the cohort is gone. Lessons land in next year's curriculum, not this one.
  • Reports built from scratch each cycle. Same 12 hours of formatting every quarter. Funder asks a follow-up question; another six hours.
  • Drop-off invisible until exit interview. Participants who needed follow-up support already churned. Pattern only visible in retrospect.

The integration matters more than the dashboard. When wave-matching, qualitative coding, and report generation happen in the same system, a confidence drop at week 8 triggers an action at week 9. When they live in separate tools, the drop is found at the end of the year.

PROGRAM CONTEXTS

Where impact survey questions actually live

Three different program shapes. Same architectural backbone, different operational realities. Each block names the typical shape, what breaks, what works, and a specific example.

01

Workforce training and credentialing programs

Adult learners, 8-16 week programs, employment outcomes

Typical shape: Open enrollment, cohort-based, completion certificate. Outcome targets: credential earned, job placed, wage gain at six months.

What breaks: Exit surveys hit 90% completion; six-month follow-up surveys hit 30%. The drop kills longitudinal analysis. Phone numbers go stale. Email open rates collapse.

What works: Persistent ID at enrollment, multi-channel follow-up (SMS plus email plus employer outreach), and a stipend or gift card for the six-month survey. Outcome questions kept identical across waves so deltas are real.

A SPECIFIC SHAPE

Workforce program with 240 enrollees per year. Six-month survey hits 65% with the stipend and SMS-plus-email follow-up. Outcome rollup updates monthly. Funder report generated in two days, not three weeks.

02

Education and youth development programs

K-12 and post-secondary, multi-year arc, multiple stakeholders

Typical shape: Cohort moves through grades or program years. Surveys reach students, teachers, parents, and program staff. Outcomes span academic, social-emotional, and post-program trajectories.

What breaks: One survey for every audience produces low-quality data from each. Reading-level and language considerations get ignored. Parent surveys land in inboxes nobody checks; student surveys assume tech literacy that does not exist.

What works: Audience-specific question banks with shared identity binding. Mobile-first forms with multilingual rendering at the point of fielding. Plain-language reading-level checks before deployment. Open prompts under closed items so the story is captured even when the rating is unrevealing.

A SPECIFIC SHAPE

Out-of-school-time program serving 600 youth. Quarterly student survey, semi-annual parent survey, annual teacher survey. Cross-stakeholder rollups by youth ID. Story patterns from open prompts inform staff coaching.

03

Foundation and impact-fund portfolios

Multiple grantees, common indicators, comparative reporting

Typical shape: A funder supports 12 to 60 grantees and wants to compare outcomes across the portfolio. Each grantee runs its own programs but shares a few common outcome indicators.

What breaks: Each grantee submits its own format, and the foundation team spends six weeks per cycle reconciling them. Common indicators end up partially populated and inconsistently scaled. Aggregate rollups become hand-built slide decks.

What works: A common impact survey question bank used across the portfolio for the small set of shared outcomes. Each grantee adds program-specific items. Identity binding stays per-grantee; aggregation happens at the indicator level. Open-ended responses coded with a shared rubric.

A SPECIFIC SHAPE

A 30-grantee portfolio with three shared outcome indicators (knowledge, behavior, condition). Each grantee runs its own surveys; the portfolio rollup pulls indicator-level data automatically. The foundation produces a portfolio outcomes brief in days.
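
A minimal sketch of the indicator-level rollup, with hypothetical grantee submissions already normalized to a common scale:

```python
import pandas as pd

# Hypothetical indicator-level submissions: one row per grantee per
# shared indicator, change scores already on a common 5-point scale.
portfolio = pd.DataFrame({
    "grantee": ["A", "A", "B", "B", "C", "C"],
    "indicator": ["knowledge", "behavior"] * 3,
    "mean_delta": [0.9, 0.4, 1.1, 0.2, 0.7, 0.5],
    "n": [80, 80, 120, 120, 40, 40],
})

# Weight each grantee's change score by its sample size so the rollup
# reflects participants, not grantees.
rollup = portfolio.groupby("indicator").apply(
    lambda g: (g["mean_delta"] * g["n"]).sum() / g["n"].sum()
)
print(rollup)
```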

SurveyMonkey · Qualtrics · Google Forms · Typeform · Sopact Sense

A note on tooling

The traditional survey vendors collect responses well. They were built for market research and customer satisfaction work, where each survey stands alone and the analyst takes the export to a spreadsheet. The architectural gap they share: response identity does not persist across waves, open-ended responses sit unanalyzed until someone exports a CSV, and there is no native concept of program outcomes connected to a theory of change.

Sopact Sense closes that gap by treating the survey as one moment in a longitudinal record. Every response is bound to a participant ID at collection. Open-ended responses are coded continuously as they arrive. Outcome rollups update without an export step. Funder reports draft themselves from the same data the program team works in every day.

FAQ

Impact survey questions, answered

Q.01

What is an impact survey?

An impact survey is a structured questionnaire that measures whether a program changed the people who went through it. The questions ask about knowledge, behavior, confidence, skill, opportunity, or income. Impact surveys are administered at multiple points in time (baseline, exit, follow-up) so change can be observed at the individual level rather than inferred from aggregate satisfaction.

Q.02

Impact survey meaning

An impact survey is the measurement instrument for a program's theory of change. The theory names the outcome the program is supposed to produce; the impact survey asks whether that outcome happened. The phrase has a narrower meaning in rigorous practice than in colloquial use: it implies a longitudinal design with stable wording across waves and persistent identity across responses.

Q.03

What does an impact measurement survey include?

An impact measurement survey includes a demographic and identity block, outcome questions tied to specific theory-of-change steps (knowledge, attitude, behavior, condition), open-ended prompts for unexpected outcomes, and optional validated scales when the target outcome maps to a known instrument. Field-tested impact surveys run twelve to twenty closed items plus three to five open prompts.

Q.04

What are good impact survey questions?

Good impact survey questions ask about the change, not the experience. They use stable wording that runs identically at baseline and follow-up. They pair every closed item with one open prompt that surfaces unexpected outcomes. They use a consistent 5-point Likert scale with named anchors at both ends. And they bind every response to a participant ID so change can be measured per person.

Q.05

How is an impact survey different from a satisfaction survey?

A satisfaction survey asks whether the experience was good. An impact survey asks whether something downstream is different now. The two have different questions, different cadences, and different audiences for the resulting data. Satisfaction informs program design; impact informs outcome reporting and funder accountability. Most programs need both, run as separate instruments.

Q.06

What is a social impact survey?

A social impact survey is an impact survey run by a social enterprise, foundation, or community program. The questions reach participants whose well-being the program targets and ask whether that well-being changed. Social impact surveys typically include questions about systemic conditions (housing, food, employment quality) alongside individual outcomes, because individual change is hard to interpret without context.

Q.07

How long should an impact survey be?

Twelve to twenty closed items plus three to five open prompts. The threshold past which response rates collapse is around 25 items or 10 minutes of completion time, whichever comes first. At follow-up, that threshold tightens. A 30-item baseline that hit 90% completion will see a 30% drop in response rate when run identically at follow-up. Treat brevity as a longitudinal design constraint, not a usability nicety.

Q.08

Pre and post survey questions: how do they differ from impact survey questions?

Pre and post survey questions are the most common impact survey design pattern. The same closed items run identically at baseline and at exit, paired so change can be calculated per person. Impact survey is the broader category; pre-and-post is one structure inside it. Some impact surveys add a follow-up wave at 60-90 days for durability and use mid-program waves for early-warning signals.

Q.09

What is the difference between an outcome survey and an impact survey?

Outcome survey is a narrower term inside the impact survey category. Outcome surveys typically measure short-term changes (knowledge gain, behavior change, confidence shift). Impact surveys can also extend to long-term outcomes (sustained behavior change, employment quality at 12 months, generational outcomes) and to systemic conditions. The distinction matters for evaluation design but the survey instruments overlap heavily.

Q.10

Can I use Google Forms or SurveyMonkey for impact surveys?

Yes for collection. The architectural gap is what happens after collection. Neither tool natively binds responses to a persistent participant ID across waves; neither codes open-ended responses continuously; neither rolls outcome data up against a theory of change. Programs that use Google Forms or SurveyMonkey usually export to spreadsheets and reconcile waves by hand. That step is the bottleneck.

Q.11

How do I analyze impact survey data?

Three layers. Quantitative analysis: paired-difference scores at the participant level, segmented by demographic and program subgroup, summarized by cohort. Qualitative analysis: open-ended responses coded against a small thematic rubric tied to the theory of change. Integrated reporting: outcome rollups paired with representative qualitative evidence so a funder reads numbers and stories together. Sopact Sense handles all three layers in one place; the manual alternative takes weeks per cycle.
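
A minimal sketch of the qualitative layer's data shape, using a hypothetical keyword rubric; production coding is human- or model-assisted rather than keyword matching, so this only illustrates how themes attach to responses:

```python
# Hypothetical rubric: theme -> trigger phrases. Illustrative only.
rubric = {
    "manager_support": ["my manager", "supervisor"],
    "lack_of_time": ["no time", "too busy"],
    "peer_pressure": ["coworkers", "team pushed back"],
}

def code_response(text: str) -> list[str]:
    """Return every rubric theme whose trigger phrase appears in the text."""
    lowered = text.lower()
    return [theme for theme, phrases in rubric.items()
            if any(p in lowered for p in phrases)]

print(code_response("My manager backed me, but I had no time to practice."))
# -> ['manager_support', 'lack_of_time']
```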

Q.12

What is an impact survey template?

An impact survey template is a starter question bank organized by outcome level (output, short-term, long-term) and by framework (Theory of Change, Logic Model, IRIS+). A good template is a starting point, not a final instrument: each question needs to be adapted to the specific outcome the program is trying to produce. Sopact's question bank above gives you the structure; the program-specific adaptation is the work.

Q.13

How often should an impact survey be administered?

Three waves minimum: baseline at intake, exit at completion, follow-up at 60-90 days. Some programs add mid-program waves for early-warning signals; some long-arc programs add 12-month or 24-month follow-ups. The cadence is determined by the theory of change, not by the calendar: a program that expects behavior change three months after exit needs a wave at three months, regardless of what the dashboard wants.

Q.14

How does Sopact handle impact surveys?

Sopact Sense is built for the longitudinal shape of impact surveys. Persistent participant ID at intake; identical wording across waves; open-ended responses coded as they arrive; outcome rollups update without exports; funder reports draft from the same data the program team uses every day. The trade-off versus a generic survey tool is structure: Sopact assumes you have a theory of change and want to measure outcomes against it.

Q.15

What is the difference between an impact survey and an evaluation?

An evaluation is the broader assessment of whether a program worked, why it worked, and what to change. A survey is one method inside an evaluation. Most rigorous evaluations combine surveys, administrative data, and qualitative interviews. Impact surveys are a strong evaluation method when the outcome is observable through self-report and the population can be reached longitudinally.

WORKING SESSION

Bring your impact survey. See what longitudinal looks like.

A 60-minute working session. You bring an existing impact survey or the outcomes you are trying to measure. We map the theory of change, identify the wave-matching gaps, and load a working version into Sopact Sense. No procurement decision required, no slide deck, no follow-up sales sequence.

Format
60 minutes, screen share, working not pitching
What to bring
An existing survey or a one-paragraph theory of change
What you leave with
A loaded survey, identity binding configured, and a sample report