Baseline Survey: Questions, Template & Examples (2026)

Complete baseline survey guide — questions, 6-section template, real examples, methodology, and report structure. Design a survey that proves real change.

Updated: April 22, 2026

Baseline Survey: Questions, Template, Examples, and How to Run One

A workforce nonprofit director is presenting year-end numbers to her funders. "Our graduates gained 42% more confidence on core skills," she says. A funder asks: "Compared to what?" She flips to the baseline slide. The baseline asked about general satisfaction. The endline asked about specific skills. The two surveys never mirrored each other — and the 42% number falls apart in one question.

This is The Mirror Mistake — what happens when your baseline and endline surveys don't mirror each other exactly. Same people, same questions, same scales — if even one piece drifts, the comparison collapses. It's the single most common way baseline surveys fail the exact test they were designed to pass.


Most guides treat a baseline survey as a pre-program checklist you tick off before things get busy. This guide treats it as one half of a promise: whatever you measure before the program, you have to be able to measure again after — in the same people, with the same questions. Get that right and every finding you report later holds up.

Baseline Survey Guide

A baseline survey is one half of a promise. Here's the other half.

Every baseline survey exists to be answered again at endline. Get the design right and every claim about change becomes defensible. Get it wrong and no amount of analysis can recover the comparison.

Ownable Concept
The Mirror Mistake
What happens when your baseline and endline surveys don't mirror each other exactly. Same people, same questions, same scales — if any one piece drifts, the comparison collapses. Most baseline projects fail this rule without realizing it — and no amount of analysis later can recover what the design gave up.
  • 4 types of baseline survey
  • 5 pieces of methodology to lock
  • 6 sections in every good template
  • 1 ID per person, start to finish
Interactive Example
See baseline questions tailored to your program type

Four example programs. Every question designed to be re-asked at endline.

12-week digital skills program for 200 adult learners. The baseline runs in week one — before the first class. The exact same questions will run again in week twelve.
01 (Quant): On a scale of 1 to 10, how confident are you today working with spreadsheets for a specific task at work?
02 (Quant): In the last month, how often have you used email to handle a work task? Never · Rarely · Sometimes · Often · Every day
03 (Quant): Which of these have you done in the last 30 days? (Select all that apply) — attended a video call · completed an online form · reset a password · opened a PDF
04 (Qual): Describe a specific time in the last month when you tried to do something at work with a computer but couldn't. What happened?
05 (Qual): What's the one digital skill you most wish you had right now, and why?

Housing-stability program serving 400 residents in supportive housing. The baseline runs at move-in. The endline runs at 12 months — same questions, same people.
01 (Quant): On a scale of 1 to 10, how stable does your housing feel to you today?
02 (Quant): In the last three months, how many times did you worry about being unable to pay rent? None · 1 or 2 · 3 to 5 · More than 5
03 (Quant): Which of these services have you used in the last 30 days? (Select all) — case manager · food pantry · transportation help · child care · health clinic
04 (Qual): Walk us through what a typical week looks like for you right now, start to finish.
05 (Qual): What's one thing about your life right now you most want to change in the next year?

Portfolio baseline across 22 investees at the start of a 3-year fund cycle. One lead contact per investee answers. The same questions run every six months for the life of the fund.
01 (Quant): On a scale of 1 to 10, how ready is your team to scale your current product or service to a new market in the next 12 months?
02 (Quant): How many full-time employees does your organization have today? (exact number)
03 (Quant): Which of the following have you done in the last 90 days? — raised capital · launched a new product · entered a new market · hired a leader · none of these
04 (Qual): Describe the single biggest thing holding your organization back from growth right now.
05 (Qual): What's the one thing our fund could do in the next six months that would help you most?

Reading-confidence program running across 8 elementary schools. The baseline runs in September. The endline runs in May — same questions, same students, same teachers.
01 (Quant): On a scale of 1 to 10, how much do you enjoy reading on your own today?
02 (Quant): In a typical week, how many books do you read for fun (not for homework)? 0 · 1 · 2 · 3 · 4 or more
03 (Quant): How confident are you today reading out loud to a small group? Not at all · A little · Somewhat · Very · Completely
04 (Qual): Tell us about a book you've read recently that you really liked. What made it good?
05 (Qual): What's the one thing about reading that's hardest for you right now?

What is a baseline survey?

A baseline survey is the first round of data collection, run before a program or intervention starts, to record where people stand so you can measure what changed after. It's the "before" in every before-and-after comparison. Without one, every claim about impact is an opinion — not evidence.

A baseline survey has one job: serve as the anchor point for a future comparison. That means everything about it — the questions, the scales, the response format — needs to be designed with the endline survey already in mind. If you can't run the same survey again later on the same people, what you collected is a snapshot, not a baseline.

What are the main types of baseline survey?

There are four main types of baseline survey. Each fits a different program or reporting need.

1. Pre-program baseline — run right before a training, workshop, coaching cycle, or intervention begins. Captures starting skills, confidence, attitudes, or conditions. Paired with an endline at the end of the program. This is the most common type for training providers and nonprofit program teams. See the full pre-post survey guide for paired design.

2. Portfolio baseline — run at the start of a grant, investment, or multi-year funding cycle. Captures the state of every grantee or investee organization before the work begins. Paired with follow-up waves every six or twelve months.

3. Community needs baseline — run before a service, facility, or resource is designed. Captures what people currently have, need, and struggle with. Used to shape the program itself, not just measure its effect.

4. Longitudinal cohort baseline — the first wave of a multi-year study following a defined group (a graduating class, a cohort of fellows, a year's worth of program entrants). Paired with multiple follow-up waves over time. See the full longitudinal survey breakdown.

The type you pick depends on your question. "Did our program change people?" needs a pre-program baseline. "How does a grantee's work evolve?" needs a portfolio baseline. The type is your first decision — not the questions.

Baseline survey example

Here's a concrete example. A workforce nonprofit runs a 12-week digital skills program for 200 participants. The director needs to show funders what the program changed — not just what participants thought of it.

The baseline runs in week one, before the first class. It includes:

  • A 1–10 scale on confidence with five specific digital tasks (spreadsheets, email, online forms, video calls, safe passwords)
  • A short multiple-choice item on current employment status
  • One open-ended question: "What's the one digital skill you most wish you had right now?"
  • Demographic fields used for later group comparisons
  • A permanent participant ID assigned the moment they fill it out

The endline runs in week twelve. Same scale, same five tasks, same open-ended question, same ID. Because the two surveys mirror each other exactly — and each person's ID links their week-one answer to their week-twelve answer — the team can calculate a per-person change score for every participant, not just a group average.

That's the example. Short, paired, mirror-matched. More ambitious baselines add employer follow-ups at three and six months, tied to the same participant ID. The rule doesn't change: baseline and endline must match, and the same ID must carry both.
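To make the per-person change score concrete, here is a minimal sketch in plain Python. The data, field name, and IDs are illustrative, not from any real program; the point is that the join happens on the permanent participant ID, and anyone missing at endline simply drops out of the paired comparison rather than corrupting it.

```python
# Illustrative sketch: per-person change scores joined on a permanent ID.
# All IDs, field names, and values below are hypothetical.

baseline = {
    "P001": {"confidence_spreadsheets": 3},
    "P002": {"confidence_spreadsheets": 5},
    "P003": {"confidence_spreadsheets": 2},
}

endline = {
    "P001": {"confidence_spreadsheets": 7},
    "P002": {"confidence_spreadsheets": 8},
    # P003 did not respond at endline -- excluded from the paired comparison
}

def change_scores(baseline, endline, item):
    """Return per-person change for one item, for IDs present in both waves."""
    paired_ids = baseline.keys() & endline.keys()
    return {
        pid: endline[pid][item] - baseline[pid][item]
        for pid in sorted(paired_ids)
    }

scores = change_scores(baseline, endline, "confidence_spreadsheets")
print(scores)  # one change score per matched person, not just a group average
print(sum(scores.values()) / len(scores))  # mean change among paired respondents
```

Without the shared ID, the best you could do is compare two group averages; with it, every participant contributes an individual before-and-after delta.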

Baseline survey methodology

Baseline survey methodology comes down to five pieces — all decided before you write a single question.

The population. Who is getting the baseline? The full cohort, a random sample, or a specific subgroup? Document who's in and who's out.

The timing. Exactly when does baseline close relative to the program start? One day before? One week? Too early and life events contaminate the reading. Too late and the program has already started affecting people.

The instrument. Which questions, which scales, which response format. This is what "locks" at baseline — you can't change wording or scale types at endline without breaking the comparison.

The identity system. How does each person get a permanent ID that carries from baseline through every follow-up? This is where most baseline surveys silently fail. If the same person gets a new ID each wave, their before and after never connect.

The analysis plan. Before any data comes in, write down exactly how you'll compare baseline and endline answers — what charts, what breakdowns, what groups. If you can't describe the analysis, the survey isn't ready to run. Related: survey analysis.

These five pieces are the whole methodology. Everything else — response rates, reminder emails, mode choices — is mechanics. Methodology is what makes the numbers defensible.

Masterclass
Running a baseline survey that actually holds up at endline
Unmesh Sheth, Founder & CEO, Sopact

Baseline survey questions

Baseline survey questions sit in two groups: quantitative (the scales, ratings, and multiple-choice items that produce numbers) and qualitative (the open-ended questions that produce stories). The best baseline surveys use both — and both must be repeatable at endline.

Quantitative baseline questions measure a specific skill, knowledge level, behavior frequency, or attitude on a fixed scale. Examples:

  • "On a scale of 1 to 10, how confident are you today with [specific task]?"
  • "How often in the last month did you [specific behavior]? Never / Rarely / Sometimes / Often / Always"
  • "Which of these [tools / concepts / resources] have you used in the last 30 days? [multi-select]"
  • "Today, how would you rate your [knowledge area]? Beginner / Working / Confident / Expert"

Every quantitative item must be specific. "How satisfied are you with life right now?" is not a baseline question — it's noise. "On a scale of 1 to 10, how confident are you today running a client intake meeting?" is a baseline question — because it's specific enough to be re-asked and re-measured at endline.

Qualitative baseline questions capture reasoning, context, and specific situations that numbers miss. Examples:

  • "Describe a specific time in the last month when you tried to [task]. What happened?"
  • "What's the one thing about [outcome area] you most wish you could change?"
  • "Walk me through what a typical [situation] looks like for you today."

For more on writing these well, see our open-ended survey questions guide. Keep each one short, specific, and answerable in two to four sentences.

Every baseline survey should include both types. A rating tells you what. The open-ended answer tells you why. Run them together and you have a mixed-method baseline that actually produces decisions at endline — not just a dashboard nobody acts on.

Baseline survey template

A baseline survey template has six sections. This is the structure every good baseline follows, whatever field you're in.

Section 1 — Participant ID and consent. The permanent ID is assigned here. Consent language confirms the person knows their answers will be re-collected at endline and what's done with them.

Section 2 — Demographic and context fields. Age range, role, location, or whatever group comparisons matter for your report. Collect these once at baseline so you're not re-asking at endline.

Section 3 — Core quantitative items. The scales, ratings, and multiple-choice items that will form the heart of your before-and-after comparison. Lock these. Every word matters.

Section 4 — Core qualitative items. Three to five open-ended questions paired with the quantitative ones. Keep them short and specific.

Section 5 — Self-identified goals or priorities. What the participant wants to get out of the program. This is what makes the endline comparison personal, not just programmatic.

Section 6 — Contact preferences for follow-up. How to reach this person for the endline wave. Phone, email, text, preferred time windows. Without this, endline response rates collapse.

This six-section structure works for workforce training, training evaluation, service-delivery nonprofits, and portfolio-level baselines for impact funds. The content changes. The structure doesn't.

Baseline survey report

A baseline survey report is written right after baseline closes — before endline runs. It has four parts.

Part 1 — Who responded. Number of people surveyed, response rate, and how the respondents compare to the target population. If certain groups are under-represented, say so clearly. Your endline comparison is only as representative as your baseline.

Part 2 — Starting-point summary. Descriptive statistics for each quantitative item. Means, distributions, the spread. This is what the group looks like before the program. Add a short paragraph per major item.
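The starting-point summary for each quantitative item is ordinary descriptive statistics. A minimal sketch, using Python's standard library and an invented set of 1-10 confidence ratings:

```python
# Hypothetical starting-point summary for one quantitative baseline item.
# The responses list is illustrative sample data, not real program results.
import statistics

responses = [3, 5, 2, 4, 4, 6, 3, 5, 4, 2]  # 1-10 confidence ratings

summary = {
    "n": len(responses),
    "mean": round(statistics.mean(responses), 2),
    "median": statistics.median(responses),
    "stdev": round(statistics.stdev(responses), 2),  # sample standard deviation
}
print(summary)
```

One such summary per core item, plus a short interpretive paragraph, is usually all Part 2 needs.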

Part 3 — Qualitative themes. Summary of what people said in the open-ended questions, grouped by the main themes. AI-assisted coding can do this in minutes — manual coding takes weeks. Related: survey analysis.

Part 4 — Notes and limitations. Anything about the baseline that will affect how you interpret the endline. Who's missing. What was unusual about the timing. What questions might need fixing before the next wave (though you can't change them mid-study — flag them for the next cohort).

The baseline report is short — often under ten pages. Its job is to establish the "before" clearly so the endline report can show what changed. Related: survey report examples.

What is the difference between a baseline and endline survey?

A baseline survey is the first wave, run before a program starts. An endline survey is the final wave, run after the program ends. The two must be exact mirrors of each other — same people, same questions, same scales — so their answers can be compared directly.

The comparison is where the value lives. Neither a baseline nor an endline tells you much on its own. A baseline says where people stood. An endline says where they ended up. Together, they tell you what changed — and that's what funders, boards, and leaders are actually asking for.

Mid-program pulses can sit between baseline and endline. But the two bookends — baseline and endline — are the ones that make or break the report. Get them mirrored and tied to the same people, and every finding is defensible.

Baseline survey PDF

A baseline survey PDF is a printable or downloadable version of the instrument — usually used either as a field-collection format (for places without reliable internet) or as a record-of-what-was-asked document for funders and auditors.

Save your baseline survey as a PDF in two places. First, the blank instrument — so you can show exactly what was asked. Second, the report version — a one-page summary of starting-point results you can share with leadership. Sopact Sense exports both automatically.

Best Practices

Six rules that make any baseline hold up at endline

The hero covers what to ask. These six cover how to run whatever you're asking so the before-and-after comparison actually works.

Rule 01: Design the endline before you launch the baseline

A baseline only exists to be answered again later. Write both surveys side by side — same questions, same order, same scales. If you can't name what the endline will ask, the baseline isn't ready to go out.

Teams who skip this step end up writing endline questions that don't line up with what they asked at baseline.
Rule 02: Lock every word and every scale

Once the baseline is out in the field, don't rewrite questions or change rating scales. A 1–5 scale and a 1–10 scale are different questions — even if the words around them are identical. Version your instrument. Nothing changes mid-study.

Swapping "confident" for "comfortable" between waves looks like editing. It's actually invalidating.
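Locking the instrument is mechanical enough to automate. A minimal sketch of a "mirror check" that compares two versions of an instrument question by question; the question IDs, wording, and scale labels are invented for illustration:

```python
# Illustrative mirror check: flag any question whose wording or scale
# drifted between the baseline and endline versions of an instrument.
baseline_instrument = [
    {"id": "q1", "text": "How confident are you today with spreadsheets?", "scale": "1-10"},
    {"id": "q2", "text": "How often did you use email last month?", "scale": "freq-5"},
]
endline_instrument = [
    {"id": "q1", "text": "How confident are you today with spreadsheets?", "scale": "1-10"},
    {"id": "q2", "text": "How often did you use email last month?", "scale": "freq-5"},
]

def mirror_violations(baseline, endline):
    """Return IDs of questions missing at endline or whose text/scale changed."""
    end_by_id = {q["id"]: q for q in endline}
    return [
        q["id"] for q in baseline
        if q["id"] not in end_by_id
        or end_by_id[q["id"]]["text"] != q["text"]
        or end_by_id[q["id"]]["scale"] != q["scale"]
    ]

assert mirror_violations(baseline_instrument, endline_instrument) == []
```

Run a check like this before the endline goes out; an empty list is the only acceptable result.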
Rule 03: Assign one permanent ID at first contact

Every person gets a permanent ID the first time they fill out anything. That same ID carries through baseline, endline, and every follow-up. Emails change. Names get shortened. Handwritten forms have typos. Only a permanent ID survives.

Without this, "Maria Garcia" at baseline becomes "M. Garcia" at endline — and the two records never connect.
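A tiny sketch makes the failure mode visible. The records below are invented; the point is that matching on a name breaks the moment anyone types it differently, while matching on the permanent ID does not:

```python
# Illustrative: why record linkage needs a permanent ID, not a name.
baseline = [{"id": "P017", "name": "Maria Garcia", "confidence": 4}]
endline  = [{"id": "P017", "name": "M. Garcia",    "confidence": 8}]

# Matching on name fails: the strings differ between waves.
by_name = [(b, e) for b in baseline for e in endline if b["name"] == e["name"]]
assert by_name == []

# Matching on the permanent ID links the waves regardless of how the name was typed.
by_id = [(b, e) for b in baseline for e in endline if b["id"] == e["id"]]
assert len(by_id) == 1
```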
Rule 04: Close the baseline before the program starts

A baseline collected in week two isn't a baseline — it's a first pulse. Whatever the program already did shows up in the number. Close baseline collection before any program contact happens. Not the same day. Before.

Late baselines are the silent reason so many program evaluations produce smaller-than-real change scores.
Rule 05: Include both numbers and stories

A rating tells you where people stand. An open-ended answer tells you why. Pair them at baseline — and repeat them at endline. The "why" is where the best findings come from, and it's almost free to collect when the rating is already there.

Scale-only baselines produce dashboards that don't explain themselves. Mixed-method baselines produce decisions.
Rule 06: Write the analysis plan before anything comes in

Before the first baseline response arrives, write down exactly how you'll compare baseline to endline — which chart, which group breakdown, which comparison. If a question doesn't have an analysis plan, it doesn't belong in the baseline.

Teams who skip this spend three weeks cleaning data for insights they could have designed in from the start.
Every one of these six is built into Sopact Sense — permanent IDs at first contact, locked instruments, AI-coded open answers, and analysis plans that link to live dashboards.
See it in action →

Common baseline survey mistakes

Mistake 1 — Running baseline after the program starts. A baseline collected in week two isn't a baseline — it's a first pulse. Whatever the program already changed is baked into the number. Baseline must run before any program contact.

Mistake 2 — Changing questions between baseline and endline. Rewording "confidence" as "self-assurance" between waves invalidates the comparison. Lock question wording at baseline. If you must change something later, document it and start a fresh cohort — don't overwrite the old one.

Mistake 3 — No permanent ID for each person. If the same participant gets a new ID at each wave, baseline and endline never connect. You'll have two sets of averages — not a per-person change.

Mistake 4 — Only asking quantitative items. Scales without open-ended follow-ups tell you that scores moved, but not why. Pair every rating with one short "why" question.

Mistake 5 — Writing the baseline after the program is designed. The baseline shapes what you can later claim about the program. If you design the program first and then write the baseline to match the story, you've lost the test. Baseline comes first.

Frequently Asked Questions

What is a baseline survey in simple words?

A baseline survey is the first survey you run — before a program starts — to record where people stand. Later you run the same survey again to see what changed. Without the first one, you can't prove anything changed. Sopact Sense ties baseline and endline to the same person automatically through permanent IDs.

What is a baseline survey example?

A baseline survey example: a training nonprofit asks 200 participants five 1-to-10 confidence questions and two open-ended questions in week one. In week twelve, they ask the exact same questions. Each person carries the same ID across both waves, so the team can compare each participant's week-one answer to their week-twelve answer.

What are the types of baseline survey?

The main types of baseline survey are: pre-program baseline (before a training or intervention), portfolio baseline (across grantees or investees), community needs baseline (before a service is designed), and longitudinal cohort baseline (first wave of a multi-year study). Each fits a different kind of program and reporting need.

What is baseline survey methodology?

Baseline survey methodology covers five pieces: the population you're surveying, the timing, the instrument (questions and scales), the identity system that links people across waves, and the analysis plan written before data arrives. All five lock before baseline goes live.

What questions go in a baseline survey?

Baseline survey questions include quantitative items (rating scales, multiple choice, behavior frequency) and qualitative items (short open-ended questions). Every question should be specific enough to ask again at endline. Generic items like "how satisfied are you with life" don't work. Specific items like "rate your confidence running a client intake meeting" do.

What is a baseline survey template?

A baseline survey template has six sections: participant ID and consent, demographic fields, core quantitative items, core qualitative items, self-identified goals, and contact preferences for follow-up. The same structure works across training programs, nonprofit services, and portfolio-level impact funds.

What is the Mirror Mistake?

The Mirror Mistake is what happens when your baseline and endline surveys don't mirror each other exactly. Same people, same questions, same scales — if even one piece drifts, the before-and-after comparison collapses. Most baseline projects fail on this rule without realizing it.

What's the difference between a baseline survey and an endline survey?

A baseline survey is the first wave, run before the program starts. An endline survey is the final wave, run after the program ends. They must be exact mirrors so their answers can be compared. Together they tell you what changed. Alone, neither says much.

How do you write a baseline survey report?

A baseline survey report has four parts: who responded and response rate, a starting-point summary of key numbers, themes from open-ended answers, and notes on limitations. Keep it short — usually under ten pages. Its job is to set up the endline comparison clearly, not to analyze change.

Can I download a baseline survey PDF?

Yes, most baseline surveys can be saved as a PDF for field collection or audit records. A PDF version of your instrument shows exactly what was asked. A PDF version of the baseline report summarizes starting-point results for leadership. Sopact Sense exports both formats automatically.

How does Sopact Sense help with baseline surveys?

Sopact Sense assigns a permanent ID to every participant at first contact. That same ID carries through baseline, endline, and every follow-up wave — with no manual matching. Open-ended answers are coded by AI the moment they arrive, and baseline-to-endline comparisons update live, so no one waits weeks for the analysis.

How much does baseline survey software cost?

Free tools like Google Forms handle basic one-time surveys but can't link baseline and endline to the same person automatically. Mid-range tools like SurveyMonkey run $30–$100 a month. Platforms built for paired and longitudinal work sit higher because they include permanent IDs, AI analysis, and dashboards in one system.

Next step

Run your baseline and endline as one connected study

Sopact Sense was built for the Mirror Mistake — one permanent ID per person, locked instruments across waves, and baseline-to-endline comparisons that update live. No three-week reconciliation window between collection and reporting.

  • Permanent IDs that carry from baseline through every follow-up wave
  • Locked instruments so baseline and endline stay mirrored
  • AI analyzes open-ended baseline answers as they arrive, with citations