
Baseline Survey: Questions, Template & Examples (2026)

Complete baseline survey guide — questions, 6-section template, real examples, methodology, and report structure. Design a survey that proves real change.

Updated
May 6, 2026
Use Case · Survey methodology

A baseline survey is the first measurement of a program.

It captures the state of participants, companies, or communities before any intervention happens, and it becomes the reference point every later measurement compares against.

This guide explains what a baseline survey is, what it captures, what questions to ask, and what the baseline survey report looks like. It also shows how a baseline anchors two different downstream designs: pre-post measurement for short-cycle interventions like workforce training, and longitudinal tracking for portfolio-level improvement waves like quarterly impact fund reporting. The worked example follows an impact fund baselining 40 corporate portfolio companies on a social impact and sustainability framework.

What this page covers
01 · Day 0 anchor and the comparison points
02 · Definitions, format, and meaning
03 · Six baseline survey best practices
04 · The baseline decision matrix
05 · A 40-company corporate ESG baseline
06 · Frequently asked questions
Where the baseline sits in the program timeline
Day 0 · Anchor

Baseline survey

Captures the state before any intervention. Locks the indicators every later wave will measure.

Day 90 · Wave 1

First re-measurement

Compared back to baseline: either the pre-post endline or the first quarterly wave.

Day 365 · Wave 4

Year-one assessment

Same indicators, compared back to baseline. The reference point does not move.

Day 730 · Wave 8

Year-two assessment

Cumulative change measured against the original Day 0 values.

The baseline does not move. Every later wave compares back to it. Change the baseline indicators after fielding, and every comparison loses its anchor.

The lifecycle role of a baseline survey

Day 0 sets the reference. Every later wave compares back to it.

A baseline survey is not the only measurement. It is the first measurement, and the one that anchors all the others. Programs sometimes treat the baseline as a one-time setup task. The lifecycle view below shows what the baseline actually does: it stays fixed across the entire program timeline so every later wave has something to compare against.

Program timeline · Day 0 anchor and four comparison waves
Day 0 · Anchor

Baseline survey

The first measurement. Locks the indicators, frames the population, assigns identifiers, captures the state before any intervention. Stays fixed for the rest of the program.

Day 90 · Wave 1

First re-measurement

Same indicators. Same identifiers. Compared back to baseline values.

Day 180 · Wave 2

Mid-cycle wave

Quarterly cadence. Direction of change visible against the baseline anchor.

Day 365 · Wave 4

Year-one assessment

Annual cycle complete. Cumulative change measured against Day 0 values.

Day 730 · Wave 8

Year-two assessment

Long-cycle finding. The reference point has not moved. The change has.

Where comparison happens

Every later wave is a measurement of change against baseline, not a measurement of state. The baseline value for indicator X is fixed at Day 0; the wave value for indicator X at Day 365 has meaning only because the baseline gave it a reference point. Change the baseline after fielding (add an indicator, swap a question, redefine the population) and the comparison line breaks for that indicator across every wave that follows.

The baseline plays the same anchoring role across pre-post designs (one comparison wave) and longitudinal designs (many comparison waves). The number of waves changes; the baseline's role does not. Both downstream designs depend on the baseline being captured cleanly the first time.

Definitions

What a baseline survey is, what it captures, and how to format the report.

The terms below are the ones practitioners hear most often. The format and methodology questions matter because the baseline is the document every later report compares against. Get the format wrong and the comparison is harder for years.

What is a baseline survey?

A baseline survey is the first measurement of a program. It captures the state of participants, companies, or communities before any intervention happens. The baseline becomes the reference point that every later measurement compares against.

Without a baseline, a later survey can describe what is happening but cannot describe what has changed. The word baseline carries the structural meaning. A baseline is the line you compare back to. Whatever the program does after Day 0 (a training cycle, a portfolio investment, a policy rollout) gets measured against the baseline values, not against an absolute standard.

What does baseline survey mean?

The meaning of a baseline survey is structural rather than thematic. The meaning sits in the role the survey plays, not in any specific topic. A baseline survey on workforce readiness, a baseline survey on corporate sustainability indicators, and a baseline survey on community health all share the same meaning: each is the first measurement that anchors everything that follows.

Practitioners new to evaluation sometimes interpret baseline survey to mean a one-time exercise that is over once it is done. The opposite is true. The baseline becomes more important the longer the program runs because it is the reference point for every later finding. A baseline survey done well at Day 0 keeps producing value at Year 5; a baseline done poorly creates problems at every wave that follows. The same concept appears in older program documents written as base line survey (two words); the spelling varies, the structural meaning does not.

What is a baseline survey methodology?

Baseline survey methodology covers six decisions made before the baseline is fielded. Lock the indicators so later waves measure the same things. Choose the framework that organizes the indicators (IMP Five Dimensions, IRIS+, GRI, SASB, or a bespoke framework). Frame the population that will be re-surveyed. Assign stable identifiers from Day 0 so later waves can match. Pilot the questions for measurement validity. Document the framework version so later waves use the same definitions or formally migrate.

The methodology is not separate from the survey; the methodology is precisely what makes the survey usable as a baseline rather than a one-time snapshot. See the survey methodology guide for the full five-error framework that applies across every survey type.

What is the format of a baseline survey?

Baseline survey format follows the program. A short single-program baseline measuring one outcome construct typically runs 15 to 30 questions. A multi-dimensional impact framework baseline runs 100 indicators or more across 10 to 15 themes.

The structure that holds across formats is the same three-part anatomy. Identifiers and demographics first, so respondents can be matched at later waves. Outcome indicators second, tied to the program theory of change. Context and attribution variables third, so later waves can interpret why outcomes might have changed. The format also documents the framework version, the population frame, and the response rate by subgroup, all of which the baseline survey report references back to.
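The three-part anatomy can be sketched as a single record per respondent. This is a minimal illustration, with hypothetical field and indicator names standing in for whatever the program's own framework defines:

```python
from dataclasses import dataclass, field

@dataclass
class BaselineRecord:
    # Part 1 — identifiers and demographics: match this record at later waves.
    participant_id: str           # stable ID assigned by the program at Day 0
    cohort: str
    region: str
    # Part 2 — outcome indicators: tied to the theory of change, locked at baseline.
    outcomes: dict = field(default_factory=dict)   # e.g. {"workforce_readiness": 3.2}
    # Part 3 — context and attribution: explain why outcomes might change later.
    context: dict = field(default_factory=dict)    # e.g. {"prior_training": False}

# One record, all three layers captured together at Day 0 (illustrative values).
rec = BaselineRecord("P-0001", "2026-Q3", "north",
                     outcomes={"workforce_readiness": 3.2},
                     context={"prior_training": False})
print(rec.participant_id, rec.outcomes["workforce_readiness"])
```

The point of the single record is that context travels with values: a Year 2 analyst reading the outcome never has to hunt for the demographics or attribution variables in a separate file.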

Baseline survey vs pre-post survey vs longitudinal survey

These three terms describe the same family of designs at different scales. A baseline survey is the first measurement. A pre-post survey is a two-wave design that pairs the baseline with a single endline. A longitudinal survey runs the baseline plus multiple later waves over time.

The baseline is shared across all three. What changes is how many later waves compare back to it. A workforce training program typically uses a pre-post design (baseline at intake, post at completion). An impact fund tracking 40 portfolio companies on a sustainability framework typically uses a longitudinal design (baseline at portfolio constitution, plus quarterly waves for years). The baseline is the same kind of document; the downstream design depends on the program rhythm.

Related but different

Distinct from
Pre-post survey

A two-wave design where the baseline is the pre and a single endline is the post. The baseline-survey page is for the anchor; the pre-and-post-surveys page covers the matched design.

Distinct from
Longitudinal survey

A multi-wave design where the baseline anchors quarterly, annual, or multi-year re-measurement. Same baseline, more comparison points. See the longitudinal survey guide.

Distinct from
Needs assessment

A needs assessment scopes a problem before a program is designed. A baseline survey measures the population a program will work with after the design is set. They sometimes use overlapping questions; their roles differ.

Distinct from
Endline survey

An endline is the comparison wave that runs at program close. It uses the baseline indicators verbatim. Endlines without a matching baseline can describe a state but cannot describe change.

Six baseline survey best practices

Six principles that protect every later wave from a broken baseline.

Baseline survey best practices are not procedural niceties. Each principle below prevents a specific failure mode that breaks comparability for later waves. The cost of getting these right at Day 0 is hours; the cost of getting them wrong is years of compromised comparisons.

01 · Indicator lock

Lock the indicators before measurement

Once the first wave is in the field, the indicators are fixed.

The most consequential decision is which indicators to measure. Once the baseline ships, those indicators are the comparison points for every later wave. Adding a new indicator after baseline means later waves measure something the baseline never did, and there is no Day 0 value to compare against. Programs sometimes feel pressure to add indicators after baseline to satisfy a new funder or a new framing question. The right pattern is to add it as a new indicator family that begins at the wave it was added, not as an extension of the baseline.

Prevents: indicator drift across waves. Caught early by a written indicator-locking decision before fielding.
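The locking rule can be enforced mechanically at every wave. A minimal sketch with invented indicator names, in which any value not on the Day 0 list is routed into a new indicator family rather than retrofitted into the baseline:

```python
# Locked at Day 0 and never edited after fielding (illustrative names).
LOCKED_INDICATORS = {"emissions_tonnes", "pct_diverse_workforce", "board_independence"}

def split_submission(submission: dict) -> tuple[dict, dict]:
    """Separate locked-indicator values from new-family additions."""
    locked = {k: v for k, v in submission.items() if k in LOCKED_INDICATORS}
    new_family = {k: v for k, v in submission.items() if k not in LOCKED_INDICATORS}
    return locked, new_family

# A later wave arrives carrying an indicator added after baseline.
wave4 = {"emissions_tonnes": 120.0, "pct_diverse_workforce": 0.41,
         "supplier_audits": 3}   # no Day 0 value exists for this one
locked, new_family = split_submission(wave4)
print(sorted(locked))       # compared back to the baseline values
print(sorted(new_family))   # baselined at the wave it first appeared
```

The split makes the policy visible in the data itself: locked indicators always have a Day 0 comparison point, and additions carry their own baseline from the wave they joined.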

02 · Anatomy

Capture demographics, outcomes, and context together

Three-part structure, every baseline.

A baseline survey has three layers. Identifiers and demographics so respondents can be matched at later waves. Outcome indicators tied to the program theory of change. Context and attribution variables that explain why outcomes might change. Programs that capture only outcomes find themselves at Year 2 unable to interpret what changed because the context variables were never measured at baseline. Programs that capture only demographics produce a participant directory rather than a measurement anchor.

Prevents: uninterpretable change. All three layers go in at Day 0.

03 · Identifiers

Use stable identifiers from Day 0

Every later wave matches back via the identifier.

The identifier is what lets the Day 365 record connect to the Day 0 record. Email is fragile (people change jobs). Phone is fragile (numbers change). Name is fragile (Sarah Johnson becomes S. Johnson, then becomes Sarah Johnson-Smith). The cleanest identifier is one assigned at baseline by the program itself: a participant ID, a portfolio company ID, a household ID. The identifier travels through every later wave, so matching across waves becomes a join, not a manual reconciliation.

Prevents: identifier drift across waves. See longitudinal survey for multi-wave matching.
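With a program-assigned ID, the cross-wave match really is a join. A small pandas sketch with made-up values, assuming both waves carry the same participant_id column:

```python
import pandas as pd

# Day 0 values, keyed by the stable ID the program assigned at baseline.
baseline = pd.DataFrame({"participant_id": ["P-001", "P-002", "P-003"],
                         "readiness": [2.1, 3.4, 2.8]})
# Day 365 values for the same population, same ID.
wave4 = pd.DataFrame({"participant_id": ["P-001", "P-002", "P-003"],
                      "readiness": [3.0, 3.6, 3.5]})

# The stable ID turns cross-wave matching into a join, not a reconciliation.
merged = baseline.merge(wave4, on="participant_id",
                        suffixes=("_day0", "_day365"))
merged["delta"] = merged["readiness_day365"] - merged["readiness_day0"]
print(merged[["participant_id", "delta"]])
```

Had the join key been email or name, every record that changed jobs or spelling would drop out silently; with a program-owned ID, attrition shows up only when a participant genuinely did not respond.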

04 · Framework

Document the framework version

IRIS+ v5 is not IRIS+ v6. Write down which one.

Most baselines organize indicators around a published framework. IMP Five Dimensions of Impact, IRIS+ from the GIIN, GRI Standards, SASB Standards, TCFD for climate, or a bespoke framework. Frameworks have versions. IRIS+ v5 differs from v6 in indicator definitions. SASB updates standards by sector. The framework version captured at baseline becomes the version every later wave either matches or formally migrates from. Undocumented framework versions create silent drift that surfaces only when a Year 2 reviewer asks why the indicator definition changed.

Prevents: silent indicator drift through framework updates. Documented at the baseline survey report.

05 · Population frame

Survey the population that will be re-surveyed

Baseline frame and re-survey frame are the same population.

The baseline frame defines who counts as the program. If the baseline samples 40 portfolio companies and Year 1 samples 28 because some exited, the comparison is valid only against the 28 that appeared in both waves. Programs sometimes baseline a wide frame, then re-survey only a convenient subset, then report change as if the full frame were tracked. The honest report names which subset the change refers to. Better: frame the baseline with the re-survey population in mind, and follow nonresponse aggressively at every later wave.

Prevents: attrition bias mistaken for change. Caught early at frame design.
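The same-frame rule can be made explicit in analysis code. A sketch with an invented 40-company frame that restricts the comparison to IDs present in both waves and names the attrition rather than hiding it:

```python
# Frame closed at Day 0: 40 company IDs (illustrative).
baseline_frame = {f"C-{i:03d}" for i in range(1, 41)}
# Year 1 responses: only 28 of the original companies submitted.
wave_year1 = {f"C-{i:03d}" for i in range(1, 29)}

matched = baseline_frame & wave_year1    # the only valid comparison set
attrited = baseline_frame - wave_year1   # must be named, not hidden

print(f"Change measured over {len(matched)} of {len(baseline_frame)} companies; "
      f"{len(attrited)} attrited.")
```

Reporting the matched count alongside the change figure is what keeps composition change from masquerading as program effect.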

06 · Reporting

Report the baseline as a finding, not just an anchor

The baseline report tells you where the program starts.

A baseline survey report is a finding in itself. It tells the funder, the board, or the portfolio committee where the program starts. It describes the population, the indicators, the response rate by subgroup, the framework version, and the values. Treating the baseline as administrative overhead (something to clear so real reporting can begin at Wave 1) misses the point. The Day 0 state is information. A baseline report template that surfaces this state cleanly does double duty: it stands alone as a starting-point finding, and it becomes the reference document every later report compares against.

Prevents: baselines that are captured but never read. Required regardless of program length.

The baseline decision matrix

Six baseline survey decisions. The broken way and the working way.

Each row below names a baseline survey methodology decision and the failure mode it controls. The broken-way column describes the workflow that compounds into broken comparability across years. The working-way column describes the practice that protects every later wave. The what-this-decides column names what the choice actually controls in every later report.

The choice
Broken way
Working way
What this decides

Indicator selection

Which metrics get measured at Day 0.

Broken

Indicators chosen by whoever has time to draft them. Some come from a previous program, some from a funder template, some from a copied questionnaire. The list is never formally locked. Six months in, a new request adds three indicators that have no Day 0 value.

Working

Indicators selected against the program theory of change. The list is formally locked in a baseline methodology document. New indicator requests after Day 0 get added as a new family with their own baseline at the wave they appeared, not retrofitted into the original baseline.

Whether later waves can compare back to baseline cleanly or have to footnote which indicators were added when.

Framework choice

Which standard organizes the indicators.

Broken

Framework is implicit. Indicators are named after themes (workforce, environment, governance) without referencing any published standard. Two years later, a reviewer asks which framework the indicators come from, and nobody can answer cleanly.

Working

Framework is named explicitly: IMP Five Dimensions, IRIS+, GRI, SASB, or a documented bespoke framework. The framework version is recorded in the baseline survey report so later waves use the same definitions or formally migrate.

Whether the baseline can be audited against an external standard or has to be defended on its own terms at every funder review.

Identifier strategy

How records get matched at later waves.

Broken

Email is the identifier. People change jobs. Companies change registered domains. By Wave 4 the matching team is reconciling 200 records by hand, and 30 cannot be matched at all.

Working

Stable identifier assigned at baseline by the program itself: a participant ID, a portfolio company ID, a household ID. The identifier travels through every later wave so matching across waves becomes a join, not a manual reconciliation.

Whether multi-wave analysis runs in minutes or weeks, and whether attrition gets named or hidden.

Population frame

Who counts as the program at Day 0.

Broken

The frame is whoever was around when baseline fielding happened. Late enrollees missed it; the frame is later expanded informally without documentation. Year 1 attrition gets reported as if every later wave were tracking the original frame.

Working

Frame defined formally at baseline: 40 portfolio companies as of Q3 2026, plus rolling additions logged separately. Every later wave reports against the original frame plus an explicit rolling addition layer. Attrition vs. addition is named.

Whether change at Year 1 reflects actual program effect or composition change in who is being measured.

Question piloting

Whether questions get tested before fielding.

Broken

Survey launches the same day questions are finalized. Confusing items surface when 30 baseline responses arrive with the same write-in clarification. Fixing the question now means losing comparability with the responses already collected.

Working

Pilot with 5 to 10 respondents from the target population. Cognitive interviews ask each respondent what each question meant to them. Wording fixed before launch, so the baseline ships with measurement validity intact.

Whether baseline values actually measure what they claim to measure or capture confusion masquerading as data.

Documentation

What the baseline survey report records.

Broken

No formal baseline survey report exists. The data sits in a spreadsheet. Two years later, the methodology gets reconstructed from emails, with framework version and population frame approximated rather than recovered cleanly.

Working

Baseline report written within weeks of fielding. Reports the population, the indicators, the framework version, the response rate by subgroup, and the values. Stands alone as a finding and serves as the reference document every later report compares against.

Whether the baseline is a readable, audit-ready document or a spreadsheet plus folklore.

Compounding effect

Errors in baseline decisions compound across every later wave. A baseline with weak indicator selection produces ambiguous comparison at Wave 1, ambiguous comparison plus framework drift at Wave 4, and a Year 2 report that has to caveat half its findings. The first decision controls everything downstream, which is why the baseline is the highest-leverage measurement in the program lifecycle, even though it captures only the starting state.

A worked example · Baseline survey examples

A 40-company corporate baseline on a social impact and sustainability framework.

An impact fund constitutes a portfolio of 40 companies. Before the fund's first year of active engagement, the team baselines every company against a multi-dimensional social impact and sustainability framework. The baseline becomes the reference point for quarterly tracking and for any sub-program evaluation that runs inside a portfolio company. The deeper multi-wave workflow lives in the longitudinal survey guide, and the workforce sub-program path is in the pre-and-post-surveys guide.

We had to baseline forty companies before we could rate any of them. The pressure was to skip ahead and start scoring, but every funder review eventually asks the same question: what was the starting point? Without a clean baseline, every later quarterly update is only a snapshot of where companies are, not a measurement of whether the portfolio is improving. We spent six weeks locking the indicators, mapping the framework to IRIS+ and SASB, and assigning portfolio company IDs. The baseline report came out at week ten and it has anchored every quarterly wave since.

Impact fund portfolio analytics lead, end of baseline cycle

The 40-company baseline anatomy

Population frame

40 portfolio companies as of fund constitution date. Frame closed on a specific quarter; rolling additions in later waves logged separately so attrition vs. addition can be named honestly.

Framework

Hybrid framework: IMP Five Dimensions for impact, IRIS+ v5 for indicator definitions, SASB for sector-specific environmental indicators. Framework version documented in the baseline report so every quarterly wave uses the same definitions or formally migrates.

Indicators

~120 indicators across 12 themes: emissions, water, waste, biodiversity, workforce diversity, community investment, supply chain labor practices, customer welfare, board governance, ethics, transparency, and impact-on-stakeholder pathways. Each indicator locked at Day 0.

Identifier strategy

Portfolio company ID assigned by the fund at constitution. Travels with every later wave. Not dependent on company name, registered domain, or contact email, all of which can change without warning.

Data mix

Quantitative metrics (emissions tonnes, percent diverse workforce), qualitative narrative (DEI strategy description, climate transition plan), and supporting documents (sustainability reports, board diversity disclosures). All three captured in a single submission per company so context travels with values.

Output

Baseline survey report at week ten: a per-company scorecard plus portfolio-level findings on where the 40 companies sit across the 12 themes. The report stands alone as a starting-state finding for the LP committee and serves as the reference document every quarterly wave compares against.

BASELINE-FIRST PRODUCES

A defensible Year-1 improvement number

Quarterly waves compare against a fixed Day 0 anchor. The portfolio-level claim "indicator X improved by Y percent" has a reference point reviewers can verify against the baseline report.

Sub-program evaluations that connect

When a portfolio company runs an internal workforce training program, the company's baseline values are already in the system. The sub-program's pre-post evaluation pulls Day 0 values without re-collecting them.

A documented framework version

When IRIS+ v6 ships, the team can decide whether to migrate or stay on v5. The decision is informed because v5 is documented as the baseline version. Migration when it happens is formal, not silent.

Quarterly waves that take days, not weeks

Identifier matching is automated through portfolio company IDs. Each quarterly wave updates values against the locked indicator list. Reporting the delta against Day 0 is a join, not a manual reconciliation.

BASELINE-LAST PRODUCES

Quarterly snapshots without comparison

Each wave reports where companies sit, but the team cannot say whether the portfolio is improving because the Day 0 reference does not exist or is incomplete. LP committees ask whether the fund is having impact, and the answer has to be hedged.

Sub-programs that re-collect baseline data

Every workforce training program inside a portfolio company has to do its own baseline because no portfolio-level baseline exists. Companies get surveyed twice for the same things. Response fatigue rises across the portfolio.

Framework drift discovered at audit

The framework version was never recorded. At Year 2 audit, half the indicators have shifted definitions because the team migrated to a new IRIS+ release without flagging it. The audit caveats half the findings.

Quarterly waves that take weeks

No stable identifier exists at the company level. Each wave starts with a manual reconciliation: matching this quarter's submissions against last quarter's records. Three companies always end up unmatched. The team spends more time matching than analyzing.

Why baselines run better in Sopact, structurally

Sopact Sense was designed to make the baseline a living anchor rather than a one-time spreadsheet. Stable participant or portfolio-company IDs are assigned at intake. Indicators are locked in the system schema, so adding a new one is a deliberate decision, not an accidental edit. Quantitative, qualitative, and document data live in one submission per record. Quarterly waves connect to the baseline through the same architecture, not through reconciliation work. The 40-company baseline above is built once and stays usable for the life of the fund.

Baseline survey in practice

How a baseline survey extends into pre-post, longitudinal, and grant-reporting designs.

The baseline anchors several downstream designs. The three contexts below differ in the rhythm and number of comparison waves, but each treats the baseline as a fixed reference rather than a moving target. The architecture is the same; the cadence differs.

01 · Pre-post extension

Workforce sub-program pre-post

One comparison wave. Short cycle. Discrete intervention.

The setup. One of the 40 portfolio companies in the corporate baseline runs an internal workforce training program for 250 frontline employees. The program lasts twelve weeks. The company wants to measure whether the training moved the needle on workforce-readiness indicators that were already part of the corporate baseline.

What goes wrong without a baseline. The training team designs its own pre-test on the day the program starts. The indicators do not match the corporate baseline, so portfolio-level rollup later cannot connect the workforce intervention to the company-level indicators. The training shows positive results in its own report; the impact fund cannot use those results in portfolio reporting.

What works. The training program pulls its pre-survey indicators from the corporate baseline schema, captures additional training-specific items as a new indicator family, and runs a single post-survey at twelve weeks. The corporate baseline's indicators are the pre; the post is fielded against the same workforce population. The full workflow lives in the pre-and-post-surveys guide.

A specific shape

250-employee training program inside one portfolio company, 12-week cycle. Pre-survey reuses corporate baseline schema for workforce-readiness indicators. Post-survey at week 12 produces a clean change measurement that rolls up into the portfolio-level finding.

02 · Longitudinal extension

Impact fund quarterly tracking

Many comparison waves. Long cycle. Continuous improvement.

The setup. The impact fund tracks all 40 portfolio companies on the same baseline framework every quarter for the life of the fund. The reporting question is not whether one intervention worked but whether the portfolio is improving across themes over time. LP committees expect quarterly updates.

What goes wrong without a baseline. Quarterly waves describe where companies are this quarter, but the team cannot describe whether the portfolio is improving because the reference point keeps drifting. Companies get added and removed without formal frame logging. By Year 2 the team has eight waves of data and no defensible improvement claim.

What works. The corporate baseline anchors every quarterly wave. The framework version is locked. Portfolio company IDs travel through every wave. New additions get logged as a separate frame with their own baseline at the wave they joined, and the original 40 are tracked against their Day 0 values continuously. The full multi-wave methodology lives in the longitudinal survey guide.

A specific shape

40-company portfolio, quarterly cycle, 8 waves over two years. Each wave compares back to Day 0 baseline values. Improvement claims at the LP committee carry a defensible reference point. Framework migrations between waves are formal events, not silent drift.

03 · Grant reporting

Government and foundation grant baseline

Statutory reporting. Audit-ready documentation. Multi-year cycle.

The setup. A grantee receives multi-year funding and must report against a baseline established in Year 1. Government workforce funders (WIOA, sector partnerships) require this. Foundation funders increasingly require it too. The baseline survey report becomes the document the grant program references in every annual report.

What goes wrong without a baseline. The Year 1 report describes program activities but cannot describe outcome change because no Day 0 reference exists. By Year 3, the funder asks for cumulative outcome data and the grantee cannot produce it cleanly. The grant ends with strong activity reporting and weak outcome reporting.

What works. A formal baseline survey report at Year 1, structured as a baseline report template the funder can review. Stable participant identifiers from intake. Annual waves that compare back to baseline values. Methodology documentation that survives staff transitions, since most multi-year grants outlast the original program manager.

A specific shape

Three-year workforce sector partnership grant, 800 participants over the cycle. Year 1 baseline report stands alone as a deliverable. Years 2 and 3 reports compare back to Year 1 values across the same locked indicators. Annual reporting is a join, not a rebuild.

Baseline survey tools

Where general survey tools end and baseline-anchor architecture begins.

SurveyMonkey · Google Forms · Qualtrics · Excel and CSV · Sopact Sense

General survey tools handle baseline collection competently. The architectural gap shows up at the second wave. A baseline captured in a tool that does not own a persistent participant or company record produces a one-time spreadsheet, not a living anchor. Every later wave has to be re-matched manually because the baseline tool exported a flat file and the analysis tool imports a different flat file. Identifier drift, framework version drift, and indicator drift all enter through the seam between tools.

Sopact Sense was designed so the baseline is the same record every later wave updates against, not a separate file. Stable participant or portfolio company IDs are assigned at baseline. Indicators live in a locked schema, so adding one is a deliberate decision that gets logged. Quantitative metrics, qualitative narrative, and supporting documents live together in one record per participant. Quarterly or annual waves connect to the baseline through architecture rather than reconciliation. The baseline becomes a living anchor for the life of the program.

FAQ

Baseline survey questions, answered.

The most common questions about baseline surveys, what they capture, what the report looks like, and how they connect to pre-post and longitudinal designs. Each answer is mirrored verbatim in the page schema for AI Overview surfacing.

Q.01What is a baseline survey?

A baseline survey is the first measurement of a program. It captures the state of participants, companies, or communities before any intervention happens. The baseline becomes the reference point that every later measurement compares against. Without a baseline, a later survey can describe what is happening but cannot describe what has changed.

Q.02What does baseline survey mean?

Baseline survey meaning is straightforward: it is the first measurement that anchors every later measurement. The word baseline carries the structural meaning. A baseline is the line you compare back to. Whatever happens after the baseline (a training program, a portfolio investment, a policy change) gets measured against the baseline values, not against an absolute standard.

Q.03What is a baseline survey methodology?

Baseline survey methodology covers six decisions made before the baseline is fielded: which indicators to lock, which framework to use, how to frame the population, how to assign stable identifiers, how to pilot the questions, and how to document the framework version. Each decision controls whether later measurements can be compared back to the baseline cleanly.

Q.04 What is the format of a baseline survey?

Baseline survey format follows the program. A short program might use a 20-question single-instrument baseline. A multi-dimensional impact framework might run 100 to 150 indicators across 10 to 15 themes. The structure that holds across formats is the same: identifiers and demographics first, locked outcome indicators second, context and attribution variables third. The format also documents the framework version so later waves can match.
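That three-block structure (identifiers first, locked indicators second, context third) can be written down as a simple data sketch. Every question name and framework label below is illustrative, not a prescribed schema:

```python
# Illustrative instrument layout only; question names are hypothetical.
baseline_instrument = [
    {"section": "identifiers_demographics",    # first: enables matching at later waves
     "questions": ["participant_id", "age_band", "gender", "region"]},
    {"section": "locked_outcome_indicators",   # second: tied to the theory of change
     "questions": ["job_readiness_score", "monthly_income", "employment_status"]},
    {"section": "context_attribution",         # third: explains why outcomes might change
     "questions": ["prior_training", "household_size", "other_programs"]},
]

# The framework version is documented alongside the instrument so later waves can match.
instrument_meta = {"framework": "IMP Five Dimensions", "version": "v1.0"}

print([s["section"] for s in baseline_instrument])
```

The same three-block ordering holds whether the instrument is 20 questions or 150 indicators; only the middle block scales.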

Q.05 What questions should a baseline survey ask?

Baseline survey questions fall into three groups. First, identifiers and demographics so respondents can be matched at later waves. Second, outcome indicators tied to the program theory of change. Third, context and attribution variables that explain why outcomes might change. The exact questions depend on the framework. A baseline tied to the IMP Five Dimensions of Impact differs from one tied to GRI or SASB, but every baseline includes all three groups.

Q.06 What is a baseline survey report?

A baseline survey report presents the findings from the baseline measurement. It reports baseline values for every locked indicator, describes the population framed, documents the framework version, names the response rate by subgroup, and serves as the formal reference point that every later report compares against. A baseline report template typically includes an executive summary, methodology section, indicator-by-indicator findings, sub-population breakouts, and a documentation appendix.

Q.07 How is a baseline survey different from a pre-post survey?

A baseline survey is the first measurement. A pre-post survey is a two-wave design that pairs a baseline with a single endline measurement after a discrete intervention. The baseline is the structural foundation; the pre-post is one downstream design that uses it. A workforce training program typically uses pre-post, where the baseline runs at intake and the post runs at completion. The baseline-survey page is for understanding the anchor; the pre-and-post-surveys page covers the matched two-wave design.

Q.08 How is a baseline survey different from a longitudinal survey?

A baseline survey is the first measurement. A longitudinal survey runs multiple waves over time, with every wave comparing back to the baseline. The baseline is the same anchor; the longitudinal design extends it with regular re-measurement at quarterly, annual, or multi-year intervals. An impact fund tracking 40 portfolio companies on a sustainability framework typically runs a baseline followed by quarterly re-measurement, which is a longitudinal extension of the baseline.

Q.09 When does a baseline survey run?

A baseline survey runs at Day 0, before any intervention. For a workforce training program, that is intake. For an impact fund, that is the moment the portfolio is constituted. For a community program, that is before the program launches. Running the baseline after the intervention has started is one of the most common methodology errors because it captures a contaminated state, not the pre-intervention state, and breaks the comparison every later measurement depends on.

Q.10 What is baseline data collection?

Baseline data collection is the fieldwork phase that captures the locked indicators across the full population frame at Day 0. It assigns stable identifiers so later waves can match, documents the framework version in use, captures both quantitative metrics and qualitative narrative where the indicators require it, and produces the dataset that becomes the formal anchor for the program. Baseline data collection is one of the four steps that the quality of every later measurement depends on, alongside indicator selection, identifier strategy, and documentation.
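The stable-identifier matching this answer describes can be sketched with plain Python sets. The IDs and scores below are invented for illustration:

```python
# Hypothetical datasets: each row is keyed by the stable ID assigned at baseline.
baseline = {"P-001": {"score": 40}, "P-002": {"score": 55}, "P-003": {"score": 62}}
wave1    = {"P-001": {"score": 58}, "P-003": {"score": 70}, "P-004": {"score": 45}}

# Change can only be computed for respondents present in both waves.
matched = sorted(baseline.keys() & wave1.keys())
deltas = {pid: wave1[pid]["score"] - baseline[pid]["score"] for pid in matched}

# Unmatched respondents are reported, not silently dropped:
# P-002 was lost to follow-up; P-004 has no baseline, so no change exists for it.
lost_to_followup = sorted(baseline.keys() - wave1.keys())
no_baseline = sorted(wave1.keys() - baseline.keys())

print(deltas)           # {'P-001': 18, 'P-003': 8}
print(lost_to_followup) # ['P-002']
print(no_baseline)      # ['P-004']
```

Without stable IDs assigned at Day 0, the `matched` set cannot be built and a later wave can only describe a state, not change.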

Q.11 How do I report baseline findings?

Baseline findings are reported as a state, not a comparison. The baseline survey report describes where the population sits at Day 0 against the framework. It does not yet describe change because no comparison wave exists. The report names the framework, the population, the indicators, the response rate by subgroup, and the values. It is the document every later report compares against. A baseline report template is structured to be read both as a standalone finding and as the reference against which every later report is interpreted.

Q.12 What are baseline survey best practices?

Six baseline survey best practices hold across every program. Lock the indicators before measurement begins. Capture demographics, outcomes, and context together. Use stable identifiers from Day 0 so later waves can match. Document the framework version. Survey the population that will be re-surveyed. Report the baseline as a finding in itself, not only as an anchor for later comparison. Each practice prevents a different failure mode that breaks the comparison every later measurement depends on.

Q.13 Can I add new questions after the baseline?

Adding questions after the baseline breaks comparability for those questions. The baseline measurement does not exist for them, so later waves describe a state but cannot describe change. Programs sometimes need to add questions, and the right pattern is to document the addition as a new indicator family that begins at the wave it was added, rather than treating it as part of the original baseline. The original locked indicators continue to compare back cleanly.
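The "new indicator family" pattern described above can be sketched as a registry that records the wave each indicator began at. All names below are hypothetical:

```python
# Hypothetical indicator registry: each indicator records the wave it began at.
INDICATORS = {
    "job_readiness": "baseline",   # original locked indicator
    "digital_skills": "wave2",     # added later -> a new indicator family
}
WAVE_ORDER = ["baseline", "wave1", "wave2", "wave3"]

def comparable_to_baseline(indicator: str) -> bool:
    """Change vs. Day 0 exists only for indicators measured at the baseline."""
    return INDICATORS[indicator] == "baseline"

def first_comparison_wave(indicator: str) -> str:
    """A late addition anchors to the wave it was added, not to Day 0."""
    return WAVE_ORDER[WAVE_ORDER.index(INDICATORS[indicator])]

print(comparable_to_baseline("job_readiness"))   # True
print(comparable_to_baseline("digital_skills"))  # False
print(first_comparison_wave("digital_skills"))   # wave2
```

The registry makes the documentation automatic: any report can state which comparisons run back to Day 0 and which begin at a later wave.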

Q.14 How do baseline surveys connect to impact frameworks?

Baseline surveys are typically organized around an impact framework. Common open frameworks include the IMP Five Dimensions of Impact, IRIS+ from the GIIN, GRI Standards, SASB Standards, and TCFD for climate. The framework decides what gets measured. The baseline captures values for every indicator in the chosen framework. The framework version gets documented at baseline so later waves use the same framework or formally migrate. An impact fund baseline on a multi-dimensional sustainability framework typically captures 100 to 150 indicators across 10 to 15 themes.

Q.15 How long should a baseline survey be?

Baseline survey length follows the framework. A single-program baseline measuring one outcome construct (job readiness, financial literacy, agricultural yield) typically runs 15 to 30 questions. A multi-dimensional impact baseline measuring across themes (workforce, environment, governance, community) runs 100 indicators or more. The constraint is response burden against the population. A multi-day onboarding survey for corporate portfolio companies tolerates 100 indicators; a one-touch participant intake survey does not. The methodology is the same; the format scales to the program.

Related survey design guides

Where the baseline goes after Day 0.

The two cards below cover the downstream designs that the baseline anchors. The remaining four cover the methodology layer and the question types that shape what the baseline asks.

Bring your baseline draft

Lock the indicators before the first wave goes out.

Bring a draft baseline (a question list, a framework you are mapping to, or a previous baseline you want to refresh). We walk it against the six baseline survey best practices, name the indicators that should be locked vs. flagged, and sketch the architecture that lets one baseline anchor pre-post sub-programs and longitudinal portfolio tracking from the same record. No procurement decision required.

Format

A 60-minute working session, screen-share. Founder, not a sales rep.

What to bring

A draft baseline, an existing baseline you want to refresh, or the framework you plan to map indicators against.

What you leave with

A locked indicator list against the six best practices, plus the workflow sketch for connecting the baseline to pre-post or longitudinal designs downstream.