
Longitudinal survey: definition, types, design, and software

What a longitudinal survey is, the main types (panel, cohort, trend), how to design one that actually produces longitudinal data, and how dedicated longitudinal-survey platforms compare with general-purpose survey tools.

Updated May 2, 2026

A cross-sectional survey is a snapshot. A pre-and-post survey is a bridge. A longitudinal survey is the film.

A longitudinal survey questions the same respondents at multiple time points and links every response to a persistent identifier. Three or more waves where each individual answers more than once is the general pattern; the two-wave pre-and-post is the recognizable special case. What separates a longitudinal survey from a survey run repeatedly is whether the same people come back, and whether their responses across waves are connected.

This page covers the definition, the three main types (panel, cohort, trend), the six design decisions that determine whether the data is actually longitudinal, and an honest comparison of the platforms that run them. Real examples come from program evaluation, public health follow-up, and academic social research.

On this page
01 The three patterns, side by side
02 Six definitions readers ask
03 Six structural realities
04 Six design decisions, walked through
05 Two designs, same program
06 Platforms compared honestly
Three patterns, side by side

Cross-sectional, pre-and-post, longitudinal: the same family, three positions

Cross-sectional, pre-and-post, and longitudinal sit on a continuum of the same survey design family. What changes across the three is the number of waves, whether the same respondents come back, and which questions the resulting data can answer. A reader trying to figure out which one their question calls for is really asking two structural questions: how many waves, and the same people each time or different ones?

The snapshot

Cross-sectional survey

Different respondents, one moment.

Same respondents

No

Each respondent answers once.

Number of waves

One

A single round of data collection.

What it can answer

How groups differ today

Comparisons across people, not within people.

Example

Annual customer satisfaction survey to a fresh sample.

The bridge

Pre-and-post survey

Same respondents, two moments.

Same respondents

Yes

Linked by a persistent identifier across the two waves.

Number of waves

Two

Before-and-after, around a specific event.

What it can answer

Did the average change between two specific moments

Two endpoints, no trajectory shape.

Example

Skill assessment at the start and end of a training program.

The film

Longitudinal survey

Same respondents, three or more moments.

Same respondents

Yes

Persistent ID links every response across every wave.

Number of waves

Three or more

Calendar-anchored or event-anchored across time.

What it can answer

When and how individuals changed across time

Trajectory shape, individual variation, who changed faster.

Example

Workforce-training cohort surveyed at intake, end of program, six months out, and twelve months out.

The pattern

The three are stages of the same design family, not separate categories. A cross-sectional survey becomes a pre-and-post survey when you re-contact the same respondents once; a pre-and-post becomes a longitudinal survey when you add a third wave. Most teams who run a longitudinal survey grew into it from one of the other two. The rest of this page covers how to design and run a longitudinal survey when you start there from day one. The pre-and-post page covers the two-wave special case.

Definitions

Six questions readers ask first

The longitudinal-survey vocabulary spans academic survey methodology, applied program evaluation, and the commercial research industry. The same word means slightly different things in each. The six answers below cover the question forms that bring readers to this page, in plain language and with the typology terms (panel, cohort, trend) named where it matters.

What is a longitudinal survey?

A longitudinal survey is a survey that questions the same respondents at multiple time points, with each respondent linked across waves by a persistent identifier so that change within each respondent can be measured. The defining feature is that the same individual answers the survey more than once, and the answers are connected.

A questionnaire run repeatedly to different respondents is not a longitudinal survey; it is a repeated cross-section. The longitudinal version requires same-respondent linkage across waves, which is what makes within-person change measurable. The number of waves can be two (the pre-and-post special case) or three or more (the general longitudinal case), but two waves is sometimes treated as a separate category in survey methodology textbooks.

What is the meaning of a longitudinal survey?

Longitudinal in survey research means same respondents, multiple time points. The longitudinal label applies when each individual contributes more than one response and those responses are linked. The contrast is cross-sectional: different respondents, one moment. The contrast is also repeated cross-section: different respondents, multiple moments.

Only the same-respondent design earns the longitudinal label, because only that design lets the analyst answer questions about how individuals changed rather than how different populations differ. The word longitudinal comes from the same root as longitude: across the long axis of time, for the same unit of observation.

What is the longitudinal survey definition?

A longitudinal survey is defined as a survey design in which a fixed set of respondents (the panel or cohort) is questioned at two or more time points, with responses from the same person across waves linked by a persistent identifier. Some authors require three or more waves before applying the longitudinal label and call the two-wave case pre-and-post.

The structural requirement, regardless of wave count, is same-respondent linkage. Without that, the data is repeated cross-sections, not longitudinal. The link between waves is what permits the analytical methods (mixed-effects models, growth curves, generalized estimating equations) that a longitudinal dataset enables.

What is an example of a longitudinal survey?

Three well-known longitudinal surveys at the academic scale: the National Longitudinal Survey of Youth 1979 (NLSY79) has tracked the same 12,686 Americans since 1979, originally annually and now biennially, on labor market and life-course outcomes. The British Household Panel Survey (BHPS, now Understanding Society) has tracked UK households annually since 1991. The English Longitudinal Study of Ageing (ELSA) has tracked the same older adults every two years since 2002 on health, finances, and well-being.

At the applied scale: a workforce-training cohort tracked at intake, end of program, six months out, and twelve months out is a common four-wave longitudinal survey. A patient cohort tracked monthly across a 24-month chronic-condition study is another. The structure (same people, multiple waves, persistent ID) is identical; what differs is the duration, the wave timing, and the sample size.

What are the types of longitudinal surveys?

Three main types. Panel surveys interview the same panel of respondents at every wave, often with replenishment for attrition. The panel is recruited once and followed across the study, sometimes for decades.

Cohort surveys follow a defined cohort (typically defined by birth year or program-entry date) across time without adding new members. The cohort is fixed at the start; sample size only decreases.

Trend surveys (repeated cross-sections) sample new respondents each wave from the same target population, asking the same questions. Trend surveys are sometimes counted as longitudinal in the loose sense, but they do not have same-respondent linkage and so cannot measure within-person change. Of the three, only panel and cohort surveys produce true longitudinal data.

Pre-and-post surveys are a two-wave special case of either panel or cohort design. The pre-and-post page covers that case in detail.

What is a longitudinal panel survey?

A longitudinal panel survey is the most common form of longitudinal survey. A panel of respondents is recruited at the start of the study and re-contacted for every subsequent wave, sometimes with replenishment when attrition reduces the panel below a usable size. Each respondent has a panel ID that links their responses across waves.

National statistical agencies use rolling panels (the BHPS, the German Socio-Economic Panel, the US Panel Study of Income Dynamics) where panel members may stay for decades. In applied research, panels are often shorter (one to five years) and tied to a specific question. The panel terminology is sometimes used interchangeably with cohort terminology, but in strict survey methodology a cohort shares a defining starting condition while a panel is any recruited group.

What it is not

Four survey types that get confused with the longitudinal survey

These four overlap with the longitudinal survey on the surface and trip up readers who are deciding which design fits their question. Each is genuinely different, and the difference matters for what the data can answer.

Cross-sectional survey
One moment, different people

A cross-sectional survey questions different respondents at one time point. It cannot measure within-person change. Cross-sectional surveys are sometimes used to estimate change retroactively by asking respondents about previous time points, but recall bias makes that approach weak compared with collecting waves prospectively.

Trend survey
Repeated cross-section, not longitudinal

A trend survey samples new respondents each wave from the same target population, asking the same questions. It can show how population averages shift but cannot follow individuals. Many surveys called longitudinal in industry usage are actually trend surveys; the label slips because the questionnaire is run multiple times.

Cohort study
A study, not a survey instrument

A cohort study is a research design that follows a defined cohort across time, often using surveys but also clinical exams, biomarker collection, administrative records, and observation. The cohort study is the broader research vehicle; a cohort survey is the survey instrument inside one. Both terms appear in the literature interchangeably, which causes confusion.

Pre-and-post survey
The two-wave special case

A pre-and-post survey is a two-wave longitudinal survey, before and after a specific event. Most authors include it in the longitudinal family but treat it as the simplest case. Three or more waves let you see trajectories; pre-and-post sees only endpoints. See the pre-and-post page for the dedicated coverage of that case.

Six structural realities

What every longitudinal survey has to handle

Six things are true about every longitudinal survey, regardless of whether the study runs for two waves or twenty years, or whether the platform is Qualtrics or a research-only tool. Each of the six is the kind of thing that, if missed, breaks the longitudinal value of the data even when the survey itself runs cleanly.

01. Continuity

Same respondents across waves is the defining feature

Without continuity, the data is repeated cross-sections.

A questionnaire run multiple times to different people is not a longitudinal survey. The longitudinal label requires that the same individual answers at each wave and that those answers are linked. Many surveys labelled longitudinal in industry usage fail this test because the panel is replaced or the link is lost between waves.


What this changes: the questions the data can answer. Same-respondent design enables within-person analysis; repeated cross-section does not.

02. Stability

Question stability is what makes change measurable

Comparable across waves, or no comparison.

If the same construct is measured with different wording at Wave 1 and Wave 3, the analyst cannot tell whether the apparent change is real change or measurement-instrument change. The core block (the questions the longitudinal analysis depends on) has to be frozen across waves. New questions can be added but not at the cost of the core.


What this changes: what counts as a meaningful update between waves. Wording changes are not minor; they break comparability.

03. Identifier

Persistent identifiers are the join key

Email and name break across waves.

The wave-to-wave link is whatever value joins one respondent's Wave 1 record to their Wave 2 record. Using email and name as the join is the default that breaks: people change addresses, change names, mistype, or use different addresses at different waves. An assigned persistent ID issued at enrollment and kept through every wave is what holds the link together.


What this changes: the percentage of records that match. With persistent ID, near 100 percent. With email and name, often 60 to 75 percent.
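
To make the join concrete, here is a minimal pandas sketch of wave-to-wave linking on an assigned ID. The file and column names are illustrative assumptions, not the export format of any particular platform.

```python
# Minimal sketch: linking two waves on an assigned persistent ID.
# File and column names are illustrative, not from any specific platform.
import pandas as pd

wave1 = pd.read_csv("wave1.csv")  # columns: participant_id, email, skill_score, ...
wave2 = pd.read_csv("wave2.csv")

# Inner join on the persistent ID: one row per respondent who answered both waves.
linked = wave1.merge(wave2, on="participant_id", suffixes=("_w1", "_w2"))
print(f"{len(linked)} of {len(wave1)} Wave 1 respondents linked to Wave 2")

# The same join keyed on email silently drops anyone whose address changed
# between waves, which is where the 60-to-75-percent match rates come from:
linked_by_email = wave1.merge(wave2, on="email", suffixes=("_w1", "_w2"))
```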

04. Attrition

Attrition is the failure mode that compounds

Per-wave losses multiply across waves.

Typical attrition runs 10 to 30 percent per wave. A four-wave survey starting with 500 respondents and losing 20 percent per wave keeps 256 by Wave 4. Worse, the loss is rarely random: respondents who drop out tend to differ systematically from those who stay, which biases estimates. Active follow-up across multiple modes is the structural protection.


What this changes: how usable the longitudinal estimates are. Heavy attrition can make the surviving sample unrepresentative.
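
The compounding is plain arithmetic. A short Python sketch that reproduces the numbers above:

```python
# Per-wave retention multiplies across waves: 500 respondents,
# 20 percent loss per wave, four waves.
start, retention, waves = 500, 0.80, 4

n = float(start)
for wave in range(1, waves + 1):
    print(f"Wave {wave}: {round(n)} respondents")
    n *= retention  # the loss applies between this wave and the next

# Prints 500, 400, 320, 256: roughly half the panel gone by Wave 4.
```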

05. Mode

Survey mode shapes who responds

Web, phone, mail, in-person reach different populations.

A web-only longitudinal survey will systematically under-represent respondents with limited internet access. A phone-only survey hits a different demographic. The mode that worked at Wave 1 sometimes fails at Wave 3 because respondents have moved or changed devices. Mixed-mode strategies (web primary, phone or mail backup) maintain higher response across waves.


What this changes: who stays in the sample. Mode-induced attrition is invisible if you only look at total response rates.

06. Anchoring

Time anchoring governs how change is interpreted

Calendar versus event, picked once.

Wave timing is either calendar-anchored (every six months on the dot) or event-anchored (at intake, at end of program, at six-month follow-up). Calendar-anchored is simpler operationally but creates noise when participants enter at different times. Event-anchored produces cleaner change estimates but requires more individual scheduling. Picking one and sticking to it is what protects comparability.


What this changes: what the time variable means in the analysis. Calendar months and program-time months produce different estimates.
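
A minimal sketch of what event anchoring looks like in the data, with illustrative IDs and dates: program-time is computed from each participant's own intake date, not from a shared calendar date.

```python
# Event-anchored timing: program-time is computed per participant.
# IDs and dates are illustrative.
from datetime import date

intake = {"P-04812": date(2025, 2, 10), "P-04901": date(2025, 10, 3)}
wave3_response = {"P-04812": date(2025, 8, 12), "P-04901": date(2026, 4, 7)}

for pid, responded in wave3_response.items():
    program_days = (responded - intake[pid]).days
    print(f"{pid}: {program_days} days since intake")

# Both respondents sit near the six-month mark in program-time,
# even though their calendar response dates are eight months apart.
```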

A worked example

Two design approaches, the same workforce program

Below is the same workforce-training cohort that runs through the data and analysis sibling pages: 320 participants enrolled across a 24-month tracking period, surveyed at intake, end of program, six-month follow-up, and twelve-month follow-up. The first design used a general-purpose survey platform with the operational defaults. The second built the longitudinal structure into the design from Wave 1. The cohort and the questions were identical. The data the funder received at the end was not.

Wave 1 went out on Qualtrics in March, the same way every other survey we run goes out. 320 enrolled participants, 287 responses, 90 percent completion rate. Looked great. Wave 2, the end-of-program survey, went out on Qualtrics in June, three months later. We had 211 responses but the email-name match to Wave 1 only gave us 184 linkable records. Wave 3 in December dropped to 142 responses and 98 linked records. By Wave 4 in March of the second year, only 67 records linked across all four waves. The funder asked for trajectories. We had endpoints, sometimes, for one in five participants.

Workforce program evaluation lead, end of cohort cycle

Two designs, the same six decisions

Both designs ran the same program with the same questions to the same 320 participants across the same four waves. The only differences were how the six design decisions in section 04 were resolved.

Design 1
General-purpose survey platform with operational defaults
Sample composition

Send to the original 320 each wave. No follow-up plan for non-respondents. Some wave invitations go to people who have already dropped out, and some eligible participants get missed.

Wave timing

Calendar-fixed: March, June, December, March. Participants who joined the program in February and those who joined in October are at different program-times when each wave goes out.

Question continuity

Wave 3 added two new questions on AI-skill confidence at the funder's request. Wave 1 and Wave 2 measured the construct slightly differently from Wave 3 and Wave 4.

Tracking IDs

Email plus first name plus last name as the join. By Wave 4, three different email addresses appear for the same person across waves; some last names changed.

Attrition response

Single email reminder per wave. Anyone who did not respond by the closing date was treated as a non-response and dropped from later waves.

Mode

Web-only across all four waves. Participants without consistent email access dropped first.

Design 2
Longitudinal-by-design with structural defaults
Sample composition

Persistent panel: same 320 participants every wave. Active outreach to non-respondents. Documented exits when participants formally withdraw.

Wave timing

Event-anchored: at intake, at end of program, at six months post-program, at twelve months post-program. Each respondent is at the same program-time across waves.

Question continuity

Core block frozen at Wave 1. New questions appended at the end. The longitudinal-analysis core measures the same construct across all four waves.

Tracking IDs

Persistent participant ID (P-04812 stays P-04812 every wave). Email and name collected as contact info but not the join key.

Attrition response

Active follow-up: web invitation, phone backup at day 7, in-person or text outreach at day 14. Documented refusal codes when participants formally exit.

Mode

Mixed-mode: web primary, phone backup, in-person backup at later waves for hard-to-reach participants. Mode mix recorded so any mode-effect can be checked.

What Design 1 produced

A dataset that mostly came apart

67 of 320 participants linkable across all four waves

21 percent of the cohort. The trajectories that the analysis could examine came from one in five enrolled participants.

Per-wave matching done in spreadsheets at analysis time

Three rounds of manual matching by the analyst, totalling about two weeks of work to reconcile names, emails, and approximate-string matches.

Construct drift between Waves 1-2 and Waves 3-4

The two new questions added at Wave 3 changed the scoring of the AI-skill scale. Wave-to-wave comparison on that scale required statistical rescaling.

Funder report based on endpoints, not trajectories

The deliverable became a Wave 1 versus Wave 4 paired t-test on the linkable subset, because growth curve modelling on 67 of 320 was statistically thin and demographically skewed.

What Design 2 produced

A dataset the analysis could use

293 of 320 participants linkable across all four waves

92 percent of the cohort. Even the participants who missed one wave had their other three waves correctly linked through the persistent ID.

No matching at analysis time

The dataset arrived already in long format with one row per participant per wave. The analyst's first task was the analysis, not data preparation.

Stable construct across all four waves

Frozen core block measured AI-skill confidence the same way at every wave. The two new Wave 3 questions sat next to the core, not inside it.

Funder report based on full-cohort trajectories

The growth curve model used all 320 participants and all available waves through full information maximum likelihood. The trajectory shape was visible. Individual variation was estimable. The program effect was separable from secular trend.
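
For readers who want the shape of that model, below is a minimal sketch using a linear mixed-effects growth model in Python's statsmodels. In the same spirit as full information maximum likelihood, it uses every available wave per participant rather than dropping anyone who missed one. The file and column names are illustrative assumptions, not artifacts of the actual study.

```python
# Minimal growth-curve sketch on long-format data
# (one row per participant per wave). Column names are illustrative.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("cohort_long.csv")  # columns: participant_id, months, skill

# Random intercept and slope per participant: each person gets their own
# starting level and rate of change around the average trajectory.
model = smf.mixedlm("skill ~ months", df,
                    groups=df["participant_id"], re_formula="~months")
result = model.fit()
print(result.summary())
```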

The structural point

The two designs cost roughly the same to operate. Design 2's mixed-mode follow-up added per-wave staff time; Design 1's manual matching at the end consumed all of that and more in analyst time. The difference was in the timing: Design 1 paid the structural cost as data-recovery work after Wave 4; Design 2 paid the structural cost as design work before Wave 1. Only one of the two ended up with a dataset the longitudinal analysis could actually use. None of these decisions was retrofittable after the data was collected.

Where longitudinal surveys live

Three fields, the same instrument, different conventions

Longitudinal surveys are run by national statistical agencies, hospital and clinical-research teams, and applied program evaluators. The instrument is structurally the same: same respondents, multiple waves, persistent IDs. The sample sizes, the wave timing, the duration, and the operational stack differ enormously. Knowing the conventions in your field is what tells you whether the methods on this page need any adaptation.

01

Government and academic social research

National panels, cohort studies, household surveys. Unit: person or household. Sample: thousands to tens of thousands. Duration: years to decades.

Conventions: National statistical agencies and academic research consortia run the largest longitudinal surveys. The British Household Panel Survey, the German Socio-Economic Panel, the US Panel Study of Income Dynamics, the National Longitudinal Survey of Youth, the English Longitudinal Study of Ageing all run for decades with rolling panels of thousands of respondents. The operational standards are demanding: institutional review board approval, archival data documentation, public-use file releases, harmonized variables across decades of methodology changes.

Where decisions get made differently: Government and academic researchers face decade-scale survey-mode evolution; a panel that started face-to-face in 1979 and switched to web in 2010 has to handle the mode-effect statistically. They face institutional-memory questions about question wording from previous principal investigators. They publish data through national archives that require strict documentation of every change to the questionnaire across waves.

Common operational stack: Custom-built data-management systems, often hosted on government infrastructure or university research computing. Survey instruments built in research-only platforms (Blaise, IBM SPSS Statistics Data Collection, custom web tools). Data cleaning and harmonization done by dedicated statistical programmers in SAS, R, or Stata.

A specific shape

The NLSY79 has tracked the same 12,686 respondents since 1979, originally annually and now biennially. Forty-five years of waves, three major mode transitions, dozens of revisions to the core questionnaire, all harmonized for public-use research. The retention rate after four-and-a-half decades is roughly 80 percent of the original cohort, an outlier achievement built on continuous follow-up effort.

02

Public health and clinical follow-up

Patient cohort studies, post-trial follow-up, chronic-condition tracking. Unit: patient. Sample: hundreds to tens of thousands. Duration: months to decades.

Conventions: Public health and clinical research run longitudinal surveys alongside clinical exams, biomarker collection, and administrative records. Patient-reported outcome measures (PROMs) are increasingly mandated for chronic-condition programs, post-surgery follow-up, and quality measurement. The operational standards include institutional review board approval, HIPAA compliance, encrypted data handling, and (for trials) FDA-relevant pre-specified analysis plans.

Where decisions get made differently: Health researchers handle stratified follow-up, where high-risk patients get more frequent waves than stable patients. They handle proxy reporting (a family member completes the survey when the patient cannot). They handle censoring (the patient is alive at the end of follow-up; the event has not yet happened). They face strict ethical review when survey content covers sensitive areas.

Common operational stack: REDCap (the open-source research electronic data capture platform from Vanderbilt) dominates academic clinical research. Clinical trial sponsors use Medidata Rave, Veeva Vault EDC, and similar enterprise platforms. Quality measurement programs use Press Ganey, Qualtrics XM, or custom-built EHR-integrated tools. Statistical analysis runs in SAS (regulatory submissions) or R (academic publications).

A specific shape

A chronic-condition tracking program with 800 patients followed monthly for 24 months on patient-reported outcome scales typically uses REDCap-based survey delivery, persistent patient-identifier linking through the EHR, mixed-mode follow-up (web, phone, mail backup), and a pre-specified analysis plan invoking mixed-effects models on the longitudinal output. The retention rate at 24 months runs 60 to 80 percent depending on the population.

03

Applied program evaluation and impact tracking

Workforce, education, social-impact, and philanthropy programs. Unit: participant. Sample: tens to thousands. Duration: months to a few years.

Conventions: Applied program evaluation runs longitudinal surveys at smaller scale and shorter duration than government or clinical work. A typical workforce-training program tracks 200 to 1,000 participants across 18 to 24 months with three to five waves. The operational standards are softer (no institutional review board for non-research evaluation in many cases, though many program operators voluntarily apply similar rigor) but the funder reporting standards are firm: outcome documentation, attrition explanation, comparison-group analysis when available.

Where decisions get made differently: Program evaluators handle multiple stakeholder groups (funders, program staff, participants, board) who each need different views of the data. They handle deadline pressure that academic and clinical research rarely faces (the funder report is due in three weeks regardless of whether the analysis is finished). They handle questions about retention that are simultaneously operational (how do we keep participants engaged) and analytical (how do we estimate effects under attrition). They work with smaller samples where attrition affects statistical power directly.

Common operational stack: Many programs use the same general-purpose tools they use for one-off surveys (Qualtrics, SurveyMonkey, Google Forms) and reconcile longitudinal structure manually at analysis time. Some adopt program-evaluation-specific tools (Sopact Sense, Apricot, Salesforce-built workflows). Statistical analysis is most often R or SPSS, sometimes Excel for the smallest evaluations.

A specific shape

A workforce-training cohort of 320 participants tracked across 24 months with four event-anchored waves (intake, end of program, six-month follow-up, twelve-month follow-up) is the typical applied program-evaluation longitudinal survey. The attrition pattern, the tooling decisions, and the funder reporting deadline together determine whether the data ends up usable for the analysis the funder asked for.

Platforms compared honestly

Most platforms can deploy a survey multiple times. Few enforce longitudinal structure.

Qualtrics, SurveyMonkey, Forsta, Alchemer, Sopact Sense

The four general-purpose platforms can deliver a survey at multiple time points, store responses with respondent IDs, and export the data for downstream analysis. What they typically leave to the customer is the longitudinal-structure work: maintaining one growing record per respondent across waves, persistent-ID matching when contact details change, flagging questionnaire-version drift between waves, and reshaping the per-wave response files into a long-format analytical dataset. Many applied teams successfully run longitudinal surveys on these platforms. The trade-off is operational overhead per wave (custom variable setup, manual matching, version control done in spreadsheets) and a meaningful chance that structural problems become visible only at analysis time, often after the deliverable has been promised.
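
To make that reshaping work concrete, here is a minimal pandas sketch that stacks hypothetical per-wave export files into one long-format dataset. It only produces linked records when every wave collected the same persistent ID; the file names are illustrative.

```python
# Sketch: stacking per-wave exports into one long-format file,
# the reconciliation step general-purpose platforms leave to the analyst.
import pandas as pd

frames = []
for w in (1, 2, 3, 4):
    df = pd.read_csv(f"wave{w}_export.csv")  # each must carry participant_id
    df["wave"] = w
    frames.append(df)

# One row per participant per wave; rows only line up across waves
# when the same persistent ID was recorded at every wave.
long_df = pd.concat(frames, ignore_index=True)
long_df.to_csv("cohort_long.csv", index=False)
```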

Sopact Sense builds the longitudinal structure into the data model from day one. One growing record per participant, persistent IDs assigned at enrollment, append-only storage of every wave's response, schema versioning that flags questionnaire changes, and exports that go directly to long-format analytical files without manual reshaping. The output is a dataset that the longitudinal analysis can use without the data-preparation pipeline that consumes most of an applied analyst's time. The trade-off going the other way is a more constrained instrument-design surface than the general-purpose platforms; the structural payoff comes at analysis time.

Frequently asked

Sixteen questions about longitudinal surveys

The questions below cover the definition, the type vocabulary, the design choices, and the platform comparisons that come up most often when applied teams plan a longitudinal survey. Each answer mirrors the schema markup on this page so that what a reader sees and what a search engine sees match exactly.

Q.01

What is a longitudinal survey?

A longitudinal survey is a survey that questions the same respondents at multiple time points, with each respondent linked across waves by a persistent identifier so that change within each respondent can be measured. The defining feature is that the same individual answers the survey more than once, and the answers are connected. A questionnaire run repeatedly to different respondents is not a longitudinal survey; it is a repeated cross-section. The longitudinal version requires same-respondent linkage across waves, which is what makes within-person change measurable.

Q.02

What is the meaning of a longitudinal survey?

Longitudinal in survey research means same respondents, multiple time points. The longitudinal label applies when each individual contributes more than one response and those responses are linked. The contrast is cross-sectional: different respondents, one moment. The contrast is also repeated cross-section: different respondents, multiple moments. Only the same-respondent design earns the longitudinal label, because only that design lets the analyst answer questions about how individuals changed rather than how different populations differ.

Q.03

What is the longitudinal survey definition?

A longitudinal survey is defined as a survey design in which a fixed set of respondents (the panel or cohort) is questioned at two or more time points, with responses from the same person across waves linked by a persistent identifier. Some authors require three or more waves before applying the longitudinal label and call the two-wave case pre-and-post. The structural requirement, regardless of wave count, is same-respondent linkage. Without that, the data is repeated cross-sections, not longitudinal.

Q.04

What is an example of a longitudinal survey?

Three well-known longitudinal surveys: the National Longitudinal Survey of Youth 1979 (NLSY79) has tracked the same 12,686 Americans since 1979, originally annually and now biennially, on labor market and life-course outcomes. The British Household Panel Survey (BHPS, now Understanding Society) has tracked UK households annually since 1991. The English Longitudinal Study of Ageing (ELSA) has tracked the same older adults every two years since 2002 on health, finances, and well-being. In applied program evaluation, a workforce-training cohort tracked at intake, end of program, six months out, and twelve months out is a common four-wave longitudinal survey.

Q.05

What are the types of longitudinal surveys?

Three main types: panel surveys interview the same panel of respondents at every wave, with replenishment for attrition. Cohort surveys follow a defined cohort (typically defined by birth year or program-entry date) across time without adding new members. Trend surveys, sometimes called repeated cross-sections, sample new respondents each wave from the same population. Of the three, only panel and cohort surveys produce true longitudinal data; trend surveys produce repeated cross-sections that can show population shifts but not within-person change. Pre-and-post surveys are a two-wave special case of either panel or cohort design.

Q.06

What is a longitudinal panel survey?

A longitudinal panel survey is the most common form of longitudinal survey. A panel of respondents is recruited at the start of the study and re-contacted for every subsequent wave, sometimes with replenishment when attrition reduces the panel below a usable size. Each respondent has a panel ID that links their responses across waves. National statistical agencies use rolling panels (the BHPS, the German Socio-Economic Panel, the US Panel Study of Income Dynamics) where panel members may stay for decades. In applied research, panels are often shorter (one to five years) and tied to a specific question.

Q.07

What is the difference between a longitudinal survey and a pre-and-post survey?

A pre-and-post survey is a two-wave longitudinal survey: same respondents, two time points (before and after some event, typically a program). Most authors include pre-and-post in the longitudinal family but treat it as the simplest case. The general longitudinal design has three or more waves, which lets analysts see trajectories rather than only endpoints. Pre-and-post is sufficient when the question is whether the average changed between two specific moments. Three or more waves are needed when the question is when the change happened, whether change rates differed across people, or whether the trajectory was linear. The pre-and-post page covers the two-wave case in detail.

Q.08

What is the difference between a cross-sectional and a longitudinal survey?

A cross-sectional survey questions different respondents at one time point: a snapshot of a population at a moment. A longitudinal survey questions the same respondents at multiple time points: a record of how those individuals changed. Cross-sectional surveys can answer how groups differ today; longitudinal surveys can answer how the same individuals have changed over time. Many surveys mistakenly labelled longitudinal are actually repeated cross-sections, where the questionnaire is administered multiple times but to different respondents each wave. The same-respondent linkage is what makes a survey longitudinal.

Q.09

What is the difference between panel, cohort, and trend surveys?

Panel surveys interview the same panel of respondents at every wave, often with replenishment for attrition. Cohort surveys follow a defined group sharing a starting condition (birth year, program intake date) without adding new members. Trend surveys sample new respondents each wave from the same target population, asking the same questions. Panel and cohort surveys produce longitudinal data because the same individuals answer multiple times. Trend surveys produce repeated cross-sections because different individuals answer each wave. The methodology is similar, the data structure is fundamentally different, and the analytical methods that apply differ accordingly.

Q.10

What does longitudinal survey design involve?

Longitudinal survey design involves six decisions that together determine whether the data is actually longitudinal: sample composition (who comes back across waves), wave timing (calendar-fixed or anchored to participant events), question continuity (frozen core block or rewritten each wave), respondent identifiers (persistent ID or email-and-name), attrition response (active follow-up or passive distribution), and survey mode (single-mode or mixed). The methods matrix on this page walks through each decision with the wrong-but-common default and the right-for-longitudinal alternative. Each decision controls one structural property of the resulting dataset.

Q.11

What is longitudinal survey research?

Longitudinal survey research is the application of longitudinal survey methods to a research question. The category covers academic social-science research (panel and cohort studies tracking populations across years or decades), public-health research (cohort follow-up on patient-reported outcomes), and applied program evaluation (workforce, education, social-impact programs tracking participant outcomes across waves). The methodology is identical across these fields; the conventions, sample sizes, follow-up windows, and reporting standards differ. The applications section on this page details how each field adapts the same methodology.

Q.12

What does longitudinal follow-up mean?

Longitudinal follow-up means the systematic effort to re-engage previous-wave respondents at each new wave. Follow-up includes contact attempts across multiple modes (email, phone, mail, in-person), incentive offers calibrated to the previous wave's effort, and documentation of why each non-respondent did not answer. Without active follow-up, attrition compounds across waves: typical losses run from 10 to 30 percent per wave depending on study design, which means a four-wave study with no follow-up effort can lose half the panel by the final wave. The cost of follow-up is what protects the analytical value of the data.

Q.13

What survey software handles longitudinal studies?

Most general-purpose survey platforms (Qualtrics, SurveyMonkey, Forsta, Alchemer) can deliver a survey at multiple time points; what they lack is structural support for longitudinal data, meaning persistent respondent IDs across waves, append-only respondent records, and built-in wave linking. The customer typically works around this with custom variables, downloaded CSVs, and manual matching by hand in spreadsheets. Dedicated longitudinal platforms (Sopact Sense, some research-only tools used in academic studies) build the longitudinal structure into the data model from day one, so the output is one growing record per respondent rather than a stack of separate response files. The right choice depends on whether the longitudinal structure is built upstream by the platform or downstream by the analyst.

Q.14

Can Qualtrics or SurveyMonkey run a longitudinal survey?

Yes, with caveats. Qualtrics and SurveyMonkey can deliver the same questionnaire at multiple times, store responses with respondent IDs, and export data for analysis. What they do not do is enforce longitudinal structure: they do not natively maintain one growing record per respondent, they do not flag questionnaire-version changes that break wave-to-wave comparability, and they leave persistent-ID matching to the customer. Many applied teams successfully run longitudinal surveys on Qualtrics and SurveyMonkey by setting up custom variables and matching by hand. The trade-off is operational overhead per wave and a meaningful chance of structural errors that only become visible at analysis time.

Q.15

How long should a longitudinal survey run?

The duration follows the question. For program evaluation, the survey runs as long as the outcome unfolds: workforce-training programs typically use four waves over 18 to 24 months, ending after enough post-program time has passed for outcomes to materialize. For health research, the duration matches the natural progression of the condition: chronic-condition studies often run two to five years, longitudinal aging studies can run for decades. For academic social research, panel and cohort studies sometimes run for life (NLSY79 has tracked the same people since 1979). Each wave has fixed costs, so longer studies trade analytical depth against per-wave operational cost. The right duration is the one where the additional wave changes the answer.

Q.16

What is the difference between longitudinal data and a longitudinal survey?

A longitudinal survey is the data collection instrument: the questionnaire run at multiple waves to the same respondents. Longitudinal data is the dataset that the survey produces: same units, multiple time points, persistent IDs. A longitudinal survey produces longitudinal data; the dataset can also come from administrative records, sensor logs, or transaction histories without a survey involved. For the data structure itself (formats, identifiers, schema versioning) see the longitudinal data sibling page. For the analytical methods that operate on the resulting dataset, see the longitudinal data analysis page.

Across the cluster

Six related pages, each on one slice of the longitudinal family

Each page covers a distinct slice. The hub orients the cluster. The study, data, and analysis pages cover the conceptual, structural, and analytical sides. The pre-and-post page covers the two-wave special case, and the cross-sectional comparison page covers the contrast.

Build it longitudinal from day one

Bring your survey draft. See where the longitudinal structure breaks.

The six design decisions in section 04 get made one way or the other before Wave 1 goes out. The right answers depend on what your study actually has to do, who the respondents are, and what the funder or research output asks for at the end. A 30-minute working session can take a draft you already have and walk through where each of the six decisions is currently set, what is operationally lightweight versus structurally sound, and where the design would benefit from a different default.

Bring the questionnaire, the wave plan, and the rough sample. Leave with a clearer view of which decisions are on track, which need a different answer, and what the structural cost is going to be at analysis time.