
Longitudinal design: definition, types, and how to run one

A longitudinal design follows the same people across time. Definition, types, examples, and the structural choices that decide whether the data connects.

Updated
May 2, 2026
Longitudinal design

A longitudinal study follows the same people over time. The survey questions are the visible work. Connecting the answers to the same people is the work most teams skip.

A program manager finishes six months of quarterly surveys and sits down to measure participant change. The data is all there. Intake responses. Midpoint check-ins. Exit assessments. But intake is in one file, exit is in another, and there is no reliable way to know which person answered which form. The analysis is impossible before it begins.

This is not a data quality problem. It is a structural problem that became unfixable the moment Wave 1 data was collected. Most teams plan the survey questions carefully and give the participant tracking almost no planning at all. Every form is well written. No two forms are connected. The result looks like longitudinal research but cannot be analyzed as longitudinal research.

This guide explains the structural choices that decide whether a longitudinal design works in practice: who the participants are, how each person is matched across waves, when waves run, and what stays the same in the survey between waves. Examples come from workforce training, education, and public health. No prior research background needed.

On this page
01  The wave architecture
02  Definitions and types
03  Six structural principles
04  The choices the design forces
05  A worked example
06  FAQ and related guides
The wave architecture

A longitudinal design is a sequence of waves connected by a tracking ID.

Each wave is a survey moment. The tracking ID is the piece of information that lets you connect each person's answers from one wave to the next. Wave timing depends on the outcome the study measures. The tracking ID is set at Wave 1 and never changes. Without it, the data from later waves cannot be analyzed as longitudinal.

Time, from left to right

Wave 1: Intake (baseline)
Demographics, baseline outcome measures, contact details, the tracking ID assigned.
320 of 320 enrolled respond

Wave 2: Month 3 (mid-program)
Engagement check, midpoint outcome measures, early signal of who is on track.
288 of 320 respond

Wave 3: Month 6 (program exit)
End-of-program survey, the same outcome measures used at Wave 1, plus completion data.
271 of 320 respond

Wave 4: Month 12 (six months out)
Post-program follow-up. Whether the change held after the program ended.
240 of 320 respond

Wave 5: Month 24 (long-run follow-up)
Long-term outcome data. Whether short-run gains translated into durable change.
187 of 320 respond

Tracking ID, set at Wave 1, survives every wave

Without the thread above, every wave is a separate survey file. Maria's intake answers and Maria's twelve-month answers cannot be connected. The waves exist. The longitudinal data does not.

The retention numbers above are illustrative for a workforce-training cohort. Real attrition rates depend on population, contact method, and incentive structure. The structural pattern is the same across sectors: a baseline, a sequence of follow-up waves at outcome-relevant intervals, and a tracking ID that connects each person's answers across waves.
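The structural pattern can be sketched in a few lines. This is a minimal illustration, not any tool's implementation; the field names (`participant_id`, `wage`) and wave labels are made up for the example. The point is that a shared key lets each person's answers fold into one record.

```python
# Sketch: folding per-wave survey exports into one record per participant,
# keyed by the tracking ID set at Wave 1. Field names are illustrative.

def connect_waves(waves: dict[str, list[dict]]) -> dict[str, dict]:
    """Turn a {wave_name: rows} mapping into one record per tracking ID."""
    records: dict[str, dict] = {}
    for wave_name, rows in waves.items():
        for row in rows:
            pid = row["participant_id"]  # the tracking ID, fixed for the study
            records.setdefault(pid, {})[wave_name] = row
    return records

waves = {
    "wave1": [{"participant_id": "A7X2", "wage": 17.50}],
    "wave3": [{"participant_id": "A7X2", "wage": 21.00}],
}
records = connect_waves(waves)
maria = records["A7X2"]
change = maria["wave3"]["wage"] - maria["wave1"]["wage"]  # within-person change
```

Without the shared `participant_id`, the same fold is impossible: each wave is just a list of rows with no key to join on.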

Definitions

Five questions readers ask first

The longitudinal-design literature uses several overlapping terms. Longitudinal design, longitudinal research design, longitudinal study design, and longitudinal study are often used as if they meant the same thing. They mostly do, with small shades of difference. The five answers below cover the five question forms that send readers to this page.

What is a longitudinal design?

A longitudinal design is a way of running research that surveys or observes the same people, organizations, or units more than once across time, and connects each unit's answers from one round to the next. Each round is called a wave. The defining feature is that the same units appear at every wave, so changes within the same unit can be measured directly rather than inferred from comparing different groups.

Cross-sectional designs, by comparison, sample different people at one moment in time. They can show how groups differ. They cannot show how individuals change. The shorthand: longitudinal designs answer "how did this person change?"; cross-sectional designs answer "how do these groups differ?"

What is the difference between a longitudinal design and a cross-sectional design?

A longitudinal design follows the same people across time. A cross-sectional design samples different people at a single point in time. The first answers questions about how individuals change. The second answers questions about how groups differ at one moment.

The operational difference is that longitudinal designs need a way to connect each person's answers across waves. Cross-sectional designs do not. That single requirement is the source of most of the practical difficulty in running longitudinal studies. Cross-sectional studies end at data collection. Longitudinal studies start their hardest work there.

For a deeper comparison including when to choose each design and what each can and cannot show, see the dedicated page on longitudinal vs cross-sectional studies.

What is a longitudinal research design?

A longitudinal research design is the formal name for the same idea, used most often in academic research methods textbooks. The phrase emphasizes that the design is being chosen at the research-planning stage to answer a specific kind of research question, namely a question about change within a unit over time.

Longitudinal research designs come in four main types: panel designs (the same individuals at every wave), cohort designs (a group sharing a starting point), trend designs (different individuals from the same population at each wave), and retrospective designs (current participants asked about past time points). Each type fits a different research question and carries different operational requirements.

Longitudinal design in psychology

In psychology, a longitudinal design is a research design that follows the same participants across months or years to measure how their behavior, cognition, or development changes over time. Classic examples include studies of language acquisition in children, twin studies that track personality stability, and Alzheimer's research that follows the same older adults through annual cognitive assessments.

Psychology textbooks usually distinguish longitudinal designs from cross-sectional designs and from cross-sequential designs. The cross-sequential design combines the two: several cohorts followed longitudinally, so age effects and cohort effects can be separated. For most undergraduate methods coursework, the simple definition holds: longitudinal means same people, more than once, across time.

Longitudinal design meaning

The word longitudinal comes from the Latin longitudo, meaning length. In research, a longitudinal design is one that has length in time: the study is stretched across multiple time points rather than compressed into one. Saying a study is longitudinal does not commit you to a length, only to the structure. A two-wave study six weeks apart can be longitudinal. A five-wave study across thirty years can also be longitudinal.

What makes the structure work is the connection between waves. Two surveys six weeks apart with no way to match the same person's answers between them is not a longitudinal study; it is two cross-sectional studies. Length without connection is not longitudinal data.

Types of longitudinal design

Four common types, four different jobs

These four types are sometimes treated as alternatives and sometimes combined inside one study. The right type depends on the research question and on what data is realistic to collect.

Panel design
Same individuals, every wave

The same people are surveyed at every wave. Each person's answers are connected across waves through a tracking ID. Panel designs are the strongest form of longitudinal design for measuring within-person change. They are also the most demanding to run because attrition compounds across waves. Used most when the research question is about how individuals change.

Cohort design
Group sharing a starting point

A group defined by a shared start, such as students who entered school in the same year or program participants who joined in the same quarter, is followed across time. Cohort designs can survey the entire cohort at every wave (panel-style) or sample different cohort members at different waves (rolling-style). The shared starting point is what defines the cohort.

Trend design
Different individuals, same population

Different individuals are sampled from the same population at each wave. The questions are consistent across waves but the people are different. Trend designs can show how the population is changing but cannot show how individuals are changing. Used when individual-level change is not the question or when tracking the same people is not feasible.

Retrospective vs prospective
Looking back vs forward in time

A prospective longitudinal design plans the waves at the start and collects data forward in time. A retrospective design asks current participants about past time points. Prospective designs are more accurate because they collect data at the time it happens. Retrospective designs are cheaper and faster but more prone to recall error.

Structural principles

Six choices that decide whether the design works

None of these are about the survey questions themselves. They are about the structure that holds the survey questions together across time. Miss any one of them, and especially the first, and the data can be unanalyzable no matter how good the questions were.

01 . Tracking

A tracking ID set at Wave 1

One ID per person, fixed for the life of the study.

The ID has to exist before Wave 1 data is collected and has to be carried into every later wave. Email is not an ID because emails change. Phone is not an ID for the same reason. A short generated code held by the participant and stored against their record at first contact is what survives.
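A "short generated code" can be as simple as the sketch below. It is an illustrative assumption, not a prescribed scheme: the alphabet drops characters that are easy to confuse (0/O, 1/I/L) so the code survives being read aloud or copied from a participant card.

```python
import secrets

# Sketch: issuing a short human-readable tracking code at first contact.
# Alphabet excludes ambiguous characters (0/O, 1/I/L).
ALPHABET = "23456789ABCDEFGHJKMNPQRSTUVWXYZ"

def new_tracking_id(length: int = 6) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def issue_id(existing: set[str], length: int = 6) -> str:
    """Retry until the code is unique within the study, then register it."""
    while True:
        code = new_tracking_id(length)
        if code not in existing:
            existing.add(code)
            return code
```

Six characters over a 31-symbol alphabet gives roughly 900 million possible codes, far more than any cohort needs, so collisions on retry are vanishingly rare.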


Why it matters: this is the single decision that determines whether the data is longitudinal or only sequential.

02 . Stability

Survey questions stay identical

Same wording, same scale, same options across waves.

If the questions change between waves, the answers cannot be compared. Even small wording shifts ("rate your confidence" vs "how confident are you") produce different distributions. Lock the wording at Wave 1 and resist the urge to improve it later, even when something looks awkward in retrospect.


Why it matters: the comparison across waves is only valid if the questions are the same.

03 . Timing

Wave intervals match the outcome

The timing is set by what you are measuring.

Skill change measurable in weeks needs short intervals. Wage change measurable in months needs medium intervals. Health outcomes measurable in years need long intervals. A wave interval set by convenience rather than by the outcome produces data that misses the change it was supposed to capture.


Why it matters: too-frequent waves create burden; too-sparse waves miss the change.

04 . Attrition

A plan for who drops out

Distinguish "did not respond yet" from "lost permanently."

People will drop out of every longitudinal study. The question is how the design handles it. Set a window for late responses (typically two to four weeks after the wave open date). Track who is in late-response status versus who has stopped responding entirely. Plan for sample size at the final wave, not the first.
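The late-versus-lost distinction is a small piece of logic that most teams never write down. A minimal sketch, assuming a 21-day window (a choice inside the two-to-four-week range above, not a fixed rule):

```python
from datetime import date, timedelta

# Sketch: per-wave response status. The 21-day window is an illustrative
# choice within the two-to-four-week range; tune it to the population.
WINDOW = timedelta(days=21)

def wave_status(responded: bool, wave_open: date, today: date) -> str:
    if responded:
        return "responded"
    if today - wave_open <= WINDOW:
        return "late"            # inside the window: keep following up
    return "non-respondent"      # past the window: attrition for this wave

status = wave_status(False, wave_open=date(2024, 5, 1), today=date(2024, 5, 10))
```

Running this per participant per wave gives the real-time status the design needs: follow-up effort goes to the "late" group, and the "non-respondent" count is the honest attrition figure.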


Why it matters: attrition is rarely random, and it shifts the conclusions if not handled.

05 . Anchors

Disaggregation fields stay consistent

Demographic and program fields locked at Wave 1.

Subgroup analysis (by gender, by age band, by program track, by site) only works if the disaggregation fields exist at every wave and use the same categories. Decide the categories at Wave 1, write them down, and do not change them. Adding a category at Wave 3 is much worse than starting with a slightly imperfect set.


Why it matters: "did the program work for women in cohort B?" is unanswerable if cohort B was first defined at Wave 3.

06 . Analysis

Time-aware analysis at the end

Analyze the waves as connected, not as five separate snapshots.

The simplest analysis pairs Wave 1 and Wave 5 within the same person and reports the change. The richer analysis fits a trajectory across all waves and notes who improved early, who improved late, and who plateaued. Either is valid. Collapsing all waves into one cross-sectional table is what wastes the design.
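Both analyses can be sketched over the connected records. The wave keys, the `score` field, and the trajectory thresholds are illustrative assumptions; the structure (paired change, then shape labels) is the point.

```python
# Sketch: the two analyses described above, run on one participant record.

def within_person_change(record: dict, first="wave1", last="wave5", field="score"):
    """Paired first-to-last change for one person; None if either wave is missing."""
    if first not in record or last not in record:
        return None
    return record[last][field] - record[first][field]

def trajectory(record: dict, field="score") -> str:
    """Crude shape label from first, midpoint, and last wave."""
    w1, w3, w5 = (record[w][field] for w in ("wave1", "wave3", "wave5"))
    early, late = w3 - w1, w5 - w3
    if early > 0 and late <= 0:
        return "early gain, then plateau"
    if early <= 0 and late > 0:
        return "late gain"
    if early > 0 and late > 0:
        return "steady gain"
    return "flat or declining"

rec = {"wave1": {"score": 2.1}, "wave3": {"score": 3.4}, "wave5": {"score": 3.4}}
delta = within_person_change(rec)   # about 1.3
shape = trajectory(rec)             # "early gain, then plateau"
```

Collapsing the waves into per-wave averages would report the same cohort trend for an "early gain" group and a "late gain" group; only the within-person structure distinguishes them.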


Why it matters: the design is only worth the cost if the analysis uses the within-person structure.

The choices the design forces

Six structural choices, two ways each, what they cost

Each row is a choice every longitudinal design has to make. The "broken way" is what most teams fall into when no one is paying attention. The "working way" is what survives to analysis. The right column names the consequence.

The choice
Broken way
Working way
What this decides
Identifying participants

How wave-3 Maria is matched to wave-1 Maria

Broken

Match by name and email after collection. Names get misspelled. Emails change between waves. The match is done by hand in a spreadsheet weeks later, with no way to verify the assignments.

Working

Generate a tracking ID at first contact, give it to the participant, and store it in their record. Every later wave is filed under that ID, regardless of whether their email or phone has changed.

Whether the design is actually longitudinal or only sequential surveys with no link.

Survey wording

What changes between waves and what stays

Broken

Wave-2 wording is "improved" because the team got smarter. Wave-3 adds two new questions. The scale on the main outcome shifts from 1-to-5 to 1-to-7. None of these are flagged in the file.

Working

Lock the core outcome questions at Wave 1 and resist changing them. Add new questions in a clearly marked supplementary block that is not part of the comparison set. Document scale choices in a survey log.

Whether the comparison across waves is valid or only looks valid.

Wave timing

When each wave runs relative to the program

Broken

Wave timing is set by reporting deadlines. The end-of-quarter report needs data, so a wave runs whether or not enough time has passed for the outcome to change.

Working

Wave timing is set by the outcome being measured. Six weeks for skill change. Six months for wage change. Twelve months for retention. The wave runs when the change has had time to happen.

Whether the data captures real change or measurement noise.

Sample size

How many participants the design starts with

Broken

Sample size is set for Wave 1 and the team plans to "see what happens" with attrition. By Wave 5, more than forty percent of the original sample is gone and the analysis is underpowered.

Working

Sample size is set for the final wave with a realistic attrition estimate. To analyze 200 people at Wave 5, plan for 320 to 380 at Wave 1, depending on population.
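The back-calculation is one line of arithmetic. A sketch, assuming a constant per-wave retention rate (real attrition is rarely constant, so treat the result as a floor, not a forecast):

```python
import math

# Sketch: Wave 1 enrollment needed for a target sample at the final wave,
# assuming a constant retention rate at each follow-up wave.

def wave1_n(final_target: int, retention_per_wave: float, n_waves: int) -> int:
    survival = retention_per_wave ** (n_waves - 1)  # fraction left at the last wave
    return math.ceil(final_target / survival)

# To analyze 200 people at Wave 5 with roughly 89% retained per follow-up:
needed = wave1_n(200, 0.89, 5)  # 319, in line with the 320-to-380 range above
```

Lower the assumed retention to 0.85 and the same target needs about 384 enrollees, which is why the working range in the table spans 320 to 380.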

Whether the final wave can be analyzed at all.

Drop-out tracking

Knowing who is gone vs who is late

Broken

Non-respondents are treated as one group: missing data. There is no way to tell who is going to respond next week and who has dropped out for good. The team gives up on follow-ups too early on some, too late on others.

Working

Set a response window per wave. Within the window, participants are "late." After the window, they are "non-respondent" until they answer a later wave. The status is visible in real time.

Whether attrition is managed or only observed.

Analysis frame

How the connected waves are read at the end

Broken

Each wave is averaged separately and the averages are reported as a trend line. Within-person change is invisible. The structural advantage of having the same people across waves is wasted.

Working

Pair each person's Wave 1 and Wave N values. Report within-person change. Where data allow, fit a trajectory across all waves and group participants by trajectory shape (early gain, late gain, plateau).

Whether the design was worth running at all.

Compounding effect

These six choices compound. The tracking-ID choice at row one decides whether any of the other five matter. A study with no tracking ID cannot recover from any later choice, no matter how careful the wording, timing, or sample-size planning. A study with a tracking ID can recover from almost any other mistake.

A worked example

A workforce-training cohort across five waves

The cohort numbers from the wave architecture diagram, traced through what actually happens at analysis time. Same study, two ways the data could have been collected. The structural difference shows up at month twelve, when the program manager sits down to write the outcome report.

We enrolled 320 trainees in February 2024. We surveyed them at intake, at month three, at exit (month six), at month twelve, and at month twenty-four. Wages went up. We could see that in the average. What we couldn't see was whose wages went up. We knew the cohort moved. We didn't know if Maria moved or if Maria left and someone like her stayed. The board asked who benefited. We had a chart and no answer.

Workforce-training program lead, mid-cycle review

The two axes the design has to bind together

Quantitative axis
What was collected at each wave
  • Wage at intake, exit, month twelve, month twenty-four
  • Confidence rating on a 1-to-5 scale, every wave
  • Job-search activity counts, every wave
  • Demographic and program-track fields, fixed at Wave 1

Linked at collection, not at analysis

Qualitative axis
What was collected alongside
  • Open-text reflection at exit and month twelve
  • Reason-for-non-response code at any wave with a gap
  • Coach narrative notes from each check-in
  • Job-placement story at month twelve and month twenty-four

Sopact Sense produces

A connected record per participant

One row per person, five wave-blocks

Maria's intake, midpoint, exit, twelve-month, and twenty-four-month answers all live on one record. The board's question "did Maria's wages rise" has an answer.

Tracking ID set at first contact

Generated when Maria was first added to the cohort, stored in her record, and used to file every later wave. Email changes, phone changes, last name changes; the ID does not.

Real-time attrition status

At any moment, the team can see who is "late" inside the response window for the current wave and who is "lost" past the window. Follow-up effort goes to the late group.

Within-person change at analysis

The 240 participants who answered Wave 4 each have their own intake number to compare against. The report says 184 of 240 saw a wage rise, not only that the group average rose.

Why traditional tools fail

Five separate exports

One file per wave, no link

Each wave produces its own CSV. Connecting them requires matching by name, email, or phone. Names get misspelled, emails change. The match fails on twenty to forty percent of records.

Match work happens at analysis time

The matching is done weeks after collection ended, when fixing errors means going back to participants who have moved on. Most teams accept the partial match and report what they can.

Attrition is invisible until the end

There is no way to see during a wave who is late versus who is lost. Reminders go out as a single batch or not at all. The retention curve is steeper than it had to be.

Group averages, not within-person change

The analysis ends up being five separate cross-sectional snapshots presented as a trend. The structural advantage of having the same people at every wave is lost in spreadsheet matching.

Why this is structural, not procedural

The work of connecting Wave 1 Maria to Wave 5 Maria is either built into how the data is collected or it is paid for later in spreadsheet hours. Sopact Sense pays the cost at first contact, when it is small. Every survey tool that defers the cost ends up paying it at analysis, when it is large and partly unrecoverable. This is the practical reason most longitudinal studies report group averages rather than within-person change.

Where this design shows up

Three program contexts, three shapes, the same architecture

Longitudinal designs are most familiar from psychology and public health, but they appear in any program where the question is "did the same people change." The three contexts below differ in organizational shape, wave timing, and outcome. The structural principles and choices from the sections above are the same.

01

Workforce training

Single-cohort program. Six-month delivery. Outcome twelve to twenty-four months out.

Typical shape: A workforce-training nonprofit enrolls 200 to 400 participants per cohort. The program runs six months. The outcome the funder cares about is wage growth twelve and twenty-four months after exit. Five waves are typical: intake, mid-program, exit, twelve months, twenty-four months. Coaches stay in light contact across the post-program waves to keep response rates high.

What breaks: Wave timing slips because the post-exit waves compete with new-cohort recruitment for staff attention. Email addresses stop working as participants change jobs. The team realizes at month eighteen that the wage question changed wording at Wave 3 and the comparison set is now four waves instead of five. The funder's annual report is built on group averages because no within-person comparison is possible.

What works: The tracking ID is set at intake, printed on a small participant card, and used by coaches at every check-in. The wage question is locked in a survey log and not changed. The post-exit waves are scheduled at program-design time, not when the cohort approaches them. Sample size is set assuming forty percent attrition, so the final wave still has enough participants to analyze.

A specific shape

A 320-participant cohort retains 240 at month twelve and 187 at month twenty-four. The board report names 184 of 240 with a measurable wage rise at twelve months, broken down by program track. The same report cannot be produced from spreadsheet matching because the match fails on too many records.

02

Education longitudinal

Multiple sites. Multi-year tracking. Outcome at graduation and beyond.

Typical shape: An education foundation supports a literacy program across twelve schools. Each school enrolls a new cohort of fifty to ninety students per year. The outcome the foundation cares about is reading proficiency at grade five and graduation rates at grade twelve. Waves run annually inside the program (grades one to five) and then sparsely after exit (grade eight, grade ten, grade twelve). The cohort design needs to handle students who change schools.

What breaks: Each school keeps its own records. Student IDs are not standardized across the twelve schools, so a student who transfers between two program schools shows up as a different participant in the funder's data. Reading-assessment instruments differ slightly across schools. By grade twelve, only the schools with strong administrative continuity can report on their original cohorts.

What works: A program-level tracking ID is issued to every student at enrollment and travels with them across schools. The reading assessment is standardized at the foundation level, with a small per-school supplement allowed but not part of the cross-school comparison. The grade-eight and beyond waves use coach networks rather than school administrators to maintain contact.

A specific shape

A 1,400-student multi-site cohort retains 1,180 at grade five and 820 at grade twelve. The foundation report compares graduation rates by entry-grade reading level, which is only possible because the entry assessment is locked to the same scale at every school.

03

Public-health cohort

High-frequency waves. Twelve-month follow-up. Adherence and outcomes.

Typical shape: A public-health program supports patients managing a chronic condition. Patients enroll at a clinic visit and complete a structured self-report every two weeks for the first three months, then monthly out to twelve months. The outcome the program cares about is medication adherence, symptom control, and return-to-work status. Twenty-five waves is not unusual.

What breaks: Twenty-five waves of survey administration through patient email rapidly produces survey fatigue. Phone numbers change as patients move or switch carriers. The same patient appears as a different person in the third quarter when their email address fell out of use. Adherence trajectories cannot be computed because the within-person identity is broken.

What works: The tracking ID is set at the enrollment visit and the patient receives it with their starter materials. Reminders go through a channel they can update themselves rather than through whatever address was on file at intake. Wave intervals are designed against fatigue: dense in the first three months because that is when adherence behavior stabilizes, sparser later because the early signal is what predicts the twelve-month outcome.
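A dense-then-sparse schedule like this one can be generated mechanically. A sketch, assuming biweekly waves through week twelve and four-week steps thereafter; the enrollment date and exact intervals are illustrative, not a prescription:

```python
from datetime import date, timedelta

# Sketch: a dense-then-sparse wave schedule relative to enrollment.
# Intervals are illustrative; real monthly waves would use calendar months.

def wave_schedule(enroll: date) -> list[date]:
    waves = [enroll]                                              # baseline
    waves += [enroll + timedelta(weeks=w) for w in range(2, 13, 2)]   # biweekly, weeks 2-12
    waves += [enroll + timedelta(weeks=w) for w in range(16, 53, 4)]  # ~monthly, to ~month 12
    return waves

sched = wave_schedule(date(2024, 3, 1))  # 17 dates, enrollment through ~month twelve
```

Fixing the schedule in code (or in the collection tool) at design time is what keeps the waves from drifting toward reporting deadlines later.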

A specific shape

A 480-patient cohort produces complete adherence trajectories for 312 patients across all twelve months. The program identifies three trajectory groups: early stable, late stable, and disengaged, which is only possible because every survey is filed against the same ID across twenty-five waves.

A note on tools

Most survey tools were built for one wave. Longitudinal design needs more.

Google Forms · SurveyMonkey · Qualtrics · Typeform · KoboToolbox · Sopact Sense

The tools above all collect surveys well. Google Forms, SurveyMonkey, Qualtrics, and Typeform are excellent at the act of getting answers from a respondent. The structural gap is what happens after collection: across waves, the same person's answers need to be connected by a tracking ID set at first contact, and most of these tools produce a separate file per wave that the team has to match by hand at analysis time. The tools are built for a single survey, not for a sequence of linked surveys.

Sopact Sense is built around the longitudinal structure rather than around the single survey. Each participant is one record that grows across waves. The tracking ID is set at first contact and survives every change to email, phone, or name. Partial responses stay attached to the participant rather than becoming orphan rows. The matching work that happens in spreadsheets at analysis time in other tools happens at collection time here, which is the only point at which it can be done correctly.

FAQ

Longitudinal design questions, answered

Definitional questions, comparison questions, and execution questions. Each answer is short on purpose. The fuller treatment is in the relevant section above.

Q.01

What is a longitudinal design?

A longitudinal design is a way of running research that surveys or observes the same people, organizations, or units more than once across time, and connects each unit's answers from one round to the next. Because the same people answer at each round, the design can show how individuals change. Cross-sectional designs, by comparison, sample different people at one moment and can only show how groups differ.

Q.02

What is the difference between a longitudinal design and a cross-sectional design?

A longitudinal design follows the same people across time. A cross-sectional design samples different people at a single point in time. The first answers questions about how individuals change. The second answers questions about how groups differ at one moment. Longitudinal designs need a way to connect each person's answers across waves; cross-sectional designs do not.

Q.03

What is longitudinal design in psychology?

In psychology, a longitudinal design is a research design that follows the same participants across months or years to measure how their behavior, cognition, or development changes over time. Classic examples include studies of language acquisition in children, identical-twin studies on personality stability, and Alzheimer's research that tracks the same older adults across annual assessments.

Q.04

What are the types of longitudinal design?

There are four common types. Panel designs survey the exact same individuals at every wave. Cohort designs follow a group defined by a shared start point, such as program participants who began in the same quarter. Trend designs sample new people from the same population at each wave to measure how the population is changing. Retrospective designs ask current participants about past time points. Panel designs are the strongest for within-person change; retrospective designs are the most prone to recall error.

Q.05

What is a longitudinal study example?

A workforce-training program enrolls 320 participants and surveys each one at intake, at the end of training, three months after exit, and twelve months after exit. Because the same participants answer at every wave and are connected by a single tracking ID, the program can report that wages rose for 184 of the 240 participants who responded at twelve months, not only that average wages in the group rose. That within-person measurement is what a longitudinal design produces.

Q.06

How long does a longitudinal study need to be?

Long enough that the change you are studying has time to occur. For training programs, six to twelve months past program exit is common. For developmental psychology studies of childhood change, several years. For health outcomes that take years to manifest, decades. The minimum length is set by the outcome, not by a fixed rule. A study that ends before the outcome can show up has not been long enough.

Q.07

What are the advantages of a longitudinal design?

Longitudinal designs measure within-person change directly. They can show whether the same individuals improved, worsened, or stayed flat. They can sometimes establish that one event preceded another, which strengthens causal interpretation. And they can describe trajectories that cross-sectional designs cannot see, such as people who improve early and then plateau versus people who improve late.

Q.08

What are the disadvantages of a longitudinal design?

Longitudinal designs cost more and take longer than cross-sectional designs. Participants drop out across waves, which is called attrition, and the people who drop out are rarely random. Survey questions written years ago may no longer be the right questions. And the operational work of finding the same people at each wave, especially when contact information changes, is the most common reason longitudinal studies fail to deliver clean data.

Q.09

What is a panel design?

A panel design is the type of longitudinal design where the exact same individuals are surveyed at every wave. Each person's answers from wave one are connected to their answers from wave two, wave three, and so on. Panel designs produce the strongest data for measuring how individuals change, because the same person serves as their own comparison.

Q.10

What is a cohort design?

A cohort design follows a group of people who share a common starting point, such as students who entered school in the same year or program participants who joined in the same quarter. Cohort designs can be panel designs (the same cohort members surveyed each time) or rolling designs (different cohort members sampled at different waves). The shared starting point is what makes the group a cohort.

Q.11

What is the difference between a panel study and a trend study?

A panel study follows the same individuals across waves; a trend study samples different individuals from the same population at each wave. The panel study can answer questions about how the same person changes. The trend study can answer questions about how the population is changing. Both are forms of longitudinal design, but only the panel study produces within-person data.

Q.12

Is a longitudinal study quantitative or qualitative?

It can be either or both. The defining feature of a longitudinal design is that the same units are observed across time, not the type of data collected. Quantitative longitudinal studies use scales, counts, and structured surveys at each wave. Qualitative longitudinal studies use repeated interviews or observations. Mixed-method longitudinal studies combine both, often using the qualitative data to explain patterns the quantitative data shows.

Q.13

What software is built for longitudinal data collection?

Most survey tools (Google Forms, SurveyMonkey, Qualtrics, Typeform) are built for one-wave collection. They produce a separate file per wave, with no built-in way to connect the same person's answers across waves. Sopact Sense is built for longitudinal collection: each participant has a single record that grows across waves, the matching identifier is set at first contact rather than reconstructed from exports, and partially completed waves stay attached to the participant rather than becoming orphan rows. Pricing is per project, not per response.

Q.14

Can I run a longitudinal study with Google Forms or SurveyMonkey?

You can collect longitudinal data with those tools, but you will spend most of your analysis time matching responses across waves by hand. Each wave produces its own export. Names change. Emails change. Phone numbers change. Some people answer wave two but not wave one. The matching work happens in spreadsheets after collection ends, and matching errors at that stage cannot be fixed without going back to the participants. Tools built for longitudinal collection do this matching at the time of collection rather than after.

Related guides

Longitudinal cluster, six pages, six different angles

This page covers the structural definition. The five sibling pages cover execution, comparison, data format, analysis methods, and the survey tools that make collection work.

Bring your study design

Bring your wave plan. See it run end-to-end before Wave 1.

A working session, not a demo. Bring the participants you want to follow, the waves you want to run, and the outcome you want to measure. We will walk through how the tracking ID is set, how the waves are scheduled, and how the within-person comparison is produced at the end. By the end of the session you will know whether the design is ready to run.

Format

60 minutes, video call, working session

What to bring

Your study question, wave count, and outcome measure

What you leave with

A wave-by-wave plan and the tracking-ID setup decided

Last reviewed: May 2026