
Survey Sample Size Calculator: Cochran's Formula Guide

Free survey sample size calculator using Cochran's formula. Find how many responses you need for 95% or 99% confidence at any population size. Table included.

Updated
May 14, 2026

Survey sample size calculator.

Enter your population, pick a confidence level, set the margin. The required sample size updates live. All values use Cochran's formula with automatic finite population correction.

Inputs


Population (N) · Total people you could survey: cohort, employee list, applicant pool. Leave blank if unknown.

Confidence level (Z)

How sure you want to be the sample reflects the whole.

Margin of error (e) · Maximum gap between your sample result and the true value.

Use p = 0.5 if you don't know how responses will split: it is the most conservative choice.

Required sample size

278

completed responses

To detect a true proportion within ±5% at 95% confidence, survey at least 278 people out of 1,000.

Raw Cochran n₀ · 385 (infinite population)
FPC applied · yes, corrected to 278
% of population · 27.8% of N

Achievable for most programs. See response-rate planning below.

Formula Cochran 1977 · Distribution p = 0.5 · Z-score 1.960 · FPC on · Rounding ceil(n)

After the number

You have your number. Now collect, clean, and analyze the responses.

Sopact Sense is the intelligence layer for stakeholder data: survey intake, response cleanup at the source, and deterministic quant-plus-qual analysis that stays consistent cycle to cycle. From the first data point, every record is unique, connected, and AI-ready.

How to calculate survey sample size

The Cochran formula, explained.

Survey sample size is calculated using Cochran's formula: n₀ = Z² · p · (1 − p) / e². Z is the z-score for the chosen confidence level, p is the expected response distribution, and e is the margin of error as a decimal. For known populations under 5,000, apply the finite population correction n = n₀ / (1 + (n₀ − 1) / N). At 95% confidence with ±5% margin and unknown population, the required sample is 384.
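The calculation above can be sketched as a small helper (a minimal sketch; the function name and defaults are my own, not part of any library). Note it rounds up with ceil, matching the calculator's convention, so the unbounded case lands at 385 rather than the widely cited 384 (the unrounded value is 384.16):

```python
import math

def cochran_sample_size(z, margin, p=0.5, population=None):
    """Required completed responses via Cochran's formula,
    with the finite population correction when N is known."""
    n0 = z**2 * p * (1 - p) / margin**2          # raw, infinite-population sample
    if population is None:
        return math.ceil(n0)
    n = n0 / (1 + (n0 - 1) / population)         # finite population correction
    return math.ceil(n)

print(cochran_sample_size(1.96, 0.05))                   # unknown N -> 385
print(cochran_sample_size(1.96, 0.05, population=1000))  # cohort of 1,000 -> 278
```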

Z · z-score

Confidence level dial

Z is determined by your confidence level. At 90%, Z = 1.645. At 95%, the program-evaluation default, Z = 1.96. At 99%, Z = 2.576. A higher z-score raises the required sample.

Common values

90% · 1.645
95% · 1.960
99% · 2.576

Set by

Funder or board policy, before fielding the survey.

p · distribution

Expected split of responses

p is the expected proportion of responses in one direction. Use 0.5 when the split is unknown: it gives the most conservative estimate. If prior research shows roughly 80% in one category, p = 0.8 reduces the required sample.

Default

p = 0.5, maximum variability.

If prior data

Use the observed proportion. Smaller required n.

e · margin

Acceptable error band

e is the maximum acceptable gap between sample result and true population value. At ±5%, a 70% finding means the true value is between 65% and 75%. Halving e quadruples the required sample, from 384 to 1,537.
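The quadrupling is visible directly in the formula, since e enters squared. A quick check, not tied to any particular tool:

```python
# Raw Cochran n0 at 95% confidence with p = 0.5; only the margin e varies
n0 = lambda e: 1.96**2 * 0.25 / e**2

print(round(n0(0.05), 2))    # 384.16 at ±5%
print(round(n0(0.025), 2))   # 1536.64 at ±2.5% -- exactly four times larger
```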

Default

±5 percentage points.

Tighten if

The decision needs precision. Cost compounds.

Plugging 95% confidence (Z = 1.96), p = 0.5, and ±5% margin into the formula: n₀ = 1.96² · 0.5 · 0.5 / 0.05² = 384. This is the number cited in virtually every survey research guide for unlimited populations. It is not a magic constant: it is the output of three specific parameter choices.

Cochran's formula · worked from your inputs

Show the arithmetic.

Cochran's formula computes the raw sample; the finite population correction then trims it for known populations. Both steps, plus invitation planning, are worked below, with z-scores keyed to the confidence level you picked above.

01

Cochran's raw formula

Compute the infinite-population sample size from your z-score, response distribution, and margin of error. This is the headline 384 at 95% / ±5%.

Step 1 · raw sample (no population cap)

# Variables
Z = 1.960   # 95% confidence
p = 0.5     # maximum variability
e = 0.05    # ±5% margin of error

n0 = Z² · p · (1 − p) / e²
n0 = 3.8416 · 0.25 / 0.0025
n0 = 384.16   # raw
n₀ = 385 (rounded up)
The unbounded-population sample size.
02

Finite population correction

If N is known and under 5,000, the correction shrinks the required sample. For a cohort of 1,000, n drops from 385 to 278: a 28% reduction.

Step 2 · adjust for known population N

N  = 1000     # your cohort size
n0 = 384.16   # raw, from step 1

n = n0 / (1 + (n0 − 1) / N)
n = 384.16 / 1.38316
n = 277.74    # raw
n = 278 (rounded up)
The required completed responses for a cohort of 1,000.
03

Plan invitations against your response rate

Required n is completed responses, not invitations sent. Divide n by the channel response rate to get how many people to contact.

Step 3 · invitations to send

n    = 278    # from step 2
rate = 0.50   # expected 50% response

invites = n / rate
invites = 556
Plan the field at this volume to clear the n = 278 threshold.
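The three steps above can be combined into one runnable sketch (the 50% response rate is the worked example's assumption, not a recommendation):

```python
import math

Z, p, e = 1.960, 0.5, 0.05   # 95% confidence, maximum variability, ±5% margin
N, rate = 1000, 0.50         # cohort size and assumed response rate

n0 = Z**2 * p * (1 - p) / e**2        # step 1: raw Cochran sample (384.16)
n = n0 / (1 + (n0 - 1) / N)           # step 2: finite population correction (277.74)
invites = math.ceil(n) / rate         # step 3: invitations needed to clear n

print(math.ceil(n0), math.ceil(n), math.ceil(invites))  # 385 278 556
```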
Survey sample size table

Minimum responses, by population.

All values use Cochran's formula with p = 0.5 and finite population correction. Use the table to sense-check the calculator, or to skim a few cohort sizes before you commit. The ★ column marks the 95% / ±5% standard used in most program evaluations, customer satisfaction surveys, and employee engagement studies.

Population (N)   90% · ±10%   95% · ±5% ★   95% · ±3%   99% · ±5%   99% · ±2%
50               29           45            48          47          50
100              41           80            92          88          98
250              54           152           203         182         236
500              60           218           341         286         447
1,000            64           278           517         400         806
2,500            66           334           749         525         1,561
5,000            67           357           880         586         2,268
10,000           68           370           965         623         2,932
50,000           68           382           1,045       655         3,830
100,000          68           383           1,056       660         3,983
N → ∞            68           385           1,068       664         4,148

All values rounded up to whole responses. The 95% / ±5% column is the standard for most program evaluations because it balances precision with feasibility: 384 completions is achievable for most organisations, while ±5 percentage points is tight enough for board reporting and funder review.

For populations above 10,000, sample size requirements are nearly identical regardless of total size. Surveying 384 people from 100,000 provides the same ±5% precision as surveying 384 from one million.

Key insight · you are measuring variance, not percentage of population

For longitudinal surveys where participants respond at multiple timepoints, size against the smallest expected wave: typically the final follow-up, where attrition is highest. If 278 post-survey completions are needed for a cohort of 1,000, enroll significantly more participants at baseline to account for dropout.
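A rough baseline-enrollment figure follows from dividing the final-wave requirement by expected retention. A minimal sketch; the 60% retention rate is a hypothetical illustration, not a benchmark:

```python
import math

n_final = 278     # completions required at the final wave (cohort of 1,000)
retention = 0.60  # hypothetical: share of baseline enrollees who complete the last wave

baseline_enrollment = math.ceil(n_final / retention)
print(baseline_enrollment)  # 464
```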

Finite population correction

When 384 is too many.

The finite population correction reduces the required sample size when surveying a known, bounded population under 5,000. The formula is n = n₀ / (1 + (n₀ − 1) / N), where n₀ is the initial Cochran sample and N is the total population. For a population of 500, the correction shrinks the required sample from 384 to 218, a 43% reduction.

The Cochran formula assumes an infinite or very large population. When the survey targets a defined, bounded group (a company's 400 employees, a program's 600 participants, a school's 1,200 students), 384 responses is more than needed.

Without FPC

Required completions at N = 500

384

The Cochran baseline for unbounded populations. Applied to a cohort of 500, it overshoots: 77% of the entire population would need to complete the survey.

With FPC

Required completions at N = 500

218

The finite-correction adjustment. 218 completions out of 500 hits the same ±5% precision at 95% confidence. A 43% reduction in required completions and field cost.

For populations above 10,000, the correction has negligible effect and the Cochran baseline applies essentially unchanged. For populations under 100, it shrinks required n below the baseline dramatically: 80 completions at N = 100, 45 at N = 50.

45 · N = 50 · 90% of cohort
152 · N = 250 · 61% of cohort
278 · N = 1,000 · 28% of cohort
383 · N = 100,000 · 0.4% of cohort
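The four figures above can be reproduced in a few lines, a sketch of the same arithmetic:

```python
import math

n0 = 1.96**2 * 0.25 / 0.05**2   # 384.16 raw at 95% / ±5%, p = 0.5
required = {}

for N in (50, 250, 1_000, 100_000):
    required[N] = math.ceil(n0 / (1 + (n0 - 1) / N))
    print(f"N = {N:,}: n = {required[N]} ({required[N] / N:.1%} of cohort)")
```

This prints 45, 152, 278, and 383, matching the panel above.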

The practical implication: most nonprofit program evaluations, employee surveys, and cohort studies operate on populations under 2,000. The finite population correction is not an academic refinement: it is the difference between an achievable survey and an unachievable one for most organisations running pre and post surveys to measure program outcomes.

How many responses for statistical validity

Statistically valid is a number, not a feeling.

For a statistically valid survey at 95% confidence with ±5% margin, the minimum is 384 completed responses for unknown or very large populations. For smaller defined populations: 80 responses for N = 100, 218 for N = 500, 278 for N = 1,000. These are minimum completions, not invitations sent. Validity is a quantity threshold, not a quality verdict.

"Statistically valid" means results accurately reflect the broader population within the stated margin of error at the chosen confidence level. It does not mean the survey is free of bias. A survey with 500 responses that asks leading questions, reaches only English speakers, or surveys only the most engaged participants may be statistically large but methodologically flawed. Sample size answers the quantity question; survey design answers the quality question. Four common misreads:

Misread 01 · "We got 500, it's valid"

Volume without context

What's wrong

500 means nothing without knowing the population, the channel, and the response rate. Could be excellent or biased.

What's right

State N, required n, and actual completions side by side. Then state response rate.

Misread 02 · Below 30 responses

Directional only

What's wrong

With fewer than 30 responses, the normal approximation underlying confidence intervals is unreliable. Findings are directional, not statistically inferential.

What's right

Report as "themes" or "patterns observed". Do not attach precision claims or percentages.

Misread 03 · Self-selected sample

Non-response bias

What's wrong

An open link that anyone can fill draws the most engaged or the most aggrieved. Sample size is met; representativeness is not.

What's right

Use a closed, invited sample from a known frame. Track and report response rate.

Misread 04 · Subgroup of 30

Wide confidence interval

What's wrong

A total sample of 300 split across three sites gives 100 per site. Subgroup confidence intervals widen sharply.

What's right

Size each subgroup against its own Cochran minimum. Report subgroup CIs alongside the headline number.

For qualitative surveys where open-ended responses are the primary data, Cochran's formula does not apply. Qualitative research operates on theoretical saturation: typically 15 to 50 interviews, where sample size is determined by when new responses stop producing new themes.

Response rate planning

The number everyone forgets.

A statistically valid survey response rate is not a fixed number: it is the rate at which expected invitations turn into the minimum required completions. If a Cochran calculation needs 278 completions from a cohort of 1,000, a 50% response rate requires contacting 556 people; a 25% rate requires 1,112. Sample size is the completed-response count. Response rate is the invitation-to-completion ratio. Confusing them is the most common cause of under-powered surveys.

Channel Typical response rate Invites for n = 278 Invites for n = 384 Where it works
Cold email · external list 15% – 25% 1,112 – 1,854 1,536 – 2,560 Market research, prospect surveys
Embedded program touchpoint 35% – 60% 464 – 795 640 – 1,098 In-product, SMS, post-event push
Participant in active cohort 40% – 70% 397 – 695 549 – 960 Workforce training, fellowships
Mandatory funder follow-up 70% – 90% 309 – 397 427 – 549 Grant-required, exit surveys
One-time anonymous link 5% – 15% 1,854 – 5,560 2,560 – 7,680 Open web link, social share

Sample size is the count of completed responses. Response rate is the share of invitations that turn into completions. Those two are not the same number. The real threat is non-response bias, not response rate itself. If people who do not respond differ systematically from those who do (only satisfied participants, only English-fluent participants, only the most engaged), results are biased regardless of whether the sample size threshold is met.
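Converting a required n into an invitation count is one division plus a ceiling. A sketch; the rates are the table's planning bands, not guarantees:

```python
import math

def invites_needed(n_required, response_rate):
    """Invitations to send so expected completions clear the required sample."""
    return math.ceil(n_required / response_rate)

# Cold-email band from the table above: 15%-25% expected response
print(invites_needed(278, 0.25), invites_needed(278, 0.15))  # 1112 1854
```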

For program evaluation using mixed method surveys, size around the quantitative strand, then confirm that open-ended responses come from a representative subset, not only the most willing respondents. For tactics that move the response rate dial, see how to increase survey response rates.

What's a good sample size · confidence level choice

The trade-off that drives sample size.

A good survey sample size is the smallest number that meets the confidence level and margin of error for the cohort you are studying. For most program evaluations, employee engagement, and customer satisfaction studies, 95% confidence with ±5% margin (278 to 384 responses, depending on population size) is the right standard. For high-stakes decisions affecting major budget allocations or policy, 99% confidence or ±3% margin is appropriate, requiring 400 to 1,068 responses.

Two knobs decide everything else: confidence level and margin of error. Tightening either one raises the required sample. Knowing which one your funder, your board, or your LP actually cares about is half the work.

90% confidence

Exploratory · internal · pulse

Appropriate for exploratory research, preliminary needs assessments, and pulse checks where the goal is direction, not validation. The smaller required sample makes the survey faster and cheaper to run.

Sample at N=1,000

214 completions

Versus 95%

~30% smaller

95% confidence ★

Program evaluation default

The standard for program evaluations, customer satisfaction, employee engagement, and most published research. The 1.96 z-score yields sample sizes that balance rigour with practical feasibility.

Sample at N=1,000

278 completions

Use for

Board reports, funder review

99% confidence

High-stakes only

Justified when errors carry serious financial or operational consequences: regulatory compliance, pre-launch validation, clinical or health outcomes research. Sample size is roughly 73% larger than at 95%.

Sample at N=1,000

400 completions

Versus 95%

~73% larger

The most common error is treating sample size as a credibility signal rather than a precision instrument. Collecting 1,000 responses when 278 would suffice wastes budget, delays results, and often means the survey does not launch at all. Collecting 50 responses when 278 are needed means a ±5% finding has an actual margin closer to ±14%, wide enough to make the result meaningless for decisions.
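The ±14% figure comes from inverting the formula: fix n and solve for e. A minimal sketch, including the finite population factor for a cohort of 1,000:

```python
import math

def effective_margin(n, N=None, z=1.96, p=0.5):
    """Margin of error actually achieved with n completed responses."""
    e = z * math.sqrt(p * (1 - p) / n)
    if N is not None:                        # finite population factor
        e *= math.sqrt((N - n) / (N - 1))
    return e

print(f"{effective_margin(278, N=1000):.1%}")  # 5.0% -- the planned margin
print(f"{effective_margin(50, N=1000):.1%}")   # 13.5% -- roughly ±14%
```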

Confidence level is a policy decision, not a statistical one. Set it before writing the survey, not after the responses come in.

Field rule · program evaluation

For survey analysis to produce actionable findings, the sample must also support the breakdowns the team plans to run. A total of 300 split across three sites gives 100 per site, which at a site population of 200 yields a ±8% margin: potentially too wide for site-level comparisons.

FAQ

Questions people search alongside this calculator.

Pulled from search-console data for this page. Each answer leads with the bottom-line answer, followed by the supporting context.

How do you calculate survey sample size?

Survey sample size is calculated using Cochran's formula: n₀ = Z² · p · (1 − p) / e². Z is the z-score for the chosen confidence level (1.96 for 95%), p is the expected response distribution (0.5 for maximum variability), and e is the margin of error (0.05 for ±5%). For known populations under 5,000, apply the finite population correction n = n₀ / (1 + (n₀ − 1) / N). At 95% confidence with ±5% margin and unknown population size, the required sample is 384.

How many survey responses do I need to be statistically valid?

For 95% confidence with ±5% margin: 80 responses for a population of 100, 218 for 500, 278 for 1,000, 370 for 10,000, and 384 for unknown or very large populations. These are minimum completed responses, not invitations sent. Divide the required sample by the expected response rate to determine how many people to invite. A 50% response rate doubles the invitation count required.

What is a good survey sample size?

A good survey sample size is the smallest number that produces results accurate enough to support the specific decision being made. For most program evaluations, 95% confidence with ±5% margin (278 to 384 responses) is appropriate. For high-stakes decisions, use 99% confidence or ±3% margin. The most common mistake is treating larger as always better. Collecting twice the needed responses wastes resources without improving precision.

What is Cochran's formula for sample size?

Cochran's formula is n₀ = Z² · p · (1 − p) / e², developed by statistician William Cochran. Z is the z-score for the chosen confidence level (1.645 for 90%, 1.96 for 95%, 2.576 for 99%), p is the expected proportion (use 0.5 for maximum variability), and e is the margin of error. At 95% confidence and ±5% margin, n₀ = 1.96² · 0.5 · 0.5 / 0.05² = 384. For finite populations, apply the correction n = n₀ / (1 + (n₀ − 1) / N).

What is the finite population correction?

The finite population correction reduces the required sample size when surveying a known, bounded population under 5,000. The formula is n = n₀ / (1 + (n₀ − 1) / N), where n₀ is the initial Cochran sample size and N is the total population. For a population of 500, the correction shrinks the required sample from 384 to 218, a 43% reduction. For populations above 10,000 the correction has minimal effect.

What is a statistically valid survey response rate?

Response rate alone does not determine validity. The absolute number of completed responses does. Plan invitations so that even at a conservative 20 to 30% completion rate the required n is cleared. A 15% response rate from 3,000 contacts (450 responses) can be statistically valid; a 60% rate from 60 contacts (36 responses) typically is not. Non-response bias is the real threat, not response rate itself.

What is the margin of error in a survey?

Margin of error is the maximum acceptable gap between the sample result and the true population value. At ±5%, a finding of 72% satisfaction means the true value lies between 67% and 77% with the chosen confidence. Halving the margin of error quadruples the required sample: moving from ±5% (384 responses) to ±2.5% (1,537 responses). Choose the margin based on how much uncertainty the decision can absorb.

Confidence level: 90%, 95%, or 99% · which?

95% confidence is the default for program evaluation, board reporting, and most funder-facing work. Use 90% when the decision is internal and a wider margin is acceptable. That choice cuts the required sample by roughly 30%. Use 99% only for high-stakes decisions where the cost of being wrong is severe. It pushes the required sample size up by roughly 73% versus 95%.

How many responses for statistical significance?

Statistical significance for hypothesis testing differs from descriptive survey sample size. For descriptive surveys, use Cochran's formula: typically 278 to 384 responses. For testing whether group differences are statistically significant (t-test, chi-square), each subgroup must meet its own minimum threshold. If comparing two program sites, each needs its own Cochran-calculated minimum, not just the aggregate.

What sample size do I need for 99% confidence?

At 99% confidence with ±5% margin and unknown population: 664 responses. At 99% with ±2% margin: 4,148 responses. For smaller populations: 87 responses for a population of 100, 286 for 500, 400 for 1,000. The jump from 95% to 99% confidence increases the required sample by approximately 73% across most scenarios. Reserve 99% for decisions where being wrong has serious cost.

After the calculator

Get the full Stakeholder Survey Planning guide.

Sample size is step one of four. The guide covers question type selection, response-rate uplift tactics, and longitudinal design: the four decisions that determine whether the data survives funder scrutiny.

Ready when you are

Make your data work for what matters most.

Sopact Sense: the intelligence layer for stakeholder data. From application intake to longitudinal outcomes, in one place. See your first records cleaned, connected, and AI-ready in under an hour.