

Author: Unmesh Sheth

Last Updated:

March 29, 2026

Founder & CEO of Sopact with 35 years of experience in data systems and AI

Survey Sample Size Calculator: Cochran's Formula, Tables & How to Calculate

A funder asks how many participants you surveyed. Your program director wants to know if the results are statistically valid. You have 87 responses from a program cohort of 600. Is that enough?

The answer depends on three numbers you probably haven't calculated yet: your confidence level, your margin of error, and your population size. Getting any of them wrong either wastes budget on unnecessary responses or invalidates results you've already collected.

Survey Sample Size Calculator

Cochran's formula + finite population correction · Outputs the minimum completed responses needed and the number of surveys to send

How to set the inputs:

Confidence level: 95% is the standard for most program evaluations and research.
Margin of error: halving it quadruples the required sample.
Response distribution: use 50% when unknown; this gives the largest (safest) sample.
Population size: the finite population correction is applied for populations under 5,000.
Expected response rate: converts the minimum sample into surveys to send; it does not change the minimum itself.

Once you have your responses, Sopact Sense analyzes open-ended and quantitative data together — automatically.

Explore Sopact Sense →

How to Calculate Survey Sample Size

Survey sample size calculation uses Cochran's formula: n₀ = Z² × p(1−p) / e². Each variable controls a different dimension of precision.

Z (z-score) is determined by your confidence level. At 90% confidence, Z = 1.645. At the standard 95% confidence used in most program evaluations and research, Z = 1.96. At 99% confidence, Z = 2.576. A higher z-score means a larger required sample.

p (response distribution) is the expected proportion of responses in one direction — the proportion of your population that holds a particular characteristic. Use p = 0.5 when you don't know how split the responses will be. This gives the most conservative (largest) sample estimate. If you know from prior research that roughly 80% of your population falls into one category, using p = 0.8 reduces the required sample.

e (margin of error) is the maximum acceptable gap between your sample result and the true population value. At ±5%, a 70% satisfaction finding means the true population value is between 65% and 75%. Halving your margin of error does not double your sample size — it quadruples it. Moving from ±5% to ±2.5% takes you from 384 responses to 1,537.

Plugging 95% confidence (Z = 1.96), p = 0.5, and ±5% margin into the formula: n₀ = 1.96² × 0.5 × 0.5 / 0.05² = 384. This is the number cited in virtually every survey research guide for unlimited populations. It is not a magic constant — it is the output of specific parameter choices.
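That arithmetic can be sketched in a few lines of Python. This is a minimal illustration; the z-score lookup table and function name are ours, not a library API:

```python
# Cochran's formula for an unlimited or unknown population:
# n0 = Z^2 * p * (1 - p) / e^2

Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def cochran_n0(confidence=0.95, p=0.5, e=0.05):
    """Minimum sample size before any finite population correction."""
    z = Z_SCORES[confidence]
    return z ** 2 * p * (1 - p) / e ** 2

print(round(cochran_n0()))                  # 384 (95% confidence, ±5% margin)
print(round(cochran_n0(confidence=0.99)))   # 664 (99% confidence, ±5% margin)
```

Changing any one parameter and re-running makes the sensitivity visible, e.g. `cochran_n0(e=0.025)` returns roughly 1,537.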

Cochran's Sample Size Formula: All Four Variables

n₀ = Z² × p(1−p) / e² — the standard formula for unlimited or unknown populations

n₀ = Z² × p(1−p) / e²

Z: z-score (confidence level)
90% → Z = 1.645 · 95% → Z = 1.960 (standard) · 99% → Z = 2.576
Higher confidence means a larger z-score and a larger sample. Moving from 95% to 99% increases the sample by ~73%.

p: response distribution
0.5 → maximum variability (use when unknown) · 0.8 → known skew (smaller sample) · 0.2 → rare characteristic
p = 0.5 gives the largest, most conservative sample. If you know the true proportion, using it reduces the required sample.

e: margin of error
±10% → 68 responses · ±5% → 384 responses (standard) · ±3% → 1,068 responses · ±2% → 2,401 responses
The relationship is quadratic: halving the margin of error quadruples the required sample. Choose based on decision stakes.

N: population size (finite population correction)
Pop = 100 → need 80 (not 384) · Pop = 500 → need 217 · Pop = 1,000 → need 278 · Pop ≥ 10,000 → need ~370 (≈384)
Apply the FPC: n = n₀ / [1 + (n₀−1)/N]. It only matters for populations under ~5,000.

Worked example: program evaluation, 95% confidence, ±5% margin, population = 600 participants:
n₀ = 1.96² × 0.5 × 0.5 / 0.05² = 384 → FPC: n = 384 / [1 + (383/600)] = 234 responses needed
Finite Population Correction Formula
n = n₀ / [1 + (n₀ − 1) / N]   Apply when population is known and under 5,000. Above 10,000, the correction changes results by less than 4%.

Finite Population Correction: When 384 Is Too Many

The Cochran formula assumes an infinite or very large population. When you're surveying a defined, bounded group — a company's 400 employees, a program's 600 participants, a school's 1,200 students — you don't need 384 responses. You apply the finite population correction: n = n₀ / [1 + (n₀ − 1) / N], where N is your total population.

For a population of 500, this correction reduces the requirement from 384 to 217 — a 43% reduction. For a population of 200, you need only 132, not 384. For populations above 10,000, the correction has negligible effect and the Cochran baseline applies essentially unchanged.
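The correction is a one-line function. A sketch in Python (the function name is ours, chosen for clarity):

```python
def finite_population_correction(n0, population):
    """n = n0 / (1 + (n0 - 1) / N): shrink the Cochran baseline for a bounded group."""
    return n0 / (1 + (n0 - 1) / population)

# Starting from the 384-response baseline (95% confidence, ±5% margin):
print(round(finite_population_correction(384, 500)))  # 217
print(round(finite_population_correction(384, 200)))  # 132
print(round(finite_population_correction(384, 600)))  # 234
```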

The practical implication: most nonprofit program evaluations, employee surveys, and cohort studies operate on populations under 2,000. The finite population correction is not an academic refinement — it is the difference between achievable and unachievable survey design for most organizations using pre and post surveys to measure program outcomes.

Survey Sample Size Table: Quick Reference

Survey Sample Size Table: Minimum Responses by Population

All values use Cochran's formula with p = 0.5 (maximum variability) and finite population correction

Population | 90% / ±10% | 95% / ±5% (standard) | 95% / ±3% | 99% / ±5% | 99% / ±2%
100 | 41 | 80 | 92 | 87 | 98
200 | 51 | 132 | 169 | 154 | 191
500 | 60 | 217 | 341 | 286 | 446
1,000 | 64 | 278 | 516 | 400 | 806
2,000 | 66 | 322 | 696 | 498 | 1,350
5,000 | 67 | 357 | 879 | 586 | 2,267
10,000 | 68 | 370 | 964 | 622 | 2,932
50,000 | 68 | 381 | 1,045 | 655 | 3,829
∞ / Unknown | 68 | 384 | 1,068 | 664 | 4,147

For practitioners who need answers without running calculations, this reference covers the most common scenarios.

The most counterintuitive insight this table reveals: for populations above 10,000, sample size requirements are nearly identical regardless of how large the population grows. Surveying 384 people from a city of 100,000 gives you the same ±5% precision as surveying 384 from a country of 300 million. This is because sample size determines how much variance you can detect — not what percentage of a population you've reached.
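The whole reference table can be reproduced by combining the two formulas. A sketch (rounding conventions vary between published tables; some round up rather than to the nearest integer, so values may differ by one):

```python
Z_SCORES = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def sample_size(population=None, confidence=0.95, e=0.05, p=0.5):
    """Cochran's n0, with the finite population correction when N is known."""
    z = Z_SCORES[confidence]
    n = z ** 2 * p * (1 - p) / e ** 2
    if population is not None:
        n = n / (1 + (n - 1) / population)
    return round(n)

for pop in (100, 500, 1000, 10000, None):
    label = pop if pop is not None else "unlimited"
    print(f"{label}: {sample_size(pop)}")   # 80, 217, 278, 370, 384
```

Note how the result flattens out: the jump from a population of 10,000 to unlimited changes the requirement by only 14 responses.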

When using a longitudinal survey design where participants respond at multiple timepoints, calculate your sample size based on your smallest expected wave — typically the final follow-up, where attrition is highest. If you need 278 completed post-surveys for a population of 1,000, you'll need to enroll significantly more participants at baseline to account for dropout between waves.
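One way to budget for that attrition is to assume a fixed retention rate per wave and work backwards. The 75% retention figure below is hypothetical, purely for illustration:

```python
from math import ceil

def baseline_enrollment(final_wave_n, retention_per_wave, n_waves):
    """Enroll enough at baseline that the final wave still meets the minimum."""
    return ceil(final_wave_n / retention_per_wave ** n_waves)

# Need 278 completed post-surveys; assume 75% retention across 2 follow-up waves
print(baseline_enrollment(278, 0.75, 2))  # 495
```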

How Many Survey Responses Do I Need to Be Statistically Valid?

For a statistically valid survey at 95% confidence with ±5% margin of error, you need 384 completed responses if your population is unknown or very large. For smaller, defined populations, you need fewer: 80 responses for a population of 100, 217 for 500, 278 for 1,000. These numbers come from Cochran's formula and represent the minimum completed responses — not invitations sent.

"Statistically valid" means your results accurately reflect the broader population within your stated margin of error at your chosen confidence level. It does not mean the survey is free of bias. A survey with 500 responses that asks leading questions, surveys only English speakers, or reaches only the most engaged participants may be statistically large but methodologically flawed. Sample size answers the quantity question; survey design answers the quality question.

For qualitative surveys where open-ended responses are the primary data, Cochran's formula does not apply. Qualitative research operates on theoretical saturation — typically 15–50 interviews — where sample size is determined by when new responses stop producing new themes.

Statistically Valid Survey Response Rate

A statistically valid survey response rate is not a fixed number. It is the rate at which you receive the minimum required completed responses from the pool you surveyed. If your Cochran calculation requires 217 completed responses from a population of 500, a 50% response rate means contacting 434 people, while a 70% response rate means contacting 310.

Response rate and sample size are related but distinct. A 10% response rate from a large pool can still produce a statistically valid result if the absolute number of completed responses meets your Cochran minimum — provided the non-respondents don't differ systematically from respondents. Non-response bias, not response rate itself, threatens validity.
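Converting a required sample into an invitation count is a one-liner (a sketch; the function name is ours):

```python
from math import ceil

def invitations_needed(required_responses, expected_response_rate):
    """How many people to contact, given an expected response rate."""
    return ceil(required_responses / expected_response_rate)

print(invitations_needed(217, 0.70))  # 310
print(invitations_needed(384, 0.10))  # 3840
```

Rounding up matters here: falling one response short of the minimum is a miss, while one extra invitation costs nothing.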

Response rate alone does not determine validity; the absolute number of completed responses does. Typical rates by distribution channel:

In-Person / Captive (training, classroom, event): 70–90%. Highest rates; participants are present and engaged at the time of collection.
Warm / Known Contacts (program participants, clients): 35–55%. Relationship-driven; prior engagement predicts response likelihood.
Targeted Email (segmented opt-in list): 15–35%. Personalized, relevant messaging pushes rates toward the upper range.
Cold / Broad Distribution (unknown population, mass email): 5–15%. Requires a large invitation volume to reach the minimum completed responses.

Validity depends on completed responses, not the response rate percentage:

Statistically valid at a 10% rate: you need 384 responses, so you contact 3,840 people; at a 10% rate that yields 384 completed. Valid, provided non-respondents don't differ systematically from respondents.
Not valid at a 60% rate: you need 278 responses from a population of 1,000 but contact only 60 people; at a 60% rate that yields 36 completed. Below the minimum threshold regardless of rate.

The real threat is non-response bias, not response rate. If people who don't respond differ systematically from those who do (e.g., only satisfied participants respond), your results are biased regardless of whether you hit your sample size minimum. Test for non-response bias by comparing early vs. late responders, or by comparing known characteristics of responders vs. non-responders.

Industry response rates vary considerably: in-person or captive-audience surveys (training sessions, classroom settings) typically achieve 70–90%; targeted email to known contacts achieves 30–50%; cold email or broad online distribution achieves 5–20%. To increase survey response rates, the most effective levers are personalized invitations, mobile-optimized design, and timing surveys close to program touchpoints when respondents have fresh context.

For program evaluation using mixed method surveys, build your sample size around the quantitative strand — then ensure your qualitative open-ended questions are answered by a representative subset, not just the most willing respondents.

What Is a Good Survey Sample Size?

A good survey sample size is the smallest number of completed responses that produces results accurate enough to support your specific decision. For most program evaluations, employee engagement surveys, and customer satisfaction studies, 95% confidence with ±5% margin of error is the right standard — requiring 278–384 responses depending on population size. For high-stakes decisions affecting significant budget allocations or policy changes, 99% confidence or ±3% margin is appropriate, requiring 400–1,068 responses.

The most common error is treating sample size as a credibility signal rather than a precision instrument. Collecting 1,000 responses when 278 would suffice wastes resources, delays results, and often means the survey doesn't launch at all. Collecting only 50 when 278 are needed means your ±5% finding has an actual margin of error closer to ±14% — wide enough to make the result meaningless for decisions.
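You can back-solve the margin of error a completed sample actually delivers by inverting Cochran's formula. A sketch using the infinite-population approximation, so it is slightly conservative for small bounded populations:

```python
from math import sqrt

def achieved_margin(n, confidence=0.95, p=0.5):
    """Margin of error implied by n completed responses: e = Z * sqrt(p(1-p)/n)."""
    z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}[confidence]
    return z * sqrt(p * (1 - p) / n)

print(f"±{achieved_margin(384):.1%}")  # ±5.0%
print(f"±{achieved_margin(50):.1%}")   # ±13.9%
```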

[embed: component-cta-ssc-mid.html]

For survey analysis to produce actionable findings, the sample must also support the breakdowns you plan to run. If you need to compare outcomes across three program sites, each site needs its own minimum sample — not just the aggregate. A total sample of 300 split equally across three sites gives you 100 per site, which at a population of 200 per site yields roughly a ±7% margin — potentially too wide for meaningful site-level comparisons.

Survey Confidence Level: 90%, 95%, or 99%?

Survey confidence level describes how certain you are that your results accurately reflect the population. A 95% confidence level means that if you ran your survey 100 times with different random samples, 95 of those surveys would produce results within your stated margin of error. The remaining 5 would fall outside — not because of error, but because of normal statistical variation.

Choosing a confidence level is a risk tolerance decision, not a statistical one. The consequences of a wrong decision from your survey determine how much confidence you need.

At 90% confidence: appropriate for exploratory research, preliminary needs assessments, and pulse checks where you're identifying directions rather than validating precise findings. The smaller required sample makes these surveys faster and cheaper to run.

At 95% confidence: the standard for program evaluations, customer satisfaction research, employee engagement surveys, and most published research. The 1.96 z-score yields sample sizes that balance rigor with practicality.

At 99% confidence: justified when errors have serious financial or operational consequences — regulatory compliance surveys, pre-product-launch validation, clinical or health outcomes research. The required sample size is 73% larger than at 95%.

The confidence level you choose should be stated explicitly in any report or presentation of survey findings, alongside your margin of error and sample size. Without these three numbers, a survey finding — "78% of participants reported improved confidence" — cannot be evaluated for reliability.

Frequently Asked Questions

How do you calculate survey sample size?

Survey sample size is calculated using Cochran's formula: n₀ = Z² × p(1−p) / e², where Z is the z-score for your confidence level (1.96 for 95%), p is the expected response distribution (0.5 for maximum variability), and e is your margin of error (0.05 for ±5%). For known populations under 5,000, apply the finite population correction: n = n₀ / [1 + (n₀ − 1) / N]. At 95% confidence with ±5% margin and unknown population size, the required sample is 384.

How many survey responses do I need to be statistically valid?

For a statistically valid survey at 95% confidence with ±5% margin of error: 80 responses for a population of 100; 217 for a population of 500; 278 for a population of 1,000; 370 for 10,000; 384 for unknown or very large populations. These are minimum completed responses — not invitations sent. Divide your required sample by your expected response rate to determine how many people to invite.

What is a good survey sample size?

A good survey sample size produces results accurate enough to support your specific decision. For most program evaluations and research, 95% confidence with ±5% margin (requiring 278–384 responses) is appropriate. For high-stakes decisions, use 99% confidence or ±3% margin. The most common mistake is treating larger as always better — collecting twice the needed responses wastes resources without improving precision.

What is Cochran's formula for sample size?

Cochran's formula is n₀ = Z² × p(1−p) / e², developed by statistician William Cochran. Z is the z-score for your chosen confidence level (1.645 for 90%, 1.96 for 95%, 2.576 for 99%), p is the expected proportion (use 0.5 for maximum variability), and e is the margin of error. At 95% confidence and ±5% margin, n₀ = 1.96² × 0.5 × 0.5 / 0.05² = 384. For finite populations, apply the correction: n = n₀ / [1 + (n₀ − 1) / N].

What is the finite population correction?

The finite population correction reduces the required sample size when surveying a known, bounded population under 5,000. The formula is n = n₀ / [1 + (n₀ − 1) / N], where n₀ is the initial Cochran sample size and N is the total population. For a population of 500, this reduces the required sample from 384 to 217 — a 43% reduction. For populations above 10,000, the correction has minimal effect.

What is a statistically valid survey response rate?

A statistically valid survey response rate is any rate at which you receive the minimum required completed responses from your sampled pool. There is no universal "valid" response rate — what matters is whether the absolute number of completed responses meets your Cochran minimum, and whether non-respondents differ systematically from respondents. A 15% response rate from 3,000 contacts (450 responses) can be statistically valid for most research; a 60% rate from 60 contacts (36 responses) typically is not.

What is the margin of error in a survey?

The margin of error is the maximum acceptable gap between your sample result and the true population value. At ±5%, a finding of 72% satisfaction means the true population value lies between 67% and 77% with your chosen confidence. Halving the margin of error quadruples the required sample — moving from ±5% (384 responses) to ±2.5% (1,537 responses). Choose your margin based on how much uncertainty is acceptable for the decision you're making.

What is a good confidence level for a survey?

95% confidence is the standard for most program evaluations, market research, customer satisfaction surveys, and academic research. It means 95 out of 100 repeated surveys would produce results within your stated margin of error. Use 90% for exploratory research and quick pulse checks; 99% for high-stakes decisions with significant financial or operational consequences. Always report the confidence level alongside your margin of error and sample size.

How many survey responses do I need for statistical significance?

Statistical significance for hypothesis testing (comparing group differences) differs from the sample size for descriptive survey research. For descriptive surveys, use Cochran's formula — typically 278–384 responses. For testing whether differences between groups are statistically significant (t-test, chi-square), each subgroup must meet its own minimum sample threshold. If comparing two program sites, each needs its own Cochran-calculated minimum, not just the aggregate.

What sample size do I need for a 99% confidence level?

At 99% confidence with ±5% margin of error and unknown population: 664 responses. At 99% with ±2% margin: 4,147 responses. For smaller populations: 87 responses for a population of 100 at 99%/±5%; 286 for 500; 400 for 1,000. The jump from 95% to 99% confidence increases your required sample by approximately 73% across most scenarios.

The Precision Illusion

Sample size tells you how many responses you need. It doesn't analyze what they mean.

Sopact Sense collects, analyzes open-ended and quantitative responses together, and generates findings in minutes — so your survey data actually drives program decisions.

Sopact Sense

Calculate the right sample. Then actually use the data.

From survey design through automated analysis — Sopact Sense closes the loop between data collection and program decisions.

Step 1
Calculate your sample size

Use the calculator above. Cochran's formula + finite population correction for any population.

Step 2
Collect with Sopact Sense

Clean-at-source architecture. Unique participant IDs. No cleanup, no duplicate records.

Step 3
Analyze automatically

AI extracts themes from open-ended responses and correlates them with quantitative metrics in minutes.
