
NPS Calculator: Net Promoter Score Formula Guide

Free NPS calculator. Enter responses on a 0–10, 1–5, or 1–7 scale and get your Net Promoter Score, confidence interval, Excel formula, and industry context. Worked examples included.

Updated May 13, 2026
Free tool · 0–10, 1–5, or 1–7 scale · No sign-up

NPS Calculator: the Net Promoter Score formula, applied to your data

Enter responses as percentages, raw counts, or pasted scores. Get your NPS, the promoter / passive / detractor breakdown, a 95% confidence range, and an Excel formula you can drop into your spreadsheet. Built for teams who need a defensible number on a board deck, not a vendor dashboard.

Calculate your NPS

Pick your survey scale and input mode, then enter the share of responses in each bucket: 9–10, 7–8, and 0–6 on the standard scale. The three percentages should sum to 100. Enter your respondent count N to get the confidence interval, or leave it at 0 to skip the statistical band.

Your Net Promoter Score

+40 NPS (on the −100 to +100 scale)

+40 means promoters outnumber detractors by 40 percentage points. That places this cohort in the Great band on the standard NPS scale.

Promoters: 55% (220 of 400) · Passives: 30% (120 of 400) · Detractors: 15% (60 of 400)

Great · Healthy distribution. Track quarterly and pair with open-ended feedback to know what's driving it.
Formula %P − %D · Scale 0–10 · Bucket P 9–10 · Ps 7–8 · D 0–6 · Sample N = 400 · 95% CI ±7 pts

The score takes thirty seconds. Closing the loop on every detractor is where most programs stall.

The formula above is arithmetic. The system around it — persistent stakeholder IDs, qualitative analysis on the open-ended “why”, segment-level views — is what actually moves the number. See how Sopact Sense closes the loop →

Book a 30-min walkthrough

The formula

How to calculate NPS in six steps

NPS = % Promoters − % Detractors. Bucket every 0–10 response: 9–10 are Promoters, 7–8 are Passives, 0–6 are Detractors. Divide each group's count by the total respondent base for the percentages, then subtract the detractor percentage from the promoter percentage. Passives count toward the total but are excluded from the subtraction. The result is a whole number between −100 and +100, reported as a score with a sign — not a percentage.

Step 1 — Ask the canonical question. The original Reichheld phrasing is “On a scale of 0 to 10, how likely are you to recommend [organization] to a friend or colleague?” Keep the wording stable across cycles. Changing the question changes the score, and you lose your trend.

Step 2 — Bucket the responses. Scores of 9 and 10 are Promoters. Scores of 7 and 8 are Passives. Scores of 0 through 6 are Detractors. The split is asymmetric on purpose — Reichheld's original research found that only the top two scores predicted referral behavior. A 7 looks positive on a Likert scale but does not behave that way.

Step 3 — Calculate the percentages. For each group, divide its count by the total number of respondents (including Passives), then multiply by 100. If 220 of 400 respondents scored 9–10, that is 55% Promoters. If 60 of 400 scored 0–6, that is 15% Detractors.

Step 4 — Subtract. %P − %D. In the example above, 55 − 15 = +40. Reported as +40, not 40%, not 0.40.

Step 5 — Read the result. Anything above 0 means promoters outnumber detractors. The bands +0 to +30, +30 to +50, +50 to +70, and above +70 are commonly labelled good, great, excellent, and world-class. Industry context matters — a +35 in telecom is exceptional, a +35 in SaaS is below median.

Step 6 — Treat it as a trend, not a snapshot. A single cycle's score at low sample volume can swing 10+ points from statistical noise alone. The direction of movement across cycles, segmented by cohort and touchpoint, is more reliable than any one number.
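The six steps reduce to a few lines of code. A minimal sketch in Python, using the worked example's counts (the function name and the synthetic score list are ours, for illustration):

def nps(scores):
    """Net Promoter Score from raw 0-10 responses: % Promoters minus % Detractors."""
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # bucket 9-10
    detractors = sum(1 for s in scores if s <= 6)  # bucket 0-6; passives (7-8) count only in n
    return round((promoters - detractors) / n * 100)

# 220 promoters, 120 passives, 60 detractors, N = 400
scores = [10] * 220 + [7] * 120 + [4] * 60
print(nps(scores))  # 40, reported as +40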

Reference

NPS by distribution and sample size

The NPS you report and the NPS your real customer base would produce are not the same number. Below N=100, the 95% confidence interval is wider than ±10 points — a reported +30 could plausibly be anything from below +20 to above +40. The table below shows the score for five canonical distributions, and the ± confidence band at each sample size. Use the band, not the point estimate, in any board or funder conversation.

Sample size N · World-class 75/20/5 → +70 · Healthy ★ 55/30/15 → +40 · Mid 40/35/25 → +15 · Slightly negative 30/25/45 → −15 · Negative 20/35/45 → −25
50 · +70 ±13 · +40 ±19 · +15 ±22 · −15 ±24 · −25 ±20
100 · +70 ±9 · +40 ±13 · +15 ±15 · −15 ±17 · −25 ±14
250 · +70 ±6 · +40 ±9 · +15 ±10 · −15 ±11 · −25 ±9
400 · +70 ±5 · +40 ±7 · +15 ±8 · −15 ±8 · −25 ±7
500 · +70 ±4 · +40 ±6 · +15 ±7 · −15 ±7 · −25 ±6
1,000 · +70 ±3 · +40 ±4 · +15 ±5 · −15 ±5 · −25 ±4
2,500 · +70 ±2 · +40 ±3 · +15 ±3 · −15 ±3 · −25 ±3
10,000 · +70 ±1 · +40 ±1 · +15 ±2 · −15 ±2 · −25 ±1
Confidence intervals are 95% Wald approximations for the difference of two proportions. The ★ column reflects the distribution most programs land in during their first measurement cycle.
Show the math — formula, Excel/Sheets, and the calculation step-by-step

1. Bucket every response by score.

Scale: 0–10 (standard)
Promoter range: 9 to 10
Passive range: 7 to 8
Detractor range: 0 to 6

2. Count each bucket.

Promoter count: 220
Passive count: 120
Detractor count: 60
Total respondents: 400

3. Calculate percentages and apply the formula.

% Promoters = 220 / 400 × 100 = 55%
% Detractors = 60 / 400 × 100 = 15%

NPS = 55 − 15 = +40

4. The same calculation in Excel or Google Sheets. Paste this formula assuming your 0–10 responses live in column A:

=(COUNTIF(A:A,">=9")-COUNTIF(A:A,"<=6"))/COUNTA(A:A)*100

For a 1–5 scale, swap the thresholds: "=5" for promoters, "<=3" for detractors. For a 1–7 scale, ">=6" and "<=4".

5. Confidence interval — what your reported number actually means.

95% CI = ±1.96 × √( (p + d − (p − d)²) / N ) × 100

At N = 400 with the worked distribution above (55/30/15): ±7 pts

Where p is promoter share and d is detractor share, both as decimals. Below N=30, the interval is unreliable — report the trend across cycles instead of a point estimate.
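The same interval, computed rather than read off a table. A short Python sketch of the Wald approximation above (the function name is ours):

import math

def nps_ci(promoters, detractors, n, z=1.96):
    # 95% CI half-width for NPS, in points, from bucket counts
    p, d = promoters / n, detractors / n
    return z * math.sqrt((p + d - (p - d) ** 2) / n) * 100

print(round(nps_ci(220, 60, 400)))  # 7, i.e. the worked +40 is really +40 ±7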

Scale handling

NPS on a 1–5 or 1–7 scale — the conversion that does not exist

A 1–5 NPS and a 0–10 NPS are different metrics, not the same metric on different rulers. The bucket thresholds change, the response distribution changes, and the score is no longer comparable to published 0–10 benchmarks. If you collect on a 1–5 scale, calculate NPS within that scale (5 = Promoter, 4 = Passive, 1–3 = Detractor) and track your own trend — but do not put your number next to a Bain or Satmetrix industry average and call it a comparison.

The 0–10 scale is the canonical instrument. It produces an 11-point distribution with most respondents landing in the 7–9 range. The Promoter / Passive / Detractor split (9–10 / 7–8 / 0–6) was calibrated to that distribution by Fred Reichheld and Bain & Company in the early 2000s, against actual referral behavior. Every published industry benchmark assumes this scale.

A 1–5 scale compresses the distribution. Most respondents land at 3, 4, or 5. The natural bucketing is 5 = Promoter (a clear top-box), 4 = Passive, 1–3 = Detractor. The math is identical — %P − %D — but the response shape is different, so a 1–5 NPS typically reads 15–25 points higher than the same population on a 0–10 scale would.

A 1–7 scale (Likert-derived) sits between them. 6–7 are Promoters, 5 is Passive, 1–4 are Detractors. Common in academic and Likert-trained survey programs.
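If a program collects on more than one scale, keeping the thresholds in one lookup stops the buckets drifting between reports. A hypothetical Python sketch mirroring the three scales above (the names and table structure are illustrative):

# (promoter_min, passive_min) per scale; anything below passive_min is a detractor
THRESHOLDS = {
    "0-10": (9, 7),  # 9-10 / 7-8 / 0-6
    "1-5": (5, 4),   # 5 / 4 / 1-3
    "1-7": (6, 5),   # 6-7 / 5 / 1-4
}

def bucket(score, scale="0-10"):
    promoter_min, passive_min = THRESHOLDS[scale]
    if score >= promoter_min:
        return "promoter"
    if score >= passive_min:
        return "passive"
    return "detractor"

print(bucket(8))         # passive on the standard scale
print(bucket(4, "1-5"))  # passive on 1-5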

When to deviate from 0–10. If you are integrating NPS into an existing 1–5 satisfaction instrument and want one consistent scale across the survey, the alternative scales are defensible. If you are starting fresh and want to compare against industry benchmarks, use 0–10 — that is the only way the comparison holds.

Statistical validity

How many responses for a reliable NPS

Below 50 responses, NPS is noise; above 400 it becomes a number you can defend. At a typical 55/30/15 distribution, roughly 400 responses gives a 95% confidence interval of about ±7 points. Tightening that to ±5 takes roughly 850 responses, and ±3, the precision that lets you detect a 5-point shift between cycles, takes roughly 2,300. Smaller cohorts can still be useful if you report the trend across multiple cycles rather than treating any single number as exact. Use the sample size calculator to size your cohort to the precision you need.

The arithmetic of confidence. NPS is the difference between two proportions, which has more variance than either proportion alone. The 95% Wald approximation is ±1.96 × √( (p + d − (p − d)²) / N ) × 100, where p and d are promoter and detractor shares. At a typical mid-band distribution (55/30/15), this works out to roughly ±14 points at N=100, ±7 at N=400, and ±5 at N=1,000.
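Inverting that formula gives the cohort size needed for a target margin. A sketch under the same Wald assumptions (the default 55/30/15 shares are an assumption; substitute your own):

import math

def required_n(margin_pts, p=0.55, d=0.15, z=1.96):
    # responses needed for a given 95% CI half-width, in points
    variance = p + d - (p - d) ** 2
    return math.ceil((z / (margin_pts / 100)) ** 2 * variance)

for margin in (7, 5, 3):
    print(margin, required_n(margin))  # 424, 830, 2305 at 55/30/15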

What the confidence interval actually tells you. At N=400 with a reported +40, the true population NPS sits between roughly +33 and +47 with 95% confidence. A reported +42 next quarter is not demonstrably higher than +40 — both bands overlap. The only way to claim improvement is to either (a) collect enough responses to narrow the interval, or (b) show a consistent direction across three or more cycles.
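To test whether two cycles actually differ, compare the gap to the standard error of the difference rather than eyeballing the bands. A sketch under the same Wald assumptions (the function name and counts are illustrative):

import math

def shift_is_significant(cycle1, cycle2, z=1.96):
    # each cycle is (promoters, detractors, n); z-test on the NPS difference
    def score_and_var(promoters, detractors, n):
        p, d = promoters / n, detractors / n
        return (p - d) * 100, (p + d - (p - d) ** 2) / n
    nps1, var1 = score_and_var(*cycle1)
    nps2, var2 = score_and_var(*cycle2)
    return abs(nps2 - nps1) > z * math.sqrt(var1 + var2) * 100

# +40 this quarter vs +42 next quarter, both at N = 400
print(shift_is_significant((220, 60, 400), (224, 56, 400)))  # False: inside the noise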

The cohort question is upstream of the precision question. A statistically valid NPS for the wrong cohort tells you nothing. Segment your respondent pool by what you can actually act on — region, segment, manager, tenure, lifecycle stage — and aim for the precision band within each segment, not across the aggregate. An aggregate NPS of +40 with ±5 points across 1,000 customers can hide a Manufacturing segment running at −5 with the rest of the customer base at +50.

Practical heuristics. For a single executive number, aim for N=400 minimum. For segment-level NPS, target 200 responses per segment. For program-level evaluation against industry benchmarks, target 1,000. For pilot reads on a new touchpoint, 50 responses is enough to detect catastrophic problems but not enough for nuance — treat it as directional only.

Benchmarks

What counts as a good NPS

Above 0 is positive, +30 is good, +50 is great, +70 is world-class — but the bands shift heavily by industry. SaaS and software programs typically run +30 to +50. Healthcare, multifamily, and education programs run +10 to +40. Telecom, insurance, and airlines often run below +20 and sometimes negative. A +35 in telecom is exceptional; a +35 in SaaS is below median. Use the bands as direction, not as a verdict, and weigh your own trend more heavily than any external comparison. See the 2026 NPS benchmarks by industry for the segmented view.

The bands are heuristics, not standards. Reichheld and Bain did not publish official thresholds. The +0/+30/+50/+70 split has been adopted by the market because it produces convenient labels (good, great, excellent, world-class) — not because there is empirical evidence that the +30 boundary corresponds to a discontinuity in customer behavior. The bands work fine for internal labelling. They do not work for comparing across companies.

Industry context shifts the bands by 30–60 points. Categories with structural friction — long contracts, billing complexity, low switching ease — produce lower NPS regardless of underlying service quality. Telecom averages have historically run below +20 and dipped into negative territory. SaaS programs with engaged users routinely score above +50. Comparing your number to the wrong industry average produces the Benchmark Mirage — a chart that looks authoritative but measures incompatible programs.

Your own trend matters more than the external comparison. A program that moves from +18 to +28 over four quarters has improved meaningfully. A program that stays flat at +52 while the industry climbs to +58 has lost ground in relative terms. Trend is signal; absolute number is context.

How to use bands well. Pick one published benchmark within the last 12 months from a source that documents methodology (Bain & Company, CustomerGauge, Satmetrix). Note the response volume and scale they used. Compare cautiously, and lead any board conversation with your own trend chart — not the external comparison.

The three groups

Promoters, passives, detractors

The Promoter / Passive / Detractor split is asymmetric on purpose. Reichheld's original research found that only the top two scores (9 and 10) reliably predicted referral behavior; a 7 looks positive on a Likert scale but does not behave that way. Passives count toward the total respondent base (the denominator) but are excluded from both sides of the subtraction. Detractors include the broad 0–6 range because the predictive boundary between active negative word-of-mouth and neutrality falls between 6 and 7, not lower. NPS can also be negative, which is not a measurement error but a signal that detractors outnumber promoters.

Promoters (9–10) are the small base that drives growth. They refer others, renew, expand. The behavioral evidence behind the 9-and-up threshold is the most rigorously tested part of the methodology. Moving a Passive (7–8) to a Promoter is typically harder than recovering a Detractor (0–6) to a Passive — the gap between “satisfied” and “enthusiastic” is bigger than the gap between “unhappy” and “neutral”.

Passives (7–8) are the future Promoters and future Detractors. They count toward N but not toward the score, which can mislead — a program with 65% Passives and 20% Promoters / 15% Detractors reports +5 NPS, but the trajectory depends entirely on which way the Passives drift. Programs that treat Passives as “satisfied enough” miss the next cycle's risk.

Detractors (0–6) are recoverable for a short window. A 4 or a 5 means there is a specific frustration the respondent will name if asked. The recovery window is roughly 48 hours from response submission — long enough for an account owner to make contact, short enough that the experience is still fresh. Anonymous NPS cannot close that loop; the architecture of identity at collection is what makes detractor recovery operationally feasible.

The same three-group structure applies to Employee NPS (eNPS). Employees scoring 9–10 are Promoters of the workplace, 7–8 are Passives, 0–6 are Detractors. eNPS typically runs 20–30 points lower than customer NPS because employees see operational reality customers do not, and a +20 eNPS is respectable.

FAQ

Common questions about calculating NPS

What is the NPS formula?

NPS = % Promoters − % Detractors. Promoters score 9–10 on a 0–10 scale, Detractors score 0–6. Passives (7–8) count toward the total respondent base but are excluded from the subtraction. The result is a whole number between −100 and +100, reported as an integer rather than a percentage.

How do I calculate NPS in Excel?

If your 0–10 responses live in column A, the formula is =(COUNTIF(A:A,">=9")-COUNTIF(A:A,"<=6"))/COUNTA(A:A)*100. The COUNTIF for >=9 counts promoters, the COUNTIF for <=6 counts detractors, COUNTA gives the response base. Round the result to a whole number. Google Sheets uses the same formula. For a 1–5 scale, swap the thresholds to "=5" and "<=3".

What is a good NPS score?

Anything above 0 means you have more promoters than detractors. The conventional bands are +0 to +30 good, +30 to +50 great, +50 to +70 excellent, above +70 world-class. Bands vary by industry — telecom and airline benchmarks run far lower than SaaS or software benchmarks, so a +35 in telecom and a +35 in SaaS are not comparable.

Can NPS be negative?

Yes. NPS ranges from −100 to +100, and any score below zero means detractors outnumber promoters. A negative NPS is not a measurement error — it is the metric working as designed. Categories with structural friction (telecom, insurance, airlines) sometimes report negative NPS even at decent service quality. What matters is what you do in the 48 hours after the score surfaces.

How do I calculate NPS on a 1–5 scale?

On a 1–5 scale, a 5 is a Promoter, a 4 is a Passive, and 1–3 are Detractors. Then the same formula applies: NPS = % Promoters − % Detractors. Note that a 1–5 NPS is not directly comparable to a 0–10 NPS — the distribution of responses is different, so published 0–10 benchmarks will not apply. Use a 1–5 NPS to track your own trend, not for cross-industry comparison.

How many responses do I need for a reliable NPS?

Below 50 responses, NPS can swing 10–20 points from statistical noise alone. For a confidence interval of about ±7 points at 95% confidence, you need roughly 400 responses; tightening to ±5 takes roughly 850, and ±3 roughly 2,300. Smaller cohorts can still be useful if you report the trend across cycles instead of treating a single score as precise.

What is the difference between NPS and CSAT?

NPS measures recommendation likelihood — would you tell others? CSAT measures satisfaction with a specific interaction or product. NPS is forward-looking and indexes loyalty; CSAT is backward-looking and indexes transaction quality. Most programs run both: CSAT after individual touchpoints (a support ticket, a delivery), NPS quarterly for the relationship.

How often should I measure NPS?

Relationship NPS is typically measured quarterly or semi-annually. Transactional NPS (after a purchase, an onboarding, a support resolution) is measured at the moment of the transaction. Surveying more often than your operational ability to close the loop on detractors creates score volatility without improvement — the cadence should match the speed at which feedback can reach the person who can act on it.

What is eNPS?

eNPS is Employee Net Promoter Score — the same formula applied to the question “How likely are you to recommend [organization] as a place to work?” eNPS typically runs lower than customer NPS, so a +20 eNPS is respectable and +40 is strong. Segment by department, manager, and tenure to surface what an aggregate eNPS hides.

Should I report NPS as a number or a percentage?

Report NPS as a whole number, not a percentage. The Bain & Company convention is that NPS is a score on a −100 to +100 scale, written as an integer with a sign (e.g. +42 or −8). Writing it as 42% creates confusion with CSAT and top-box satisfaction metrics, which are genuinely percentages.

Get the full NPS measurement playbook

The end-to-end guide to NPS programs that actually move — from question wording and segment design to closed-loop detractor recovery and longitudinal cohort tracking. Built on the same architecture that powers Stakeholder Intelligence at Sopact.

Read the Stakeholder Intelligence pillar →

Related

Other tools and NPS guides in this cluster

Sopact's NPS cluster covers the calculation (this page), the benchmarks, the feedback analysis, the negative-score recovery playbook, the employee variant, and the operational software. The survey design and methodology pages cover the upstream choices that make any score defensible.

Make your NPS work for what matters most.

NPS is the number. The system around it — continuous collection, persistent identity, qualitative analysis of the “why”, and a closed loop on every detractor — is what actually moves it. That is what Sopact Sense was built to do.