
NPS Benchmarks by Industry 2026: Averages & eNPS | Sopact


Updated April 21, 2026

NPS Benchmarks by Industry 2026: Averages, eNPS Ranges, and How to Read Them

Your board asks whether your NPS of +42 is good. You search "average NPS by industry," find a chart from a vendor blog, notice it hasn't been updated since 2021, and present the number anyway. Three months later a competitor publishes their score of +61 and your funder asks why you're trailing. What nobody told you: the competitor used a 5-point scale, surveyed only their most engaged users, and their "+61" is not the same metric as your "+42." That gap between your score and theirs is methodology, not loyalty.


NPS benchmarks by industry provide directional context for interpreting a Net Promoter Score, but they rarely provide a precise peer comparison. Published benchmarks mix incompatible methodologies — different scales, different response rates, different collection moments — so a published "+40 average" for your industry and your own "+40" can mean radically different things. This page covers the 2026 NPS benchmarks across 14 industries, eNPS (employee Net Promoter Score) benchmarks for HR teams, response-rate benchmarks, the tools and providers that publish benchmark data, and — most importantly — how to read benchmarks without falling into the Benchmark Mirage.

NPS Benchmarks by Industry · 2026
Benchmarks give context. They don't give targets.

Industry NPS averages range from below zero (telecom) to above +60 (premium consumer brands). The spread reflects business model, customer segment, and survey methodology — not relative performance. This page covers 2026 NPS and eNPS benchmarks across 14 industries, with the methodological caveats most published charts skip.

Industry NPS Range · 2026
Why "industry average" is rarely one number
[Chart] NPS range per industry, 2026 · median to top-quartile, scale 0 to +75: Telecom +10 · Healthcare +32 · Financial Services +35 · SaaS +33 · Education +35 · Nonprofit +40 · Retail +40 · Consumer Tech +45 · Premium brands +70+. Telecom's range dips into below-zero territory. Aggregated from 2024–2025 research.
Ownable Concept
The Benchmark Mirage

Cross-industry NPS averages mix incompatible survey methodologies, response bases, and collection moments — producing a number that looks authoritative and measures nothing you can act on against your own program. A 5-point scale converted to NPS looks like a 0–10 result. A 15% response rate from your engaged segment looks like a full-base sample. A post-transaction snapshot looks like a quarterly relational score. The alternative isn't "ignore benchmarks" — it's your own internal trend, where methodology stays constant across every cycle.

14
industries covered in this 2026 benchmark reference
−5 to +70
typical median NPS range across industries — 75-point spread
+10 to +30
typical good eNPS range — employee NPS runs ~20 points below customer NPS
< 12 mo
recency ceiling for a useful benchmark — older data feeds the Mirage

What is an NPS benchmark?

An NPS benchmark is an industry-level or sector-level average Net Promoter Score, aggregated from surveys across many companies in the same category. Benchmarks are published by research firms (Bain & Company, Satmetrix/NICE, CustomerGauge), survey platforms (Qualtrics, SurveyMonkey, Medallia), and industry analysts. The intent is to give an organization context for interpreting its own score — "are we above or below the industry norm?"

Benchmarks are directional, not precise. Two companies in the same industry can score radically differently because of survey methodology, customer segment, collection timing, and response rate. A useful benchmark clearly documents its methodology (scale, collection moment, response-rate convention) and is less than 12 months old. Most benchmarks fail at least one of those tests, which is why your own trend line usually matters more than the external comparison.

What is a good NPS score?

A good NPS score depends on industry. Absolute benchmarks treat any positive score as baseline healthy, +30 to +50 as good, +50 to +70 as excellent, and +70+ as world-class. Most consumer and B2B contexts fall in the +20 to +50 range; industries with structural friction (telecom, cable, airlines) often average below +20, while premium consumer brands and high-engagement nonprofits often exceed +60. The wrong way to use these ranges: as universal targets. The right way: as directional context alongside your own trend.

Industry matters more than the absolute number. A +40 NPS is excellent in cable/telecom, healthy in financial services, and merely average in SaaS or e-commerce. Benchmark your score against companies with genuinely comparable customer bases, business models, and survey methodologies — not against a single aggregated industry average pulled from a five-year-old report.
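The thresholds above all rest on the same arithmetic: on a 0–10 scale, Promoters score 9–10, Detractors 0–6, Passives 7–8, and NPS is the percentage of Promoters minus the percentage of Detractors. A minimal sketch in Python (the function name and sample ratings are illustrative):

```python
def nps(scores):
    """Net Promoter Score from 0-10 ratings: %Promoters - %Detractors.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) count in the
    denominator but not the numerator.
    """
    if not scores:
        raise ValueError("no responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical respondent base: 5 Promoters, 3 Passives, 2 Detractors
ratings = [10, 9, 9, 8, 8, 7, 6, 10, 9, 4]
print(nps(ratings))  # (5 - 2) / 10 * 100 -> 30
```

Note that Passives dilute the score without entering the numerator, which is why two very different customer bases can share one headline number.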

NPS benchmarks by industry (2026)

NPS benchmarks vary dramatically by industry — from below zero in cable/telecom to above +60 in consumer-brand retail. The widget below shows median, good (top-quartile), and excellent (top-decile) ranges across 14 industries, based on aggregated 2024–2025 research from Bain & Company, Qualtrics, CustomerGauge, SurveyMonkey, and publicly disclosed company data. Treat the ranges as directional guidance — your peer comparison should look at companies with comparable scale, business model, and survey methodology, not at the industry average alone.

Three patterns worth noting before you read the table. B2B vs. B2C matters more than "industry": B2B SaaS scores typically run 10–15 points below consumer SaaS because switching costs depress active advocacy. Premium positioning drives variance within each category: Apple's +60, Amazon's +60, Costco's +70 all pull their category averages up without reflecting a typical retailer. Regulated, high-friction industries cluster near the bottom: cable, telecom, and airlines average near zero or in the low positives — a +20 in those industries is competitive, not merely acceptable.

2026 Benchmarks · 14 Industries
NPS benchmarks by industry, filterable

Toggle between customer NPS and employee NPS (eNPS). Each row shows the industry's median score, the typical "good" range, and the excellent threshold — based on aggregated 2024–2025 research.

Median = typical industry average · Good range = top-quartile territory · Excellent = top-decile threshold

Customer NPS · 14 industries:

Cable / Telecom (structural advocacy drag): median +5 · good −5 to +20 · excellent +30
Airlines (budget vs premium carrier split): median +15 · good 0 to +35 · excellent +50
Insurance (contact-driven loyalty): median +25 · good +15 to +45 · excellent +55
Healthcare (access + billing drive detraction): median +32 · good +25 to +45 · excellent +60
SaaS / B2B Software (switching cost suppresses advocacy): median +33 · good +30 to +45 · excellent +60
Financial Services (credit unions outperform large banks): median +35 · good +30 to +50 · excellent +65
Education (post-program scores drop 10–15 pts): median +35 · good +30 to +50 · excellent +60
Multifamily / Real Estate (move-in + maintenance dominate): median +20 · good +10 to +35 · excellent +55
Retail / E-commerce (commodity vs premium spread is wide): median +40 · good +35 to +55 · excellent +70
Nonprofit / Social Sector (no published consensus; internal trend matters most): median +40 · good +35 to +60 · excellent +70
Automotive (dealer experience + service cycle): median +40 · good +35 to +55 · excellent +65
Professional Services (consulting, accounting, legal): median +45 · good +40 to +60 · excellent +70
Consumer Technology (product-led advocacy): median +45 · good +40 to +60 · excellent +70
Premium Consumer Brands (Apple, Costco, Chewy, Tesla tier): median +65 · good +60 to +80 · excellent +80

Employee NPS (eNPS) · 8 industries:

Retail / Hospitality (front-line turnover pressure): median 0 · good −10 to +15 · excellent +25
Healthcare (clinical stress, shift rotation): median +3 · good −5 to +20 · excellent +30
Education (K-12 skews lower, higher ed higher): median +5 · good −5 to +20 · excellent +30
Manufacturing (shift work, plant-level variance): median +10 · good 0 to +25 · excellent +35
Financial Services (bank, insurance, fintech mix): median +12 · good 0 to +25 · excellent +35
Technology / SaaS (highest-eNPS category in most studies): median +18 · good +10 to +35 · excellent +45
Professional Services (partner-track culture drives upside): median +20 · good +15 to +35 · excellent +45
Nonprofit / Mission-Driven (mission alignment boosts eNPS consistently): median +25 · good +20 to +40 · excellent +50

Ranges aggregated from Bain & Company (NPS Prism), Qualtrics XM Institute, SurveyMonkey, CustomerGauge, and publicly disclosed company data, 2024–2025. Treat as directional — a tight peer set of 3–5 named competitors plus your own rolling trend is more decision-useful than broad industry averages.

Build your internal benchmark →

What is a good eNPS benchmark?

eNPS (Employee Net Promoter Score) benchmarks generally run 20–30 points below customer NPS benchmarks in the same industry. The typical eNPS range across industries is −10 to +30, with "good" scores generally +10 to +30, "excellent" scores above +30, and "world-class" scores above +50. A few high-performing technology companies report eNPS above +50, but most industries center between 0 and +20. The question "is our eNPS good?" requires the same care as NPS — industry, company size, tenure mix, and survey methodology all shift the baseline.

eNPS benchmarks by industry (2026 composite ranges):

Technology and Professional Services tend to lead (eNPS +15 to +35 typical range for healthy organizations). Financial Services and Manufacturing cluster in the middle (eNPS 0 to +20). Healthcare, Retail, and Hospitality often report lower eNPS (−10 to +15), reflecting operational stress and high front-line turnover in those sectors. Nonprofit and Mission-Driven organizations frequently report above-median eNPS (+20 to +40) when mission alignment is strong — one of the few cases where a sector's structural features consistently produce higher scores. See the widget above for the eNPS view toggle.

Reading Benchmarks Correctly · 2026
Six principles for using NPS benchmarks without falling into the Mirage

Benchmarks are context, not targets. These six practices separate teams that use benchmarks to sharpen their thinking from teams that use benchmarks to rationalize weak conclusions about their own programs.

01
Recency
Use benchmarks that are less than 12 months old

Industries change fast — especially tech, retail, and finance. Citing a 2021 benchmark in a 2026 board conversation is the most common Mirage mistake. Every published benchmark should carry a collection-year stamp, and "2021 data updated 2024" is not the same as 2024 data.

Most top-ranking "NPS benchmarks" blog posts rely on data that is three or more years stale.
02
Methodology
Match scale, collection moment, and response base

A 1–5 scale converted to NPS-equivalent isn't comparable to 0–10 NPS. Post-transaction scores aren't comparable to quarterly relational scores. Engaged-segment benchmarks aren't comparable to full-base collection. Confirm all three before citing any benchmark.

Methodology mismatch accounts for most "why are we behind?" boardroom confusion.
03
Peer Set
Build a peer set, not just an industry

Select 3–5 named competitors with your revenue model, customer acquisition channel, and relationship cadence. This peer set is far more informative than a broad "industry average." "B2B SaaS" is not a peer set — mid-market finance B2B SaaS is.

An aggregated industry average hides the 20-point variance inside the same industry.
04
Own Trend
Your internal trend beats any external benchmark

Your +38 today vs. your +32 six months ago holds methodology constant across the comparison — same scale, same collection moment, same respondent base. This is the most defensible and decision-useful benchmark you can build, and the one most teams underuse.

Funder reports should lead with trend, not with industry comparison.
05
Distribution
Read the distribution, not just the headline

A +40 average with 70% Promoters and 30% Detractors is very different from a +40 average with 45% Promoters and 5% Detractors and 50% Passives. The first has loud advocates and loud detractors; the second has a large ambivalent middle. The interventions are different.

A single score obscures the signal that drives action.
06
Segment
Segment before you benchmark

Aggregate NPS masks segment-level variance. Enterprise customers, SMB customers, new customers, tenured customers, and churned customers all have different baseline scores. Compare segment to segment before comparing against any external benchmark.

One headline number hides the segment that needs attention.

Apply all six and benchmarks become a sharpening tool. Apply none of them and benchmarks become a narrative device — useful for boardroom slides, unhelpful for actual program improvement.
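The distribution principle is easy to verify numerically: the two customer bases described above produce the identical headline score. A toy sketch (the function name and mixes are illustrative):

```python
def nps_from_mix(promoters_pct, passives_pct, detractors_pct):
    """Headline NPS from a Promoter/Passive/Detractor split (percentages)."""
    assert promoters_pct + passives_pct + detractors_pct == 100
    return promoters_pct - detractors_pct

# Two very different customer bases, same headline score:
polarized = nps_from_mix(70, 0, 30)   # loud advocates AND loud detractors
ambivalent = nps_from_mix(45, 50, 5)  # large Passive middle
print(polarized, ambivalent)  # 40 40
```

Identical +40s, but the first base needs detractor recovery while the second needs Passive conversion. Only the distribution tells you which.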

See NPS calculation methodology →

Why NPS benchmarks vary so much across industries

NPS benchmarks vary across industries for structural reasons — not because some industries are "better" than others. Switching cost suppresses advocacy in regulated or contract-locked industries (telecom, enterprise B2B software, insurance) where customers may be satisfied but unable to recommend specific providers; this pulls benchmarks down regardless of service quality. High emotional investment raises advocacy in mission-aligned sectors (nonprofits, education, healthcare at its best) where customers become champions when outcomes land. Commodity positioning compresses scores in categories where price is the primary differentiator — generic retailers, budget airlines, mass-market banking.

Business model matters more than industry label. A SaaS company serving a high-NPS buyer persona (developers, designers) will run very different numbers from a SaaS company serving compliance-driven enterprises. A healthcare provider with vertical specialty focus will benchmark differently from a broad-network hospital system. The practical rule: when you select a benchmark, select a peer group — companies with your revenue model, customer acquisition channel, and relationship cadence — not just companies in your SIC code.

How to benchmark your NPS against the right peer group

Build your benchmark bottom-up from three layers: your own trend (the most reliable comparison), a tight peer set of 3–5 named competitors (the most informative external comparison), and broad industry averages (directional only). Your own trend beats every external benchmark because it holds methodology constant — same scale, same collection moment, same respondent base across cycles. The peer set matters because it's close enough to your specific business model to make the comparison meaningful. The broad industry average provides directional context but should never be treated as a target.

The alternative — taking one published industry average as your benchmark — is what produces most of the misleading board conversations about NPS. Your +38 compared to your +32 six months ago tells you something real: the trend is positive and the absolute improvement is quantifiable. Your +38 compared to an industry average of +41 published in 2022 by a vendor using a different survey methodology tells you very little — and arguing about the gap is usually a worse use of time than investigating what's driving the positive movement in your own data.
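The trend layer, the one you fully own, can be sketched as a small rolling computation. The quarterly scores below are hypothetical:

```python
# Internal-benchmark layer: quarter-over-quarter NPS trend with
# methodology held constant (scores are hypothetical).
history = [("2024-Q4", 30), ("2025-Q1", 32), ("2025-Q2", 33),
           ("2025-Q3", 35), ("2025-Q4", 38)]

def trend(history):
    """Return (latest, delta vs previous quarter, delta vs year ago or None)."""
    latest = history[-1][1]
    prev = history[-2][1]
    year_ago = history[-5][1] if len(history) >= 5 else None
    year_delta = (latest - year_ago) if year_ago is not None else None
    return latest, latest - prev, year_delta

print(trend(history))  # (38, 3, 8) -> "+38, up 3 on the quarter, up 8 on the year"
```

A report line built from this ("+30 → +38 over five quarters, methodology held constant") is more defensible than any external comparison, because every term in the delta was collected the same way.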

Three Contexts · Three Benchmark Problems
How the Benchmark Mirage breaks real programs — and what to do instead

Select your context to see the common benchmark mistake and the alternative framing that produces better decisions.

A VP of Programs at a workforce nonprofit gets the same board question every quarter: "Where does our +38 NPS sit relative to industry?" The VP has been pulling numbers from vendor blog posts that don't match program type. There is no published consensus benchmark for nonprofit workforce programs — which is useful to know, because it reframes the question from "are we above or below?" to "are we improving, and by how much?"

Program NPS · current
+38

End-of-program score, cohort 7, n=84

Program NPS · trend (6 cohorts)
+28 → +38

+10 points over 18 months — the real story

Benchmark Mirage
Citing irrelevant industry averages
  • VP presents SaaS NPS benchmark (+35) as if it applies to workforce programs
  • Board wonders why program "trails industry" despite strong internal trend
  • Next quarter's conversation repeats — stuck comparing to a number that doesn't apply
  • Actual improvement narrative (+28 → +38 over six cohorts) never gets told
Better framing
Internal trend plus directional sector context
  • Lead with trend: "+28 → +38 over six cohorts, methodology held constant"
  • Directional context: "Sector ranges we've seen in social programs: +28 to +58"
  • Honest framing: "No published consensus benchmark for nonprofit workforce"
  • Next target: "Cohort 8 goal is +42 — driven by [specific intervention]"

For nonprofit programs: internal trend is the most defensible benchmark you can present. When no published consensus exists, stop pretending one does.

Nonprofit Programs →

A B2B SaaS company reports quarterly NPS of +32 to its board. The board compares this to a cited "SaaS industry average of +41" from a vendor report — and asks why the company is trailing. What nobody checked: the benchmark came from a vendor that surveys immediately post-support-resolution with a 5-point scale converted to NPS-equivalent. The company runs 0–10 quarterly relational NPS across its full customer base. The +9 "gap" is methodology, not loyalty.

Company NPS · quarterly relational
+32

0–10 scale · 38% response rate · full base

Cited "benchmark"
+41

1–5 converted · post-support · engaged segment

Benchmark Mirage
Citing without checking methodology
  • +9 "gap" to benchmark drives roadmap pressure that's unrelated to actual customer signal
  • Team spends a quarter chasing a score target set by someone else's methodology
  • Actual signal in the data (low-NPS segments, specific theme patterns) gets deprioritized
  • Methodological differences never surface in the board conversation
Better framing
Methodology-matched peer set
  • Check the benchmark: scale, collection moment, response rate before citing
  • Build a peer set: 3–5 named competitors with similar survey methodology
  • Hold methodology constant: quarter-over-quarter trend is the strongest signal
  • Focus roadmap on segments: Enterprise +45 vs SMB +22 is the actionable gap

For SaaS and CX teams: always verify benchmark methodology before citing. Most benchmark gaps are methodology gaps.

Impact Intelligence →

A people analytics lead at a healthcare system with 3,200 employees reports a quarterly eNPS of +18. Published benchmarks for healthcare eNPS range from +15 to +40 depending on source. The gap between those numbers isn't regional variation — it's study methodology, company-size mix, and tenure-band composition. Comparing a 3,200-person health system to a benchmark built primarily on ambulatory clinics under 500 employees produces a misleading verdict.

Current eNPS
+18

System-wide · 62% response · Q2 2026

Department range
+42 / −8

50-point spread — where the signal lives

Benchmark Mirage
Comparing aggregate to aggregate
  • +18 compared to "healthcare industry +25" — appears to trail
  • Leadership asks why system is "below industry" — without segmenting
  • Departments pulling score down (night shift nursing, emergency) invisible in the number
  • Effort spent explaining the gap rather than fixing the departments driving it
Better framing
Segment first, then benchmark
  • Segment by department: +42 (admin/clinical research) to −8 (night shift nursing)
  • Segment by tenure: new-hire 0–6mo +28, tenure 3+yr +12 — retention signal
  • Peer set: 2–3 comparable multi-site systems with similar shift mix
  • Intervention target: three specific departments moving the aggregate most

For HR and people analytics: eNPS aggregates hide the 50-point department spread that matters most. Segment before you benchmark.

Training Intelligence →

What is a good NPS response rate benchmark?

NPS response rate benchmarks typically sit between 20% and 40% for email-distributed surveys, 10% to 25% for in-app surveys, and 50%+ for program-embedded or transactional surveys tied to a specific customer event. Low response rates (below 15%) carry detractor-bias risk: disengaged customers either don't respond at all or respond negatively when they do, so the sample under-represents them. Very high response rates (above 60%) typically indicate a well-designed transactional collection tied closely to a customer moment. The response rate directly affects the defensibility of the NPS score — a +50 built on an 8% response rate is substantially less reliable than a +35 built on a 45% response rate.

Response rate benchmarks also vary by stakeholder type. Customer NPS typically runs 20–30% response rates. Employee NPS (eNPS) typically runs 40–70% response rates when sent from internal HR systems. Beneficiary NPS in nonprofit programs typically runs 30–55% when embedded in program touchpoints. Transactional NPS tied to a specific moment (onboarding completion, support close, program milestone) consistently produces the highest response rates and the most actionable feedback.
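One way to see why response rate affects defensibility is plain sampling error. Treating each respondent as +1 (Promoter), 0 (Passive), or −1 (Detractor), the standard error of the score shrinks with the square root of the sample size. This sketch deliberately ignores nonresponse bias (the larger problem noted above) and assumes a hypothetical 1,000-customer base with illustrative Promoter/Detractor mixes:

```python
import math

def nps_standard_error(promoter_pct, detractor_pct, n):
    """Approximate standard error of an NPS estimate, in NPS points.

    Each respondent is modeled as +1 / 0 / -1, so the per-respondent
    variance is (p + d) - (p - d)^2 and SE = sqrt(variance / n) * 100.
    """
    p, d = promoter_pct / 100, detractor_pct / 100
    variance = (p + d) - (p - d) ** 2
    return 100 * math.sqrt(variance / n)

# Hypothetical 1,000-customer base, two collection outcomes:
# a +50 from an 8% response (n=80) vs a +35 from a 45% response (n=450)
print(round(nps_standard_error(60, 10, 80), 1))   # 7.5 -> wide error band
print(round(nps_standard_error(45, 10, 450), 1))  # 3.1 -> much tighter
```

Even before accounting for who chose not to answer, the low-response score carries more than twice the sampling uncertainty.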

NPS benchmarking tools and providers

The major NPS benchmarking sources in 2026 are Bain & Company (who developed NPS and publish the NPS Prism benchmarks), Satmetrix/NICE (the legacy NPS benchmarking platform), Qualtrics XM (broad cross-industry benchmarks through their XM Institute), Medallia (industry-vertical benchmarks), SurveyMonkey (eNPS benchmarks from their workforce research), and CustomerGauge (B2B-focused benchmarks). Each uses different methodology, samples different populations, and publishes on different cadences — so citing one benchmark without noting the source is rarely defensible.

For organizations without access to enterprise benchmarking platforms, the most practical approach is pairing a broad directional benchmark (from a publicly cited source like the NPS Prism Bain report) with a tight peer-set comparison of 3–5 named competitors and your own rolling trend. Sopact Sense doesn't publish external benchmarks — what it does is make your internal benchmark (quarter-over-quarter trend, cohort-over-cohort trend, segment-over-segment trend) visible as the default view, which is usually the more decision-useful comparison.

Tools and Providers · 2026
Who publishes NPS benchmarks by industry — and what each covers

The major sources of industry NPS and eNPS benchmark data, with coverage scope and access model. Cite the source when using a benchmark — unsourced "industry averages" are rarely defensible.

NPS Benchmarking Tools & Providers
Compare the major benchmark sources for 2026
NPS Prism (Bain & Company) · the original NPS source; Bain developed the metric. Coverage: B2C and B2B industries (financial services, telecom, retail, automotive, healthcare). Benchmark type: sector-depth reports. Access: subscription, enterprise contract. Best for: Fortune 500 / multi-sector competitive benchmarking.

Qualtrics XM Institute · XM platform research arm with broad cross-industry reach. Coverage: tech, retail, healthcare, finance. Benchmark type: cross-industry XM maturity and CX benchmarks, published as annual research reports. Access: free reports; paid access via the Qualtrics XM platform ($30K+/yr). Best for: existing Qualtrics customers and CX teams.

Medallia Benchmarks · enterprise CX platform with industry-vertical depth. Coverage: retail, hospitality, financial. Benchmark type: location-level and touchpoint-specific benchmarks. Access: platform-bundled (customer data required), enterprise contract at $50K–$250K+/yr. Best for: large multi-location retail and hospitality.

CustomerGauge · B2B-focused NPS and account-health benchmarks. Coverage: B2B SaaS, professional services. Benchmark type: account-level rather than consumer-level benchmarks. Access: mid-market subscription, custom pricing. Best for: B2B SaaS and account-based businesses.

Satmetrix / NICE · legacy NPS benchmarking, now part of NICE. Coverage: multi-industry. Benchmark type: historical NPS depth; legacy reports still cited. Access: subscription, bundled with the NICE CX suite under an enterprise contract. Best for: NICE CX suite customers.

SurveyMonkey (eNPS) · strongest publicly available eNPS benchmarks. Coverage: employee NPS across industries. Benchmark type: eNPS by company size, industry, and tenure, published as annual reports. Access: free reports; platform from $39/mo. Best for: HR teams running eNPS programs.

Perceptyx · people analytics and eNPS benchmarks. Coverage: employee NPS and engagement. Benchmark type: Fortune 1000-scale eNPS benchmarks. Access: platform-bundled (customer base required), enterprise contract, custom pricing. Best for: enterprise HR and large multi-site organizations.

Public company disclosures · company NPS cited in annual reports and earnings. Coverage: consumer plus select B2B (Apple, Costco, Tesla, Chewy, T-Mobile, others). Benchmark type: self-reported; methodology varies. Access: free via SEC filings, 10-Ks, and earnings calls. Best for: competitive positioning narrative.

Sopact Sense · does not publish external benchmarks; builds internal ones. Coverage: your own rolling benchmark (quarter-over-quarter, cohort-over-cohort, segment-over-segment). Benchmark type: internal trend on your data. Access: $1,000/month, full platform. Best for: programs where internal trend matters more than external average.

Select by use case: Bain NPS Prism for multi-sector Fortune 500 competitive benchmarking, SurveyMonkey for freely accessible eNPS ranges, CustomerGauge for B2B SaaS account-level benchmarks, public disclosures for competitive positioning. And always pair external benchmarks with your own rolling internal trend.

NPS calculation methodology →

External benchmarks give directional context. Your own rolling trend gives decision-useful signal. The strongest NPS program uses both — a tight peer set, a credible benchmark source, and an internal trend that holds methodology constant across every cycle.

See Sopact Sense →

The Benchmark Mirage: why published NPS averages mislead

The Benchmark Mirage is the belief that cross-industry NPS averages provide an actionable comparison — when in practice most published benchmarks mix incompatible survey methodologies, response bases, and collection moments. Three mechanisms produce the misleading comparison.

Scale contamination. A meaningful minority of published NPS studies use 1–5 or 1–7 scales converted to NPS-equivalent scores through interpolation. These converted scores systematically overstate NPS relative to validated 0–10 collection. When an aggregator benchmark includes studies from survey tools that default to 5-point scales, the resulting average is not directly comparable to your 0–10 NPS program.

Survivorship bias in response populations. Most NPS benchmarks are collected through email surveys, where active responders skew positive and disengaged customers — who are disproportionately detractors — opt out entirely. A benchmark built on a 15% response rate from your most engaged segment will always look better than one built on a 45% response rate from your full stakeholder population.

Moment-of-collection mismatch. Post-transaction NPS collected within 48 hours of a positive event produces substantially higher scores than quarterly relational NPS from the same customer base. Benchmarks typically don't distinguish between collection moments, so comparing your quarterly relational NPS to a post-transactional industry benchmark produces a gap that reflects methodology, not loyalty.
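To make scale contamination concrete, here is a hypothetical illustration. The linear interpolation rule (1→0, 2→2.5, 3→5, 4→7.5, 5→10) is one common conversion, assumed here for illustration; the two answer sets imagine the same ten customers responding on each instrument:

```python
def to_ten_point(score_1to5):
    """One common interpolation: map 1-5 linearly onto 0-10 (1->0 ... 5->10)."""
    return (score_1to5 - 1) * 2.5

def nps(scores_0to10):
    """%Promoters (9-10) minus %Detractors (0-6) on a 0-10 scale."""
    promoters = sum(s >= 9 for s in scores_0to10)
    detractors = sum(s <= 6 for s in scores_0to10)
    return round(100 * (promoters - detractors) / len(scores_0to10))

# Hypothetical: the same ten customers answering each instrument.
# On 0-10, satisfied-but-not-delighted customers often pick 8 (Passive);
# on 1-5 they have nowhere to go but the top box, which converts to 10.
native_0to10 = [10, 9, 8, 8, 8, 7, 7, 6, 5, 3]
answers_1to5 = [5, 5, 5, 5, 4, 4, 4, 3, 3, 2]
converted = [to_ten_point(s) for s in answers_1to5]

print(nps(native_0to10))  # -10
print(nps(converted))     # +10
```

In this toy example the native 0–10 collection yields −10 while the converted 5-point answers yield +10: a 20-point inflation produced entirely by the instrument, because the 5-point top box collapses the 8-vs-9 distinction that separates Passives from Promoters.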

The alternative is an internal benchmark built on consistent methodology over time — same scale, same collection moment, same respondent base across cycles. Your +32 six months ago versus your +38 today tells you something real that you can act on. Your +38 compared to an industry average of +41 collected in 2021 using a different survey tool tells you very little.

Frequently Asked Questions

What is a good NPS score?

Absolute benchmarks treat any positive score as healthy, +30 to +50 as good, +50 to +70 as excellent, and +70+ as world-class. Relative benchmarks matter more — a +40 is excellent in cable/telecom, healthy in financial services, and average in SaaS. Evaluate your score against industry context, not universal thresholds.

What are the NPS benchmarks by industry for 2026?

2026 NPS ranges directionally: SaaS +30 to +45, Financial Services +30 to +45, Retail +35 to +55, Healthcare +25 to +45, Telecommunications −5 to +20, Technology/Consumer +40 to +60, Professional Services +40 to +60, Education +30 to +50, Multifamily/Real Estate +10 to +35. Treat as directional — peer-set comparison is more reliable than broad industry averages.

What is a good eNPS score?

A good eNPS score is +10 to +30, excellent is +30 to +50, world-class is +50+. eNPS benchmarks typically run 20–30 points below customer NPS benchmarks in the same industry. Technology and Professional Services tend to lead (+15 to +35 typical); Healthcare, Retail, and Hospitality often cluster lower (−10 to +15).

What are the eNPS benchmarks by industry?

Technology +15 to +35, Financial Services 0 to +20, Healthcare −10 to +15, Retail/Hospitality −10 to +15, Manufacturing 0 to +20, Professional Services +15 to +35, Nonprofit/Mission-Driven +20 to +40, Education −5 to +20. Mission alignment is one of the few consistent cross-sector boosts to eNPS.

What is the average NPS score by industry?

Industry medians vary widely: SaaS around +30, Financial Services around +35, Retail around +40, Healthcare around +30, Telecommunications around +10, Technology around +45, Education around +35, Nonprofits around +40 (internal data, no published consensus). Median is less useful than top-quartile and top-decile ranges for target-setting.

What's a good NPS response rate?

Benchmark response rates: email surveys 20–40%, in-app 10–25%, program-embedded or transactional 50%+. Below 15% indicates detractor bias risk; above 60% usually indicates a well-designed transactional touchpoint. Response rate affects the defensibility of the score — a +50 on 8% response is weaker evidence than a +35 on 45%.

Who publishes NPS benchmarks by industry?

The major sources are Bain & Company (NPS Prism), Satmetrix/NICE, Qualtrics XM Institute, Medallia, SurveyMonkey (eNPS), and CustomerGauge (B2B). Each uses different methodology and samples different populations. Cite the source when using a benchmark — an unsourced "industry average" is rarely defensible.

What tools benchmark NPS by sector?

Enterprise NPS benchmarking tools include NPS Prism (Bain), Qualtrics XM Institute, Medallia Benchmarks, CustomerGauge B2B benchmarks, and SurveyMonkey benchmarks. For smaller programs, the most practical approach is pairing a publicly cited directional benchmark with a peer set of 3–5 named competitors and your own rolling trend.

What is a good B2B SaaS NPS benchmark?

B2B SaaS NPS typically averages +30 to +40 for healthy organizations, with top-quartile performers reaching +45 to +55 and world-class at +60+. B2B SaaS usually scores 10–15 points below consumer SaaS in the same product category because switching costs and procurement friction depress voluntary advocacy.

What is a good healthcare NPS benchmark?

Healthcare NPS typically averages +25 to +45. Access friction (wait times, scheduling), billing complexity, and administrative experience are the largest drivers of detractor scores — often more impactful than clinical quality ratings. Patient loyalty runs highest when access and outcome feel strong together.

What is the NPS benchmark for multifamily apartment communities?

Multifamily/apartment NPS typically runs +10 to +35, with top-quartile properties reaching +40 to +55. Move-in experience, maintenance response time, and communication clarity are the most common drivers of both promoter and detractor scores. Multifamily tends to score below consumer retail but above telecom.

What is The Benchmark Mirage?

The Benchmark Mirage is the belief that cross-industry NPS averages provide an actionable comparison, when in practice most published benchmarks mix incompatible survey methodologies, response bases, and collection moments — producing a number that looks authoritative but measures nothing you can act on against your own program.

How often should NPS benchmarks be updated?

Published NPS benchmarks should be less than 12 months old. Industries change quickly (especially tech, retail, and finance), and citing a 2021 benchmark in a 2026 board conversation is the most common Benchmark Mirage mistake. Internal rolling benchmarks update automatically as new collection cycles complete — the most current reference point is always your own recent trend.

Benchmark right · Build internal · Read the trend
Stop arguing about the benchmark. Build the better one.

External benchmarks give directional context. Your own rolling trend gives decision-useful signal. The right NPS program uses a tight peer set, a credible benchmark source cited honestly, and an internal trend that holds methodology constant across every cycle — so every score move means something.

  • Peer set of 3–5 named competitors — not an industry average pulled from a vendor blog
  • Internal rolling trend with methodology held constant across cycles — same scale, same moment, same respondent base
  • Segment-level disaggregation before any external comparison — aggregate scores hide the signal that matters
Stage 01 · Peer Set
Tight 3–5 named competitors

Select companies with your revenue model, customer acquisition channel, and relationship cadence — not just your SIC code

Stage 02 · Internal Trend
Rolling quarter-over-quarter baseline

Same scale, same collection moment, same respondent base across every cycle — the most defensible benchmark you can present

Stage 03 · Directional Context
Sourced industry ranges

Cite Bain NPS Prism, Qualtrics XM Institute, or public disclosures — always with year, methodology, and peer composition noted

One intelligence layer runs all three — powered by Claude, OpenAI, Gemini, watsonx.