
eNPS: Employee Net Promoter Score Meaning, Formula, Score Ranges, and Industry Benchmarks

A company's eNPS arrives at the all-hands: 24. Leadership marks it as acceptable. Meanwhile, the customer success department sits at −12, the engineering team at −8, and two product squads are at −30 — invisible inside a number that averages to "fine." The decision that follows — no action needed — is the exact wrong conclusion. The aggregate score wasn't wrong. The architecture that produced only the aggregate score was. This is The Department Average Illusion.

Last updated: April 21, 2026

eNPS stands for Employee Net Promoter Score — a single-question employee feedback metric measuring whether employees would recommend the organization as a place to work, scored on a 0–10 scale. The formula, score ranges, industry benchmarks, and survey design are all covered below. What most eNPS guides don't cover is the architectural problem: company-wide eNPS averages hide departmental reality, and annual collection surfaces the signal after turnover has already started. This guide addresses both — the standard methodology and the structural fixes that turn eNPS from a metric into a workflow.

eNPS Guide · Complete Reference · 2026
A company eNPS of +24 can hide a department at −35.

eNPS (Employee Net Promoter Score) is a single-question employee feedback metric (typically deployed as a 2-question survey: the score plus one open-text "why") measuring whether employees would recommend the organization as a place to work. This guide covers everything: definition, formula, score ranges, industry benchmarks, survey wording, software options, and the architectural failure that turns an acceptable aggregate into invisible department-level crises — the Department Average Illusion.

The Department Average Illusion
Company eNPS +24 — three departments in crisis, one in testimonial mode
Figure: Company eNPS of +24 decomposed by department — Sales +58 (large commissions), Engineering −8 (burnout signals), Customer Success −12 (workload spike), Product squads −30 (management crisis). Four departments, one average, four different realities: Sales carries the aggregate while the Product squads' retention crisis stays invisible in the headline. Illustrative pattern, common across mid-market organizations.
Ownable Concept
The Department Average Illusion

The structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but acceptable. An organization with departments at +58, −8, −12, and −30 might report +24 — and conclude the organization is in reasonable health. Three departments are failing while the aggregate protects them from scrutiny. The illusion sustains through three mechanisms: aggregation at the wrong level, absence of qualitative follow-up, and annual cadence surfacing signal after turnover has already started. Closing it requires department-level views as default output, not post-hoc analysis.
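Because eNPS is linear in the promoter and detractor proportions, a company-wide score is exactly the headcount-weighted average of department scores, which is how one large happy department drags the aggregate into "acceptable" territory. A minimal sketch using the pattern above, with invented headcounts:

```python
# Department -> (headcount, department eNPS). Headcounts are invented for
# illustration; the eNPS values match the pattern described above.
departments = {
    "Sales":            (275, +58),
    "Engineering":      (100,  -8),
    "Customer Success": ( 80, -12),
    "Product":          ( 60, -30),
}

def company_enps(depts):
    """Headcount-weighted mean of department eNPS values."""
    total = sum(n for n, _ in depts.values())
    return sum(n * e for n, e in depts.values()) / total

print(round(company_enps(departments)))  # -> 24, while Product sits at -30
```

With 275 of 515 employees in Sales, the company reports roughly +24 even though three of four departments are negative.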

0–10 scale
promoters 9–10 · passives 7–8 · detractors 0–6 · formula %P − %D
−100 to +100
possible score range · most orgs land between +10 and +40
+20 to +30
average eNPS for tech companies · industry benchmarks vary 5–45 points
2 questions
optimal eNPS survey length · score + one open-text "why"

What is eNPS (Employee Net Promoter Score)?

eNPS (Employee Net Promoter Score) is a single-question employee survey metric measuring whether employees would recommend their organization as a place to work. The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" — adapted from Fred Reichheld's customer NPS methodology at Bain & Company. Respondents scoring 9–10 are classified as Promoters, 7–8 as Passives, and 0–6 as Detractors. The eNPS is calculated as the percentage of Promoters minus the percentage of Detractors, producing a score between −100 and +100.
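The band classification is mechanical; a minimal sketch in Python using the standard thresholds:

```python
def classify(score: int) -> str:
    """Map a 0-10 response to its eNPS band (standard Reichheld thresholds)."""
    if not 0 <= score <= 10:
        raise ValueError("eNPS uses a 0-10 scale")
    if score >= 9:
        return "Promoter"    # 9-10
    if score >= 7:
        return "Passive"     # 7-8
    return "Detractor"       # 0-6
```

For example, `classify(9)` returns `"Promoter"` while `classify(6)` returns `"Detractor"` — the 6/7 and 8/9 cutoffs are what the 0–10 scale is built around.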

eNPS works well as a leading indicator for retention risk, organizational health, and culture change. It does not replace engagement surveys, performance reviews, or exit interviews — it complements them by providing a consistent, comparable signal that can be collected frequently enough to track momentum rather than a once-a-year snapshot. The core methodology is identical to customer NPS — see the NPS calculation guide for the full formula details. What makes eNPS different is the respondent population: employees have different feedback dynamics than customers, which determines every architectural choice above the math.

What does eNPS stand for and mean?

eNPS stands for Employee Net Promoter Score — the employee-facing adaptation of Fred Reichheld's Net Promoter Score methodology, originally developed for measuring customer loyalty and published in 2003. The "e" prefix distinguishes the metric from customer NPS; the scoring mechanism, score bands, and calculation are otherwise identical. Some organizations also use "Employee NPS" or "employee Net Promoter Score" interchangeably, though the three-letter abbreviation has become standard in HR and people analytics contexts.

The meaning of the eNPS score is a measure of employee advocacy — specifically, whether employees would vouch for the organization to people in their professional network. This is a narrower and more behaviorally meaningful question than "are you satisfied" or "are you engaged," because it ties the rating to a specific social action the respondent would or would not take. A Promoter isn't just satisfied — they are willing to stake their personal credibility on recommending the employer. A Detractor isn't just unhappy — their rating signals they would actively warn others away.

How do you calculate eNPS? (formula)

Calculate eNPS using the formula: eNPS = % Promoters − % Detractors. Take the total number of Promoters (employees scoring 9–10), divide by total respondents, multiply by 100 to get the percentage. Do the same for Detractors (0–6). Subtract the Detractor percentage from the Promoter percentage. Passives (7–8) are not included in the calculation — only in the total respondent count that determines the percentages. The resulting score ranges from −100 (every respondent is a Detractor) to +100 (every respondent is a Promoter).

Worked example: an organization surveys 200 employees. 80 score 9 or 10 (Promoters = 40%). 70 score 7 or 8 (Passives = 35% — excluded from the calculation but counted in the base). 50 score 0 through 6 (Detractors = 25%). eNPS = 40 − 25 = +15. The calculation is identical to customer NPS, but the interpretation differs because employee populations behave differently — see the score-band explorer below for eNPS-specific interpretation. Common mistakes include using a 1–5 or 1–10 scale instead of 0–10 (breaks the band math), counting Passives in the calculation (produces a non-comparable score), and reporting a single company-wide number without department segmentation (produces the Department Average Illusion).
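The formula and the worked example above translate directly into code; a minimal sketch:

```python
def enps(responses):
    """eNPS = %Promoters - %Detractors, from raw 0-10 responses.
    Passives (7-8) count toward the respondent base but not the score."""
    n = len(responses)
    if n == 0:
        raise ValueError("no responses")
    promoters = sum(1 for r in responses if r >= 9)    # 9-10
    detractors = sum(1 for r in responses if r <= 6)   # 0-6
    return 100 * promoters / n - 100 * detractors / n

# Worked example from above: 200 respondents —
# 80 promoters, 70 passives, 50 detractors.
sample = [9] * 80 + [7] * 70 + [5] * 50
print(enps(sample))  # -> 15.0
```

Note that the passives never appear in the subtraction — they matter only through `n`, which is exactly the "counted in the base but excluded from the calculation" rule.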

Interactive · Calculate Your eNPS
Calculate your eNPS — see where it lands on the score band spectrum

Enter your Promoter, Passive, and Detractor percentages. See your eNPS calculated live, plotted on the −100 to +100 spectrum, and interpreted against your industry benchmark.

Spectrum bands: Critical (below 0) · Poor (0–10) · Average (10–30) · Good (30–50) · Great (50–70) · World-class (above 70). Worked example: Promoters 45%, Passives 35%, Detractors 20% → eNPS = 45 − 20 = +25, landing in the Average band.
An eNPS in the +10 to +30 range sits in the center of the global distribution. It's neither a crisis nor an achievement — but it is an invitation to look at the segment distribution underneath. The Department Average Illusion sets in here when leadership reads the aggregate and concludes no action is needed.

General benchmark: Median ~+15, top quartile ~+35, bottom quartile ~0.

The score is the headline — not the story. Whatever your eNPS, the decision-useful signal lives in department-level distribution and open-text themes. An aggregate "Good" can hide a department in crisis.

See department-level eNPS →

What is a good eNPS score? Score ranges explained

A "good" eNPS score depends on industry, organization size, and tenure cohort — but the general score bands are: below 0 is critical, 0–10 is poor, 10–30 is average, 30–50 is good, 50–70 is great, above 70 is world-class. Most organizations land in the 10–40 range; scores above 50 are rare and typically signal either strong organizational culture or small-sample selection effects (fewer respondents, louder voices). Scores below 0 mean more employees would actively recommend against the organization than would recommend it — a signal that retention risk is already material.

Context matters more than the raw number. A tech company at +20 may be below peer industry benchmarks; a healthcare system at +20 may be above them. A 500-person company at +35 is doing well; a 50-person startup at +35 may be running on founder enthusiasm that won't persist through the first retention wave. The single most important interpretation rule: a company-wide score masks department-level reality. A +24 organization with engineering at −30 is not a +24 organization — it is a +45 organization with an engineering crisis, and leadership acting on the aggregate will fix nothing. See NPS benchmarks by industry for cross-reference data on customer NPS comparisons.
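For quick interpretation, the score ranges above can be encoded as a lookup. The published ranges leave exact boundary scores ambiguous, so this sketch assigns boundaries to the lower band by convention:

```python
def band(score: float) -> str:
    """Interpret an eNPS (-100..+100) against the standard ranges.
    Boundary scores go to the lower band — a convention, since the
    published ranges (0-10 poor, 10-30 average, ...) overlap at edges."""
    if score > 70:
        return "World-class"
    if score > 50:
        return "Great"
    if score > 30:
        return "Good"
    if score > 10:
        return "Average"
    if score >= 0:
        return "Poor"
    return "Critical"
```

So `band(24)` returns `"Average"` and `band(-12)` returns `"Critical"` — the same aggregate-vs-department pair from the opening example.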

eNPS Discipline · 6 Principles
eNPS best practices — what separates action-producing eNPS from dashboard eNPS

Six principles that turn eNPS from an annual number into a management workflow. Skip any of them and the company-wide aggregate becomes all you have — reported, filed, never used to change a decision.

01
Scale
Use the 0–10 scale — never 1–5, never emoji, never stars

The 0–10 scale is load-bearing. Compressed scales break the Promoter (9–10) / Passive (7–8) / Detractor (0–6) threshold math and destroy benchmark comparability. If your survey tool defaults to stars or 1–5, override. 0–10 numeric only.

Engagement survey platforms often default to 1–5; manual override required for eNPS.
02
Segment
Segment by department, tenure, and role — as default output

A company-wide eNPS hides the department-level distribution. Segment-first dashboards show the real picture: which department is carrying the aggregate, which is in crisis. Segmentation is not a drill-down filter — it is the primary view.

Aggregate-first dashboards train leadership to read the headline and stop.
03
Follow-Up
Pair every score with one open-text follow-up — selected by intent

"What is the most important reason for the score you gave?" outperforms "why?" every time. Focuses attention, produces shorter actionable responses. One follow-up, not five. Five additional questions drop response rate 40–60% and crush the eNPS signal that's the whole point.

Never bury eNPS inside a 30-question engagement survey — breaks benchmark comparability.
04
Cadence
Run quarterly, not annually — retention crises develop in weeks

Annual eNPS surfaces the signal 12 months after the conditions that produced it. Quarterly eNPS matches operational planning cycles. Monthly pulse works for organizations in active transformation (reorgs, leadership change, hypergrowth). Consistency across cycles matters more than precision at any single moment.

By the time annual eNPS is reviewed, turnover has already started.
05
Identity
Persistent employee IDs at collection — not anonymous by default

Persistent employee IDs link eNPS to HRIS data (department, tenure, role) without manual merge. Aggregate responses to protect anonymity at small-group reporting — under 5 respondents per department. The architectural choice is identified-with-aggregation, not anonymous-with-guesswork.

Pure anonymity breaks segment analysis and closes the loop before it can close.
06
Close the Loop
Close the loop before the next cycle — visible action, not just analysis

Every eNPS cycle should produce 1–2 specific actions per department based on Detractor themes, communicated back to employees before the next cycle launches. Feedback without visible action is trust debt — the next cycle's response rate and candor both drop when employees see their input disappear.

A cycle that produces only analysis erodes trust in the measurement program itself.

Apply all six and eNPS becomes a management workflow. Skip any of them and the aggregate score becomes all you have — reported to the board, filed, never used to change a specific operational decision.
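Principle 05's identified-with-aggregation pattern reduces to a small reporting rule: compute eNPS per segment, but withhold any segment below the anonymity threshold. A minimal sketch (the 5-respondent floor and the data shape here are illustrative):

```python
from collections import defaultdict

def segment_enps(rows, min_n=5):
    """Per-department eNPS with small-group suppression.

    rows: iterable of (department, score) pairs, scores on the 0-10 scale.
    Departments with fewer than min_n respondents report None (withheld)
    rather than a score, so individuals cannot be identified.
    """
    by_dept = defaultdict(list)
    for dept, score in rows:
        by_dept[dept].append(score)

    report = {}
    for dept, scores in by_dept.items():
        if len(scores) < min_n:
            report[dept] = None  # suppressed: below the anonymity threshold
        else:
            promoters = sum(1 for s in scores if s >= 9)
            detractors = sum(1 for s in scores if s <= 6)
            report[dept] = round(100 * (promoters - detractors) / len(scores))
    return report
```

With six 9s and four 3s in Engineering plus three 10s in Ops, Engineering reports +20 while Ops is withheld for having fewer than five respondents.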

See full NPS analysis methodology →

eNPS benchmarks by industry (2026)

Industry eNPS benchmarks vary significantly — technology companies average around +20 to +30, healthcare systems around +10 to +25, financial services around +15 to +28, retail and hospitality around +5 to +20, and nonprofits around +20 to +45. The variation reflects industry-specific factors: tech has higher scores driven by compensation and mission framing; healthcare has tighter ranges due to staffing pressures; nonprofits often score higher because mission alignment substitutes for compensation gaps in the advocacy calculation. Within any industry, top-quartile organizations score 15–25 points above the median, and bottom-quartile organizations score 15–30 points below.

The industry benchmarks that matter most are for your specific industry and organization size — not the global average. A 300-person tech company should benchmark against the 200–500 employee tech band, not the tech industry overall (which includes both FAANG and early-stage startups with very different dynamics). When eNPS vendors publish "industry benchmark" numbers, the number itself is less important than the sample composition — a +24 benchmark built from 40 companies with mostly under-200-person teams is not a valid benchmark for a 3,000-person multinational. Match your benchmark to your context, or the comparison produces false confidence or false alarm.

eNPS survey questions: wording and cadence

The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" — paired with exactly one open-ended follow-up: "What is the most important reason for the score you gave?" Two questions total. The standard wording is load-bearing — variants like "rate your experience" or "how satisfied are you" produce measurably different scores and break comparability with any benchmark. See the NPS survey questions guide for the full wording library including diagnostic, recovery, activation, and milestone follow-up variants.

Cadence matters as much as wording. Annual eNPS produces a lagging indicator — by the time the score is reported, the conditions that produced it are 12 months old. Quarterly eNPS is the most common professional practice and matches operational planning cycles. Monthly pulse eNPS works for organizations in active transformation (reorgs, leadership changes, hypergrowth) where weekly or bi-weekly tracking surfaces retention risk before turnover starts. Whatever cadence you pick, consistency across cycles matters more than precision at any single moment — same question, same scale, same collection window, every cycle.

Three HR Scenarios · One Architecture
Where eNPS actually gets used — three common scenarios

Retention risk diagnosis, culture change measurement, and small-team scale decisions. Each one needs a different cadence and analysis depth — but all three need the same architectural foundation.

People Operations at a 300-person organization runs quarterly eNPS. The company-wide score has dropped from +28 to +14 over three cycles. Leadership wants to know what changed. The HR lead has 300 responses in a spreadsheet, no capacity to read them all, and a suspicion that two specific departments are driving the decline. Company-wide eNPS cannot answer the question — department-level segmentation plus theme extraction from Detractor responses is what turns the drop from a trend into a diagnosis.

Aggregate-only workflow
Single company-wide score · 3-week coding sprint
  • "eNPS dropped 14 points" — reported to leadership as a trend, not a diagnosis
  • Manual coding of 300 open-text responses takes 3 weeks; next cycle launches before themes are ready
  • Leadership guesses at compensation as the cause — budget request built on a hypothesis nobody can verify
  • Two weeks after the "all hands," attrition in the actual problem department begins
Department-level workflow
Segment-first dashboard · themes within hours
  • Department-level eNPS grid — two departments flagged as the 14-point drop drivers
  • Detractor themes per department — "manager feedback cadence" and "role-scope ambiguity" surface as the top two in the problem departments, NOT compensation
  • Named action owner per department — HRBP assigned for manager coaching intervention, before the next cycle
  • Compensation budget retained for where it actually matters — the problem was managerial, not financial

For retention risk diagnosis: department-level eNPS plus theme extraction within the decision window. Turn a 14-point drop from a trend into a targeted intervention.

Training Intelligence →

A CHRO is three weeks into a significant restructuring. The goal: run pulse eNPS weekly for 12 weeks to track whether employee sentiment is stabilizing or deteriorating — and identify specific concerns as they emerge rather than after turnover starts. The traditional monthly or quarterly engagement survey cadence is too slow. Weekly pulse with persistent employee IDs produces the tracking signal required for active transformation management.

Quarterly cadence approach
Reorganization happens · eNPS reviewed end of quarter
  • First post-reorg eNPS arrives 13 weeks after the announcement — too late to adjust the transformation approach
  • Specific team-level concerns emerging in week 2 invisible until week 14 reporting
  • Retention loss from reorg-ambiguity happens weeks 3–9; HR learns about it week 13
  • No week-over-week trajectory — only before/after snapshots that hide the within-change dynamics
Weekly pulse during transformation
Continuous collection · persistent employee IDs · themes weekly
  • Weekly 2-question pulse — eNPS + "what is your most pressing concern this week" — under 3 minutes per employee
  • Department trajectory visible week-over-week — stabilizing vs. deteriorating signals flagged automatically
  • Emerging themes extracted weekly — concerns surface in week 2, not week 14
  • Transformation team adjusts communication based on real-time sentiment — manager enablement, specific messaging, escalation routing

For active transformation: weekly pulse with persistent IDs and theme extraction. Track stabilization or deterioration as it happens — not 13 weeks later.

Training Intelligence →

A startup founder runs a 25-person organization and is considering eNPS. At this scale, the methodology's statistical reliability is genuinely questionable — 25 respondents isn't enough for segment-level analysis, and one strongly negative or positive voice can swing the aggregate by 8+ points. This scenario shows where eNPS isn't the right tool yet — and what to do instead until headcount crosses the threshold.
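The 8-point swing is simple arithmetic: at 25 respondents each response is 4% of the base, so a single employee moving from Promoter to Detractor shifts the score by 8 points. A sketch with an invented 25-person distribution:

```python
def enps(scores):
    """eNPS = %Promoters (9-10) minus %Detractors (0-6)."""
    n = len(scores)
    return 100 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / n

# Hypothetical 25-person team: 10 promoters, 10 passives, 5 detractors.
team = [9] * 10 + [7] * 10 + [4] * 5
before = enps(team)  # +20.0
team[0] = 4          # one promoter has a bad quarter and scores 4
after = enps(team)   # +12.0 -- one response moved the aggregate 8 points
```

At 500 respondents the same single flip would move the score 0.4 points, which is why the reliability threshold is a headcount question, not a methodology question.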

Formal eNPS at 25 employees
Quarterly instrument designed for larger orgs
  • Scores swing 8–15 points cycle over cycle based on who responded this time vs. last
  • Department breakdowns impossible — most departments have 3–5 people, below anonymity threshold
  • Benchmark comparison misleading — "+35 is good" benchmarks built from samples of hundreds, not twenty-five
  • Founder makes decisions on statistically unreliable trend data, loses trust in the measurement program
Right-sized measurement approach
Qualitative pulse + 1:1 protocol until 50+ employees
  • Structured 1:1 protocol with consistent questions every quarter — captures the same signal without the statistical fragility
  • Qualitative pulse survey with one open-text question — "what is the most important thing we could change?"
  • Revisit eNPS at 50+ employees — statistical reliability threshold where department breakdowns start working
  • Honest scoping — eNPS is a tool, not a ritual; scale your measurement to your organization, not the other way around

For small teams under 50: qualitative pulse and structured 1:1s produce more reliable signal than formal eNPS. Right-size the measurement to the organization, not the other way around.

Training Intelligence →

eNPS vs employee engagement surveys — what's the difference?

eNPS is a single-question metric; employee engagement surveys are multi-question instruments measuring multiple dimensions of engagement (belonging, enablement, alignment, growth). eNPS answers "would employees recommend us" in 2 questions. Engagement surveys answer "how are employees experiencing work across 5–15 dimensions" in 30–80 questions. The two are complementary, not substitutes — eNPS is the high-frequency signal, engagement surveys are the deep-dive diagnostic. Using eNPS inside an engagement survey (as one question among 30) breaks the eNPS benchmark comparability and produces a score that is not comparable to any standalone eNPS baseline.

When to use which: eNPS for continuous pulse tracking (quarterly or more often), engagement surveys for annual deep-dive diagnostics, exit interviews for attrition analysis, stay interviews for retention conversations, and 1:1 manager check-ins for individual signal. Treating any one of these as a substitute for the others is a common architectural mistake. eNPS will tell you retention risk is rising; it will not tell you whether the cause is compensation, management, growth opportunity, or workload. The open-text follow-up and department-level theme extraction — not the score itself — are where the causal signal lives.

What is eNPS software? What to look for

eNPS software is a platform for collecting, calculating, and analyzing Employee Net Promoter Score data — ranging from lightweight single-purpose pulse tools (Officevibe, Peakon, TinyPulse) to full engagement-survey platforms with eNPS modules (Culture Amp, Qualtrics EmployeeXM, Glint, Lattice) to general survey tools with eNPS templates (SurveyMonkey, Typeform). The category has consolidated toward comprehensive employee-experience platforms at enterprise pricing ($30K–$150K/year) with specialized pulse tools remaining at mid-market pricing ($5K–$30K/year). The category is crowded; the differentiation that matters is architectural.

What to look for goes beyond feature checklists. Persistent employee IDs at collection (so scores link to HRIS data without manual merge). Department-level segmentation as a default view (not a drill-down filter). Qualitative theme extraction within hours (not 3–4 week coding sprints). Real-time processing (so response loops close within the decision window, not a cycle later). Tools that require a separate text analysis platform for open-text responses produce analysis that is always one reconciliation cycle behind reality — even when the core eNPS capture is strong. See the comparison table below for how generic engagement survey platforms and specialized employee-experience tools compare against unified-schema architecture.

eNPS Software Comparison · 2026
Why most eNPS software produces scores but not workflow

Generic survey tools ship the form. Specialized engagement platforms ship the form plus a dashboard. Unified-schema architecture ships the form, dashboard, segment-level analysis, and theme extraction on the same data fabric. Four common risks, then the capability comparison.

Risk 01
Anonymous by default — breaks segment analysis

Anonymous eNPS produces a number that cannot be segmented by department, tenure, or role without re-asking in the survey form. The Department Average Illusion sets in by architecture.

"Anonymous for candor" is usually infrastructure that can't link to HRIS records.
Risk 02
eNPS buried in engagement survey

When eNPS is one question among 30+ in an engagement survey, response rate drops 40–60% and the resulting score is not comparable to standalone eNPS benchmarks.

Most engagement platforms include eNPS as a default template item — check how it's structured.
Risk 03
Manual coding of open-text responses

Open-text "why" responses sit in a coding backlog. 2–4 week sprints for 300–600 responses. Themes land after the next cycle launches.

Generic survey platforms export CSVs; someone has to read them.
Risk 04
Annual cadence produces lagging indicator

Annual eNPS surfaces the signal 12 months after the conditions that produced it. Retention crises develop in weeks; quarterly cadence is the floor for action-producing eNPS.

Enterprise engagement platforms default to annual full-survey cadence.
eNPS Software Capability Comparison
What each tool actually delivers — across the eNPS workflow
Three columns compared per capability: Generic survey tool · Specialized engagement platform · Sopact Sense

Collection
Question wording · scale · cadence

0–10 scale as default (Reichheld-validated scale)
  • Generic survey tool: 1–5 stars default, manual override. SurveyMonkey, Typeform default to star scales — need manual switch to 0–10 numeric.
  • Specialized engagement platform: 0–10 eNPS template. Culture Amp, Qualtrics EmployeeXM, Lattice ship with eNPS-specific templates.
  • Sopact Sense: 0–10 eNPS as template default. Reichheld-validated anchor labels pre-set; can't accidentally break the scale math.

Standalone eNPS vs. buried-in-engagement (benchmark comparability)
  • Generic survey tool: standalone possible, requires setup. No default structure — teams often build 30-question surveys with eNPS inside.
  • Specialized engagement platform: often bundled with the engagement module. Culture Amp, Glint, Lattice include eNPS in broader engagement surveys — breaks comparability.
  • Sopact Sense: standalone 2-question instrument by default. The eNPS template is its own instrument, not a question inside engagement-survey bloat.

Analysis
Segmentation · theme extraction · identity

Department-level segmentation (default view, not drill-down)
  • Generic survey tool: drill-down filter. Department view requires filtering; re-asking department in the survey drives response-rate loss.
  • Specialized engagement platform: HRIS integration for segment attributes. Culture Amp, Qualtrics, Lattice integrate with major HRIS — but views default to aggregate.
  • Sopact Sense: segment-first dashboard default. Department distribution leads; the aggregate is one line near the top.

Qualitative theme extraction (open-text to themes)
  • Generic survey tool: manual coding or CSV export. 2–4 week sprints; themes land after the next cycle launches.
  • Specialized engagement platform: AI theme extraction on premium tiers. Culture Amp NLP and Glint theme detection available on higher-tier contracts.
  • Sopact Sense: Intelligent Column — themes in hours. Real-time processing; attribution links preserved; per-segment theme frequency.

Persistent employee IDs (link to HRIS, tenure, role level)
  • Generic survey tool: anonymous by default. Identity optional; most teams choose anonymous and lose segment capacity.
  • Specialized engagement platform: HRIS-linked IDs. BambooHR, Workday, Rippling integrations are standard for employee-experience platforms.
  • Sopact Sense: persistent IDs with confidential aggregation. Identified responses with under-5-respondent anonymity aggregation at reporting.

Workflow & Pricing
Cadence · close-the-loop · cost

High-frequency pulse support (weekly or monthly eNPS)
  • Generic survey tool: technically supported, operationally slow. Creating weekly surveys is possible; reporting lags by a week or more each cycle.
  • Specialized engagement platform: pulse module available. Culture Amp Pulse, Lattice Pulse, Glint Pulse all support sub-quarterly cadence on higher tiers.
  • Sopact Sense: real-time collection and analysis. Continuous cycles supported natively; themes and trend strips update as responses arrive.

Pricing (annual software cost)
  • Generic survey tool: $500–$5K/year. SurveyMonkey, Typeform, Google Forms — the eNPS capability is DIY-assembled.
  • Specialized engagement platform: $30K–$150K+/year. Culture Amp, Qualtrics EmployeeXM, Lattice — comprehensive platform pricing.
  • Sopact Sense: $1,000/month. Complete eNPS workflow (collection, segmentation, theme extraction) on one schema.

Generic tools ship the form. Specialized platforms ship form + dashboard. Unified-schema architecture ships form, dashboard, segment analysis, and theme extraction on one data fabric — the structural answer to the Department Average Illusion.

See full NPS analysis methodology →

Segment attributes at collection. Persistent employee IDs. Themes extracted within hours. Quarterly cadence or faster for active transformation. That is the architecture that turns eNPS from an annual number into a management workflow — and it is not what most engagement platforms are built for.

See Sopact Sense →

The Department Average Illusion: why company-wide eNPS hides the real problem

The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but acceptable. An organization with departments at +40, +15, −8, and −35 might report a company eNPS of +12 — and conclude the organization is in reasonable health. Two departments are quietly failing while the aggregate protects them from scrutiny. The illusion sustains itself through three mechanisms.

Aggregation at the wrong level. Company-wide eNPS pools incompatible populations — remote and in-person teams, new hires and tenured employees, high-growth divisions and declining ones. The average of these populations is not the eNPS of any actual team.

Absence of qualitative follow-up. When eNPS is collected without an open-text "why" question, you know the distribution but not the cause — and every intervention is a guess.

Cadence mismatched to the decision. Retention crises develop over weeks; annual eNPS surfaces the signal after turnover has already started, and even quarterly collection can lag during active transformation.

Closing the illusion requires segment-level views by department, tenure, and role as default outputs — not a post-hoc analysis project — plus persistent employee IDs at collection, qualitative analysis on the same schema as the score, and collection cadence matched to the decision you're trying to inform.

Frequently Asked Questions

What is eNPS?

eNPS stands for Employee Net Promoter Score — a single-question employee survey metric measuring whether employees would recommend their organization as a place to work. The canonical question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" Scored 0–10, with Promoters (9–10), Passives (7–8), and Detractors (0–6). The eNPS is calculated as % Promoters minus % Detractors.

What does eNPS stand for?

eNPS stands for Employee Net Promoter Score. The "e" distinguishes it from customer NPS (Net Promoter Score), which was developed by Fred Reichheld at Bain & Company in 2003. Other terms used interchangeably include "Employee NPS" and "employee Net Promoter Score." The three-letter abbreviation eNPS has become standard in HR and people analytics contexts.

What is the eNPS formula?

The eNPS formula is: eNPS = % Promoters − % Detractors. Count employees scoring 9–10 (Promoters), divide by total respondents, multiply by 100 for the percentage. Do the same for Detractors (0–6). Subtract the Detractor percentage from the Promoter percentage. Passives (7–8) are excluded from the calculation but counted in the respondent base. The resulting score ranges from −100 to +100.

How is eNPS calculated?

Calculate eNPS in four steps: (1) count respondents scoring 9–10 (Promoters), (2) count respondents scoring 0–6 (Detractors), (3) divide each by total respondents to get percentages, (4) subtract Detractor percentage from Promoter percentage. Example: 200 respondents, 80 Promoters (40%), 50 Detractors (25%). eNPS = 40 − 25 = +15.

What is a good eNPS score?

A good eNPS score depends on industry and organization size. General ranges: below 0 is critical, 0–10 is poor, 10–30 is average, 30–50 is good, 50–70 is great, above 70 is world-class. Most organizations land in 10–40. Top-quartile scores within any industry are 15–25 points above the industry median. Context matters more than the raw number — match your score to your industry and size benchmark.

What are the eNPS score ranges?

eNPS score ranges on a −100 to +100 scale: Critical (below 0) — more Detractors than Promoters, active retention risk. Poor (0 to 10) — marginal, warning zone. Average (10 to 30) — most organizations land here. Good (30 to 50) — above industry median in most sectors. Great (50 to 70) — strong advocacy, typically top quartile. World-class (above 70) — exceptional, rare at scale.

What is the average eNPS score?

The average eNPS score varies by industry: technology ~+20 to +30, healthcare ~+10 to +25, financial services ~+15 to +28, retail ~+5 to +20, nonprofits ~+20 to +45. Global cross-industry median is approximately +15 to +20. Top-quartile in any industry typically sits 15–25 points above the median. Scores above +50 are uncommon at scale; scores below 0 signal active retention risk.

What is the average eNPS for tech companies?

The average eNPS for technology companies is approximately +20 to +30, higher than most industries due to compensation, mission framing, and typically younger workforces with higher tolerance for performance-oriented cultures. Within tech, SaaS and enterprise software often score higher than consumer tech; early-stage startups score higher than scaled companies but with thinner sample sizes. Top-quartile tech companies commonly exceed +45.

What are eNPS benchmarks?

eNPS benchmarks are industry-specific score ranges used to contextualize your organization's score. Benchmarks vary by industry, organization size, and geography — the median for a 300-person tech company is not the median for a 3,000-person multinational. Match your benchmark source to your context. Vendor-published benchmarks vary in sample composition; always check the sample description before comparing.

What is the eNPS question wording?

The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" with anchor labels "Not at all likely" (0) and "Extremely likely" (10). Pair with exactly one open-text follow-up: "What is the most important reason for the score you gave?" Two questions total. Wording variants break benchmark comparability.

How often should you run eNPS surveys?

Run eNPS surveys quarterly for most organizations — this matches operational planning cycles and balances signal frequency with survey fatigue. Monthly or weekly pulse eNPS works for organizations in active transformation (reorgs, leadership changes, hypergrowth). Annual eNPS is too infrequent to produce actionable signal — by the time the score is reported, the conditions that produced it can be up to 12 months old.

What is the difference between eNPS and employee engagement surveys?

eNPS is a two-question pulse instrument (one 0–10 rating plus one open-text follow-up); employee engagement surveys are multi-question (30–80 item) instruments measuring multiple engagement dimensions. They are complementary, not substitutes — eNPS is the high-frequency signal, engagement surveys are the deep-dive diagnostic. Embedding the eNPS question inside a 30-question engagement survey changes the response context and breaks benchmark comparability.

What is eNPS software?

eNPS software is a platform for collecting, calculating, and analyzing Employee Net Promoter Score data. Categories include specialized pulse tools (Officevibe, Peakon, TinyPulse), comprehensive employee-experience platforms (Culture Amp, Qualtrics EmployeeXM, Lattice), and general survey tools with eNPS templates (SurveyMonkey, Typeform). Pricing ranges from ~$5K/year for pulse tools to $30K–$150K/year for enterprise employee-experience platforms.

What is The Department Average Illusion?

The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but. An organization with departments at +40, +15, −8, and −35 might report +12 and conclude health is reasonable — while two departments are quietly failing. Segment-level views by department, tenure, and role close the illusion.
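The arithmetic behind the illusion is easy to demonstrate. The company-wide score is the respondent-weighted average of department scores, because promoter and detractor percentages pool linearly across segments. The headcounts below are hypothetical, chosen only to show how four very different departments can blend into an unremarkable aggregate:

```python
# Hypothetical department headcounts paired with their eNPS scores.
departments = {
    "Sales":   (400,  40),
    "Eng":     (250,  15),
    "Support": (175,  -8),
    "Product": (175, -35),
}

# Company-wide eNPS = respondent-weighted average of department scores.
total = sum(n for n, _ in departments.values())
company = sum(n * score for n, score in departments.values()) / total
print(round(company))  # +12 — two departments below zero are invisible
```

Reading only the aggregate, leadership sees +12; reading the distribution, they see two departments in active retention risk.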

How do you analyze eNPS responses?

Analyze eNPS responses in four steps: (1) segment the score by department, tenure, role level, and location, (2) run sentiment analysis on open-text responses to flag Passives with negative language and Detractors with constructive feedback, (3) extract themes from verbatim comments within each segment, (4) track segment trajectories across three or more cycles. See NPS analysis methodology for the full 4-method framework.
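Step 1 — segmentation — is where most spreadsheet workflows stall, so here is a minimal sketch of it in plain Python. The response rows, department names, and tenure bands are invented for illustration; any real implementation would pull these attributes from an HRIS join:

```python
from collections import defaultdict

def enps(scores):
    """% Promoters (9-10) minus % Detractors (0-6), rounded."""
    p = sum(1 for s in scores if s >= 9)
    d = sum(1 for s in scores if s <= 6)
    return round(100 * (p - d) / len(scores))

# Hypothetical response rows: (department, tenure_band, score)
responses = [
    ("Eng", "0-1y", 9), ("Eng", "0-1y", 10), ("Eng", "3y+", 4),
    ("CS",  "0-1y", 3), ("CS",  "3y+",  5), ("CS",  "3y+", 9),
]

# Group scores by department and by department x tenure.
by_segment = defaultdict(list)
for dept, tenure, score in responses:
    by_segment[dept].append(score)
    by_segment[(dept, tenure)].append(score)

for key, scores in sorted(by_segment.items(), key=str):
    print(key, enps(scores))
```

The same grouping pattern extends to role level and location; steps 2–3 (sentiment and themes on the open-text responses) need an NLP layer on the same keyed schema, so each theme stays attributable to a segment.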

Run eNPS That Produces Action
Segment by department. Theme the verbatims. Close the loop per cycle.

Close the Department Average Illusion with the three architectural choices that turn eNPS from an annual number into a management workflow: department-first collection, themes extracted from open-text within hours, and visible follow-up before the next cycle launches. One data schema, not three disconnected systems.

Stage 01 · Segment
Department-first architecture

Segment attributes captured at intake from HRIS — department, tenure, role level, location. Persistent employee IDs carry the full attribute set through every response. Segment distribution is the default dashboard view; the company-wide aggregate is one line near the top.

Stage 02 · Theme
Themes within hours

Intelligent Column analyzes open-text "why" responses on the same schema as the score — no 3–4 week manual coding sprints, no analyst bottleneck, no inter-rater drift. Per-department theme frequency, attribution links back to source responses, and theme trajectories across cycles.

Stage 03 · Close Loop
Loop closed per cycle

Named action owner per department. Visible follow-up before the next cycle launches — not an analysis filed quarterly. The cycle produces 1–2 specific actions per department based on Detractor themes, communicated back to employees so the next cycle's candor and response rate survive.

  • Department-level segmentation as default view, not drill-down filter. Leadership reads distributions, not company-wide averages.
  • One intent-selected follow-up question, not buried in a 30-question engagement survey that breaks benchmark comparability.
  • Quarterly cadence minimum — with pulse support for active transformation (weekly during reorgs, leadership changes, hypergrowth).
One intelligence layer — powered by Claude, OpenAI, Gemini, watsonx. Segmentation, sentiment, theme extraction, and longitudinal trend on the same data fabric.