eNPS meaning, 0–10 formula, score ranges, industry benchmarks 2026, and how to collect employee feedback without The Department Average Illusion.
A company's eNPS arrives at the all-hands: 24. Leadership marks it as acceptable. Meanwhile, the customer success department sits at −12, the engineering team at −8, and two product squads are at −30 — invisible inside a number that averages to "fine." The decision that follows — no action needed — is the exact wrong conclusion. The aggregate score wasn't wrong. The architecture that produced only the aggregate score was. This is The Department Average Illusion.
Last updated: April 2026
eNPS stands for Employee Net Promoter Score — a single-question employee feedback metric measuring whether employees would recommend the organization as a place to work, scored on a 0–10 scale. The formula, score ranges, industry benchmarks, and survey design are all covered below. What most eNPS guides don't cover is the architectural problem: company-wide eNPS averages hide departmental reality, and annual collection surfaces the signal after turnover has already started. This guide addresses both — the standard methodology and the structural fixes that turn eNPS from a metric into a workflow.
eNPS (Employee Net Promoter Score) is a single-question employee survey metric measuring whether employees would recommend their organization as a place to work. The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" — adapted from Fred Reichheld's customer NPS methodology at Bain & Company. Respondents scoring 9–10 are classified as Promoters, 7–8 as Passives, and 0–6 as Detractors. The eNPS is calculated as the percentage of Promoters minus the percentage of Detractors, producing a score between −100 and +100.
eNPS works well as a leading indicator for retention risk, organizational health, and culture change. It does not replace engagement surveys, performance reviews, or exit interviews — it complements them by providing a consistent, comparable signal that can be collected frequently enough to track momentum rather than only an annual snapshot. The core methodology is identical to customer NPS — see the NPS calculation guide for the full formula details. What makes eNPS different is the respondent population: employees have different feedback dynamics than customers, and those dynamics determine every architectural choice above the math.
eNPS stands for Employee Net Promoter Score — the employee-facing adaptation of Fred Reichheld's Net Promoter Score methodology, originally developed for measuring customer loyalty and published in 2003. The "e" prefix distinguishes the metric from customer NPS; the scoring mechanism, score bands, and calculation are otherwise identical. Some organizations also use "Employee NPS" or "employee Net Promoter Score" interchangeably, though the three-letter abbreviation has become standard in HR and people analytics contexts.
The meaning of the eNPS score is a measure of employee advocacy — specifically, whether employees would vouch for the organization to people in their professional network. This is a narrower and more behaviorally meaningful question than "are you satisfied" or "are you engaged," because it ties the rating to a specific social action the respondent would or would not take. A Promoter isn't just satisfied — they are willing to stake their personal credibility on recommending the employer. A Detractor isn't just unhappy — their rating signals they would actively warn others away.
Calculate eNPS using the formula: eNPS = % Promoters − % Detractors. Take the total number of Promoters (employees scoring 9–10), divide by total respondents, multiply by 100 to get the percentage. Do the same for Detractors (0–6). Subtract the Detractor percentage from the Promoter percentage. Passives (7–8) are not included in the calculation — only in the total respondent count that determines the percentages. The resulting score ranges from −100 (every respondent is a Detractor) to +100 (every respondent is a Promoter).
Worked example: an organization surveys 200 employees. 80 score 9 or 10 (Promoters = 40%). 70 score 7 or 8 (Passives = 35% — excluded from the calculation but counted in the base). 50 score 0 through 6 (Detractors = 25%). eNPS = 40 − 25 = +15. The calculation is identical to customer NPS, but the interpretation differs because employee populations behave differently — see the score-band explorer below for eNPS-specific interpretation. Common mistakes include using a 1–5 or 1–10 scale instead of 0–10 (breaks the band math), counting Passives in the calculation (produces a non-comparable score), and reporting a single company-wide number without department segmentation (produces the Department Average Illusion).
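The formula and worked example above can be sketched in a few lines of Python. This is an illustrative helper, not an API from any eNPS product; the function name and scale validation are our own.

```python
def enps(scores):
    """Compute eNPS from a list of 0-10 ratings: % Promoters minus % Detractors."""
    if not scores:
        raise ValueError("no responses")
    if any(not 0 <= s <= 10 for s in scores):
        raise ValueError("eNPS requires the 0-10 scale; 1-5 or 1-10 breaks the band math")
    n = len(scores)
    promoters = sum(1 for s in scores if s >= 9)   # 9-10
    detractors = sum(1 for s in scores if s <= 6)  # 0-6
    # Passives (7-8) appear in n (the denominator) but never in the numerator.
    return round(100 * promoters / n - 100 * detractors / n)

# The worked example from the text: 200 respondents,
# 80 Promoters (40%), 70 Passives (35%), 50 Detractors (25%).
responses = [9] * 80 + [7] * 70 + [5] * 50
print(enps(responses))  # → 15
```

Note that the Passives still matter: dropping them from the denominator (a common spreadsheet mistake) would report 80/130 − 50/130 ≈ +23 instead of +15, a score that is not comparable to any benchmark.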
A "good" eNPS score depends on industry, organization size, and tenure cohort — but the general score bands are: below 0 is critical, 0–10 is poor, 10–30 is average, 30–50 is good, 50–70 is great, above 70 is world-class. Most organizations land in the 10–40 range; scores above 50 are rare and typically signal either strong organizational culture or small-sample selection effects (fewer respondents, louder voices). Scores below 0 mean more employees would actively recommend against the organization than would recommend it — a signal that retention risk is already material.
Context matters more than the raw number. A tech company at +20 may be below peer industry benchmarks; a healthcare system at +20 may be above them. A 500-person company at +35 is doing well; a 50-person startup at +35 may be running on founder enthusiasm that won't persist through the first retention wave. The single most important interpretation rule: a company-wide score masks department-level reality. A +24 organization with engineering at −30 is not a +24 organization — it is a +45 organization with an engineering crisis, and leadership acting on the aggregate will fix nothing. See NPS benchmarks by industry for cross-reference data on customer NPS comparisons.
Industry eNPS benchmarks vary significantly — technology companies average around +20 to +30, healthcare systems around +10 to +25, financial services around +15 to +28, retail and hospitality around +5 to +20, and nonprofits around +20 to +45. The variation reflects industry-specific factors: tech has higher scores driven by compensation and mission framing; healthcare has tighter ranges due to staffing pressures; nonprofits often score higher because mission alignment substitutes for compensation gaps in the advocacy calculation. Within any industry, top-quartile organizations score 15–25 points above the median, and bottom-quartile organizations score 15–30 points below.
The industry benchmarks that matter most are for your specific industry and organization size — not the global average. A 300-person tech company should benchmark against the 200–500 employee tech band, not the tech industry overall (which includes both FAANG and early-stage startups with very different dynamics). When eNPS vendors publish "industry benchmark" numbers, the number itself is less important than the sample composition — a +24 benchmark built from 40 companies with mostly under-200-person teams is not a valid benchmark for a 3,000-person multinational. Match your benchmark to your context, or the comparison produces false confidence or false alarm.
The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" — paired with exactly one open-ended follow-up: "What is the most important reason for the score you gave?" Two questions total. The standard wording is load-bearing — variants like "rate your experience" or "how satisfied are you" produce measurably different scores and break comparability with any benchmark. See the NPS survey questions guide for the full wording library including diagnostic, recovery, activation, and milestone follow-up variants.
Cadence matters as much as wording. Annual eNPS produces a lagging indicator — by the time the score is reported, the conditions that produced it are 12 months old. Quarterly eNPS is the most common professional practice and matches operational planning cycles. Monthly pulse eNPS works for organizations in active transformation (reorgs, leadership changes, hypergrowth) where weekly or bi-weekly tracking surfaces retention risk before turnover starts. Whatever cadence you pick, consistency across cycles matters more than precision at any single moment — same question, same scale, same collection window, every cycle.
eNPS is a single-question metric; employee engagement surveys are multi-question instruments measuring multiple dimensions of engagement (belonging, enablement, alignment, growth). eNPS answers "would employees recommend us" in 2 questions. Engagement surveys answer "how are employees experiencing work across 5–15 dimensions" in 30–80 questions. The two are complementary, not substitutes — eNPS is the high-frequency signal, engagement surveys are the deep-dive diagnostic. Using eNPS inside an engagement survey (as one question among 30) breaks the eNPS benchmark comparability and produces a score that is not comparable to any standalone eNPS baseline.
When to use which: eNPS for continuous pulse tracking (quarterly or more often), engagement surveys for annual deep-dive diagnostics, exit interviews for attrition analysis, stay interviews for retention conversations, and 1:1 manager check-ins for individual signal. Treating any one of these as a substitute for the others is a common architectural mistake. eNPS will tell you retention risk is rising; it will not tell you whether the cause is compensation, management, growth opportunity, or workload. The open-text follow-up and department-level theme extraction — not the score itself — are where the causal signal lives.
eNPS software is a platform for collecting, calculating, and analyzing Employee Net Promoter Score data — ranging from lightweight single-purpose pulse tools (Officevibe, Peakon, TinyPulse) to full engagement-survey platforms with eNPS modules (Culture Amp, Qualtrics EmployeeXM, Glint, Lattice) to general survey tools with eNPS templates (SurveyMonkey, Typeform). The category has consolidated toward comprehensive employee-experience platforms at enterprise pricing ($30K–$150K/year) with specialized pulse tools remaining at mid-market pricing ($5K–$30K/year). The category is crowded; the differentiation that matters is architectural.
What to look for goes beyond feature checklists. Persistent employee IDs at collection (so scores link to HRIS data without manual merge). Department-level segmentation as a default view (not a drill-down filter). Qualitative theme extraction within hours (not 3–4 week coding sprints). Real-time processing (so response loops close within the decision window, not a cycle later). Tools that require a separate text analysis platform for open-text responses produce analysis that is always one reconciliation cycle behind reality — even when the core eNPS capture is strong. See the comparison table below for how generic engagement survey platforms and specialized employee-experience tools compare against unified-schema architecture.
The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but acceptable. An organization with departments at +40, +15, −8, and −35 might report a company eNPS of +12 — and conclude the organization is in reasonable health. Two departments are quietly failing while the aggregate protects them from scrutiny. The illusion sustains itself through three mechanisms.
Aggregation at the wrong level. Company-wide eNPS pools incompatible populations — remote and in-person teams, new hires and tenured employees, high-growth divisions and declining ones. The average of these populations is not the eNPS of any actual team.

Absence of qualitative follow-up. When eNPS is collected without an open-text "why" question, you know the distribution but not the cause — and every intervention is a guess.

Annual or quarterly cadence. Retention crises develop over weeks; quarterly eNPS surfaces the signal after turnover has already started.

Closing the illusion requires segment-level views by department, tenure, and role as default outputs — not a post-hoc analysis project — plus persistent employee IDs at collection, qualitative analysis on the same schema as the score, and a collection cadence matched to the decision you're trying to inform.
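The segment-level view is a trivial computation once scores carry a department field — which is exactly why the fix is architectural (persistent IDs at collection), not analytical. A minimal sketch with hypothetical data, showing a healthy-looking aggregate over a department in crisis:

```python
from collections import defaultdict

def enps(scores):
    """eNPS = % Promoters (9-10) minus % Detractors (0-6), rounded."""
    n = len(scores)
    return round(100 * sum(s >= 9 for s in scores) / n
                 - 100 * sum(s <= 6 for s in scores) / n)

def enps_by_segment(responses):
    """responses: (segment, score) pairs -- i.e. scores already joined to a
    segment via a persistent employee ID. Returns per-segment and aggregate eNPS."""
    buckets = defaultdict(list)
    for segment, score in responses:
        buckets[segment].append(score)
    report = {seg: enps(scores) for seg, scores in sorted(buckets.items())}
    report["company-wide"] = enps([score for _, score in responses])
    return report

# Hypothetical data: sales at +80, engineering at -30.
responses = (
    [("sales", 9)] * 18 + [("sales", 6)] * 2
    + [("engineering", 9)] * 4 + [("engineering", 7)] * 6 + [("engineering", 4)] * 10
)
print(enps_by_segment(responses))
# → {'engineering': -30, 'sales': 80, 'company-wide': 25}
```

The aggregate reads +25 — "average, no action needed" — while engineering sits deep in the critical band. Reporting only the last line of that dict is the Department Average Illusion in four lines of code.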
eNPS stands for Employee Net Promoter Score — a single-question employee survey metric measuring whether employees would recommend their organization as a place to work. The canonical question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" Scored 0–10, with Promoters (9–10), Passives (7–8), and Detractors (0–6). The eNPS is calculated as % Promoters minus % Detractors.
eNPS stands for Employee Net Promoter Score. The "e" distinguishes it from customer NPS (Net Promoter Score), which was developed by Fred Reichheld at Bain & Company in 2003. Other terms used interchangeably include "Employee NPS" and "employee Net Promoter Score." The three-letter abbreviation eNPS has become standard in HR and people analytics contexts.
The eNPS formula is: eNPS = % Promoters − % Detractors. Count employees scoring 9–10 (Promoters), divide by total respondents, multiply by 100 for the percentage. Do the same for Detractors (0–6). Subtract the Detractor percentage from the Promoter percentage. Passives (7–8) are excluded from the calculation but counted in the respondent base. The resulting score ranges from −100 to +100.
Calculate eNPS in four steps: (1) count respondents scoring 9–10 (Promoters), (2) count respondents scoring 0–6 (Detractors), (3) divide each by total respondents to get percentages, (4) subtract Detractor percentage from Promoter percentage. Example: 200 respondents, 80 Promoters (40%), 50 Detractors (25%). eNPS = 40 − 25 = +15.
A good eNPS score depends on industry and organization size. General ranges: below 0 is critical, 0–10 is poor, 10–30 is average, 30–50 is good, 50–70 is great, above 70 is world-class. Most organizations land in 10–40. Top-quartile scores within any industry are 15–25 points above the industry median. Context matters more than the raw number — match your score to your industry and size benchmark.
eNPS score ranges on a −100 to +100 scale: Critical (below 0) — more Detractors than Promoters, active retention risk. Poor (0 to 10) — marginal, warning zone. Average (10 to 30) — most organizations land here. Good (30 to 50) — above industry median in most sectors. Great (50 to 70) — strong advocacy, typically top quartile. World-class (above 70) — exceptional, rare at scale.
The average eNPS score varies by industry: technology ~+20 to +30, healthcare ~+10 to +25, financial services ~+15 to +28, retail ~+5 to +20, nonprofits ~+20 to +45. Global cross-industry median is approximately +15 to +20. Top-quartile in any industry typically sits 15–25 points above the median. Scores above +50 are uncommon at scale; scores below 0 signal active retention risk.
The average eNPS for technology companies is approximately +20 to +30, higher than most industries due to compensation, mission framing, and typically younger workforces with higher tolerance for performance-oriented cultures. Within tech, SaaS and enterprise software often score higher than consumer tech; early-stage startups score higher than scaled companies but with thinner sample sizes. Top-quartile tech companies commonly exceed +45.
eNPS benchmarks are industry-specific score ranges used to contextualize your organization's score. Benchmarks vary by industry, organization size, and geography — the median for a 300-person tech company is not the median for a 3,000-person multinational. Match your benchmark source to your context. Vendor-published benchmarks vary in sample composition; always check the sample description before comparing.
The canonical eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" with anchor labels "Not at all likely" (0) and "Extremely likely" (10). Pair with exactly one open-text follow-up: "What is the most important reason for the score you gave?" Two questions total. Wording variants break benchmark comparability.
Run eNPS surveys quarterly for most organizations — this matches operational planning cycles and balances signal frequency with survey fatigue. Monthly or weekly pulse eNPS works for organizations in active transformation (reorgs, leadership changes, hypergrowth). Annual eNPS is too infrequent to produce actionable signal — by the time the score is reported, the conditions that produced it are 12 months old.
eNPS is a 2-question pulse metric; employee engagement surveys are multi-question (30–80 item) instruments measuring multiple engagement dimensions. They are complementary, not substitutes — eNPS is the high-frequency signal, engagement surveys are the deep-dive diagnostic. Running eNPS as one question inside a 30-question engagement survey breaks benchmark comparability and produces a non-comparable score.
eNPS software is a platform for collecting, calculating, and analyzing Employee Net Promoter Score data. Categories include specialized pulse tools (Officevibe, Peakon, TinyPulse), comprehensive employee-experience platforms (Culture Amp, Qualtrics EmployeeXM, Lattice), and general survey tools with eNPS templates (SurveyMonkey, Typeform). Pricing ranges from ~$5K/year for pulse tools to $30K–$150K/year for enterprise employee-experience platforms.
The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but. An organization with departments at +40, +15, −8, and −35 might report +12 and conclude health is reasonable — while two departments are quietly failing. Segment-level views by department, tenure, and role close the illusion.
Analyze eNPS responses in four steps: (1) segment the score by department, tenure, role level, and location, (2) run sentiment analysis on open-text responses to flag Passives with negative language and Detractors with constructive feedback, (3) extract themes from verbatim comments within each segment, (4) track segment trajectories across three or more cycles. See NPS analysis methodology for the full 4-method framework.
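Step (4) — tracking segment trajectories across cycles — is where the leading-indicator value lives, and it is simple to automate. A sketch, with hypothetical thresholds (three consecutive declining cycles and a minimum total drop) that any real implementation would tune:

```python
def declining_segments(history, cycles=3, min_drop=5):
    """history: {segment: [eNPS per cycle, oldest first]}.
    Flags segments that declined in each of the last `cycles` transitions
    AND fell by at least `min_drop` points in total -- a trajectory signal,
    not a snapshot. Thresholds are illustrative assumptions."""
    flagged = []
    for segment, scores in history.items():
        recent = scores[-(cycles + 1):]
        if len(recent) < cycles + 1:
            continue  # not enough cycles to judge a trajectory
        strictly_falling = all(b < a for a, b in zip(recent, recent[1:]))
        if strictly_falling and recent[0] - recent[-1] >= min_drop:
            flagged.append(segment)
    return sorted(flagged)

history = {
    "engineering": [12, 4, -8, -30],  # falling every quarter: flag
    "sales":       [38, 41, 40, 44],  # stable/improving
    "support":     [20, 22, 15, 18],  # noisy, but not a sustained decline
}
print(declining_segments(history))  # → ['engineering']
```

A segment at +12 trending to −30 over three quarters is a louder retention signal than any single-cycle number — which is the argument for quarterly-or-faster cadence made earlier in this guide.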