eNPS measures whether employees would recommend your organization as a place to work. The formula, benchmarks by industry, and why company-wide averages hide the real problem.
A company's eNPS arrives at the all-hands meeting: 24. Leadership marks it as acceptable. Meanwhile, the customer success department sits at -12, the engineering team at -8, and two product squads are at -30 — invisible inside a number that averages to fine. The decision that follows — no action needed — is the exact wrong conclusion. The aggregate score wasn't wrong. The architecture that produced only the aggregate score was. That failure has a name: The Department Average Illusion.
Employee Net Promoter Score (eNPS) measures one thing: whether your employees would recommend your organization as a place to work. The question is direct — "On a scale of 0–10, how likely are you to recommend [organization] as a place to work?" — and the calculation is identical to customer NPS: %Promoters (9–10) minus %Detractors (0–6), with Passives (7–8) excluded.
eNPS works well as a leading indicator for retention risk, organizational health, and culture change. It does not replace engagement surveys, performance reviews, or exit interviews. It complements them by giving you a consistent, comparable signal that can be collected frequently enough to track momentum — not just annual state. For organizations already using longitudinal survey methodology, eNPS is a natural addition to the continuous feedback architecture.
The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are anything but acceptable. An organization with departments at +40, +15, -8, and -35 might report a company eNPS of 12 — and conclude the organization is in reasonable health. Two departments are quietly failing while the aggregate protects them from scrutiny.
The illusion has three mechanisms. First: aggregation at the wrong level. Company-wide eNPS pools incompatible populations — remote and in-person teams, new hires and tenured employees, high-growth divisions and declining ones. The average of these populations is not the eNPS of any actual team. Second: absence of qualitative follow-up. When eNPS is collected without an open-text "why" question, you know the distribution but not the cause. Third: annual or quarterly cadence. Retention crises develop over weeks; quarterly eNPS data surfaces the signal after turnover has already started.
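The first mechanism, aggregation at the wrong level, is easy to make concrete. In the sketch below the department headcounts are hypothetical (the article gives only the four scores), chosen so that the +40, +15, -8, and -35 departments from the example above pool to a company eNPS of 12:

```python
# Hypothetical headcounts for the four departments from the example above.
# Department names and sizes are illustrative, not from the article.
departments = {
    "sales":       (115, 40),   # (headcount, department eNPS)
    "marketing":   (100, 15),
    "engineering": (52,  -8),
    "product":     (53, -35),
}

total = sum(headcount for headcount, _ in departments.values())

# The pooled company score is the headcount-weighted average of the
# department scores, because eNPS is linear in promoter/detractor counts.
company = sum(headcount * score for headcount, score in departments.values()) / total

print(round(company))  # 12, with two departments quietly failing underneath
```

Because eNPS is linear in promoter and detractor counts, pooling every response and computing eNPS directly produces the same number as this headcount-weighted average, which is exactly why the aggregate alone cannot reveal the failing departments.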
Sopact Sense closes the illusion at the architectural level by collecting eNPS with unique employee IDs that persist across every survey cycle, enabling segment-level views by department, location, tenure band, and role level as a default output — not a post-hoc analysis. The qualitative data collection methods that explain why scores differ are structured into the same survey and analyzed automatically.
The eNPS formula is: eNPS = % Employees who are Promoters − % Employees who are Detractors
Promoters score 9–10 on the recommendation question. Detractors score 0–6. Passives (7–8) are excluded from the calculation. Unlike engagement surveys that produce averages across multiple dimensions, eNPS produces a single signed number — which makes it directly comparable across departments, time periods, and organizations.
A worked example: 150 employees respond. 60 score 9–10 (40% Promoters). 30 score 0–6 (20% Detractors). eNPS = 40 − 20 = 20. This is considered good by most benchmarks. But if those 30 Detractors are concentrated in one 40-person department, that department's eNPS is -50 — a crisis-level signal invisible in the company aggregate.
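The worked example above can be reproduced in a few lines. The `enps` and `enps_by_segment` helpers and the two-segment breakdown are illustrative sketches, not part of any Sopact API; specific scores inside each band are assumed for concreteness:

```python
from collections import defaultdict

def enps(scores):
    """eNPS = %Promoters (9-10) minus %Detractors (0-6), rounded to an integer."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def enps_by_segment(responses):
    """responses: iterable of (segment, score) pairs. Returns {segment: eNPS}."""
    groups = defaultdict(list)
    for segment, score in responses:
        groups[segment].append(score)
    return {segment: enps(scores) for segment, scores in groups.items()}

# The worked example: 150 respondents, 60 promoters, 30 detractors,
# 60 passives. All 30 detractors sit in one 40-person department that
# also has 10 promoters. Segment names are illustrative.
responses = ([("dept_a", 3)] * 30 + [("dept_a", 9)] * 10
             + [("rest", 9)] * 50 + [("rest", 7)] * 60)

print(enps([score for _, score in responses]))  # 20  (company-wide: "good")
print(enps_by_segment(responses)["dept_a"])     # -50 (crisis hidden in aggregate)
```

Running both calculations over the same responses shows the gap the article describes: a company-wide 20 and a department-level -50 from one dataset.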
Three calculation disciplines that matter: collect eNPS from your full employee population, not a sample; include a qualitative follow-up question on every survey cycle; and report eNPS by department and role level as a standard output, not an ad hoc request.
eNPS and employee engagement measure different things through different methods, but they are not interchangeable. Engagement surveys measure multiple dimensions simultaneously — communication, recognition, workload, career development, manager relationship — and produce an engagement score that reflects the weighted average across those dimensions. eNPS measures one thing: whether the employee would stake their professional reputation on recommending the organization.
The relationship between the two matters for action. High engagement + low eNPS indicates that employees are invested in their work but don't believe outsiders should join — often a signal of external reputation problems or leadership trust issues. Low engagement + moderate eNPS indicates employees are disengaged but don't feel strongly enough to actively discourage others — often a sign of early-stage disengagement not yet terminal. High engagement + high eNPS is the target state. Low engagement + low eNPS is the crisis signal that quarterly or annual surveys frequently miss until it's reflected in turnover numbers.
Collecting eNPS through mixed-method surveys that pair the 0–10 rating with a qualitative follow-up produces the evidence to distinguish which combination you're in. The number tells you the category; the qualitative context tells you the cause.
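The four combinations described above can be expressed as a simple lookup. The numeric cutoffs below are assumptions for illustration (engagement on a 1-5 composite, eNPS on its signed scale); the article does not prescribe thresholds for "high":

```python
def diagnose(engagement: float, enps: int,
             engagement_cutoff: float = 3.5, enps_cutoff: int = 20) -> str:
    """Map an engagement composite and an eNPS value to the four
    combinations described in the text. Cutoffs are illustrative
    assumptions, not published thresholds."""
    key = (engagement >= engagement_cutoff, enps >= enps_cutoff)
    return {
        (True,  False): "invested employees, reputation/leadership trust issue",
        (False, True):  "early-stage disengagement, not yet terminal",
        (True,  True):  "target state",
        (False, False): "crisis signal",
    }[key]

print(diagnose(4.2, -5))  # invested employees, reputation/leadership trust issue
```

The number places you in a quadrant; only the paired qualitative follow-up tells you why you are there.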
Survey fatigue is the failure mode of ambitious eNPS programs. Organizations that launch monthly eNPS surveys to 500 employees without a feedback loop see response rates collapse within two cycles — not because employees don't have opinions, but because they've learned their input doesn't produce change.
The conditions that prevent fatigue: short surveys (the eNPS question plus one open-text follow-up is the minimum viable instrument), visible action taken on the previous cycle's most common theme before the next survey launches, and clear communication of what changed and why. Employees who see their qualitative feedback referenced by name in a department update complete the next survey at materially higher rates.
Timing matters as much as frequency. Event-triggered eNPS — collected after a major organizational change, leadership transition, or policy announcement — produces higher signal-to-noise than calendar-triggered surveys. Continuous collection with short check-ins following specific events closes The Department Average Illusion faster than quarterly surveys of the full population.
Average eNPS benchmarks vary by industry and methodology source. Technology companies typically range from 15–35. Professional services organizations range from 10–25. Healthcare ranges from 5–20. Manufacturing typically ranges from 0–15. These benchmarks come from self-reported data and vary significantly by survey methodology — treat them as orientation, not targets.
The benchmark that matters most is your own trend line. An eNPS of 12 improving 8 points per quarter is a healthier signal than an eNPS of 30 that hasn't moved in six cycles. What good looks like in a working eNPS program: response rate above 70% (below that, you're measuring motivated employees, not the full population), department-level views available as standard output, qualitative themes extracted and summarized within 48 hours of survey close, and one visible action taken before the next collection cycle.
Average eNPS for tech companies sits around 20–30 by most benchmarks. For nonprofits and mission-driven organizations, averages tend to run lower — 10–20 — because expectations for working conditions are shaped by mission alignment rather than market compensation. Employees who joined for mission express lower eNPS when leadership decisions appear to contradict that mission, regardless of compensation level. That dynamic is invisible in aggregate scores and visible only in department-level qualitative data.
eNPS stands for Employee Net Promoter Score. eNPS meaning: a measurement of whether employees would recommend your organization as a place to work, on a 0–10 scale. Scores 9–10 are Promoters, 0–6 are Detractors, 7–8 are Passives. eNPS = %Promoters minus %Detractors. It is a leading indicator of retention risk, culture health, and organizational trust — most actionable when segmented by department rather than reported as a company-wide average.
A good eNPS score is above 20 by most industry benchmarks. Scores of 10–20 are considered acceptable, 20–40 good, and above 40 excellent. Average eNPS for tech companies typically ranges from 20–30. These benchmarks vary by industry, survey methodology, and population sampled. A rising eNPS trend over three or more cycles is more valuable than any single score — trajectory reveals whether your feedback loop is working.
A negative eNPS means more employees are Detractors (0–6) than Promoters (9–10). It is a warning signal indicating that the majority of your workforce would not recommend the organization to peers. Negative eNPS is not unusual in organizations undergoing major change, leadership transitions, or compensation restructuring. The critical variable is whether qualitative follow-up data exists to identify the specific cause — and whether the organization has a feedback loop to respond within one cycle.
The Department Average Illusion is the structural failure that occurs when eNPS is reported as a single company-wide number, making acceptable averages out of internal distributions that are crisis-level. A company eNPS of 12 can mask departments at -35, invisible until turnover makes the signal impossible to ignore. Sopact Sense closes the illusion by defaulting to department-level views with qualitative context — not requiring a separate analysis request.
Average eNPS for tech companies typically ranges from 20–30 by most benchmarks, with high-growth companies and recent-IPO organizations often reporting lower scores during transition periods. These figures come from self-reported data and vary significantly by company size, location, and survey methodology. A technology company below 10 has a retention risk signal worth investigating — particularly if concentrated in specific departments or role levels.
eNPS measures one thing: recommendation likelihood. Engagement surveys measure multiple dimensions — communication, recognition, workload, career growth, manager relationship — and produce composite scores. eNPS is faster to collect, directly comparable across time periods and organizations, and more useful as a leading indicator. Engagement surveys produce richer diagnostic data. The two work best together: eNPS as the continuous pulse check, engagement surveys as the annual diagnostic.
Collect eNPS at whatever frequency your organization can actually respond to. A quarterly eNPS program with visible follow-up actions outperforms a monthly program with no visible response. Event-triggered collection — after major policy changes, leadership transitions, or reorganizations — produces higher signal-to-noise than calendar-triggered surveys. The minimum viable cadence: twice per year, segmented by department, with qualitative follow-up and one visible response per cycle.
The most actionable eNPS follow-up question is: "What is the primary reason for your score?" This open-text question, analyzed across the detractor population, reveals the specific cause behind the number. Secondary follow-up options: "What would it take to move your score to a 9 or 10?" (forward-facing, solution-oriented) or "What is the one thing leadership could change that would have the biggest positive impact?" Both produce more actionable qualitative data than sentiment-only analysis.
Sopact Sense collects eNPS with persistent unique employee IDs, enabling segment-level views by department, location, tenure band, and role level as a default output. Open-text follow-up responses are analyzed by Intelligent Column as responses arrive — extracting theme frequencies across promoter, passive, and detractor segments without manual coding. Detractor lists with full employee history are available for follow-up within 48 hours, and cycle-over-cycle trends update automatically without record-matching.
eNPS works for nonprofits with one important context adjustment: mission alignment is a stronger predictor of recommendation likelihood than compensation in mission-driven organizations. Employees who joined for mission express lower eNPS when leadership decisions contradict that mission. This dynamic is invisible in aggregate company scores and visible only in department-level qualitative themes — which is why segmented eNPS with qualitative follow-up is more valuable for nonprofits than a single organization-wide score.
eNPS (Employee Net Promoter Score) asks employees whether they would recommend the organization as a place to work. Customer NPS asks customers or stakeholders whether they would recommend the product, service, or program. Both use the same formula: %Promoters minus %Detractors. Benchmarks differ — customer NPS averages tend to run higher than eNPS for the same organization. Both are most actionable at the segment level, with qualitative follow-up data, and tracked longitudinally across three or more cycles.