NPS Calculation Formula: How to Calculate Net Promoter Score (2026 Guide)
Your NPS came back as +42. Leadership called it "solid." Three weeks later, your highest-value customer churned — an account that had scored a 6 on that same survey. Nobody followed up because the survey was anonymous. Nobody flagged the account because the score was reported as one number. The formula worked. The program failed.
Last updated: April 2026
The NPS calculation formula is NPS = % Promoters minus % Detractors, where Promoters score 9–10 and Detractors score 0–6 on a 0–10 recommendation scale. Passives (7–8) are counted in the total but excluded from the subtraction. That math takes thirty seconds. The system that connects each score to a specific customer, segment, and touchpoint — and triggers action before churn — is what actually moves the score. This guide covers both: the formula and benchmarks first, then the program design that makes the number matter.
NPS Calculation · Formula & System
NPS = % Promoters − % Detractors. The formula is trivial.
The system around the formula — continuous collection, persistent stakeholder IDs, segment architecture, closed-loop action — is what actually moves the score. This page explains both: the math, then the program that makes the math matter.
Overall +42 — the number that hides four different programs
Segment NPS vs. aggregate: same formula, four realities
Ownable Concept
The Aggregate Illusion
When NPS is calculated as one number across all customers, touchpoints, and time periods, the score hides more than it reveals. A +42 average can mask an onboarding NPS of +8 — the real churn driver — because the calculation collapses segment-level signal into a single integer. The formula is trivial. Escaping the Aggregate Illusion requires a system, not a spreadsheet.
6–7 wk · Typical lag from survey launch to actionable insight in quarterly NPS programs
+30 pt · Typical gap between onboarding NPS and renewal NPS — hidden in aggregate
3–5× · Revenue value of customers who actively recommend vs. those who don't
< 50 · Response count below which NPS swings ±10 points from statistical noise alone
What is the NPS calculation formula?
The NPS calculation formula is NPS = % Promoters − % Detractors. Promoters are respondents who score 9 or 10 on the "How likely are you to recommend us?" question. Detractors score 0–6. Passives (7–8) count toward the response base but are excluded from the subtraction. The resulting score ranges from −100 (all detractors) to +100 (all promoters) and is reported as a whole number, not a percentage.
The formula does not change across industries, response volumes, or NPS types (customer NPS, employee NPS, beneficiary NPS). What changes is the architecture around it — the touchpoints where you collect, the segments you can disaggregate by, and the speed at which signal reaches the person who can act on it.
How do you calculate NPS step by step?
Calculate NPS in four steps: (1) classify every respondent as Promoter (9–10), Passive (7–8), or Detractor (0–6); (2) divide Promoters by total respondents to get Promoter %; (3) divide Detractors by total respondents to get Detractor %; (4) subtract Detractor % from Promoter %. The result is your NPS, expressed as a whole number between −100 and +100.
The Passive count affects your denominator but never your numerator. This is the most common arithmetic mistake in NPS calculation — teams either forget to include Passives in the total (which inflates their score) or they subtract Passives from Promoters (which is not the formula). Only Promoters and Detractors enter the subtraction.
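The four steps translate directly into a few lines of code. A minimal Python sketch (the function name and sample scores are illustrative):

```python
def nps(scores):
    """Net Promoter Score from a list of 0-10 ratings."""
    total = len(scores)
    if total == 0:
        raise ValueError("NPS is undefined with zero responses")
    promoters = sum(1 for s in scores if s >= 9)   # step 1: classify 9-10
    detractors = sum(1 for s in scores if s <= 6)  # step 1: classify 0-6
    # Steps 2-4: Passives (7-8) stay in the denominator, never the numerator.
    return round((promoters - detractors) / total * 100)

# 50 promoters, 30 passives, 20 detractors out of 100 respondents
print(nps([10] * 50 + [8] * 30 + [5] * 20))  # 30
```

Note that the Passives change the result only through the denominator: drop the 30 Passives from the list above and the same Promoter and Detractor counts yield +43 instead of +30.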
NPS calculation example with 200 respondents
Out of 200 survey respondents: 120 scored 9 or 10 (Promoters), 40 scored 7 or 8 (Passives), and 40 scored 0–6 (Detractors). Promoter % = 120 ÷ 200 = 60%. Detractor % = 40 ÷ 200 = 20%. NPS = 60 − 20 = +40. The Passives are visible in the total but never enter the subtraction.
Two edge cases matter. First, the 1–5 scale that survey tools sometimes default to is not NPS — it produces a different distribution and the formula does not apply without a validated transformation. Second, sampling error dominates below roughly 50 responses: your score can shift 10+ points from random noise alone, which is why quarterly programs with small cohorts frequently report "movement" that reflects sample variance, not sentiment change.
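The small-sample caveat is easy to see by simulation. The sketch below draws repeated 50-response samples from a hypothetical population whose true NPS is +40 (60% promoters, 20% detractors) and shows how far individual scores wander on identical underlying sentiment:

```python
import random

random.seed(42)  # reproducible illustration

def sample_nps(p_promoter, p_detractor, n):
    """Draw n respondents from a fixed 'true' distribution, return sample NPS."""
    promoters = detractors = 0
    for _ in range(n):
        r = random.random()
        if r < p_promoter:
            promoters += 1
        elif r < p_promoter + p_detractor:
            detractors += 1
    return round((promoters - detractors) / n * 100)

# True NPS is +40; each draw plays the role of one quarterly survey.
samples = [sample_nps(0.60, 0.20, 50) for _ in range(10)]
print(samples)
print("spread:", max(samples) - min(samples))  # frequently well over 10 points
```

Every difference between those ten "quarterly scores" is pure sampling noise: the underlying population never changed.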
NPS Calculator · Formula & Benchmarks
The complete NPS calculation reference
Formula, worked example, score interpretation, and 2026 industry benchmarks — in one place.
Net Promoter Score
NPS = % Promoters − % Detractors
Result ranges from −100 (all detractors) to +100 (all promoters)
9–10 · Promoters · included in the subtraction
7–8 · Passives · counted in the base, not the subtraction
0–6 · Detractors · included in the subtraction
Step-by-step NPS calculation
Four steps, always in this order. Passives affect your denominator, never your numerator.
1. Classify responses: bucket every 0–10 response into Promoters (9–10), Passives (7–8), or Detractors (0–6).
2. Promoter %: divide Promoter count by total respondents (Passives included in the denominator). P ÷ Total × 100.
3. Detractor %: same method for Detractors. D ÷ Total × 100.
4. Subtract: Promoter % minus Detractor %. The result is reported as a whole number between −100 and +100.
General NPS interpretation tiers. Industry context shifts these thresholds — see the benchmarks below.
+70 or above · World-class (Apple, USAA in specific segments)
+50 to +70 · Excellent (top-quartile performance)
+30 to +50 · Great (strong and sustainable)
+10 to +30 · Good (room to move upward)
0 to +10 · Needs work (structural issues likely)
Below 0 · Critical (more detractors than promoters)
Industry NPS benchmarks · 2026
Average ranges and top-performer thresholds across common industries. Use as a baseline — your own trend over time matters more.
Average & top-performer NPS by industry
Compiled from 2025–2026 industry reports
Industry · Average range · Top performers
Retail · ~50 · 60+
Financial services · ~45 · 55+
E-commerce · 35–50 · 60+
Technology / SaaS · 40–55 · 70+
Hospitality / travel · 40–55 · 60+
Automotive · 40–55 · 60+
Telecommunications · 20–30 · 40+
Healthcare / telehealth · 30–45 · 50+
B2B services · 30–45 · 50+
Nonprofit / beneficiary · 30–50 · 60+
Employee NPS (eNPS) · 10–30 · 40+
How to use these benchmarks: Compare with direct competitors in your region, not the global aggregate. Benchmarks shift year-over-year with technology, economic conditions, and customer expectations — your own cohort trend is the more reliable signal.
Key Insight
Even a negative score is a starting point
In 2003, Charles Schwab discovered its corporate NPS was −35. Rather than bury the finding, leadership treated it as a catalyst for customer experience transformation. By systematically closing the feedback loop with detractors — contacting named accounts, acting on open-ended "why" responses, and tracking whether interventions moved the score — they eventually converted enough detractors into promoters to reshape their competitive position.
Your current score matters less than your commitment to closing the loop. A −35 with a named detractor list and follow-up protocol is substantially more valuable than a +45 averaged from anonymous responses nobody can act on.
What is a good NPS score? Benchmarks by industry
Any NPS above 0 means more respondents would recommend you than warn others away. Scores above +50 are excellent. Scores above +70 are world-class and rare — achieved by companies like Apple and USAA in specific segments. The benchmark that matters most is your own trend over time, disaggregated by segment, not a competitor's published aggregate.
Industry context shifts the baseline substantially. A +20 in cable or telecommunications is strong; the same +20 in e-commerce or SaaS signals a problem. Published "industry averages" are worth less than they appear — they average customer NPS, employee NPS, transactional NPS, and relational NPS into a single headline number that nobody's program actually measures. Your trend against your own prior quarter, broken out by segment, is the only benchmark that drives decisions.
NPS Best Practices · 2026
Six principles that separate programs stuck at +30 from programs hitting +70
The formula is the same for everyone. The operational model around it is what actually moves the score. These are the six design decisions that drive the difference.
01
Cadence
Survey at moments, not on a calendar
Survey at onboarding completion, 90-day check-in, support close, and 60 days pre-renewal. A single annual survey produces a score that is twelve months stale the moment it lands in the deck.
Most teams keep the annual cadence because "that's how we've always done it" — and lose every action window.
02
Identity
Link every response to a persistent ID
Anonymous NPS produces an untraceable score. You learn the aggregate moved but not which account, cohort, or manager caused it. Every response should carry a stakeholder ID back to a known record.
Anonymous is the default in most survey tools — and the single largest reason NPS data cannot drive action.
03
Segment
Disaggregate at collection, not later
Structure tier, cohort, touchpoint, and demographic segments into the collection design. Retrofitting segmentation from an anonymous export is the three-week delay that makes most quarterly NPS programs unactionable.
Post-hoc segmentation is where open-ended qualitative signal disappears entirely.
04
Qualitative
Pair every rating with "why"
The open-ended follow-up — "What would make this a 10?" — is where the actionable signal lives. The rating tells you the size of the problem. The open-ended tells you what the problem actually is.
Without AI-native theme extraction, open-ended responses take 3–4 weeks to code — and usually get skipped.
05
Action
Close the loop in days, not weeks
A detractor contacted within 48 hours is substantially more likely to retain. A detractor who is never contacted is a churn event that has not yet been recorded. Alerts must route to owners the moment responses arrive.
Programs that skip the loop are scoring rituals, not retention instruments.
06
Trend
Report cohort trend, not point-in-time average
The aggregate number moves for dozens of reasons. The same cohort over multiple cycles moves for one. Cohort trend lines — Q1 → Q2 → Q3 — are where real program effects become visible above sample noise.
Below 50 responses per segment, quarter-over-quarter NPS swings 10+ points from pure noise.
Apply any four of these six and your NPS becomes an operational signal. Apply none and it remains a number nobody acts on until it's too late.
Why one NPS number isn't enough: The Aggregate Illusion
The structural problem with traditional NPS reporting is that the formula collapses everything into one integer. You are averaging together the loyalty of customers who just onboarded, customers who have used your product for three years, enterprise accounts worth $200K, and trial users on day seven. Then leadership asks the question every NPS team fears: "So what do we do about it?"
The Aggregate Illusion is the moment a rich, disaggregated loyalty signal becomes a single dashboard metric. The score is technically accurate. It is also nearly useless as a decision tool because nothing in the calculation tells you which segment drives the score down, which touchpoint creates detractors, or which cohort is about to churn. Companies with consistently high NPS do not have a better formula — they have a better architecture: continuous collection at specific touchpoints, persistent stakeholder IDs linking every response to a known record, segment structure built at collection, and closed-loop action within days. Learn how this connects to program evaluation workflows and impact measurement design.
Three NPS Program Types · One System
Customer, beneficiary, or employee — the Aggregate Illusion breaks the same way
Select your context to see how a continuous NPS system replaces the annual-survey model in practice.
A VP of Customer Success runs a mid-market SaaS company — 340 accounts, $8M ARR. Quarterly NPS in a Google Form. Response rate 19%. Leadership sees one number. Customer Success cannot act because the survey is anonymous and the scores arrive six weeks after the customer moment that drove them. The fix is architectural, not procedural — collection, identity, and segment design decided before the first survey sends.
01
Onboarding NPS
Triggered at Day 14 — captures the moment when churn risk forms
02
90-Day Health Check
Transactional NPS tied to usage quartile — not a calendar survey
03
Pre-Renewal NPS
60 days before contract end — last action window before renewal
Traditional stack
SurveyMonkey + spreadsheet + CSM email
Quarterly snapshot — six weeks from send to boardroom
Anonymous responses — detractors cannot be contacted
Segmentation retrofitted manually from an export
Open-ended responses read sporadically, never coded
With Sopact Sense
Continuous, identified, disaggregated at collection
Unique account IDs assigned at first contact — every response linked
Three transactional triggers — onboarding, 90-day, pre-renewal
Segment structure built at collection — tier, usage, cohort
Qualitative themes extracted across open-ended responses in minutes
For impact funds and portfolio-of-companies contexts: investee customer NPS rolls into portfolio monitoring — the same architecture that generates LP-ready reports overnight.
A Program Director runs a workforce development nonprofit in Chicago — 3 programs, 280 participants per year. Two funders now require NPS in impact reporting. The risk: if NPS is reported as one aggregate, the program will never see whether justice-involved participants are experiencing the program differently from long-term unemployed participants. The Aggregate Illusion, applied to beneficiary data, is an equity blind spot.
01
Mid-Program NPS
Week 6 — catches equity gaps while there's still time to adjust
02
Completion NPS
Week 12 — measures effect of mid-program intervention
03
90-Day Follow-Up
Employment outcome + experience — ties NPS to real impact
Traditional stack
Paper exit surveys + Excel aggregation
One anonymous NPS reported in annual grant narrative
Demographic segmentation requires separate data collection
Equity gaps visible only after program ends — too late to fix
Funder A and Funder B get the same unmodifiable report
With Sopact Sense
Disaggregated at intake, actionable mid-program
Unique participant IDs assigned at intake — demographics linked
Mid-program equity surfacing — adjust Week 7 based on Week 6 signal
Funder-specific reports from one collection process — aggregate or segmented
Open-ended "why" analysis — detractor themes surfaced in minutes
Primary fit for this page: Beneficiary NPS inside a full nonprofit program evaluation workflow — intake, outcomes, follow-up — with persistent participant IDs that satisfy funder reporting and reveal equity gaps in real time.
An HR Director oversees a 600-person healthcare organization with 34% annual turnover in nursing and support staff. The annual engagement survey produces one number that nobody can act on until the following year. Quarterly eNPS segmented by department, manager, and tenure band changes this entirely — but only if the architecture is right from Day 1: employee IDs linked at every pulse, manager-level visibility without breaking individual privacy, and early-warning flags that fire before exit interviews.
01
Quarterly eNPS Pulse
Full workforce — 3-minute survey with one open-ended "why"
02
Manager view
Team eNPS + themes — without access to individual responses
Traditional stack
Annual engagement survey + action plans
One eNPS number per year — no tenure or department cut
Open-ended comments read once, never coded systematically
No early warning — at-risk teams visible only in exit interviews
New-hire experience invisible until first-year turnover is logged
With Sopact Sense
Quarterly cadence, manager-level visibility
Employee IDs link every pulse to department, manager, tenure band
Threshold alerts — any team eNPS drop >10 points triggers HR notice
New-hire vs. tenured gap visible Quarter 1 — onboarding fix in Quarter 2
Retention ROI case — eNPS improvement tied to turnover reduction dollars
For training programs measuring trainee NPS alongside learning outcomes: the same Sopact architecture runs pre-post survey design, skill transfer measurement, and trainee recommendation scoring as one connected workflow.
NPS tool comparison: traditional stack vs. Sopact Sense
Most dedicated NPS tools — Delighted, AskNicely, SurveyMonkey's NPS feature, Medallia at the enterprise tier — were designed around the anonymous, quarterly, consumer-CX use case. They are genuinely effective when that is the actual need: transactional NPS for a retail brand with no requirement to contact detractors by name. Where they break down is the moment your program requires identity, disaggregation, open-ended theme extraction, or integration with a broader measurement workflow — beneficiary NPS, employee NPS, investee portfolio NPS, or any context where action on specific responses matters more than the aggregate trend.
Sopact Sense is a data-collection origin system, not an NPS-specific tool. Every respondent receives a unique stakeholder ID at first contact — whether that is a customer, a program participant, or an employee. That ID persists across every subsequent touchpoint, so disaggregation by tier, cohort, demographic, or touchpoint is automatic rather than retrofitted from an export. Qualitative "why" responses flow through the same record as the numeric rating, and Intelligent Column theme extraction produces coded themes by segment in minutes rather than the three to four weeks of manual coding that traditional NPS programs absorb before they can act.
NPS Tool Comparison · 2026
Where dedicated NPS tools break — and what the architecture actually needs to include
Four structural risks that determine whether your NPS program produces action or just arithmetic. Then the capability comparison itself.
Risk 01
Anonymous collection
Dedicated NPS tools default to anonymous. The aggregate moves; no account, cohort, or manager can be flagged.
Every detractor becomes unreachable.
Risk 02
Quarterly cadence lock-in
Most NPS tools are built around a calendar survey, not the customer moment that drove the score.
Transactional triggers require a different architecture.
Risk 03
Open-ended data stranded
"Why?" responses accumulate in exports nobody codes. Themes take 3–4 weeks per cycle.
The signal that drives action is the one that doesn't get read.
Risk 04
Post-hoc segmentation
Without identity at collection, segment cuts require manual reconciliation — three weeks after the moment.
The decision window has already closed.
Capability Comparison
Traditional NPS stack vs. Sopact Sense
Capability
Traditional NPS tools Delighted · AskNicely · SurveyMonkey NPS
Sopact Sense Data-collection origin system
Collection & Identity
Persistent stakeholder ID
Links every NPS response to a known record
Anonymous by default
Requires CRM integration + email key matching after the fact
Unique ID at first contact
Assigned at intake — persists across every subsequent touchpoint automatically
Transactional survey triggers
Tied to the customer moment, not a calendar
Quarterly or "relational" default
Transactional triggers supported via integrations — requires configuration
Native touchpoint architecture
Onboarding, milestone, renewal, exit — configured in setup, not bolted on
Segment structure at collection
Tier, cohort, demographic, usage — built in from day 1
Retrofitted from exports
Segmentation happens in a BI tool after the fact — 2–3 week lag
Structured at intake
Every response arrives already tagged — drill-down is immediate
Analysis & Action
Open-ended theme extraction
"What would make this a 10?" — coded to themes
Manual or basic sentiment only
Most tools offer word-cloud or sentiment — not theme taxonomy by segment
AI-native theme analysis
Hundreds of open-ended responses → themes by segment and score tier in minutes
Detractor alert routing
Owner receives flagged account within 48 hours
Email notifications
Generic alerts — owner must open the tool to see context
Context-linked alerts
Flag includes score, open-ended reason, segment tag, prior engagement history
Qualitative + quantitative on one record
Rating and reason-why linked to the same ID
Two separate datasets
Rating and comment live in different exports — joined manually in Excel
Single record
Quantitative score and qualitative "why" never separate — follow the ID
Fit for Cross-Stakeholder NPS (customer + beneficiary + employee)
Beneficiary / participant NPS
Designed for nonprofit program contexts
Not the native use case
Customer-experience oriented; nonprofit use requires workarounds
First-class context
Participant IDs, demographic disaggregation, funder reports built in
Integration with program evaluation
NPS alongside outcomes, training, or impact measurement
NPS-only scope
Dedicated tools don't extend beyond the NPS question itself
Part of a full evaluation system
NPS + pre-post + outcomes + qualitative — one participant record
Pricing transparency
Published starting price for small-to-mid programs
Varies widely
Delighted from ~$224/mo; AskNicely custom; Medallia enterprise-only
$1,000/month
Full platform — not an NPS-only feature tier
Dedicated NPS tools are genuinely effective for anonymous consumer-CX measurement at high volume. If that is the actual need, they work well and Sopact Sense is not the right fit.
When your NPS program requires named detractors, segment cuts at collection, and qualitative themes by cohort — the architecture is the deliverable, not the score itself. That's the Sopact Sense wedge.
How do you calculate NPS in Excel?
To calculate NPS in Excel: put your 0–10 scores in column A, then use =COUNTIF(A:A,">=9")/COUNT(A:A)*100 for Promoter %, =COUNTIF(A:A,"<=6")/COUNT(A:A)*100 for Detractor %, and subtract the second from the first. (COUNT counts only numeric cells, so a header in A1 does not inflate the denominator; COUNTA would count it.) The result is your NPS. For Google Sheets, the identical formulas work — no Excel-specific functions are required.
For segment-level analysis, use a pivot table with the segment field as rows and three calculated columns: Promoter count, Detractor count, and NPS. Excel-based NPS breaks down at two points: first, when response volume exceeds a few thousand rows, pivot recalculation becomes slow; second, when you need to merge open-ended text with score segmentation, Excel has no native qualitative analysis — you end up copying responses into a separate document for manual coding. That manual coding is the 3–4 week delay that makes most quarterly NPS programs unactionable.
How to calculate employee NPS (eNPS)
Employee NPS uses the identical formula — eNPS = % Promoters − % Detractors — applied to the question "How likely are you to recommend [Organization] as a place to work?" on a 0–10 scale. Promoters (9–10), Passives (7–8), and Detractors (0–6) follow the same bands. The interpretation differs: eNPS typically runs lower than customer NPS because employees are tougher graders of their own workplace, so a +20 eNPS is respectable and a +40 is strong.
The most valuable eNPS segmentation is by department, manager, and tenure band (0–12 months, 1–3 years, 3+ years). New-hire eNPS almost always runs 15–25 points below tenured-staff eNPS, which exposes onboarding as a primary turnover driver — a signal that is invisible in the aggregate score. See pre-post survey design for the collection architecture that makes this visible in the first quarter, and training evaluation workflows for how eNPS integrates with workforce development programs.
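As a sketch of that tenure-band cut (the pulse data, band edges, and labels are illustrative), pandas can bucket tenure and apply the identical formula per band:

```python
import pandas as pd

# Illustrative eNPS pulse: months of tenure plus the 0-10 rating
df = pd.DataFrame({
    "tenure_months": [2, 8, 6, 14, 30, 20, 40, 50],
    "score":         [6, 7, 5, 9, 10, 9, 9, 8],
})

# Bucket tenure into the bands discussed above
bands = pd.cut(df["tenure_months"], bins=[0, 12, 36, 600],
               labels=["0-12 mo", "1-3 yr", "3+ yr"])

def enps(scores: pd.Series) -> int:
    """Same formula as customer NPS: % Promoters minus % Detractors."""
    return round(((scores >= 9).mean() - (scores <= 6).mean()) * 100)

print(df.groupby(bands, observed=True)["score"].apply(enps))
# In this toy data the new-hire band runs far below tenured staff
```

The per-band scores are exactly the gap the aggregate hides: one workforce average would blend the struggling new-hire cohort into the tenured majority.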
NPS on a 1–5 scale vs. the standard 0–10 scale
The standard NPS scale is 0–10, not 1–5 — and the formula does not translate directly. Some survey tools offer "NPS on a 5-point scale" as a shortcut, but the resulting score is not comparable to a true NPS benchmark because the distribution of responses and the Promoter/Passive/Detractor bands are fundamentally different.
If you are locked into a 1–5 scale for legacy reasons, the closest approximation treats 5 as Promoter, 4 as Passive, and 1–3 as Detractor, then applies the standard subtraction. The number you get is internally comparable over time — you can see your own trend — but it cannot be compared to published 0–10 NPS benchmarks without explicit conversion, and vendors who claim otherwise are selling a false equivalence.
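A sketch of that approximation in Python, with the non-standard mapping labeled explicitly:

```python
def nps_from_5pt(scores):
    """Approximate an NPS-style score from a legacy 1-5 scale.
    Non-standard mapping: 5 = Promoter, 4 = Passive, 1-3 = Detractor.
    Comparable only to itself over time, never to 0-10 benchmarks."""
    total = len(scores)
    promoters = sum(1 for s in scores if s == 5)
    detractors = sum(1 for s in scores if s <= 3)
    return round((promoters - detractors) / total * 100)

print(nps_from_5pt([5, 5, 4, 3, 2]))  # 0: two promoters offset two detractors
```

Keep the scale label in every report built on this function; an internal trend line is useful, but presenting the output next to 0–10 benchmarks is exactly the false equivalence described above.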
Frequently Asked Questions
What is the NPS formula?
The NPS formula is % Promoters minus % Detractors. Promoters score 9–10, Detractors score 0–6, and Passives (7–8) are counted in the response base but excluded from the subtraction. The result ranges from −100 to +100 and is reported as a whole number, not a percentage.
How do you calculate NPS?
Calculate NPS by dividing the number of Promoters (scores 9–10) by total respondents, doing the same for Detractors (scores 0–6), and subtracting the Detractor percentage from the Promoter percentage. If 60% of 200 respondents are Promoters and 20% are Detractors, your NPS is +40.
What is a good NPS score?
Any NPS above 0 is positive — more respondents would recommend you than warn others away. Scores above +50 are excellent. Scores above +70 are world-class. Industry context shifts the baseline: a +20 in cable is strong; a +20 in e-commerce signals a problem. Your own trend over time is the benchmark that matters.
How is NPS calculated with passives?
Passives (scores 7–8) count toward the total response base but are never included in the subtraction. If you have 100 respondents — 50 Promoters, 30 Passives, 20 Detractors — Promoter % is 50%, Detractor % is 20%, and NPS is +30. The Passive count affects your denominator but never your numerator.
What is The Aggregate Illusion in NPS?
The Aggregate Illusion is the moment NPS is calculated as one number across all customers, touchpoints, and time periods — collapsing segment-level signal into a single integer that describes everything and changes nothing. A +42 average can hide an onboarding NPS of +8, the real churn driver. The formula is trivial; escaping the Aggregate Illusion requires a system.
How do you calculate NPS in Excel?
In Excel, use =COUNTIF(A:A,">=9")/COUNT(A:A)*100 for Promoter %, =COUNTIF(A:A,"<=6")/COUNT(A:A)*100 for Detractor %, and subtract the second from the first. COUNT counts only numeric cells, so a header row does not distort the denominator. For segment-level cuts, build a pivot table with the segment as rows and Promoter/Detractor counts as calculated columns.
What is eNPS and how is it calculated?
Employee NPS uses the same formula — % Promoters − % Detractors — applied to the question "How likely are you to recommend [Organization] as a place to work?" on a 0–10 scale. eNPS typically runs lower than customer NPS, so +20 is respectable and +40 is strong. Segment by department, manager, and tenure to see what the aggregate hides.
How large should my NPS sample size be?
Below 50 responses, NPS can shift 10+ points from statistical noise alone, and even at 200 responses the 95% margin of error is typically around ±10 points for common score distributions. For smaller cohorts, report the trend across multiple survey cycles rather than treating any single score as a precise measurement — the direction of movement is more reliable than the absolute number.
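The noise threshold follows from the standard error of the score. Treating each response as +1 (Promoter), 0 (Passive), or −1 (Detractor), the sample variance is p + d − (p − d)², where p and d are the Promoter and Detractor proportions, and the 95% margin is roughly 1.96 × √(variance / n) × 100 points. A sketch, using an illustrative 60/20 split:

```python
from math import sqrt

def nps_margin(promoter_pct, detractor_pct, n, z=1.96):
    """Approximate 95% margin of error, in NPS points, for a sample of n.
    Each response scored +1 / 0 / -1; variance = p + d - (p - d)^2."""
    p, d = promoter_pct / 100, detractor_pct / 100
    variance = p + d - (p - d) ** 2
    return z * sqrt(variance / n) * 100

print(round(nps_margin(60, 20, 50)))    # 22 -> a "true" +40 can read as +18 or +62
print(round(nps_margin(60, 20, 200)))   # 11
print(round(nps_margin(60, 20, 1000)))  # 5
```

The exact margin depends on your actual promoter/detractor split, but the shape of the curve is general: halving the margin requires roughly quadrupling the sample.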
What's the difference between NPS on a 1–5 scale and 0–10 scale?
Standard NPS uses a 0–10 scale with Promoters at 9–10, Passives at 7–8, and Detractors at 0–6. The 1–5 scale is a different instrument — the response distribution and band thresholds are not equivalent, so a "1–5 NPS" cannot be directly compared to published 0–10 benchmarks. Use 0–10 unless a legacy system prevents it.
How often should you measure NPS?
Relational (annual) NPS has been largely replaced by transactional NPS tied to specific moments — onboarding completion, support ticket closure, renewal, program milestones. Transactional NPS runs continuously and produces actionable signal within days, not twelve months. Aggregating transactional scores across touchpoints into a single annual number re-introduces The Aggregate Illusion.
How much does NPS measurement software cost?
Dedicated NPS tools range from free tiers (Google Forms, basic SurveyMonkey) through $200–$3,000/month (Delighted, AskNicely) to enterprise contracts ($30K–$150K/year for Qualtrics or Medallia). Sopact Sense starts at $1,000/month and includes persistent stakeholder IDs, qualitative theme extraction, and segment architecture that dedicated NPS tools treat as add-ons or miss entirely.
Can Sopact Sense replace a dedicated NPS tool?
Yes — Sopact Sense runs customer, beneficiary, and employee NPS programs with structural advantages dedicated NPS tools lack: persistent stakeholder IDs linking every response to a known record, disaggregation built at collection rather than retrofitted, and qualitative theme extraction that reads open-ended responses in minutes rather than weeks.
Ready to rebuild?
Start with the system, not the survey
Most NPS tools let you send the survey on Monday. Sopact Sense lets you know, by Friday, which detractor to contact, why they rated a 6, and whether the intervention moved their next score.
Persistent stakeholder IDs — every response linked to a known record
Transactional triggers at every touchpoint — not a calendar survey
Qualitative + quantitative on one record — themes by segment in minutes