
NPS vs CSAT: Key Differences, When to Use Each, and How to Link Them (2026 Guide)

Your Tuesday morning data review shows overall NPS holding at +38 — steady for three quarters. The CSAT for your last two workshops just dropped from 4.1 to 3.5 out of 5. Leadership wants to know which number is the right one. It's the wrong question. NPS and CSAT are not competing metrics and they are not interchangeable. They measure different things at different moments, and a program that can only report one has a blind spot the other metric was designed to cover.

Last updated: April 2026

NPS measures relationship-level loyalty on a 0–10 scale. CSAT measures transactional satisfaction on a 1–5 scale. NPS is a lagging indicator of whether the overall relationship is at risk; CSAT is a leading indicator of where friction is building right now. Teams that run both — linked to the same participant record — have a diagnostic system. Teams that run one have a number. This guide covers what each metric actually measures, how they differ structurally, when to use each, how to use them together, and how to link both to qualitative feedback and churn data.

NPS vs CSAT · 2026 Guide
Two metrics, two questions, one diagnostic gap

NPS measures loyalty at the relationship level. CSAT measures satisfaction at the interaction level. The gap between them is the signal neither metric produces alone — and the reason picking one to report costs more than running both.

The NPS / CSAT Gap, visualized
When the two metrics diverge, the gap is the diagnostic
[Figure: NPS (0–10, relational) holds steady from Q1 to Q3 while CSAT (1–5, transactional) drops, opening the diagnostic gap and the early-warning window, before NPS catches up in Q4–Q5. CSAT leads, NPS lags.]
Ownable Concept
The Single-Signal Trap

Treating NPS and CSAT as interchangeable metrics and picking one to report collapses two structurally different customer intelligence signals into a single blunt score. The cost isn't survey bandwidth — it's a diagnostic blind spot. NPS-only programs miss interaction-level friction; CSAT-only programs miss relationship erosion. Escaping the trap requires infrastructure — shared participant identity across both instruments — more than it requires a bigger survey budget.

  • 0–10: NPS scale — % Promoters (9–10) minus % Detractors (0–6)
  • 1–5: CSAT scale — top-two-box (4–5), typically reported as a percentage
  • 2–3 quarters: typical lag between a CSAT drop and NPS catching up — the early-warning window
  • ~0: marginal cost of adding the second instrument when both live on one linked record

What is NPS?

NPS (Net Promoter Score) is a 0–10 loyalty metric that measures how likely customers or program participants are to recommend an organization to someone else. The full question is "How likely are you to recommend [organization] to a friend or colleague?" Respondents scoring 9–10 are Promoters, 7–8 are Passives, 0–6 are Detractors. The score is calculated as % Promoters minus % Detractors, ranging from −100 to +100. NPS reflects cumulative relationship health — all the interactions, outcomes, and expectations up to the moment of collection.
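For concreteness, here is a minimal sketch of the calculation in Python; the function name and sample scores are illustrative, not a platform API:

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6)."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 10 responses: 4 Promoters, 3 Passives (7-8), 3 Detractors -> 40% - 30% = +10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 5, 3]))  # 10.0
```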

NPS is a lagging signal. It moves slowly because it captures aggregated sentiment, not a single moment. A drop in NPS this quarter often reflects accumulated friction from the previous two or three quarters — which is valuable for strategic direction but unhelpful for identifying the specific interaction causing the problem. See how to calculate NPS step by step for the full formula, worked examples, and industry benchmarks.

What is CSAT?

CSAT (Customer Satisfaction Score) is a 1–5 rating metric that measures satisfaction with a specific interaction, deliverable, or moment in the customer journey. The typical question is "How satisfied were you with [specific interaction]?" on a 5-point scale (very dissatisfied to very satisfied). CSAT is reported either as the average rating or as the percentage of respondents giving a 4 or 5. Because it asks about a defined moment rather than the whole relationship, CSAT changes quickly in response to recent events.
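Both reporting conventions on the same set of 1–5 ratings, as a minimal sketch (illustrative names, not a platform API):

```python
def csat_top_two_box(ratings: list[int]) -> float:
    """CSAT as the percentage of respondents rating 4 or 5 on a 1-5 scale."""
    return 100 * sum(1 for r in ratings if r >= 4) / len(ratings)

def csat_average(ratings: list[int]) -> float:
    """CSAT as the mean rating on a 1-5 scale."""
    return sum(ratings) / len(ratings)

ratings = [5, 4, 4, 3, 5, 2, 4, 5]
print(csat_top_two_box(ratings))        # 75.0 (% giving 4 or 5)
print(round(csat_average(ratings), 2))  # 4.0 (mean rating)
```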

CSAT is a leading indicator. It moves faster than NPS and can surface friction before that friction accumulates into relationship damage — a CSAT drop in week six of a workshop series is often the first warning that the overall program NPS will fall next quarter. The trade-off: CSAT alone tells you nothing about whether respondents would recommend you, which means satisfying interactions can coexist with a broken relationship.

What is the difference between NPS and CSAT?

The core difference is what they measure and when they move. NPS measures loyalty at the relationship level and changes slowly; CSAT measures satisfaction at the interaction level and changes quickly. NPS answers "do they love us?" CSAT answers "did they like this?" Both questions matter, and the answers often diverge — a customer can rate an individual support call 5/5 while giving your company an NPS of 4. The interaction was excellent; the relationship is broken.

Four structural differences matter for program design: scale (NPS uses 0–10, CSAT typically 1–5); timing (NPS runs relationally at milestones, CSAT runs transactionally after specific events); sensitivity (NPS responds to outcomes and perceived value, CSAT to friction in specific touchpoints); and benchmarkability (NPS has standardized industry benchmarks, CSAT benchmarks are less comparable because scale conventions differ). Our guide to calculating NPS covers the formula in depth; this page focuses on the comparison and linkage.

Side-by-Side Comparison · 7 Dimensions
NPS vs CSAT on every dimension that matters

What each measures, how each scales, when each moves, and what each misses on its own.

Loyalty Signal
NPS
Net Promoter Score
Question
"How likely are you to recommend [organization] to a friend or colleague?"

Scale
0–10 rating

Promoters 9–10 · Passives 7–8 · Detractors 0–6

Reported as
% Promoters − % Detractors

Whole number between −100 and +100

Measures
Relationship-level loyalty

Whether someone puts their reputation behind a referral

Timing
Relational — at milestones

Program completion · 90-day follow-up · quarterly pulse

Signal type
Lagging indicator

Moves slowly — reflects accumulated experience

Sensitive to
Outcomes, perceived value, expectations

Less sensitive to individual touchpoint friction

Benchmarks
Standardized, cross-industry

+50 excellent · +70 world-class · varies by industry

Satisfaction Signal
CSAT
Customer Satisfaction Score
Question
"How satisfied were you with [specific interaction or deliverable]?"

Scale
1–5 rating (typical)

Very dissatisfied (1) to very satisfied (5)

Reported as
Top-two-box % (ratings 4–5)

Or average rating — convention varies by org

Measures
Interaction-level satisfaction

How a specific event or deliverable felt

Timing
Transactional — post-event

Within 24–48 hours of the interaction

Signal type
Leading indicator

Moves quickly — reflects current friction

Sensitive to
Specific touchpoints, process friction

Less sensitive to overall relationship health

Benchmarks
Less cross-comparable

Scale conventions differ — own-trend is more reliable

Diagnostic Gap Analysis
The signal neither metric produces alone
Pattern A
NPS stable · CSAT dropping

Recent interactions are creating friction that hasn't yet damaged the relationship. You have a narrow intervention window before NPS catches up next quarter. Early-warning signal.

Pattern B
NPS dropping · CSAT stable

Something earlier — expectations, outcomes, perceived value — is eroding loyalty even though current interactions feel fine. The recent-moment view is misleading. Relationship-level problem.

Pattern C
Both dropping

Confirmed erosion across both dimensions. Priority moves from diagnosis to triage — identify which cohort is most at risk and intervene based on the open-ended comments from both instruments. Active churn risk.

Pattern D
High CSAT · low NPS

Participants like individual sessions but won't recommend the program. Usually an outcome or value problem — the experience is pleasant but not producing what respondents came for. The hidden loyalty problem.

Running NPS and CSAT on a shared participant ID is what makes these four patterns visible at the individual level — not just as aggregate averages on two disconnected dashboards.
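To make the four patterns concrete, here is a minimal sketch of how they could be flagged programmatically. The thresholds (5 NPS points, 0.3 CSAT points, CSAT ≥ 4.0, NPS < 20) are illustrative assumptions to tune per program, not industry standards:

```python
def gap_pattern(nps_now: float, nps_prev: float,
                csat_now: float, csat_prev: float) -> str:
    """Classify NPS/CSAT divergence into the four diagnostic patterns.
    All thresholds below are illustrative assumptions, not benchmarks."""
    nps_dropping = (nps_now - nps_prev) <= -5
    csat_dropping = (csat_now - csat_prev) <= -0.3
    if csat_dropping and not nps_dropping:
        return "A: NPS stable, CSAT dropping (interaction friction, early warning)"
    if nps_dropping and not csat_dropping:
        return "B: NPS dropping, CSAT stable (relationship-level erosion)"
    if nps_dropping and csat_dropping:
        return "C: both dropping (active churn risk, triage)"
    if csat_now >= 4.0 and nps_now < 20:
        return "D: high CSAT, low NPS (hidden loyalty/outcome problem)"
    return "No divergence flagged"

print(gap_pattern(38, 40, 3.5, 4.1))  # Pattern A, the workshop scenario above
```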

See the linked view →

When should you use NPS vs CSAT?

Use NPS when you need a relationship-level signal that can be benchmarked against industry peers, reported to a board or funder as a loyalty indicator, and tracked as a trend line over multiple quarters. NPS is the right instrument at defined relationship milestones: program completion, 90-day post-enrollment, annual renewal, or quarterly pulse on a stable customer base. NPS is the wrong instrument to ask after every interaction — you will exhaust respondents and the trend becomes noisy.

Use CSAT when you need transactional feedback tied to a specific moment — after a support ticket closes, after a workshop session, after onboarding, after a document turnaround. CSAT is the right instrument where the deliverable is discrete and the friction point needs to be named. CSAT is the wrong instrument for measuring whether someone would recommend you, because satisfaction with a single interaction doesn't tell you that.

In practice, most programs need both. The question is not "NPS or CSAT?" but "at which moments do we run each?" For a 12-week workforce program, that typically means CSAT at each of four milestone events plus NPS at program completion and 90-day follow-up — six survey touchpoints, each with clear timing. For a SaaS product, that means CSAT after each support touch plus quarterly relational NPS. See also pre-post survey design for integrating these with outcome measurement.

Can you use NPS and CSAT together?

Yes — running NPS and CSAT together produces the diagnostic signal that neither metric produces alone: the gap between them. When NPS is stable and CSAT is dropping, recent interactions are creating satisfaction friction that hasn't yet damaged the relationship — you have a narrow window to intervene. When NPS is dropping and CSAT is stable, something earlier — expectations, outcomes, perceived value — is eroding loyalty even though current interactions feel fine. Both diagnoses are actionable; neither is visible from one metric alone.

The resource argument against running both ("we don't have bandwidth for two surveys") treats survey length as the binding constraint. In practice, the real constraint is infrastructure: if NPS and CSAT data live in separate systems with no shared participant ID, running both produces two disconnected datasets rather than a linked diagnostic. When both instruments live in the same collection system with persistent stakeholder IDs — as they do in Sopact Sense — the marginal cost of adding the second instrument is close to zero, and the diagnostic value compounds.

Running Both Effectively · 2026
Six principles for running NPS and CSAT together without doubling your workload

The resource argument against running both assumes two separate surveys. On a linked collection architecture, the marginal cost of the second instrument is close to zero — and the diagnostic value compounds. These six principles separate programs that get the benefit from programs that add overhead.

01
Identity
Assign one persistent ID per participant

The same stakeholder ID must travel through every NPS rating, every CSAT score, and every open-ended comment. Without it, linking NPS and CSAT is a manual reconciliation project that gets skipped after the first quarter.

Email-matching after the fact is a fragile workaround, not architecture.
02
Timing
Use different timing for each instrument

CSAT runs transactionally — within 24–48 hours of a specific interaction. NPS runs relationally — at milestones like program completion or 90-day follow-up. Mixing timing conventions makes longitudinal comparison unreliable.

Quarterly NPS after every session exhausts respondents; CSAT at program-end misses interaction-level friction.
03
Qualitative
Pair each rating with one open-ended question

"What's the primary reason for your score?" for NPS. "What would have made this a 5?" for CSAT. One follow-up per instrument — never two. The reason-why is where the actionable signal lives on both.

Multi-question NPS or CSAT surveys can collapse response rates to below 15%.
04
Gap Analysis
Watch the gap, not just each score

NPS stable + CSAT dropping = interaction friction building. NPS dropping + CSAT stable = earlier relationship erosion. High CSAT + low NPS = outcome problem. The gap patterns are the diagnostic — not either number alone.

Reporting each score separately hides the divergence that matters most.
05
Theme Linkage
Read both comment sets as one dataset

If "unclear communication" appears in both NPS comments and CSAT comments for the same participants, that's a systemic theme — not two unrelated complaints. Cross-instrument theme analysis is where linked data earns its keep.

Analyzing NPS comments and CSAT comments separately misses the strongest patterns.
06
Churn Linkage
Link both to exit events

Participants who drop out, don't renew, or disengage have a score history. Linking their last NPS and last CSAT to their exit date reveals which score is the stronger dropout predictor for your specific program type.

Programs that skip this analysis invest retention resources against the weaker signal.

Apply all six and NPS + CSAT become one diagnostic system instead of two disconnected dashboards. Skip identity (principle 01) and the other five can't compensate: you still have two unlinked datasets.
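Principle 06 in practice might look like the following sketch, assuming a pandas DataFrame with hypothetical column names and data:

```python
import pandas as pd

# One row per participant: last score on each instrument before the exit
# window, plus whether they dropped out. All columns are hypothetical.
df = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03", "p04", "p05", "p06"],
    "last_nps":       [9, 3, 8, 2, 7, 4],
    "last_csat":      [4, 4, 5, 2, 3, 3],
    "dropped_out":    [False, True, False, True, False, True],
})

# Point-biserial correlation of each score with dropout: whichever score
# shows the stronger negative correlation is the better early-warning
# signal for this specific program.
exited = df["dropped_out"].astype(int)
print("NPS vs dropout: ", round(df["last_nps"].corr(exited), 2))
print("CSAT vs dropout:", round(df["last_csat"].corr(exited), 2))
```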

See the linked system live →

How to link NPS and CSAT to qualitative feedback and churn data

Link NPS and CSAT by assigning a persistent stakeholder ID at first contact — the same ID travels through every NPS rating, every CSAT score, every open-ended comment, and (when it happens) every churn or dropout event. Without that shared ID, linking is a manual reconciliation project run in Excel through email matching, prone to error, and typically skipped after the first quarter. With a shared ID, compound queries become trivial: "Show me participants whose CSAT dropped more than 0.5 points last month but whose NPS is still above +30 — and list the themes in their CSAT comments."
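Once both instruments share a participant ID, that compound query reduces to a few lines. A sketch, assuming a hypothetical linked extract (at the individual level, the 0–10 NPS rating stands in for the aggregate score; here "still high" means a rating of 8 or above):

```python
import pandas as pd

# Hypothetical linked extract: one row per participant per month, with
# both scores on the same record because they share a persistent ID.
df = pd.DataFrame({
    "participant_id": ["p01", "p01", "p02", "p02", "p03", "p03"],
    "month":          ["2026-03", "2026-04"] * 3,
    "csat":           [4.2, 3.5, 4.5, 4.4, 3.9, 3.2],
    "nps":            [9, 9, 8, 8, 7, 6],
    "csat_comment":   ["", "pace too fast", "", "", "", "pace too fast"],
})

# CSAT dropped more than 0.5 points month-over-month while the NPS rating
# stayed high: the early-warning segment described above.
wide = df.pivot(index="participant_id", columns="month")
flagged = wide[
    (wide[("csat", "2026-03")] - wide[("csat", "2026-04")] > 0.5)
    & (wide[("nps", "2026-04")] >= 8)
]
print(flagged.index.tolist())  # ['p01']
```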

The linkage enables four high-value analyses that neither metric supports alone:
  • Churn prediction: which score was the stronger predictor of dropout in your last two cohorts?
  • Gap analysis: which participants show high CSAT but low NPS (the "likes the sessions, won't re-enroll" segment)?
  • Cross-instrument theme analysis: does the same phrase ("unclear communication," "pace too fast") appear in both NPS and CSAT open-ended responses for the same participants?
  • Participant-level diagnostic view: a single record per stakeholder showing their full score history across both instruments, rather than two aggregate averages.

This is the exact problem pattern showing up across our search data: "how to link NPS, CSAT, or churn data to the specific qualitative feedback that explains the score." The architectural answer is the same regardless of which metrics you pair — identity at collection, not retrofitted from exports. Sopact Sense treats NPS, CSAT, and open-ended responses as three fields on the same participant record rather than three separate datasets. Intelligent Column analysis reads comments from both instruments in one view.


NPS vs CSAT: comparison of tools and platforms

Dedicated NPS tools (Delighted, AskNicely) and generic survey platforms (SurveyMonkey, Google Forms) handle NPS and CSAT as two separate forms. They can collect both, but the linking architecture — persistent IDs, cross-instrument theme analysis, gap visualization — requires custom configuration, CRM integration, or separate BI tooling. Enterprise CX suites (Qualtrics, Medallia) offer both instruments and deeper analysis capability, but at $30K–$150K annual contracts with implementation cycles measured in quarters.

For a program that needs NPS and CSAT linked to the same participant record, running in the same platform, with qualitative analysis across both instruments, the architecture decision is upstream of the tool choice. Sopact Sense was built as a data-collection origin system rather than an NPS-only or CSAT-only tool — both instruments live on the same participant record from day one, with no integration project required to see the gap between them.

Three Contexts · Three Gap Patterns
Where the NPS/CSAT gap surfaces — and what it reveals

Select your context to see how the gap between loyalty and satisfaction shows up in real programs.

A 12-week workforce development program runs quarterly NPS at program completion and CSAT after each of four milestone events. The program director's quarterly review shows NPS holding at +38 — no alarm bells. But the last two session CSAT scores have dropped noticeably, and the pattern is specific to cohort 7. Without the linked view, leadership would wait until next quarter's NPS to act.

NPS · end of cohort
+40 → +38

Stable over three quarters — no visible alarm

CSAT · session 6
4.1 → 3.5

Dropped 0.6 pts — cohort 7 specifically

Disconnected stack
NPS in one tool, CSAT in another
  • NPS trend review says "fine" — data team reassures leadership
  • CSAT drop noticed but attributed to "a tough session" — not investigated
  • NPS falls to +29 the following quarter — but the window to intervene cheaply has closed
  • Dropout rate for cohort 7 comes in 18% higher than cohorts 5–6 at 90-day mark
Linked via Sopact Sense
One participant ID, both instruments
  • Gap dashboard shows CSAT divergence from NPS in week 7 — flagged automatically
  • CSAT comment theme "pace too fast" surfaces in week 7 across cohort 7
  • Intervention deployed week 8 — additional office hours + pace check-in
  • End-of-cohort NPS holds at +41 (vs projected drop to +29) — intervention validated

For nonprofit programs: CSAT is the early-warning signal that buys the intervention window before NPS confirms the relationship damage.

Nonprofit Programs →

A B2B SaaS company runs CSAT after every support ticket close — scores average 4.4/5 across Q3, which leadership cites as evidence the platform is performing well. Nobody has run NPS in 18 months. When the board requests a loyalty measure, a one-time NPS comes in at +18 — a gap no one had noticed. The support experience is strong; the product relationship is thin.

CSAT · post-support
4.3 → 4.4

Strong and consistent — the support team is doing great work

NPS · one-time relational
+18

Hidden loyalty gap — customers like support, won't recommend

CSAT-only stack
Support satisfaction ≠ product loyalty
  • CSAT reported to leadership quarterly — "we're at 4.4, we're winning"
  • Churn rate creeps up 3% over the year — attributed to "market conditions"
  • Expansion revenue flattens — no framework to connect the two
  • Board requests NPS — the one-time snapshot reveals a loyalty gap nobody was watching
Linked via Sopact Sense
Both running on the same customer ID
  • NPS at renewal + CSAT per ticket both tied to customer account ID
  • High CSAT + low NPS segment identified — 22% of customer base
  • Open-ended comments reveal product-roadmap friction, not support complaints
  • Targeted intervention — product roadmap outreach to the segment — moves NPS +12 points over two quarters

For SaaS and customer experience teams: CSAT measures your team; NPS measures your product-market fit. You need both to tell which one is the problem.

Impact Intelligence →

A corporate training team runs CSAT after each training module and NPS at the end of the 10-week program. For cohort 4, individual module CSAT scores average 4.3/5 — every session feels good in the moment. But program-end NPS comes in at +12, well below the +40 the team had been running. Modules were great; the overall program didn't deliver on the promised skill outcomes.

CSAT · per module average
4.2 → 4.3

Individual sessions rate well — facilitators getting strong marks

NPS · program completion
+40 → +12

28-point drop — program didn't deliver end-to-end

Session CSAT only
Module ratings ≠ program outcomes
  • Each module ends strong — facilitators get positive feedback, keep doing what they do
  • Program-end NPS drop attributed to "this cohort was tougher"
  • Exit interviews reveal skills didn't transfer to the job — but by then the cohort is over
  • No mid-program signal that the content arc wasn't landing
Linked via Sopact Sense
One learner ID across every module + NPS
  • Mid-program NPS check-in at week 5 — CSAT stays strong, NPS already dropping
  • Learner-level comments surface "not sure how this connects" theme at week 5
  • Content arc adjustment deployed week 6 — explicit application exercises added
  • Program-end NPS for cohort 5 recovers to +38 — same facilitators, corrected structure

For training and workforce development: strong session CSAT can mask a weak program arc. NPS at mid-program is the early warning for curriculum design.

Training Intelligence →

NPS vs CSAT vs CES: the three feedback metrics compared

Customer Effort Score (CES) is the third major feedback metric, measuring how much effort a customer had to expend to get something done — "How easy was it to resolve your issue today?" on a 1–7 scale. CES is narrower than CSAT: CSAT asks if you were satisfied, CES asks specifically whether the interaction felt easy or difficult. In process-heavy contexts — support tickets, self-service flows, onboarding friction — CES often surfaces specific usability problems that CSAT averages miss.

The three metrics measure three different dimensions: NPS (relationship loyalty, long time horizon), CSAT (transactional satisfaction, short time horizon), CES (interaction friction, very short time horizon). Programs with mature feedback systems often run all three at different moments — CES on support close, CSAT on milestone completion, NPS on relational milestones. Programs designing from scratch usually start with CSAT plus NPS and add CES once the core system is stable. Not every program needs all three, but understanding what each measures prevents the wrong instrument choice at the wrong moment.

The Single-Signal Trap: why running one metric costs more than running both

The Single-Signal Trap is what happens when organizations treat NPS and CSAT as interchangeable and pick one to report. The resource argument is always the same ("we don't have bandwidth for two surveys") but the real cost of choosing one is a diagnostic blind spot the other metric was designed to cover. Three common manifestations show how expensive this trap is in practice.

Invisible satisfaction erosion. A nonprofit running NPS-only reports a stable +40 for three quarters while participants are quietly having worse experiences with specific program components. CSAT at the session level would have surfaced the friction weeks before NPS caught up. By the time the relationship score drops, the intervention window has already closed.

Hidden loyalty problem. A SaaS company running CSAT-only reports strong post-support scores (4.4/5) — but NPS, if collected, would show a +18: customers like the support team but don't trust the product. CSAT alone hides the loyalty gap.

Comment disconnection. Both instruments produce open-ended responses; without a shared participant ID, comments live in separate exports, and the same participant signaling the same problem across both instruments never gets connected.

Escaping the Single-Signal Trap requires infrastructure — shared identity — more than it requires a bigger survey budget.

Frequently Asked Questions

What is the difference between NPS and CSAT?

NPS measures relationship-level loyalty on a 0–10 scale — "How likely are you to recommend us?" CSAT measures transactional satisfaction on a 1–5 scale — "How satisfied were you with [specific interaction]?" NPS is a lagging indicator that changes slowly; CSAT is a leading indicator that changes quickly. They measure different things at different moments and are most useful together.

Is NPS better than CSAT?

Neither is better. They measure different dimensions. NPS is the right metric for relationship-level loyalty benchmarking; CSAT is the right metric for transactional satisfaction at specific moments. Programs that run both — linked to the same participant record — have a diagnostic system. Programs that run only one have a number.

When should I use NPS vs CSAT?

Use NPS at relationship milestones — program completion, 90-day post-enrollment, quarterly pulse. Use CSAT after specific interactions — workshop sessions, support ticket close, onboarding events. NPS is the wrong instrument to ask after every touch (you exhaust respondents); CSAT is the wrong instrument to measure whether someone would recommend you (it doesn't ask).

Can I use NPS and CSAT together?

Yes — running both produces a diagnostic signal neither metric produces alone: the gap between them. When NPS is stable and CSAT is dropping, friction is building at the interaction level. When NPS is dropping and CSAT is stable, something earlier in the relationship is eroding loyalty. Both diagnoses are actionable.

What does NPS stand for?

NPS stands for Net Promoter Score. It was introduced by Fred Reichheld in a 2003 Harvard Business Review article and has become the standard loyalty metric across consumer and B2B contexts. The score ranges from −100 to +100 and is calculated as the percentage of Promoters (scores 9–10) minus the percentage of Detractors (scores 0–6).

What does CSAT stand for?

CSAT stands for Customer Satisfaction Score. It measures how satisfied a customer or participant was with a specific interaction, deliverable, or moment — typically on a 1–5 scale. CSAT is reported either as an average rating or as the percentage of respondents giving a 4 or 5 (the "top-two-box" score).

What's the difference between NPS, CSAT, and CES?

NPS measures relationship loyalty ("would you recommend us?"); CSAT measures transactional satisfaction ("were you satisfied?"); CES measures interaction friction ("how easy was it?"). The three metrics cover three different dimensions at three different time horizons. Mature feedback systems often run all three at different moments; simpler systems start with CSAT + NPS.

How do I link NPS and CSAT data together?

Link NPS and CSAT data by assigning a persistent stakeholder ID at first contact. The same ID travels through every NPS rating, every CSAT score, every open-ended comment. Without that shared ID, linking is a manual reconciliation project in Excel. With it, compound queries ("participants whose CSAT dropped but NPS held") become trivial.

Can I link NPS, CSAT, and churn data to qualitative feedback?

Yes — this is one of the highest-value analyses a program team can run. Linking requires a shared participant ID across all three data sources plus qualitative analysis that reads comments from both NPS and CSAT instruments in one view. Sopact Sense's Intelligent Column performs this cross-instrument theme analysis automatically; traditional stacks require manual BI pipeline work.

Why do NPS and CSAT sometimes move in opposite directions?

Because they measure different things. A stable NPS with a dropping CSAT means recent interactions are creating friction that hasn't yet damaged the relationship. A dropping NPS with stable CSAT means something earlier — expectations, outcomes, perceived value — is eroding loyalty even though current interactions feel fine. The divergence is the diagnostic, not a contradiction.

What is The Single-Signal Trap?

The Single-Signal Trap is the pattern of treating NPS and CSAT as interchangeable and picking one to report — collapsing two structurally different signals into a single blunt metric. The cost is a diagnostic blind spot: NPS-only programs miss interaction-level friction; CSAT-only programs miss loyalty erosion. The resource argument ("bandwidth") rarely survives a cost-of-blind-spot analysis.

How much does running NPS and CSAT cost?

Dedicated NPS tools (Delighted, AskNicely) run $200–$3,000/month; CSAT-capable survey tools range similarly. Enterprise suites (Qualtrics, Medallia) run $30K–$150K/year for integrated NPS + CSAT + analysis. Sopact Sense starts at $1,000/month for both instruments on one participant record with linked qualitative analysis — no separate NPS and CSAT tool fees.

Should small programs run both NPS and CSAT?

For programs with fewer than 30 participants per cohort, prioritize CSAT plus open-ended comments — NPS at small sample sizes has too much statistical noise to benchmark reliably. Once cohort size reaches 50+ participants, adding NPS at relationship milestones becomes worthwhile. The principle: don't run a metric you can't interpret with confidence.
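To see why small samples are noisy, here is a back-of-envelope sketch using the standard normal approximation for NPS sampling error (responses coded +1 Promoter, 0 Passive, −1 Detractor; the example proportions are illustrative):

```python
import math

def nps_margin_of_error(p_promoters: float, p_detractors: float, n: int) -> float:
    """Approximate 95% margin of error for NPS, in NPS points.
    With responses coded +1/0/-1, variance = (p + d) - (p - d)^2;
    uses the normal approximation, so treat it as a rough guide."""
    variance = (p_promoters + p_detractors) - (p_promoters - p_detractors) ** 2
    return 100 * 1.96 * math.sqrt(variance / n)

# 50% Promoters, 20% Detractors -> NPS = +30
print(round(nps_margin_of_error(0.5, 0.2, 30)))   # ~28 points at n=30
print(round(nps_margin_of_error(0.5, 0.2, 200)))  # ~11 points at n=200
```

At 30 respondents, a "+30" could plausibly be anywhere from roughly +2 to +58, which is why own-cohort CSAT trends are the safer signal at that scale.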

Ready to run both?
Run both. See the gap. Act on the diagnostic.

NPS and CSAT live in the same platform, tied to the same participant ID, with open-ended comments analyzed across both instruments. The diagnostic gap between them becomes visible on day one — not as a BI project to commission, but as the default view of your feedback system.

  • Both instruments on one participant record — no manual join, no VLOOKUP, no two-tool integration
  • Gap dashboard visible on day one — the divergence between NPS and CSAT is the default view, not an add-on
  • Cross-instrument theme analysis — the same theme in both NPS and CSAT comments auto-detected
Stage 01 · Collect
Both instruments, one schema

NPS at relationship milestones, CSAT post-interaction — paired with one open-ended "why" each, tied to the participant ID at collection

Stage 02 · Link
One persistent ID

NPS scores + CSAT scores + open-ended comments + exit events all tied to the same stakeholder record — no reconciliation, no BI layer

Stage 03 · Diagnose
The gap dashboard

Divergence patterns (A/B/C/D) surfaced automatically — NPS stable + CSAT dropping, high CSAT + low NPS, all four patterns flagged as they emerge

One intelligence layer runs all three — powered by Claude, OpenAI, Gemini, watsonx.