
NPS Feedback Analysis With Qualitative Insights | Sopact


Updated April 21, 2026

NPS Feedback: How to Collect, Analyze, and Close the Loop on the "Why" Behind the Score

Your Q2 NPS report is ready. Overall score: +38. Below it sits a CSV with 847 open-ended responses — the "why" answers to "What's the primary reason for your score?" Nobody has opened the file. By the time someone does, in three weeks, the decision window has closed, the detractors have churned, and the themes that would have reshaped the roadmap are archaeology. You have the data. You don't have the system.


NPS feedback is the combination of a 0–10 recommendation score and the open-ended "why" response that explains it. The score is trivial to report; the feedback is where the actionable signal lives. Most NPS programs collect both and analyze only one — the score goes into the dashboard, the feedback lands in an export nobody codes. This guide covers how to collect NPS feedback that respondents actually complete, how to analyze open-ended responses at scale without weeks of manual coding, and how to close the loop with detractors before the feedback loses its operational value.

NPS Feedback · Collection & Analysis
The score is easy. The feedback is where the signal lives.

NPS feedback — the open-ended "why" behind every rating — is where your churn drivers, roadmap priorities, and equity gaps are hiding. This page covers how to collect, analyze, and act on that feedback before it loses its operational value.

The Verbatim Decay, visualized
NPS feedback loses value every hour it sits uncoded
[Chart: operational value of NPS feedback vs. time uncoded — 100% at arrival (minutes), an action window at 48 hours, "reporting only" by 2 weeks (deck-ready), "archaeology" by 6 weeks to 3 months (post-churn). Two curves: Sopact Sense vs. traditional stack. Value decays; data doesn't.]
Ownable Concept
The Verbatim Decay

Open-ended NPS feedback has a short operational half-life. A detractor comment read within 48 hours is an intervention opportunity. The same comment read six weeks later is a post-mortem on a churn that already happened. The text didn't change — the decision window closed. Fixing the Verbatim Decay isn't a code-faster project. It's collapsing the gap between response arrival and analysis output from weeks to minutes.

60%
of NPS responses include an open-ended comment — nearly all go unread in traditional programs
3–4 wk
typical manual coding time per cycle — the gap that makes NPS feedback unactionable
48 hr
detractor contact window before the intervention opportunity begins to collapse
15–25 pt
NPS gain in affected segments within two quarters of closing the loop consistently

What is NPS feedback?

NPS feedback is the qualitative open-ended response that accompanies a Net Promoter Score rating — most commonly the "What's the primary reason for your score?" follow-up question. Where the numeric NPS (% Promoters − % Detractors) tells you the size of the loyalty problem, the feedback tells you what the problem actually is: pricing, onboarding, a specific feature gap, a support experience.
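The scoring arithmetic in parentheses is simple enough to state in code. A minimal sketch with a hypothetical ratings list:

```python
def nps(ratings):
    """Net Promoter Score from 0-10 ratings: % Promoters minus % Detractors.

    Promoters score 9-10, Detractors 0-6; Passives (7-8) count toward
    the denominator but toward neither group.
    """
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return round(100 * (promoters - detractors) / len(ratings))

# Hypothetical cycle: 5 promoters, 3 passives, 2 detractors
print(nps([9, 10, 9, 10, 9, 7, 8, 7, 4, 6]))  # 30, i.e. NPS +30
```

The score is one division; everything that makes it actionable lives in the "why" text this number discards.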

The feedback side is where NPS programs succeed or fail. Teams that treat NPS as a single-question metric produce a score nobody can act on. Teams that pair every rating with one open-ended question — and systematically analyze the resulting text — produce a roadmap. The distinction is architectural, not philosophical.

How to collect NPS feedback effectively

Collect NPS feedback in three moves: (1) pair every 0–10 rating with exactly one open-ended follow-up — "What's the primary reason for your score?" or "What would make this a 10?"; (2) attach every response to a persistent stakeholder ID so you can link the comment back to the customer, cohort, and segment; (3) trigger the survey transactionally, tied to the customer moment (onboarding completion, support close, renewal window) rather than a calendar quarter.

The collection architecture determines what's possible downstream. Anonymous surveys produce scores you can aggregate but detractors you can't contact. Calendar-based surveys produce trends you can report but specific moments you can't diagnose. A single "why" question attached to every rating produces 2–3x the response rate of multi-question NPS surveys because respondents finish what they start — the 30-question "NPS survey" most tools default to is the primary reason response rates collapse below 15%.
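In schema terms, the three collection moves reduce to one record shape: every response arrives carrying its rating, its single "why", its transactional trigger, and a persistent stakeholder ID. A minimal sketch (field names are illustrative, not Sopact's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NPSResponse:
    stakeholder_id: str   # persistent ID assigned at first contact, not per survey
    rating: int           # the 0-10 score
    reason: str           # the single open-ended "why" answer
    trigger: str          # transactional moment, e.g. "onboarding_complete"
    received_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def group(self) -> str:
        """Standard NPS grouping: 9-10 promoter, 7-8 passive, 0-6 detractor."""
        if self.rating >= 9:
            return "promoter"
        if self.rating >= 7:
            return "passive"
        return "detractor"

r = NPSResponse("acc_4f82c1", 4, "Onboarding was rushed.", "onboarding_complete")
print(r.group)  # detractor
```

The point of the shape: identity and context ride with the response from the moment of submission, so nothing downstream is a reconciliation project.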

NPS Feedback Pipeline · Live Demo
One detractor comment. Four stages. Twelve minutes, not six weeks.

Follow a single NPS response through the Sopact Sense pipeline — from raw arrival through closed-loop verification.

1 · Collect · Raw response arrives
NPS: 4 · Detractor
What's the primary reason for your score?

"Onboarding was rushed. The migration tool worked but left us patching data for two weeks after go-live. We like the product but the first month was stressful."

Received · 09:14 · Day 14 post-signup

One rating. One open-ended reason. Submitted in 38 seconds via the transactional onboarding-completion survey.

2 · Link · Attached to customer record
ID acc_4f82c1
Account Northpoint Logistics
Tier Mid-Market · $68K ARR
Stage Onboarding · Day 14
Owner Priya Shah (CSM)
Prior 3 support tickets · migration

The persistent ID assigned at signup attaches the response to a known account record. The comment is no longer anonymous feedback — it's a traceable signal.

3 · Analyze · Theme extracted at scale
Onboarding speed 38%
Mid-Market detractors, last 60 days
Data migration gaps 24%
Cross-segment, recurring
Reporting clarity 17%
Primarily SMB tier

Intelligent Column clusters this response alongside 142 others from the same cycle — the "onboarding speed" pattern quantifies in minutes, not after weeks of manual coding.

4 · Act · Loop closed, outcome tracked
Priority · Routed to Priya
Northpoint Logistics — Day-14 detractor · $68K ARR · onboarding-speed pattern match
Day 3 · Follow-up completed
45-min call scheduled · dedicated migration-patch sprint agreed

Q2 re-survey: NPS 4 → NPS 8. Pattern adopted company-wide — SMB onboarding NPS up +14 points after migration playbook rollout.

Detractor flagged, contacted within 48 hours, and re-surveyed at Day 90. The pattern becomes a process change — not a quote in a slide deck.

From raw response to closed-loop action: traditional stack takes 6 weeks, by which point the detractor has churned. Sopact Sense runs all four stages in minutes — identity, analysis, and alerting are the platform, not projects.

See it with your data →

How to analyze NPS feedback responses

Analyze NPS feedback in four passes, applied to every open-ended response: sentiment (tone and satisfaction signal), thematic coding (recurring topics — pricing, onboarding, support, features), causation (what specifically drove the score — "the migration tool failed" vs. "migration was slow"), and segmentation (how themes differ across tier, cohort, and touchpoint). AI-native analysis produces all four passes in minutes; manual coding takes three to four weeks per cycle and usually gets skipped after the second quarter.
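The four passes can be sketched as a per-response pipeline. In a production system each pass would be an LLM or trained-classifier call; the keyword rules below are stand-ins to show the shape of the output, not the method:

```python
NEGATIVE = {"rushed", "slow", "failed", "stressful", "confusing"}

THEMES = {  # hypothetical theme taxonomy
    "onboarding": ["onboarding", "go-live", "setup"],
    "migration": ["migration", "patching data"],
    "reporting": ["reporting", "dashboard", "slides"],
}

def analyze(comment: str, segment: str) -> dict:
    text = comment.lower()
    # Pass 1: sentiment (tone and satisfaction signal)
    sentiment = "negative" if any(w in text for w in NEGATIVE) else "positive"
    # Pass 2: thematic coding (recurring topics)
    themes = [t for t, kws in THEMES.items() if any(k in text for k in kws)]
    # Pass 3: causation (the specific sentence that drove the score)
    cause = next(
        (s.strip() for s in comment.split(".")
         if any(w in s.lower() for w in NEGATIVE)),
        None,
    )
    # Pass 4: segmentation (carried along so themes can be counted per segment)
    return {"sentiment": sentiment, "themes": themes,
            "cause": cause, "segment": segment}

out = analyze(
    "Onboarding was rushed. The migration tool left us patching data.",
    "Mid-Market",
)
print(out["sentiment"], out["themes"])  # negative ['onboarding', 'migration']
```

Whatever the underlying model, the output contract matters: every response exits the pipeline with all four labels attached, so nothing waits for a manual coding cycle.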

The bottleneck in most NPS programs is not the analysis method — it's the absence of a method at all. Teams export to CSV, paste a few quotes into the quarterly deck, and call it done. The remaining 95% of open-ended responses accumulate in a file nobody opens. That file is where your churn drivers, roadmap priorities, and equity gaps are hiding. The volume compounds: by Q4, a mid-sized program has 3,000+ unread comments — a qualitative dataset more valuable than the scores themselves, but with zero operational impact.

How to link NPS scores to qualitative feedback comments

Link NPS scores to qualitative feedback by assigning a persistent stakeholder ID at the moment of survey response — so the rating, open-ended comment, customer record, segment attributes, and prior survey history all tie back to one ID. When that ID is present, you can ask compound questions like "What are the top three themes in detractor responses from Enterprise-tier accounts in the past 60 days?" and get an answer in minutes. When that ID is absent, the score and the comment live in different files joined manually in Excel — a 3–4 week reconciliation that usually doesn't happen.

This linkage is the exact query pattern showing up across our GSC data: "how to link NPS, CSAT, or churn data to the specific qualitative feedback that explains the score." The structural answer is the same regardless of metric: identity at collection, not retrofitted from an export. Sopact Sense assigns unique stakeholder IDs at first contact that persist across every subsequent survey, so score + comment + segment + history travel together as a single record rather than living in disconnected systems.
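With identity in place, the compound question above becomes an ordinary filtered aggregation. A sketch over hypothetical linked records (not Sopact's API):

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical linked records: one row per response, keyed by persistent ID,
# with score, themes, segment, and date all on the same record.
records = [
    {"id": "acc_1", "score": 3, "themes": ["onboarding"], "tier": "Enterprise", "on": date(2026, 4, 10)},
    {"id": "acc_2", "score": 5, "themes": ["pricing"], "tier": "Enterprise", "on": date(2026, 4, 2)},
    {"id": "acc_3", "score": 4, "themes": ["onboarding"], "tier": "SMB", "on": date(2026, 4, 8)},
    {"id": "acc_4", "score": 2, "themes": ["onboarding", "pricing"], "tier": "Enterprise", "on": date(2026, 2, 1)},
]

def top_detractor_themes(records, tier, today, window_days=60, n=3):
    """Top themes among detractor (score <= 6) responses for one tier,
    limited to a trailing window. Trivial once every field shares an ID."""
    since = today - timedelta(days=window_days)
    counts = Counter()
    for r in records:
        if r["score"] <= 6 and r["tier"] == tier and r["on"] >= since:
            counts.update(r["themes"])
    return counts.most_common(n)

print(top_detractor_themes(records, "Enterprise", today=date(2026, 4, 21)))
# [('onboarding', 1), ('pricing', 1)]  (acc_4 falls outside the 60-day window)
```

Without the shared ID, each field in that filter lives in a different export, and the same question becomes the 3-4 week Excel reconciliation described above.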

NPS Feedback Best Practices · 2026
Six principles that turn NPS feedback from a scoring ritual into a decision system

The score is not the output. The closed-loop action based on what the score and the comment reveal together — that's the output. These six practices separate programs that move the needle from programs that archive data nobody reads.

01 · Collect · Pair every rating with one open-ended question

"What's the primary reason for your score?" or "What would make this a 10?" — one follow-up, never two. Multi-question NPS surveys drop response rates below 15%; the single-question pairing holds at 40–60%.

Up to 60% of NPS responses include open-ended text — and in traditional programs, nearly all go unread.
02 · Link · Assign a persistent ID at the response moment

The ID ties the rating, open-ended comment, customer record, and segment together so score + context + history travel as one record across every cycle.

Anonymous responses cannot be closed-looped — the detractor becomes unreachable the moment they click submit.
03 · Analyze · Apply four analysis passes, not one

Sentiment, thematic coding, causation, and segment disaggregation — on every open-ended response. AI-native analysis completes all four in minutes; manual coding takes weeks and gets skipped.

Sentiment alone misses the causal driver. Themes alone miss the mismatch signals. You need all four.
04 · Act · Alert the owner in hours, not weeks

Detractor alerts must route to the account owner or case manager within 24 hours — with score, open-ended reason, segment, and prior engagement attached. A detractor contacted in 48 hours retains at 2–3x the rate of one contacted in 6 weeks.

Generic "new detractor" email notifications don't count — the owner needs the full context, not a re-open task.
05 · Verify · Re-survey within 60 days

The loop is not closed until you've measured whether the intervention moved the score. Re-surveying the specific detractor — not a fresh cohort — is how you know the process change worked.

Programs that skip the re-survey report "improvement" that is often regression to the mean.
06 · Disaggregate · Theme by segment, not in aggregate

"Onboarding speed" may dominate detractor comments from SMB tier and barely register in Enterprise. The aggregate hides both signals. Theme frequency by segment is where the roadmap decisions emerge.

Without segment structure at collection, disaggregated theme analysis is a manual reconciliation project.

Apply all six and NPS feedback becomes a quarterly process change engine. Apply the first two and skip the rest and it remains a scoring ritual — which is where most programs live.

See the system live →

What is the NPS feedback loop?

The NPS feedback loop is the end-to-end process of collecting, analyzing, and acting on NPS responses — from survey trigger through theme extraction through closed-loop follow-up with specific respondents. "Closing the loop" specifically refers to the final step: contacting detractors within days of their response with an acknowledgment, a resolution plan, and a follow-up verification. Programs that close the loop consistently see 15–25 point NPS gains in the affected segment within two quarters.

The loop has four stages: collect (the survey trigger and response), link (the response gets attached to the customer record and segment), analyze (themes and sentiment extracted at scale), act (detractor alerts routed to owners with full context; thematic patterns fed to product and program leadership). Most NPS tools stop at stage 2. Sopact Sense treats all four as one connected workflow — see the live pipeline in the feedback anatomy widget above and the three context-specific examples in the scenarios below.

Three NPS Feedback Contexts · One Architecture
Customer, beneficiary, or employee — the Verbatim Decay breaks every program the same way

Select your context to see how real open-ended feedback becomes themed, linked, and actionable within a single cycle.

A Product VP receives 847 open-ended NPS comments across Q2. The PM team reads 20 of them in the roadmap planning meeting. The other 827 sit in the export file unread. Inside that file are the specific feature frictions, pricing objections, and onboarding failures that would reshape Q3 priorities — but nobody has the time or method to extract them systematically. That's the Verbatim Decay in action: the data is collected, archived, and operationally dead.

Sample open-ended verbatims · Q2 2026
Score 4 · SMB · "The onboarding was rushed. We spent two weeks fixing data after go-live. Good product, terrible first month."
Score 6 · Mid-Market · "Reporting is hard to share with non-technical stakeholders — I end up copying into slides every week."
Score 9 · Enterprise · "Support response times have been outstanding. My team actually prefers your platform to the one we're migrating off."
Traditional stack
Survey tool export + Excel + manual read
  • Score aggregated to +38, comments exported to CSV nobody opens
  • PM team reads ~20 of 847 comments for the quarterly deck
  • Detractors not contacted — tool has no identity layer
  • Theme frequency estimated from the 20 quotes that got read
With Sopact Sense
Every comment linked, themed, routed
  • All 847 comments themed and ranked by frequency by segment in minutes
  • 12 high-ARR detractors flagged within 48 hours with full context
  • Onboarding speed pattern (38% of SMB detractors) surfaced for roadmap
  • Passive-negative mismatches flagged — future detractors caught early

For customer-experience and product teams: NPS feedback becomes the primary roadmap signal — not through better sampling, through better reading.

Impact Intelligence →

A Program Director runs quarterly beneficiary NPS across three workforce development programs — 280 participants, ~180 open-ended responses per cycle. Two funders require the aggregate score. Nobody requires the "why" analysis — which means in most nonprofits, it never happens. But it's inside those open-ended responses where equity gaps become visible — where justice-involved participants articulate scheduling barriers, where long-term unemployed participants describe pacing friction, where real program improvement signal lives.

Sample beneficiary verbatims · Mid-program Week 6
Score 3 · Justice-involved cohort · "Class times conflict with my parole appointments. I've missed three sessions because of scheduling."
Score 5 · Long-term unemployed · "The pace is too fast. I haven't touched a computer like this in 10 years. I feel behind every week."
Score 10 · Career transitioner · "Best program I've done. My mentor has been incredible — specifically Priya's weekly check-ins."
Traditional stack
Paper exit survey + Excel aggregation
  • Aggregate beneficiary NPS reported to funders at program end
  • Open-ended responses typed into a spreadsheet — never coded
  • Equity gaps discovered only after program completion — too late
  • Detractor participants cannot be contacted (anonymous surveys)
With Sopact Sense
Themed by cohort, mid-program, with identity
  • Participant IDs link every comment to demographic + program cohort
  • Scheduling friction surfaced Week 6 in justice-involved cohort — intervention possible Week 7
  • Funder reports from same collection: aggregate for Funder A, disaggregated for Funder B
  • "What would make this a 10?" responses themed across 180 participants in minutes

For workforce and social programs: beneficiary NPS feedback is a real-time equity instrument — not a post-program artifact.

Nonprofit Programs →

An HR Director runs a quarterly eNPS at a 600-person healthcare organization. 427 employees respond. 280 include an open-ended "why." The annual engagement report aggregates the score to +24, rolls up to the executive team, and goes in a deck. The 280 open-ended comments are never themed — which means the specific manager, department, and tenure signals driving turnover stay invisible until the exit interview.

Sample eNPS verbatims · Q2 pulse
Score 2 · Night shift nursing, 8 months · "Scheduling feels arbitrary and last-minute. I don't get consistent answers from my lead."
Score 5 · New hire, 4 months · "Onboarding week was great. After that, I haven't had a structured check-in with my manager."
Score 9 · Tenured staff, 4+ years · "Strong team. Clear priorities. The new project management cadence has actually reduced my stress."
Traditional stack
Annual engagement survey + action planning deck
  • One eNPS number per year — no manager-level visibility
  • Open-ended comments summarized into "themes" by HR manually
  • At-risk teams visible only when turnover hits or exit interviews happen
  • New-hire drift invisible until the 6-month mark
With Sopact Sense
Manager + tenure visibility, quarterly
  • Employee IDs link every response to department, manager, tenure band
  • Manager-level themes surfaced without exposing individual responses
  • New-hire check-ins (weeks 4, 8, 12) themed independently — catches drift early
  • Threshold alerts — a team eNPS drop of more than 10 points triggers an HR notice the same week

For HR and training contexts: eNPS feedback becomes a turnover-prevention instrument rather than an annual engagement deck.

Training Intelligence →

NPS feedback tools: traditional surveys vs. AI-native analysis

The dedicated NPS tool market splits into three tiers: low-cost generic survey platforms (Google Forms, SurveyMonkey basic) that collect responses but offer no qualitative analysis; NPS-specific tools (Delighted, AskNicely) that add transactional triggers and basic sentiment but treat identity as an integration afterthought; and enterprise CX suites (Qualtrics, Medallia) that offer deep analysis but at $30K–$150K annual contracts with configuration projects measured in quarters.

None of these are the right fit when your NPS program spans customer, beneficiary, and employee feedback — which is increasingly common as programs extend NPS into program evaluation and workforce development contexts. Sopact Sense was built as a data-collection origin system rather than an NPS-specific tool: the identity layer, qualitative analysis engine, and segment architecture are the platform, not bolt-ons. This is what makes linking qualitative feedback to quantitative scores automatic rather than a project.

NPS Feedback Analysis · Tool Comparison
Traditional stack vs. AI-native: where feedback analysis actually breaks

Four structural gaps in how most NPS tools handle open-ended feedback — then the capability comparison.

Gap 01 · Comments go unread

NPS tools collect hundreds to thousands of open-ended responses per cycle. Traditional programs read 20 and cite them as "themes."

The other 95% sit in exports nobody opens.
Gap 02 · Score ≠ comment linkage

Rating and open-ended response live in different fields, usually joined manually during reporting.

Segment-level theme frequency stays out of reach.
Gap 03 · Sentiment ≠ causation

Basic sentiment (positive/negative/neutral) tells you tone — not what specifically drove the score.

"Onboarding" is a theme; "migration tool failed" is causation.
Gap 04 · No follow-up architecture

Anonymous responses cannot be closed-looped. The detractor becomes unreachable the moment they submit.

Re-surveying the same detractor is impossible without identity.
Capability Comparison · NPS Feedback Analysis
Traditional NPS tools vs. Sopact Sense
Columns: Traditional NPS tools (SurveyMonkey · Qualtrics · Delighted · AskNicely) vs. Sopact Sense (AI-native feedback analysis).

Open-Ended Analysis

Verbatim theme extraction · Cluster hundreds of comments into recurring themes
  • Traditional: Manual coding or word clouds. Sentiment scoring in higher tiers; true theme taxonomy requires add-ons or manual work.
  • Sopact Sense: Intelligent Column analysis. Themes extracted and ranked by frequency by segment — minutes, not weeks.

Sentiment + causation analysis · What specifically drove the score, not just tone
  • Traditional: Sentiment only, when available. Enterprise tiers offer sentiment; the causation layer is typically missing.
  • Sopact Sense: Four-layer analysis. Sentiment + themes + causation + rubric — per response, automatically.

Passive-negative mismatch detection · Scores that contradict their comment sentiment
  • Traditional: Not standard. Typically requires custom analysis or a BI pipeline.
  • Sopact Sense: Automatic flagging. Passives with negative sentiment surfaced as future-detractor risk signals.

Linkage & Identity

Score-to-comment linkage · Rating and reason-why on the same customer record
  • Traditional: Two fields, often two exports. Joined manually in Excel or a BI tool for segment analysis.
  • Sopact Sense: Single linked record. Score + comment + customer ID + segment + history travel together automatically.

Persistent participant IDs · Track the same respondent across cycles
  • Traditional: Form-by-form basis. Email matching after the fact, or CRM integration required.
  • Sopact Sense: Unique IDs at first contact. Persist across every survey, every cycle, every cohort — no reconciliation project.

Closed-Loop Action

Detractor alerting with context · Alert includes score, reason, segment, prior history
  • Traditional: Generic notification emails. The owner must open the tool to see context — friction in the critical first 48 hours.
  • Sopact Sense: Context-rich alerts. All relevant context attached — the owner can act without leaving their inbox.

Re-survey the same detractor · Measure whether the intervention moved the score
  • Traditional: Not native. Requires a custom workflow or CRM integration to track the same respondent.
  • Sopact Sense: Same-ID re-survey. The persistent ID enables cohort-of-one tracking across every cycle.

Cross-stakeholder NPS (customer + beneficiary + employee) · Same platform for all three feedback types
  • Traditional: CX-focused by design. Beneficiary and employee NPS require workarounds or separate tools.
  • Sopact Sense: First-class across all three. Customer, beneficiary, and employee feedback run on one identity + analysis layer.

Pricing · Starting price for mid-sized programs
  • Traditional: $200/month to $150K/year. Delighted ~$224/mo · AskNicely custom · Qualtrics / Medallia enterprise contracts.
  • Sopact Sense: $1,000/month. Full platform — identity, analysis, and reporting, not a feature tier.

Dedicated NPS tools remain a strong fit for high-volume anonymous consumer CX where theme extraction is not the priority. For any program requiring identity, segmentation, and closed-loop follow-up, the architecture needs to be different.

Qualitative survey approach →

When your NPS program needs to read every comment, link every response to a customer, and close the loop within days — the feedback analysis architecture is the product. That's the Sopact Sense wedge.

See Sopact Sense →

How to do NPS sentiment analysis and verbatim theme extraction

Modern NPS sentiment analysis classifies every open-ended response as positive, negative, or neutral — and critically, flags mismatches where the sentiment contradicts the numeric score. A Passive (score 7–8) who writes a highly negative comment is a future Detractor; a Detractor (0–6) who writes constructively is a salvageable relationship. These mismatches are invisible in aggregate NPS reporting but surface immediately in sentiment-linked analysis.
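Once each response carries both a numeric score and a sentiment label, the mismatch rule itself is a two-branch check. A minimal sketch:

```python
def mismatch_flag(score: int, sentiment: str):
    """Flag responses whose comment sentiment contradicts their NPS group.

    A Passive (7-8) writing negatively is a future-Detractor signal;
    a Detractor (0-6) writing constructively is a salvageable relationship.
    """
    if 7 <= score <= 8 and sentiment == "negative":
        return "future-detractor risk"
    if score <= 6 and sentiment == "positive":
        return "salvageable relationship"
    return None  # score and sentiment agree; no mismatch

print(mismatch_flag(7, "negative"))   # future-detractor risk
print(mismatch_flag(4, "positive"))   # salvageable relationship
print(mismatch_flag(9, "positive"))   # None
```

The check is trivial; what makes it rare in practice is that most stacks never put the score and the sentiment label on the same record.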

Verbatim theme extraction goes further: AI-native analysis reads every open-ended response, clusters them into recurring themes (onboarding speed, pricing clarity, specific feature gaps), and reports theme frequency by segment. Within a quarter, you know that 38% of SMB detractors mention onboarding speed and 22% mention reporting clarity — not as a hunch from reading twenty quotes, but as a quantified pattern across every response. Sopact's Intelligent Column analysis produces this extraction in minutes rather than the weeks of manual coding that traditional NPS tools require.

How to close the loop with NPS detractors

Close the loop with NPS detractors in three steps: (1) alert the account owner within 24 hours with the detractor's score, open-ended reason, and prior engagement history; (2) initiate a structured follow-up — acknowledgment, resolution plan with timeline, and scheduled check-in; (3) re-survey within 60 days to measure whether the intervention moved the score. Programs that complete all three steps consistently convert 40–60% of responding detractors to Passives or Promoters on the next cycle.
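The three steps imply two timing constraints (24 hours to alert, 60 days to re-survey), which makes loop closure a checkable condition rather than a judgment call. A sketch, assuming timestamps are tracked per detractor:

```python
from datetime import datetime, timedelta

ALERT_WINDOW = timedelta(hours=24)     # step 1: owner alerted within 24h
RESURVEY_WINDOW = timedelta(days=60)   # step 3: re-surveyed within 60 days

def loop_status(received, alerted=None, followed_up=False, resurveyed=None):
    """Which closed-loop steps are complete and on time for one detractor."""
    steps = {
        "alerted_24h": alerted is not None
                       and alerted - received <= ALERT_WINDOW,
        "followed_up": followed_up,  # step 2: acknowledgment + resolution plan
        "resurveyed_60d": resurveyed is not None
                          and resurveyed - received <= RESURVEY_WINDOW,
    }
    steps["loop_closed"] = all(steps.values())
    return steps

t0 = datetime(2026, 4, 1, 9, 14)
print(loop_status(t0,
                  alerted=t0 + timedelta(hours=3),
                  followed_up=True,
                  resurveyed=t0 + timedelta(days=45))["loop_closed"])  # True
```

A loop where any step is missing or late reports `loop_closed: False`, which is the honest state of most NPS programs.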

The loop requires identity at collection. Anonymous NPS cannot be closed-looped — the account is unreachable. This is the single structural reason most NPS programs produce a score that never moves: the collection architecture makes follow-up impossible, so detractors remain detractors and eventually churn. See pre-post survey design for the identity architecture that makes closed-loop follow-up the default rather than the exception.

The Verbatim Decay: why NPS feedback loses value over time

Open-ended NPS responses have a short operational half-life. A detractor comment arriving Monday and read Tuesday is an intervention opportunity; the same comment read six weeks later is a post-mortem on a churn that already happened. The Verbatim Decay is the pattern in which qualitative feedback depreciates in operational value the longer it sits uncoded and unlinked to action — not because the text changes, but because the decision window closes.

Three forces accelerate the decay. First, context fades: the detractor's experiences, usage patterns, and recent support interactions are freshest in the first 48 hours. Second, theme freshness: what's driving detraction this quarter (a new pricing page, an onboarding regression) is not what drove it last quarter. Third, relationship timing: a follow-up two weeks after a support ticket feels responsive; the same follow-up two months later feels like corporate theater. The architectural fix is the same in all three cases — collapse the lag between response arrival and analysis output from weeks to minutes, so feedback arrives, gets linked, gets themed, and gets routed before the decay window closes.

Frequently Asked Questions

What is NPS feedback?

NPS feedback is the qualitative open-ended response — typically "What's the primary reason for your score?" — that accompanies a 0–10 Net Promoter Score rating. The score measures how customers feel; the feedback explains why. Together they transform a loyalty number into a roadmap.

How do you analyze NPS feedback effectively?

Analyze NPS feedback in four passes: sentiment (tone and satisfaction signal), thematic coding (recurring topics), causation (specific drivers behind scores), and segmentation (how themes differ by tier, cohort, touchpoint). AI-native analysis completes all four in minutes; manual coding takes three to four weeks per cycle.

How do you link NPS scores to qualitative feedback?

Link NPS scores to qualitative feedback by assigning a persistent stakeholder ID at the moment of survey response. The ID ties the rating, open-ended comment, customer record, and segment attributes together so compound queries like "top detractor themes in Enterprise accounts last 60 days" answer automatically rather than requiring Excel reconciliation.

What is the NPS feedback loop?

The NPS feedback loop is the end-to-end process of collecting, analyzing, and acting on NPS responses — from survey trigger through theme extraction through closed-loop follow-up with specific detractors. "Closing the loop" refers to the final step: contacting detractors within days with an acknowledgment, resolution plan, and follow-up verification.

How do I close the loop on NPS detractor feedback?

Close the loop on NPS detractors in three steps: alert the account owner within 24 hours with the detractor's score and open-ended reason, initiate a structured follow-up with resolution timeline, and re-survey within 60 days to measure whether the intervention moved the score. Programs that complete all three convert 40–60% of detractors.

Is NPS qualitative or quantitative data?

NPS is both. The 0–10 rating is quantitative and aggregates into a single score (% Promoters − % Detractors). The open-ended "why" response is qualitative and contains the actionable context. Treating NPS as only quantitative misses the entire story — the feedback component is where the signal that drives improvement lives.

What is NPS sentiment analysis?

NPS sentiment analysis classifies every open-ended response as positive, negative, or neutral and flags mismatches against the numeric score. A Passive (7–8) with negative sentiment is a likely future Detractor; a Detractor (0–6) with constructive sentiment is a salvageable relationship. Mismatches are invisible in aggregate reporting but surface immediately in sentiment-linked analysis.

What tools extract insights from NPS comments?

Traditional survey tools (SurveyMonkey, Google Forms) collect NPS comments but require manual coding for themes. AI-native platforms like Sopact Sense apply four-layer analysis — sentiment, thematic coding, causation, rubric scoring — automatically as responses arrive. The critical differentiator is persistent participant IDs that enable longitudinal analysis across cycles.

What is transactional NPS vs relational NPS feedback?

Transactional NPS (tNPS) measures satisfaction with a specific interaction — post-onboarding, after a support ticket, following service delivery. Relational NPS (rNPS) measures overall brand loyalty, typically quarterly or annually. tNPS produces actionable feedback tied to specific moments; rNPS produces strategic trend lines. Best practice: run both.

What is The Verbatim Decay in NPS feedback?

The Verbatim Decay is the pattern in which open-ended NPS feedback loses operational value the longer it sits uncoded and unlinked to action. A detractor comment read within 48 hours enables intervention; the same comment read six weeks later is a post-mortem on a churn that already happened. The fix is collapsing lag from weeks to minutes.

How large should my NPS feedback sample size be?

Below 50 open-ended responses per segment, qualitative themes can swing substantially due to sample-to-sample variance. For stable theme frequency estimates, aim for 150+ open-ended responses per segment per cycle. The fix for smaller cohorts is multi-cycle aggregation rather than single-point thematic claims.

How much do NPS feedback analysis tools cost?

Dedicated NPS tools range from free (Google Forms) through $200–$3,000/month (Delighted, AskNicely) to enterprise ($30K–$150K/year for Qualtrics, Medallia). Sopact Sense starts at $1,000/month and includes the identity layer, qualitative theme extraction, and the cross-stakeholder NPS support that dedicated tools miss.

Ready to read every comment?
Read every comment. Close every loop.

The score sits in a dashboard. The feedback sits in a file nobody opens. Sopact Sense reads every open-ended NPS response as it arrives, clusters themes by segment, and routes detractors to owners with full context — before the Verbatim Decay closes the window.

  • Every rating paired with one "why" question — and every answer linked to a known record
  • Four-layer analysis on every comment — sentiment, themes, causation, rubric — in minutes
  • Detractor alerts within 48 hours — with score, reason, segment, and prior engagement attached
Stage 01 · Collect
Identity-linked intake

Every rating paired with one "why" — attached to a persistent stakeholder ID at the response moment

Stage 02 · Analyze
Four-layer theme extraction

Sentiment + themes + causation + rubric — on every open-ended response, by segment, in minutes

Stage 03 · Act
Closed-loop with context

Detractor flagged, named, and routed to owner within 48 hours with full context — then re-surveyed at Day 60

One intelligence layer runs all three — powered by Claude, OpenAI, Gemini, watsonx.