NPS Analysis: Score, Segment, Sentiment & Theme | Sopact

Updated
April 21, 2026

NPS Analysis: Score, Segment, Sentiment, and Theme in One Framework

A company reports quarterly NPS of 47 to its board. Strong. The customer success director knows the real number: B2B clients at 62, self-serve customers at 22, and the enterprise segment that just went through a pricing change at −8. The aggregate is accurate and completely useless — three different management situations compressed into one reassuring number. This is The Segment Blind Spot: the structural failure that occurs when NPS analysis stops at the aggregate score, hiding the disaggregated distributions where actual intelligence lives.


NPS analysis is a four-method framework — segmentation, mismatch detection, theme extraction, and longitudinal trend — not a single calculation. Most organizations run method one (calculate the aggregate score, display on dashboard) and call it NPS analysis. This guide covers the complete analytical stack: how to analyze NPS data by segment, how to run NPS sentiment analysis on open-text responses, how to link verbatim comments back to scores, how to build an NPS report that drives decisions, and how to avoid the dashboard trap where a single number hides three different stories.

NPS Analysis · 4-Method Framework
An aggregate NPS of 47 can hide three different management situations.

A score of 47 composed of enterprise at −8, self-serve at 22, and B2B at 62 is three different decisions compressed into one reassuring number. Real NPS analysis is a four-method framework: segment the distribution, detect sentiment mismatches, extract themes per segment, track trajectories across cycles. Most tools deliver method one and stop.

Aggregate vs. Segment Distribution
The same NPS of 47 — three different companies

[Figure: aggregate NPS of +47 decomposed into its segment distribution: enterprise (post pricing change) at −8, self-serve at +22, long-tenure B2B clients at +62. Aggregate reporting compresses three situations: enterprise is a fire, B2B is a testimonial machine, self-serve is neutral.]
Ownable Concept
The Segment Blind Spot

The structural failure that occurs when NPS analysis stops at the aggregate score. An NPS of 47 composed of one segment at −8, another at +22, and a third at +62 is three management situations compressed into one reassuring number. The Blind Spot persists when aggregation is the default reporting view, demographic data lives separately from survey data, and qualitative themes are never connected to the populations that generated them. Closing it requires architecture — segment attributes at collection, persistent IDs, qualitative analysis on the same schema as the score — not more analysis on the same data.

4 methods
segment · sentiment · theme · trend — the complete analysis framework
~5%
of NPS context actually used when open-text responses stay unread
3 cycles
minimum data points for longitudinal segment trend analysis
1 schema
score, comments, segments, themes — on the same data fabric, not three

What is NPS analysis?

NPS analysis is the process of moving from the raw 0–10 scores respondents provide to the decisions those scores should inform. At minimum it includes calculating the Net Promoter Score (% Promoters − % Detractors) and displaying the score's trend over time. Real NPS analysis goes further: segmenting the score by customer type, program, cohort, and demographic group; detecting mismatches between numerical scores and the language of open-text responses; extracting themes from verbatim comments within each segment; and tracking segment trajectories across multiple cycles.
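The score arithmetic is simple enough to sketch. A minimal Python version of the calculation described above, using the standard NPS bands (9–10 Promoter, 7–8 Passive, 0–6 Detractor); the function name is illustrative:

```python
def nps(scores):
    """Net Promoter Score: % Promoters (9-10) minus % Detractors (0-6),
    returned as a whole number in [-100, 100]."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 5 promoters, 2 passives, 3 detractors out of 10 responses -> 50% - 30% = +20
print(nps([10, 9, 8, 7, 6, 3, 10, 9, 9, 2]))  # 20
```

Note that Passives contribute to the denominator but not the numerator — which is exactly why a Passive drifting toward Detractor moves the score twice.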

The distinction between "running an NPS survey" and "NPS analysis" is what the data gets used for. A survey with a dashboard displays. Analysis diagnoses. The gap between the two is where most organizations operate — reporting a score every quarter, never using it to change a specific decision. The NPS calculation methodology is solved. The analysis layer above it is where 90% of programs lose signal.

How do you analyze NPS data?

Analyze NPS data with four methods applied in sequence: segment, detect mismatches, extract themes, track trends. Start by segmenting the score distribution across every relevant dimension — customer type, program type, cohort, geography, demographic group, tenure — because aggregate scores compress different management situations into one reassuring number. Then run sentiment analysis on open-text responses to find mismatches (Passives scoring 7–8 with strongly negative language are Detractors-in-transition; Detractors scoring 0–6 with specific constructive feedback are recoverable).

Third, extract themes from verbatim comments within each segment — not just across the full dataset. If the theme "pricing clarity" appears in 40% of Detractor responses from the enterprise segment and 8% from the self-serve segment, that is a targeted business problem, not a company-wide communication issue. Fourth, track segment trends across at least three cycles — position tells you where a segment sits; direction tells you whether the intervention is working. See the NPS survey design principles for the collection architecture these four methods require.
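Method one, segmentation, is grouping before scoring rather than filtering after. A minimal sketch, assuming responses arrive as (segment, score) pairs — the data shape and function names are illustrative:

```python
from collections import defaultdict

def nps(scores):
    """% Promoters (9-10) minus % Detractors (0-6), as a whole number."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_segment(responses):
    """responses: iterable of (segment, score) pairs -> NPS per segment."""
    by_segment = defaultdict(list)
    for segment, score in responses:
        by_segment[segment].append(score)
    return {seg: nps(scores) for seg, scores in by_segment.items()}

# The article's pattern in miniature: one aggregate, three situations.
sample = [("enterprise", 3), ("enterprise", 6),
          ("b2b", 10), ("b2b", 9),
          ("self-serve", 8)]
print(nps_by_segment(sample))  # {'enterprise': -100, 'b2b': 100, 'self-serve': 0}
```

The design point is that the segment attribute must already be attached to each response at collection; if it lives in a separate system, this one-liner becomes a reconciliation project.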

The Complete Analysis Stack
The 4 methods of NPS analysis — and what most tools stop at

Segmentation, mismatch detection, theme extraction, and longitudinal trend. Each method answers a different question. Skip any of them and the aggregate score becomes all you have.


What segmentation reveals

The aggregate NPS of 47 compresses three different management situations: enterprise at −8, self-serve at 22, B2B at 62. Without segment distribution, leadership responds to an average that doesn't exist in any real customer's experience.

Segment by every relevant dimension — customer type, program type, cohort, geography, tenure, product line, demographic group — before interpreting the score. Comparative segment views are decision-useful; the aggregate is reporting furniture.

Most tools offer segment filters as a drill-down interface — you click through to find the distribution. Segment-first architecture inverts that: the default dashboard view is the segment breakdown, with aggregate displayed as a single line near the top.

What's structurally required

  • Segment attributes at collection: customer type, cohort, demographic — captured when the respondent is identified, not retrofitted from an export.
  • Persistent stakeholder IDs: same ID across intake, surveys, and outcome measures — so every response carries the full attribute set.
  • Segment-first default view: dashboard leads with segment distribution, not aggregate score. Trains the team to think in populations.

What mismatch detection reveals

Three mismatch patterns carry the highest decision-value in any NPS dataset. Passives with strongly negative language are Detractors-in-transition — the score has not dropped yet, but the language says it will. Detractors with specific constructive feedback are recoverable — they scored low, but the written response maps to a fixable issue rather than a generic complaint.

Third: Promoters with qualified language ("great, but…") are referral risks. They scored high, but the qualification reveals friction that could undermine the referral if not addressed. All three patterns are invisible to tools that segment only by score category.

Mismatch detection is where next cycle's score movement usually comes from. Passives in transition become the Detractors of the next cycle if the language signal is ignored.
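Mismatch detection as described here is a cross-reference of score band against a sentiment label. A minimal sketch, assuming an upstream classifier supplies 'positive' / 'neutral' / 'negative' per response (using positive sentiment as a rough proxy for constructive feedback; the flag names are illustrative):

```python
def score_band(score):
    """Standard NPS bands: 9-10 promoter, 7-8 passive, 0-6 detractor."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def mismatch_flag(score, sentiment):
    """Cross-reference score band with an upstream sentiment label.
    Returns a flag for the three patterns described above, else None."""
    band = score_band(score)
    if band == "passive" and sentiment == "negative":
        return "detractor-in-transition"  # score hasn't dropped yet; language says it will
    if band == "detractor" and sentiment == "positive":
        return "recoverable"              # low score, but constructive/fixable feedback
    if band == "promoter" and sentiment == "negative":
        return "referral-risk"            # "great, but..." qualification
    return None                           # score and language agree

print(mismatch_flag(7, "negative"))  # detractor-in-transition
```

In practice the flag would be written onto the response record at ingestion, so the mismatch queue is a filter rather than a batch job.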

What's structurally required

  • Sentiment on the same schema as the score: not a separate text-analysis tool applied to an exported CSV — analysis attached to each individual response in real time.
  • Mismatch flag in the response record: each record tagged with a mismatch indicator, surfaceable as a filter or alert queue.
  • Real-time processing, not batch: mismatch queue updated as responses arrive — so CS or program teams can act inside the Recovery Window.

What theme extraction reveals

Theme frequency within each segment — not just across the full dataset — is the output that drives targeted action. If "pricing clarity" appears in 40% of Detractor responses from the enterprise segment and 8% from self-serve, that is a targeted business problem, not a company-wide communication issue.

Three outputs matter per cycle: theme frequency per segment, theme trajectory across cycles (growing / stable / declining), and attribution links connecting each theme back to the specific responses that generated it — so program teams can read representative comments rather than only seeing the summary.

Manual coding produces this in 3–4 weeks. By the time themes are ready, the Recovery Window has closed and the next cycle has already launched. Automated analysis produces the same three outputs in hours.
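The per-segment frequency output described above reduces to a counting pass, assuming theme extraction upstream has already tagged each response with its themes. A sketch with a hypothetical record shape:

```python
from collections import Counter, defaultdict

def theme_frequency_by_segment(records):
    """records: (segment, themes) pairs, one per response.
    Returns, per segment, the % of responses mentioning each theme."""
    counts = defaultdict(Counter)
    totals = Counter()
    for segment, themes in records:
        totals[segment] += 1
        counts[segment].update(set(themes))  # count each theme once per response
    return {seg: {theme: round(100 * n / totals[seg]) for theme, n in c.items()}
            for seg, c in counts.items()}

detractor_records = [
    ("enterprise", ["pricing clarity", "support"]),
    ("enterprise", ["pricing clarity"]),
    ("self-serve", ["onboarding"]),
]
freq = theme_frequency_by_segment(detractor_records)
print(freq["enterprise"]["pricing clarity"])  # 100
print(freq["self-serve"]["onboarding"])       # 100
```

Attribution links are the piece this sketch omits: a production version would carry response IDs alongside the counts so each theme can be traced back to its verbatims.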

What's structurally required

  • Automated theme extraction: real-time processing of open-text responses as they arrive — not an analyst coding sprint every quarter.
  • Per-segment theme frequency: themes grouped by customer segment, program, demographic — not only by score category.
  • Attribution link to source responses: click a theme, see the verbatim responses that generated it — with full respondent context attached.

What trend tracking reveals

Position tells you where a segment sits today. Direction tells you whether the intervention is working. A segment at −8 recovering from −22 is a success story in progress. A segment at +35 declining from +55 is a crisis in motion. Both get averaged into the aggregate trend line and disappear.

Three-plus cycle trajectories per segment reveal which segments are converging and which are diverging from the aggregate trend. Convergence is predictable. Divergence is where the next cycle's score movement is baked in — usually in the segment your leadership isn't looking at.

First-cycle and second-cycle segment comparisons are positions, not trends. Three cycles is the minimum for direction. Four-plus is where variance smooths enough to trust.
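Direction can be estimated with something as simple as a least-squares slope over a segment's per-cycle series, once three or more cycles exist. A sketch, with an illustrative tolerance of ±2 NPS points per cycle for "stable":

```python
def trend_direction(series, min_cycles=3, tol=2.0):
    """series: one segment's NPS per cycle, oldest first.
    Returns 'improving' / 'declining' / 'stable', or None below min_cycles
    (one or two cycles are positions, not trends). tol is an illustrative
    slope threshold in NPS points per cycle."""
    if len(series) < min_cycles:
        return None
    n = len(series)
    x_mean = (n - 1) / 2
    y_mean = sum(series) / n
    slope = (sum((i - x_mean) * (y - y_mean) for i, y in enumerate(series))
             / sum((i - x_mean) ** 2 for i in range(n)))
    if slope > tol:
        return "improving"
    if slope < -tol:
        return "declining"
    return "stable"

print(trend_direction([-22, -15, -8]))  # improving -- a recovery in progress
print(trend_direction([55, 45, 35]))    # declining -- a crisis in motion
print(trend_direction([-8, 22]))        # None -- two cycles is a position, not a trend
```

The two example trajectories are the article's own: a segment at −8 recovering from −22 flags as improving, while one at +35 falling from +55 flags as declining — both of which an aggregate trend line would average away.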

What's structurally required

  • Consistent collection schema across cycles: same scale, same question wording, same segment attributes — every cycle. Methodology drift destroys comparability.
  • Segment-level time series: one trajectory per segment, on one chart — not an aggregate line with filter options.
  • Theme trajectory overlay: which themes are growing or declining per segment, across cycles. The qualitative version of a trend strip.

The infrastructure difference. Generic tools cover method 1 as a drill-down filter and call it NPS analysis. The four-method framework requires one data schema — score, comments, segments, themes — not three disconnected systems.

See it live →

What are the 4 methods of NPS analysis?

The four methods are segmentation (quantitative), mismatch detection (sentiment), theme extraction (qualitative), and longitudinal trend (comparative). Each method answers a different question. Segmentation answers who — which customer groups, programs, or cohorts drive the aggregate score. Mismatch detection answers intensity — which scores overstate or understate the emotional content of the response. Theme extraction answers why — which specific drivers are concentrated in which segments. Longitudinal trend answers direction — which segments are improving, declining, or diverging from each other.

Most NPS tools deliver method one and stop. Qualtrics, SurveyMonkey, Typeform, and Delighted produce clean score dashboards with segment filters — but segment filters are a drill-down interface, not an analytical workflow. Real segment analysis requires segment attributes structured at the point of collection (from persistent stakeholder IDs), not retrieved post hoc from an exported CSV. Method three — qualitative theme extraction — requires analysis infrastructure that reads open-text responses as they arrive, not manual coding sprints that take 3–4 weeks per cycle. The gap between the four-method framework and what most tools deliver is The Segment Blind Spot.

Analysis Discipline · 6 Principles
NPS analysis best practices — what separates scoring from diagnostic

Six principles that turn raw NPS data into intelligence. Skip any of them and the aggregate score becomes all you have — a number reported to the board, never used to change a specific decision.

01
Segment-first
Lead the dashboard with segment distribution, not aggregate score

A dashboard that leads with the aggregate trains the team to think in single numbers. A dashboard that leads with segment distribution trains the team to think in populations. The aggregate belongs near the top as one line — not as the whole report.

Default dashboards in Qualtrics, SurveyMonkey, Delighted all display aggregate first. Override.
02
Identity
Structure segment attributes at collection, not from exports

Retroactive segmentation from an aggregated dataset is unreliable. Issue unique stakeholder IDs at first contact and capture customer type, cohort, demographics in the intake form. Every subsequent response automatically carries those attributes.

Anonymous surveys make segment analysis structurally impossible at the individual level.
03
Mismatch
Flag score–language mismatches — where next cycle's trend is born

Passives with strongly negative language are Detractors-in-transition. Detractors with specific constructive feedback are recoverable. Both patterns are invisible to tools that only segment by score category. Automate the mismatch queue, review it weekly.

Aggregate sentiment trend restates what the score already says — mismatch is the signal.
04
Themes
Extract themes per segment, not just across the full dataset

If a theme appears in 40% of Detractor responses from one segment and 8% from another, that's a targeted business problem, not a company-wide issue. Theme frequency by segment reveals which interventions belong in which playbook — concentrated vs. distributed signal.

Aggregate theme lists obscure where the issue actually lives — and who owns the fix.
05
Speed
Produce themes within hours, not three to four weeks

Manual coding of open-text responses takes 3–4 weeks. By the time themes are ready, the Recovery Window has closed and the next cycle has launched. Automated theme extraction on the same schema as the score lands signal inside the window where recovery is still possible.

Coding sprints are not a workflow — they're a delay that costs next cycle's signal.
06
Direction
Track direction, not just position — minimum 3 cycles per segment

Position tells you where a segment sits today. Direction tells you whether the intervention is working. Three cycles is the minimum for trend — four-plus is where variance smooths enough to trust. First and second cycles are positions, not trajectories.

Segments converging or diverging from the aggregate is where tomorrow's score movement is already baked in.

Apply all six and NPS becomes decision-useful. Skip any of them and the aggregate score becomes all you have — reported to leadership, never used to change a specific operational decision.

See qualitative feedback workflow →

How do you link NPS scores to qualitative feedback?

Link NPS scores to qualitative feedback through persistent stakeholder IDs assigned at first contact — not through post-hoc export-and-match operations. Anonymous surveys produce a score column and an open-text column that cannot be attributed to the same respondent's history, cohort, or demographic profile. Matching after the fact requires a manual join that happens quarterly at best, never at worst, and always loses respondents whose identifiers don't align across systems.

The architectural answer is to issue a unique stakeholder ID at the first touchpoint (intake form, account creation, program enrollment) and carry it through every subsequent survey, interaction, and outcome measure. When a respondent submits an NPS score plus an open-text comment, both arrive attached to the same ID and automatically linked to every other data point about that respondent. This is the difference between feedback analysis as a research project and feedback analysis as a default output. The comment explaining the score arrives in the same row as the score itself, segmented by every relevant attribute, already themed — not sitting in a coding backlog while the next cycle ships.

What is NPS sentiment analysis?

NPS sentiment analysis is the classification of emotional tone in open-text NPS responses — positive, negative, or neutral — typically using natural language processing applied to verbatim comments. The highest-value application isn't aggregate sentiment trend (which restates what the score already shows). It's mismatch detection: finding the respondents whose numerical score doesn't align with the emotional intensity of their text.

Three mismatch patterns produce the most actionable signal. Passives with negative language are Detractors-in-transition — the score has not dropped yet, but the language says it will. Detractors with specific constructive feedback are recoverable — they scored low, but their written response maps to a fixable issue rather than a generic complaint. Promoters with qualified language ("great, but...") are referral risks — they scored high, but the qualification reveals friction that could undermine the referral if not addressed. NPS sentiment analysis tools that only summarize overall tone miss all three patterns. See NPS feedback analysis for the full workflow.

What is NPS verbatim analysis?

NPS verbatim analysis is the extraction of themes, concerns, and language patterns from the open-text responses respondents leave alongside their 0–10 score. The verbatim (respondent-written text) is the richest diagnostic layer in any NPS dataset — and the most commonly ignored. Most programs collect verbatim comments, export them to a spreadsheet, assign an analyst to code them, and receive a coded dataset 3–4 weeks later, by which point the next cycle has already launched. This is The Coding Bottleneck, and it is why most NPS programs produce scores but not insights.

Effective verbatim analysis produces three outputs per cycle. Theme frequency per segment — the top 5 themes in Detractor responses from each customer segment or program cohort, showing whether an issue is company-wide or concentrated. Theme trajectory — whether each theme is growing, stable, or declining across cycles. Attribution links — each theme connected back to the specific responses that generated it, so program teams can read representative comments rather than only seeing the summary. Sopact Sense Intelligent Column produces all three automatically as responses arrive — no analyst coding sprint required at any volume.

Three Analysis Problems · One Architectural Answer
Where NPS analysis breaks — and what the rebuild looks like

Three analysis problems that map to the same structural fix: segment attributes at collection, persistent IDs, qualitative analysis on the same schema as the score.

A B2B SaaS company reports quarterly NPS of +47 to its board. Leadership is satisfied. The CS director knows the real distribution: enterprise at −8 (post pricing change), self-serve at +22, B2B at +62 (long-tenure clients). The aggregate is accurate and completely useless — three management situations compressed into one reassuring number. The segment-first rebuild takes the same data and turns it into three different operational playbooks.

Aggregate-first analysis
Dashboard leads with NPS = +47
  • Board sees headline score, stops. "Strong NPS, no concerns flagged this quarter."
  • Segment filters buried behind drill-down — CS director has to manually pull segment view
  • Enterprise crisis invisible in the headline; the next quarter's churn is already baked in
  • Pricing-change feedback never connects to the enterprise segment specifically — stays generic "pricing concerns"
Segment-first analysis
Dashboard leads with segment distribution
  • Enterprise −8 highlighted as variance alert — CSM playbook triggered
  • Self-serve +22 flagged as "stable, watch for passive-to-detractor drift"
  • B2B +62 tagged for testimonial collection and referral activation
  • Pricing-change theme attributed specifically to enterprise segment — intervention designed for the right population

For B2B SaaS NPS: segment-first dashboards, theme attribution by segment. Three operational playbooks where leadership saw one headline number.

Impact Intelligence →

A program evaluator at a workforce nonprofit collects 400–600 open-text NPS responses per cycle. Two evaluators on staff, already at capacity. Manual coding sprints take 3–4 weeks, produce inconsistent themes across coders, and arrive after the next cycle has launched. The qualitative data exists, has never changed a single program decision. The rebuild moves coding from a 3-week sprint to a real-time process attached to collection.

Manual coding workflow
600 responses per cycle · 2 analysts · 3–4 weeks
  • Responses export to CSV, sit in coding queue — no connection to respondent record
  • Two coders produce inconsistent themes because coding rubric drifts across the 3-week sprint
  • Themes ready after next cycle launches — findings describe a program state that no longer exists
  • Program team never reads the themes; last quarter's coded file sits unopened in shared drive
Automated theme extraction
Themes within hours, attached to respondent records
  • Real-time theme extraction — Intelligent Column reads responses as they arrive, not weekly
  • Consistent rubric across all responses — no coder drift, no inter-rater reliability issues
  • Themes available inside the decision window — program team acts within the cycle, not two cycles later
  • Attribution links — click a theme, see the responses that generated it with full respondent context

For program evaluators: theme extraction on the same schema as the score. Six hundred responses themed in hours, not weeks — ready inside the decision window.

Nonprofit Programs →

A grant-funded nonprofit's largest funder asks whether NPS improvements are equitable across demographic groups — or whether a specific group is being underserved while the aggregate improves. NPS data lives in the survey tool. Demographic data lives in the CRM. They have never been connected. Producing the analysis the funder is asking for would require a month of manual work — and the result would not be trustworthy. The rebuild connects demographics and NPS through shared IDs at the intake layer.

Disconnected systems
Survey tool · CRM · manual export-merge quarterly
  • Respondent emails don't align across systems — manual merge loses 20–30% of records
  • Demographic segmentation requires a month of data cleanup every quarter
  • Equity analysis shows "average across all demographics" — group-level variance hidden
  • Funder receives an aggregate score with a demographic footnote — not genuine equity analysis
Shared ID architecture
Demographics and NPS linked at collection
  • Unique participant ID at intake — demographics captured there, carried through every NPS response
  • NPS by demographic group as a default view — no manual integration, no export-merge
  • Theme extraction per demographic group — not just "overall detractor themes"
  • Equity analysis in the funder report — NPS by group, themes by group, trajectory by group

For grant-funded programs: demographics and NPS linked at collection. Equity analysis as a default view — not a month-long data reconciliation project.

Nonprofit Programs →

What are the best NPS analytics tools?

The best NPS analytics tool is the one that delivers all four analysis methods on the same data schema — segmentation, sentiment, theme extraction, and longitudinal trend — rather than covering only the score layer with drill-down filters. Qualtrics XM and Medallia are strong at segment filtering and dashboard display but treat qualitative analysis as a separate product module. SurveyMonkey, Typeform, and Delighted produce clean score reporting but require exporting open-text data to dedicated text analysis tools (MonkeyLearn, Thematic, or manual coding in Excel). Specialized sentiment tools (Chattermill, Stratifyd) do theme extraction well but don't integrate with score collection infrastructure.

The comparison that matters isn't feature-by-feature — it's whether the tool's architecture supports persistent stakeholder IDs, segment attributes at collection, and qualitative analysis on the same schema as score collection. Tools that require three separate systems (survey, text analysis, CRM) produce analysis that is always one reconciliation cycle behind reality. Tools that unify collection and analysis on one schema produce analysis that is current as of the last response. This is the infrastructure difference — not an interface preference.

NPS Analysis Tools Comparison · 2026
Why three-system stacks produce analysis one reconciliation cycle behind reality

Generic survey tools cover method 1 as a drill-down filter. Specialized AI text analysis tools do method 3 but don't integrate with score collection. Four common risks, then the capability comparison.

Risk 01
Segment filters as drill-down, not default

Dashboard leads with aggregate score. Segment views are a filter option. The Segment Blind Spot is the architectural default — leadership trains on the headline number.

Qualtrics, SurveyMonkey, Delighted all default to aggregate-first display.
Risk 02
Separate qualitative tool required

Score lives in survey tool. Open-text gets exported to text analysis platform or coded manually. Three systems, two reconciliation cycles, theme-to-score attribution broken.

MonkeyLearn, Thematic, Chattermill all require CSV import from the survey tool.
Risk 03
Demographics live in a third system

CRM or HRIS holds demographic data. Survey tool holds NPS. Text tool holds themes. Equity analysis requires a three-way join that loses 20–30% of records to identifier mismatch.

Export-merge quarterly. Equity analysis is always at least a cycle behind reality.
Risk 04
Manual coding at cycle-end

Open-text responses sit in coding queue. Two analysts, 3–4 weeks. Themes land after the next cycle launches — findings describe a program state that no longer exists.

Inter-rater reliability drifts across multi-week coding sprints, even with a rubric.
Capability Comparison — 4-Method Framework
What each tool actually delivers across the analysis stack

Method 01 — Segmentation: who is driving the aggregate

Segment distribution view (default dashboard layout)
  • Generic survey tool: Drill-down filter. Aggregate score is primary display; segment is a filter option.
  • Specialized AI text tool: Not supported. Focused on text analysis; no score dashboard.
  • Sopact Sense: Segment-first dashboard. Default view shows segment distribution; aggregate is one line.

Segment attributes source (where the attributes come from)
  • Generic survey tool: Re-asked in survey. Drives up survey length; segmentation limited to what's asked.
  • Specialized AI text tool: Not applicable. No integrated segmentation — text analysis only.
  • Sopact Sense: Captured at intake, carried automatically. Unique stakeholder ID at first contact; every response carries the full attribute set.

Method 02 — Mismatch Detection: where score contradicts sentiment

Score–sentiment mismatch flag (Passive with negative language, Detractor with constructive)
  • Generic survey tool: Not available. Score and open-text are separate columns; no sentiment classification.
  • Specialized AI text tool: Partial. Produces sentiment per response, but not cross-referenced with NPS score category.
  • Sopact Sense: Mismatch queue as default view. Each response tagged with a mismatch indicator, surfaceable as an alert queue.

Method 03 — Theme Extraction: why the score is what it is

Theme extraction workflow (open-text to themes)
  • Generic survey tool: Manual coding or external export. 3–4 week coding sprints; analyst bandwidth-bound.
  • Specialized AI text tool: Automated theme extraction. Strong at themes, but disconnected from scores — requires CSV import.
  • Sopact Sense: Themes within hours, attached to response. Intelligent Column analyzes in real time; attribution link preserved.

Theme frequency per segment (not just aggregate theme list)
  • Generic survey tool: Manual segmentation of coded data. Requires joining coded themes back to respondent demographics.
  • Specialized AI text tool: Possible, requires setup. Tagging themes by segment requires pre-structured segment metadata import.
  • Sopact Sense: Default output per cycle. Top themes per segment auto-generated; attribution links preserved.

Method 04 — Longitudinal Trend: direction per segment, not just aggregate

Segment-level time series (multiple trajectories on one chart)
  • Generic survey tool: Aggregate trend only. Segment trends require a separate filter + export per segment.
  • Specialized AI text tool: Not applicable. Not a scoring or trend system.
  • Sopact Sense: Segment trajectories by default. One chart with all segments; convergence/divergence flagged automatically.

Pricing (NPS analysis capability at cycle frequency)
  • Generic survey tool: $30K–$150K/yr for NPS-analysis tier. Qualtrics XM, Medallia — plus additional text analysis licensing.
  • Specialized AI text tool: $15K–$60K/yr + survey-tool cost. Separate budget line on top of the survey platform.
  • Sopact Sense: $1,000/month — all four methods. Collection, segmentation, themes, trend on one schema.

Generic survey tools cover method 1. Specialized AI text tools cover method 3. Unified-schema analysis is the architectural alternative — all four methods on one data fabric, not three disconnected systems.

NPS feedback analysis workflow →

Segment. Detect mismatches. Extract themes per segment. Track direction across cycles. Four methods on one data schema — not three disconnected systems producing analysis that's always one reconciliation cycle behind reality.

See Sopact Sense →

What is an NPS report?

An NPS report is a structured summary of NPS analysis outputs for a specific audience — typically leadership, funders, or program boards — combining the aggregate score with segment breakdown, theme highlights, and cycle-over-cycle trend. A defensible NPS report includes five elements: the aggregate score (with confidence interval for small samples), the segment distribution (minimum: customer type or program type), the top 3 themes from Detractor responses (with verbatim examples), the cycle-over-cycle trend (minimum three prior cycles), and the specific actions taken in response to previous cycle findings.

The failure mode most NPS reports fall into is displaying the aggregate score prominently and treating segment/theme/trend as optional appendices. This produces reports that are read for the headline number and filed. A report built around the four-method framework leads with the segment distribution (which carries the decision-useful signal), attaches the verbatim themes that explain the distribution, and closes with the actions taken or proposed. The aggregate score becomes one line near the top — where it belongs as a headline indicator, not as the whole document. For grant-funded programs, see the impact reporting guide for funder-facing report architecture.
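The "confidence interval for small samples" element can be approximated by treating each response as +1/0/−1 and applying a normal approximation to the mean. A sketch under that assumption, not a prescribed methodology (for very small samples a bootstrap interval would be more defensible):

```python
import math

def nps_with_ci(scores, z=1.96):
    """NPS with an approximate 95% margin of error (normal approximation),
    treating each response as +1 (promoter), 0 (passive), or -1 (detractor).
    A sketch: needs at least 2 responses; small samples deserve caution."""
    values = [1 if s >= 9 else -1 if s <= 6 else 0 for s in scores]
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    margin = z * math.sqrt(variance / n)
    return round(100 * mean), round(100 * margin)

score, margin = nps_with_ci([10, 9, 8, 7, 6, 3, 10, 9, 9, 2])
print(f"NPS {score} ± {margin}")  # NPS 20 ± 57 -- ten responses is a very wide interval
```

The example illustrates why the interval belongs in the report: ten responses produce a margin wider than the score itself, so a quarter-over-quarter move of a few points within a small segment is noise, not signal.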

How do you create an NPS dashboard?

Create an NPS dashboard organized around the four-method framework — segment distribution as the primary view, theme frequency as a connected panel, longitudinal trend as a time-series strip, and mismatch alerts as a flagged queue — rather than organized around the aggregate score. A dashboard that leads with the aggregate score trains everyone who sees it to use NPS as a single indicator. A dashboard that leads with segment distribution trains everyone to think in terms of customer populations.

Four panels make a complete NPS dashboard. Panel 1 — segment grid: a table or heatmap of NPS by every relevant segment, with color signaling variance from aggregate. Panel 2 — theme frequency: the top 5 themes in Detractor responses per segment, with frequency counts. Panel 3 — trend strip: segment NPS trajectories across the last 4–8 cycles. Panel 4 — mismatch queue: Passives with negative language and Detractors with constructive language, ordered by recency for CSM or program-team action. This dashboard architecture turns NPS from a metric into a workflow. Tools that can't express all four panels on the same data schema force the workflow into three disconnected systems — where the Segment Blind Spot sets in by architecture.

Frequently Asked Questions

What is NPS analysis?

NPS analysis is the process of moving from the raw 0–10 scores respondents provide to the decisions those scores should inform. At minimum it includes calculating the Net Promoter Score and displaying the trend. Complete NPS analysis adds segmentation by customer type, mismatch detection between scores and open-text sentiment, theme extraction from verbatim comments, and cycle-over-cycle trend tracking.

How do you analyze NPS data?

Analyze NPS data with four methods applied in sequence. First, segment the score by customer type, program, cohort, and demographic group. Second, run sentiment analysis on open-text responses to detect mismatches (Passives with negative language, Detractors with constructive feedback). Third, extract themes from verbatim comments within each segment. Fourth, track segment trends across at least three cycles.

How do you analyze NPS responses?

Analyze NPS responses by reading the verbatim open-text alongside the 0–10 score for each respondent. Run automated sentiment analysis to flag mismatches. Extract recurring themes per segment (not just across the full dataset). Link each score-plus-comment back to the respondent's full profile — cohort, program, tenure — so themes can be attributed to specific populations rather than generalized.

What is NPS score analysis?

NPS score analysis is the quantitative layer of NPS analysis: calculating the aggregate Net Promoter Score, breaking it into the Promoter / Passive / Detractor distribution, and segmenting that distribution across relevant dimensions. It is the first of four analysis methods. On its own it produces a number. Combined with sentiment, theme, and trend analysis, it produces intelligence.
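As a sketch, the per-segment Promoter / Passive / Detractor breakdown reduces to a small grouping pass. The input shape here, a list of (segment, score) pairs, is an assumption for illustration; real data would carry full respondent records.

```python
from collections import defaultdict

def segment_nps(responses):
    """Break the aggregate into Promoter/Passive/Detractor counts per segment.

    `responses` is a list of (segment, score) pairs -- an illustrative
    shape, not a fixed schema.
    """
    buckets = defaultdict(lambda: {"promoter": 0, "passive": 0, "detractor": 0})
    for segment, score in responses:
        if score >= 9:
            buckets[segment]["promoter"] += 1
        elif score >= 7:
            buckets[segment]["passive"] += 1
        else:
            buckets[segment]["detractor"] += 1
    result = {}
    for segment, b in buckets.items():
        n = b["promoter"] + b["passive"] + b["detractor"]
        result[segment] = {
            **b,
            "n": n,
            "nps": round(100.0 * (b["promoter"] - b["detractor"]) / n, 1),
        }
    return result
```

The output makes the Segment Blind Spot visible immediately: two segments with the same count of responses can sit 100 NPS points apart while the aggregate looks healthy.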

What is NPS sentiment analysis?

NPS sentiment analysis is the classification of emotional tone in open-text NPS responses using natural language processing. The highest-value application is mismatch detection — finding respondents whose numerical score doesn't align with the intensity of their written response. Passives with negative language are Detractors-in-transition; Detractors with constructive language are recoverable.

What is NPS verbatim analysis?

NPS verbatim analysis is theme extraction from the open-text responses that accompany NPS scores. Three outputs matter: theme frequency per segment (which issues concentrate where), theme trajectory (which themes are growing or declining), and attribution links (each theme connected back to the specific responses that generated it). Manual coding produces this in 3–4 weeks; automated analysis produces it in hours.
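Once an upstream extraction step has assigned theme labels to each response, the first output above (theme frequency per segment) is a simple tally. The input shape is hypothetical, chosen to keep the sketch self-contained.

```python
from collections import Counter, defaultdict

def theme_frequency(coded_responses, top_n=5):
    """Top theme counts per segment from already-coded responses.

    `coded_responses`: (segment, themes) pairs, where `themes` is the
    list of labels an upstream extraction step assigned to the response.
    """
    freq = defaultdict(Counter)
    for segment, themes in coded_responses:
        freq[segment].update(themes)
    # Most frequent themes per segment -- the "which issues concentrate
    # where" output described above.
    return {seg: counter.most_common(top_n) for seg, counter in freq.items()}
```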

What is NPS text analysis?

NPS text analysis is the broader category that includes both sentiment analysis (tone classification) and theme extraction (topic identification) applied to open-text NPS responses. Leading AI-native text analysis platforms read responses as they arrive and produce theme frequency by segment automatically — without manual coding sprints.

How do you do sentiment analysis on NPS responses?

Sentiment analysis on NPS responses classifies each open-text response as positive, negative, or neutral, then cross-references that classification against the respondent's 0–10 score. The decision-useful output is the mismatch set — respondents whose score category (Promoter / Passive / Detractor) doesn't match the sentiment category. Mismatches are where the next cycle's score movement usually comes from.
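A minimal sketch of the mismatch set, assuming each response already carries a sentiment label from an upstream classifier. Field names are illustrative; the two flags mirror the two mismatch types named above.

```python
def score_category(score):
    """Map a 0-10 score to the standard NPS category."""
    if score >= 9:
        return "promoter"
    if score >= 7:
        return "passive"
    return "detractor"

def find_mismatches(responses):
    """Flag respondents whose score category disagrees with text sentiment.

    `responses`: dicts with (assumed) `score` and `sentiment` keys, where
    `sentiment` comes from any upstream NLP classification step.
    """
    flagged = []
    for r in responses:
        cat = score_category(r["score"])
        if cat == "passive" and r["sentiment"] == "negative":
            flagged.append({**r, "flag": "detractor-in-transition"})
        elif cat == "detractor" and r["sentiment"] == "positive":
            flagged.append({**r, "flag": "recoverable"})
    return flagged
```

Ordered by recency, this flagged list is the mismatch queue described in the dashboard section.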

What is an NPS report?

An NPS report is a structured summary of NPS analysis outputs — typically for leadership, funders, or program boards. A defensible report includes the aggregate score with confidence interval, the segment distribution, the top 3 themes from Detractor responses with verbatim examples, the cycle-over-cycle trend across at least three cycles, and the specific actions taken in response to previous findings.

How do you create an NPS dashboard?

Create an NPS dashboard organized around segment distribution (not aggregate score). Four panels make a complete dashboard: segment NPS grid, theme frequency per segment, longitudinal trend strip across 4–8 cycles, and a mismatch queue flagging Passives with negative language and Detractors with constructive feedback. This architecture turns NPS from a metric into a workflow.

How do you link NPS scores to the qualitative feedback that explains them?

Link NPS scores to qualitative feedback through persistent stakeholder IDs assigned at first contact — not through post-hoc export-and-match. When a respondent submits a score plus an open-text comment, both arrive attached to the same ID and automatically linked to every other data point about that respondent. Anonymous surveys make this impossible by architecture.
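A minimal sketch of ID-based linking, assuming both record sets share a persistent `stakeholder_id` field (the field name is hypothetical). Responses without a matching profile surface as orphans rather than silently dropping, which is the failure mode export-and-match workflows hide.

```python
def link_responses(profiles, responses):
    """Attach each score-plus-comment to the respondent's full profile.

    Assumes both record sets carry the same persistent `stakeholder_id`
    assigned at first contact (illustrative field name).
    """
    by_id = {p["stakeholder_id"]: p for p in profiles}
    linked, orphans = [], []
    for r in responses:
        profile = by_id.get(r["stakeholder_id"])
        if profile is None:
            orphans.append(r)  # no persistent ID match: cannot attribute
        else:
            linked.append({**profile, **r})
    return linked, orphans
```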

What are the best NPS analysis tools?

The best NPS analysis tool delivers all four analysis methods on the same data schema — segmentation, sentiment, theme extraction, and longitudinal trend — rather than covering only the score layer. Tools that require separate systems for survey, text analysis, and CRM produce analysis that is always one reconciliation cycle behind reality. Unified-schema platforms produce current analysis as of the last response.

What is The Segment Blind Spot?

The Segment Blind Spot is the structural failure that occurs when NPS is reported as a single aggregate number, averaging over fundamentally different stakeholder populations. An NPS of 47 composed of one segment at −8, another at 22, and a third at 62 is three management situations compressed into one reassuring number. Closing the Blind Spot requires segment attributes at collection, not retroactive filtering.

How do you analyze NPS detractors?

Analyze NPS Detractors by extracting themes from their open-text responses within each segment — not just across all Detractors company-wide. A theme that concentrates in one customer segment or demographic group is a targeted intervention; the same theme distributed evenly is a company-wide communication issue. Mismatch detection flags Detractors with constructive feedback as recoverable within the 48-hour Recovery Window.

How much do NPS analytics tools cost?

NPS analytics tools range from free (Google Forms export to Excel, basic SurveyMonkey) to enterprise ($30K–$150K/year for Qualtrics XM, Medallia, Chattermill). The cost driver isn't the score dashboard — it's the qualitative analysis layer, persistent stakeholder IDs for segment attribution, and the infrastructure that reads open-text as responses arrive. Sopact Sense delivers all four analysis methods on one schema at $1,000/month — substantially below enterprise CX stacks.

Ship NPS Analysis, Not Just Scoring
Segment the distribution. Theme the verbatims. Track the direction.

Close the Segment Blind Spot with the three architectural choices that turn a score into a decision: segment-first dashboard architecture, themes extracted from open-text within hours, and per-segment trajectories across at least three cycles. One data schema, not three disconnected systems.

Stage 01 · Segment
Segment-first architecture

Segment attributes captured at intake via persistent stakeholder IDs — cohort, customer type, demographic, tenure. Every subsequent response carries the full attribute set automatically. Segment distribution is the default dashboard view; the aggregate is one line.

Stage 02 · Theme
Themes within hours

Intelligent Column analyzes open-text responses on the same schema as the score — no 3–4 week coding sprints, no analyst bottleneck, no inter-rater reliability drift. Theme frequency per segment, attribution links back to source responses, theme trajectory across cycles.

Stage 03 · Track
Direction per segment

Minimum 3-cycle trajectories per segment — convergence and divergence flagged automatically. Position tells you where a segment sits; direction tells you whether the intervention is working. Segment-level time series, not an aggregate line with a filter.

  • Segment-first dashboard — aggregate displayed as one line, not as the headline. Leadership trains on distributions, not averages.
  • Themes extracted per segment, not just across aggregate — concentrated vs. distributed signal drives which intervention belongs in which playbook.
  • Segment trajectories across cycles — position AND direction, with mismatch queue flagging Passives-in-transition.
One intelligence layer — powered by Claude, OpenAI, Gemini, watsonx. Segmentation, sentiment, theme extraction, and longitudinal trend on the same data fabric.