NPS Survey Questions: Wording, Examples, and Follow-Ups That Actually Diagnose
Your NPS score came back at +32. Leadership wants to know why it dropped four points from last quarter. You open the responses — 847 of them — and the follow-up field shows "ok," "good," "nothing," and two hundred blanks. You have a number. You have no intelligence. This is The Follow-Up Void: when an NPS program is built around the core question but treats the follow-up as an afterthought, leaving organizations with a score they can report but cannot act on.
Last updated: April 2026
An NPS survey is fundamentally a two-question instrument — the standard 0–10 likelihood-to-recommend question plus one diagnostic open-ended follow-up — but most teams either collapse it to one question (no diagnostic) or bloat it to 10+ questions (response rate crashes). This guide covers the standard NPS question wording, 12 follow-up question examples by intent, survey length benchmarks, and how to avoid The Follow-Up Void that produces dashboards full of scores with no story behind them.
NPS Survey Questions · 2026 Edition
An NPS survey is a two-question instrument. Most teams get the second question wrong.
The standard 0–10 likelihood-to-recommend question is solved — it has been for twenty years. The diagnostic follow-up question is where most NPS programs collapse. This guide covers the canonical NPS question wording, 12 follow-up question examples by intent, survey length benchmarks, and the wording decisions that separate scoring from diagnostic signal.
The Follow-Up Void is the structural gap between collecting a score and collecting an explanation. It appears in three forms: the generic-prompt failure ("What is the reason for your score?" produces "ok," "good," and blanks), the anonymity failure (conditional branching without persistent stakeholder IDs — you know a detractor exists, but you cannot reach them), and the volume failure (adding 5+ questions cuts completion rates by 40–60%). The alternative isn't asking more questions — it's asking the right one, selected by diagnostic intent and analyzed as responses arrive rather than weeks later.
2 questions
optimal NPS survey length — score + one targeted follow-up
40–60%
completion-rate drop when NPS surveys exceed 3 questions
12
follow-up question examples in the library below — by intent and context
0–10
the only valid NPS scale — not 1–5, not 1–7, not stars
What is an NPS survey question?
An NPS survey question is the standardized 0–10 likelihood-to-recommend question developed by Fred Reichheld at Bain & Company in 2003. The canonical wording is: "On a scale of 0 to 10, how likely are you to recommend [organization/program/service] to a friend or colleague?" Respondents scoring 9–10 are classified as Promoters, 7–8 as Passives, and 0–6 as Detractors. The Net Promoter Score is calculated as the percentage of Promoters minus the percentage of Detractors.
The question has a specific structure that matters. The 0–10 scale must not be compressed or reversed — a 1–5 scale breaks the segmentation thresholds and produces results that are not comparable to any benchmark. The "friend or colleague" framing is deliberate: it anchors the recommendation to a real relationship rather than an abstract endorsement. And the object of the recommendation matters more than most teams realize — "our company" produces different scores than "this specific program" because respondents evaluate different things. Full calculation methodology is covered in the NPS calculation guide.
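For teams wiring the score into a reporting pipeline, the classification and arithmetic are simple enough to verify in a few lines. A minimal sketch in Python; the function name and input shape are illustrative, not from any particular library.

```python
def nps(scores: list[int]) -> float:
    """Net Promoter Score from raw 0-10 responses.

    Promoters score 9-10, Passives 7-8, Detractors 0-6.
    NPS = %Promoters - %Detractors, reported on a -100..+100 scale.
    """
    if not scores:
        raise ValueError("no responses")
    if any(s < 0 or s > 10 for s in scores):
        raise ValueError("NPS responses must be on the 0-10 scale")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

# 4 promoters, 3 passives, 3 detractors out of 10 responses -> +10
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 3, 0]))  # 10.0
```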
What is the standard NPS question?
The standard NPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization/program/service] to a friend or colleague?" with anchor labels 0 = Not at all likely and 10 = Extremely likely. This is the validated wording from Reichheld's original research and the phrasing most industry benchmarks are built on. Variants exist — "to a peer," "to someone you know," "to your network" — but departing from the standard introduces measurable score variation that makes benchmark comparisons unreliable.
Three elements of the standard question are load-bearing. Scale integrity: keep 0–10 numeric, with numbers visible, not compressed to five stars or emoji. Label placement: anchor only the endpoints; labeling every point inflates passive scores. Object of measurement: name what is being recommended specifically — "this training program" or "our pediatric clinic" — not the organization as a whole when the respondent only interacted with one part of it. These three choices, applied consistently across cycles, produce scores you can actually compare over time.
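Those three load-bearing choices are easiest to keep consistent across cycles when the question is locked down as data rather than retyped each quarter. A minimal sketch, with illustrative field names that are not any survey tool's actual schema:

```python
# Canonical NPS question pinned as a config object (field names are illustrative).
NPS_QUESTION = {
    # One specific object of recommendation, not "our company as a whole".
    "object": "this training program",
    "text": ("On a scale of 0 to 10, how likely are you to recommend "
             "this training program to a friend or colleague?"),
    "scale": list(range(11)),  # 0-10 numeric, numbers visible; never stars or 1-5
    # Anchor only the endpoints; labeling every point inflates passive scores.
    "anchors": {0: "Not at all likely", 10: "Extremely likely"},
}
```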
NPS survey question examples
A well-designed NPS survey contains exactly two questions: the standard 0–10 scale question and one targeted open-ended follow-up. Additional questions (demographics, segmentation, secondary ratings) should come from the stakeholder record, not from adding more items to the survey. The widget below shows the canonical NPS question plus 12 example follow-up questions organized by intent — diagnostic, segmentation, recovery, and activation — for four survey contexts: customer NPS, employee NPS (eNPS), transactional NPS, and relational NPS.
Question Library · 4 Contexts
NPS survey questions, filterable by context
Start with the canonical question for your context, then pick one follow-up by intent — diagnostic, recovery, activation, or milestone. Two questions total.
Standard · Customer NPS
Reichheld 2003 · validated wording
"On a scale of 0 to 10, how likely are you to recommend [organization/product/service] to a friend or colleague?"
0 — Not at all likely
0 · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 · 9 · 10
Extremely likely — 10
Wording rules: 0–10 numeric scale (not 1–5, not stars), anchor only the endpoints, "friend or colleague" framing (not "peer" or "network"), name one object (a specific product or program, not "our company as a whole").
Standard · Employee NPS (eNPS)
Bain adaptation for employee research
"On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?"
0 — Not at all likely
0 · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 · 9 · 10
Extremely likely — 10
Wording rules: "place to work" (not "employer" or "company") tests better in response rate and produces more candid scores. eNPS tolerates 3–5 total questions where customer NPS demands 2 — but the follow-up discipline still applies.
Standard · Transactional NPS
Event-anchored · 24–48h post-event
"Based on your recent [event — e.g., support interaction, onboarding, purchase], on a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?"
0 — Not at all likely
0 · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 · 9 · 10
Extremely likely — 10
Wording rules: name the specific event the score refers to. Send within 24–48h — outside that window the respondent's emotional anchor has faded. Two questions maximum; response rate collapses past that at transactional cadence.
Standard · Relational NPS
Quarterly or annual · full-relationship view
"Thinking about your overall experience with [organization/program] over the past [period], on a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?"
0 — Not at all likely
0 · 1 · 2 · 3 · 4 · 5 · 6 · 7 · 8 · 9 · 10
Extremely likely — 10
Wording rules: "overall experience over the past [period]" frames the full relationship. Scores run lower than transactional NPS from the same customer base because the time window captures both good and bad events. Don't compare relational scores to transactional benchmarks.
Diagnostic follow-ups — understand the score
Pick one · do not ask multiple
Diagnostic
"What is the most important reason for the score you gave?"
When to use. The default diagnostic follow-up. "Most important reason" outperforms "why" because it focuses attention and produces shorter, more actionable responses.
Diagnostic
"What would make you rate us a 10 next time?"
When to use. Forward-facing diagnostic. Produces improvement-oriented rather than complaint-oriented responses. Works across all score levels.
Diagnostic
"Which one aspect of [working here / our program] most influenced your score?"
When to use. Narrows the response to a specific dimension. Works well when you already have a hypothesis about which dimensions are driving scores.
Diagnostic
"In one sentence, what drove your score?"
When to use. Brevity cue ("in one sentence") improves response quality and completion. Works especially well in transactional surveys where response rate is fragile.
Recovery follow-ups — detractor-focused
For programs targeting detractor recovery
Recovery
"What would need to change for you to recommend us in the future?"
When to use. Forward-facing recovery prompt. Reduces defensiveness vs. a direct "why the low score" and generates responses that map to fixable issues rather than grievances.
Recovery
"What one thing would move your score from where it is today to a 10?"
When to use. Single-fix framing. Produces specific, ranked responses rather than a list of everything wrong. Maps directly to prioritized action items.
Recovery
"Is there something specific we could have done differently during your experience?"
When to use. Retrospective, conversational tone. Best for relational cadences where you want the respondent to feel heard, not interrogated. Pair with named-detractor outreach.
Activation follow-ups — promoter-focused
Turn high scores into testimonials and referrals
Activation
"What would you tell a friend or colleague about us if they were considering us?"
When to use. Produces testimonial-quality language in the respondent's own words. Collect permission to use quotes. Works best when shown conditionally to promoters (9–10).
Activation
"What would you tell a friend about working here if they asked?"
When to use. eNPS variant of the customer activation prompt. Good source of employer-brand language and candidate-facing quotes (with permission). Show to 9–10 scorers only.
Milestone follow-ups — tie score to a specific moment
Best for transactional and program-milestone NPS
Milestone
"Which part of [event — e.g., onboarding, support] most influenced your score?"
When to use. Ties the score directly to a specific program or service component. Enables component-level improvements rather than generic satisfaction tracking.
Milestone
"What matters most to you in [category — e.g., your role, our program] right now?"
When to use. Discovery prompt. Surfaces priorities that haven't yet been captured in structured tracking. Strong signal for program roadmap decisions.
Milestone
"Compared to your previous experience with us, did this feel better, worse, or the same — and why?"
When to use. Anchors the current score against the respondent's own history. Produces per-respondent trajectory signal that aggregate NPS movement cannot show.
The two-question rule. Pick one canonical question for your context, then pick exactly one follow-up by intent. Every additional closed-ended question cuts completion rates by 10–15%; segmentation data belongs in the stakeholder record, not in the survey form.
An NPS survey should contain exactly 2 questions: the 0–10 likelihood-to-recommend question plus one targeted open-ended follow-up. Research from Qualtrics and CustomerGauge shows completion rates drop 40–60% when surveys exceed three questions, and the diagnostic value of additional closed-ended items is typically weaker than the signal from a well-designed follow-up. Teams asking "what about demographics?" or "what about frequency of use?" should pull that data from the stakeholder record rather than asking respondents to re-enter it.
The exception is channel-specific. Transactional NPS (immediately after a customer event) tolerates exactly 2 questions — any more and response rate collapses because the respondent is mid-task. Relational NPS (quarterly or annual) can tolerate up to 3 questions if the third is a single "what matters most to you right now" prompt. Employee NPS (eNPS) can run 3–5 questions because employees expect longer internal surveys, but the discipline of keeping the follow-up to one question still holds.
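Where survey definitions live in config, these caps can be enforced mechanically instead of by review. A hypothetical guard using the limits quoted above:

```python
# Maximum question counts per context, per the benchmarks cited above.
MAX_QUESTIONS = {"customer": 2, "transactional": 2, "relational": 3, "enps": 5}

def check_length(context: str, questions: list[str]) -> None:
    """Fail fast when a survey definition exceeds its channel's cap."""
    cap = MAX_QUESTIONS[context]
    if len(questions) > cap:
        raise ValueError(
            f"{context} NPS survey has {len(questions)} questions; cap is {cap}. "
            "Pull segmentation from the stakeholder record instead."
        )
```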
Survey Design Discipline · 6 Principles
NPS survey best practices — what separates diagnostic surveys from scoring dashboards
Six design principles, every one of them backed by 20 years of NPS research. Apply all six and the survey becomes a decision tool. Skip any of them and the Follow-Up Void sets in.
01
Scale
Keep the scale at 0–10 numeric — no stars, no 1–5, no emoji
The 0–10 scale is load-bearing. Compressed scales (1–5, 1–7) break the promoter/passive/detractor thresholds and destroy benchmark comparability. Star and emoji scales eliminate numeric interpretation entirely. Override your survey tool's default if needed.
Typeform and SurveyMonkey default to stars — they need to be switched to numeric 0–10.
02
Wording
Use the standard wording — variations drift scores unpredictably
"How likely are you to recommend [object] to a friend or colleague?" is the validated Reichheld wording. Alternatives like "would you tell someone about," "rate our service," or "how would you describe" produce measurable score variation mistaken for real change in loyalty.
Small wording changes produce 5–10 point score differences on the same respondent base.
03
Length
Cap the survey at 2 questions for customer NPS, 3–5 for eNPS
Completion rates drop 40–60% past three questions in customer NPS surveys. Every demographic or segmentation item you add is a trade-off. Pull segmentation from the stakeholder record instead. eNPS tolerates more questions because employees expect longer internal surveys.
A 5-question "NPS" survey is not an NPS survey — it's an engagement survey with an NPS question in it.
04
Follow-Up
Pick one follow-up — by diagnostic intent, not by default
"What's the reason for your score?" is a placeholder, not a diagnostic. Select by intent: diagnostic for understanding drivers, recovery for detractors, activation for promoters, milestone for transactional. One follow-up question, selected deliberately, beats any five-question panel.
Generic follow-ups produce 40–60% blank-response rates — architecturally self-defeating.
05
Identity
Collect identified responses — anonymous scoring closes the Recovery Window
Anonymous NPS produces a number. Identified NPS (with persistent stakeholder IDs) produces a named detractor list you can contact within the 48-hour Recovery Window. If your platform requires anonymity, you cannot close the loop — the score is a report, not a workflow.
"Anonymous for honesty" is usually a rationalization for infrastructure that can't link responses to records.
06
Analysis
Theme the follow-up responses as they arrive — not at quarter's end
Manual coding of open-ended responses takes weeks. By the time themes are ready, the Recovery Window has closed and the cycle's score has already been reported. Automated theme extraction on the same schema as the score is the architectural answer.
A score reported without themes is a headline without a story — reporters don't stop there, and programs shouldn't either.
Apply all six and the survey becomes a decision tool. Skip any of them and the Follow-Up Void sets in — dashboards full of scores with no story behind them.
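Principle 6 is concrete enough to sketch: theme extraction has to run per response, not per quarter. The shape looks roughly like this, with `extract_themes` and `store` standing in for whatever classifier and datastore a team actually uses (both hypothetical names, not a real API):

```python
from datetime import datetime, timezone

def on_response(response: dict, extract_themes, store) -> None:
    """Theme each follow-up on arrival so signal lands inside the Recovery Window."""
    themes = extract_themes(response["follow_up_text"])  # pluggable classifier or LLM call
    store.save(                                   # hypothetical store; any DB write works
        respondent_id=response["respondent_id"],  # persistent ID, never anonymous
        score=response["score"],
        themes=themes,
        themed_at=datetime.now(timezone.utc),     # minutes after receipt, not quarter's end
    )
```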
The best NPS follow-up question is the one selected based on what decision the data needs to inform — not a generic "why did you give this score?" placeholder. Four follow-up patterns consistently outperform the default. For understanding score drivers across all respondents: "What is the most important reason for the score you gave?" (focuses attention, produces shorter, more actionable responses). For detractor recovery: "What would need to change for you to recommend us in the future?" (forward-facing, reduces defensiveness). For promoter activation: "What would you tell a friend or colleague about us if they were considering us?" (produces testimonial-quality language). For milestone-specific surveys: "Which part of the program most influenced your score?" (ties feedback to specific components).
What to avoid: generic "why?" prompts (produce one-word answers), conditional branching without named IDs (you know a detractor exists but cannot reach them), and follow-up questions that re-ask the respondent what they already answered. The follow-up question budget for any NPS survey is one or two questions maximum — and those questions must be selected based on the diagnostic gap at that collection moment. Pair every follow-up with qualitative analysis that themes responses in hours, not weeks, or the diagnostic layer never reaches the people who could act on it.
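Selecting by intent is a small decision table, not a fresh judgment call each cycle. A sketch using prompts from the library above; the score thresholds follow the standard promoter/detractor cutoffs, and the mapping itself is illustrative:

```python
FOLLOW_UPS = {
    "diagnostic": "What is the most important reason for the score you gave?",
    "recovery": "What would need to change for you to recommend us in the future?",
    "activation": ("What would you tell a friend or colleague about us "
                   "if they were considering us?"),
    "milestone": "Which part of the program most influenced your score?",
}

def follow_up_for(score: int, survey_type: str) -> str:
    """Pick exactly one follow-up by intent; acting on it requires identified responses."""
    if survey_type == "transactional":
        return FOLLOW_UPS["milestone"]  # tie the score to the specific event
    if score <= 6:                       # detractor: recover the relationship
        return FOLLOW_UPS["recovery"]
    if score >= 9:                       # promoter: harvest testimonial language
        return FOLLOW_UPS["activation"]
    return FOLLOW_UPS["diagnostic"]      # passive: understand the driver
```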
How to word NPS questions correctly
Word NPS questions for the stakeholder, not the organization. Nonprofit participants don't think of themselves as customers recommending a company — reframing "our company" to "this program" or "our services" lifts response rates and produces more authentic scores. B2B SaaS customers respond better to "your team at [company]" than to "our product." Healthcare patients respond to "your care experience" more honestly than to "our hospital system." The stakeholder's own language should drive the wording.
Three specific wording rules hold across every NPS context. One object per question: don't ask "how likely are you to recommend our product and our support team" — that's two objects, and the response conflates them. Active voice, present tense: "how likely are you to recommend" outperforms "would you recommend" because it anchors the response to current state. Neutral anchor labels: "Not at all likely / Extremely likely" is validated wording; "Would never / Would definitely" over-frames the extremes and inflates scores in prosocial contexts where respondents feel social pressure to be positive. See the NPS wording variations guide for context-specific adjustments.
Three Contexts · Three Question Architectures
How NPS survey questions change by context — customer, employee, transactional
Select your context to see the right canonical question, the right follow-up, and the common design mistake in each case.
A mid-market B2B SaaS company runs quarterly customer NPS across 850 accounts. The CS team uses the results to route detractor accounts to CSMs. The wrong architecture — anonymous survey with "why did you give this score?" — produces a dashboard number but no named detractor list. Switching to identified NPS with a recovery-intent follow-up turns the same 850 responses into a CSM task queue.
Right architecture · 2 questions · identified
"Thinking about your overall experience with [Product] over the past quarter, on a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?"
+ one diagnostic follow-up:
"What is the most important reason for the score you gave?"
Wrong architecture
Anonymous + generic follow-up
Anonymous NPS produces a dashboard number, not a named detractor list — CSMs cannot act
A 3,200-person healthcare system runs twice-yearly eNPS. HR uses results to drive department-level engagement planning. The wrong architecture — a 15-question "engagement survey" with eNPS buried inside — produces a composite number that isn't comparable to standalone eNPS benchmarks. Running eNPS as its own 3-question instrument with department-level segmentation pulled from HRIS produces both the benchmark-compatible score and the drill-down.
Right architecture · 3 questions · HRIS-linked
"On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?"
+ diagnostic follow-up:
"What one thing would move your score from where it is today to a 10?"
+ optional discovery prompt:
"What matters most to you in your role right now?"
Wrong architecture
eNPS buried in a 15-question engagement survey
Response rate at 15 questions: 32% — the 10 extra items cost more than they produce
eNPS score collected alongside other items is not comparable to standalone eNPS benchmarks
Department and tenure re-asked in the survey — already in HRIS — adds friction, reduces trust
Scores reported at system level only — 50-point department spread stays invisible
Right architecture
eNPS as its own 3-question instrument
Separate eNPS instrument from broader engagement work — benchmark-compatible score
3 questions maximum — ~72% response rate at this length
Department + tenure from HRIS — segmentation at collection, no re-asking
Department-level reporting surfaces the spread that drives retention planning
For HR and people analytics: eNPS is its own instrument, not a question inside engagement survey bloat. Three questions. HRIS-linked. Department-segmented.
A customer support team runs post-ticket NPS within 24 hours of every support close. Transactional NPS has the tightest Recovery Window of any context — the respondent's emotional anchor fades within 48 hours. The wrong architecture — a 5-question survey two weeks after close — produces scores anchored to recollection, not experience. A 2-question event-anchored survey sent the same day produces immediate, actionable signal.
Right architecture · 2 questions · within 24h
"Based on your recent support interaction [ticket #12847], on a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?"
+ one milestone follow-up:
"Which part of your support interaction most influenced your score?"
Wrong architecture
Delayed send · multi-question · generic follow-up
Survey sent two weeks post-close — emotional anchor to the event has faded
5 questions at transactional cadence — response rate drops to 18%
"Why?" as the follow-up — no link to the specific event the score refers to
Recovery Window closed before analyst sees detractor response
Right architecture
Same-day send · 2 questions · event-anchored
Within 24 hours of ticket close — respondent's anchor is fresh
Exactly 2 questions — 48%+ response rate at transactional cadence
Event-named follow-up ("Which part of your support interaction…") — ties score to specific component
Named detractor routing to the specific CSM who owned the ticket — loop closes inside the Recovery Window
For transactional NPS: speed matters more than comprehensiveness. Two questions, same day, event-anchored — inside the Recovery Window every time.
NPS survey examples by context (relational, transactional, eNPS)
Three NPS survey contexts require different question architecture. A relational NPS survey (quarterly or annual) uses the standard question with a diagnostic follow-up: "On a scale of 0 to 10, how likely are you to recommend [program] to a friend or colleague?" followed by "What is the most important reason for the score you gave?" A transactional NPS survey (triggered by a specific event) uses the standard question with an event-anchored follow-up: "On a scale of 0 to 10, how likely are you to recommend [program] after [specific event]?" followed by "What about [specific event] most influenced your score?" An employee NPS (eNPS) survey uses a recommend-as-employer variant: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" followed by "What would make you more likely to recommend us as an employer?"
The architectural difference matters because the wrong question architecture produces misleading scores. A transactional NPS collected immediately after a positive service event over-reports; a relational NPS from the same customer base weeks later under-reports by comparison. Mixing the two in a single "NPS trend" chart blurs signal and action. Pick the architecture that matches the decision and stay with it across cycles — consistency of methodology over time is more valuable than precision of wording at any single moment. Related context: NPS benchmarks vary by collection methodology.
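The practical consequence is to tag every response with its collection methodology and never aggregate across tags. A sketch of that guard in Python, with illustrative field names:

```python
from collections import defaultdict

def trend_series(responses: list[dict]) -> dict[str, dict[str, float]]:
    """One NPS trend per methodology - relational and transactional never mix."""
    by_method_period: dict[tuple, list[int]] = defaultdict(list)
    for r in responses:
        # ("relational", "2026-Q1") and ("transactional", "2026-Q1") stay separate
        by_method_period[(r["methodology"], r["period"])].append(r["score"])
    series: dict[str, dict[str, float]] = defaultdict(dict)
    for (method, period), scores in by_method_period.items():
        promoters = sum(s >= 9 for s in scores)
        detractors = sum(s <= 6 for s in scores)
        series[method][period] = 100 * (promoters - detractors) / len(scores)
    return series
```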
Survey Tool vs. Survey Architecture · 2026
Why off-the-shelf survey tools produce the Follow-Up Void
Most survey platforms are built for form-creation, not for the NPS-specific architecture that turns a score into a decision. Four common risks, then a capability comparison.
Risk 01
Generic default follow-up
"What is the reason for your score?" is a placeholder. 40% blank response rate. Produces categories so broad they cannot drive action.
SurveyMonkey, Qualtrics, Typeform all ship this prompt as the default.
Risk 02
Anonymous by default
Anonymous NPS closes the Recovery Window by architecture. You know a detractor exists — you can't reach them. The score is a report, not a workflow.
"Anonymity for honesty" is usually infrastructure that can't link to stakeholder records.
Risk 03
Scale contamination
Tools defaulting to 1–5 stars or emoji break the NPS threshold math. Scores can no longer be compared to any benchmark — and the team doesn't notice.
Typeform's default rating widget is 5-star, not 0–10 numeric. Override required.
Risk 04
Delayed qualitative analysis
Open-ended follow-up responses sit uncoded until quarterly review. Themes land after the Recovery Window closes. The signal is gone before anyone can act on it.
Manual coding by an analyst takes 2–4 weeks per cycle at scale.
NPS Survey Capability Comparison
What matters for actionable NPS — by capability
Capability · Generic survey tool · Sopact Sense

01 — Question Architecture · core question + diagnostic follow-up

Scale default (the 0–10 scale is load-bearing)
Generic survey tool: 1–5 stars or emoji default (Typeform and SurveyMonkey require manual override to get 0–10 numeric).
Sopact Sense: 0–10 NPS as the template default (anchor labels pre-set to Reichheld-validated wording).

Follow-up question design (selected by diagnostic intent, not by default)
Generic survey tool: generic "why?" prompt (results in 40% blank-response rates and unusable categories).
Sopact Sense: library of 12 intent-selected follow-ups (diagnostic, recovery, activation, milestone — pick by decision).

Survey length discipline (2 questions for customer NPS, 3–5 for eNPS)
Generic survey tool: unlimited questions, no guidance (survey bloat is the default path; completion rates collapse).
Sopact Sense: 2-question NPS template enforced by design (segmentation pulled from the stakeholder record, not re-asked).

02 — Response Infrastructure · identity · segmentation · analysis

Respondent identification (named detractors you can contact)
Generic survey tool: anonymous default, email-optional (the Recovery Window closes because no one can act on a detractor response).
Sopact Sense: persistent stakeholder IDs at first contact (every response linked to full respondent history automatically).

Segmentation variables (cohort, location, department, tenure)
Generic survey tool: re-asked in the survey form (adds items, drags down response rate, duplicates data already in CRM/HRIS).
Sopact Sense: pulled from the stakeholder record automatically (no extra survey items — segmentation is structural, not question-based).

Qualitative follow-up analysis (theme extraction from open-ended responses)
Generic survey tool: manual coding, weekly or quarterly (themes land after the Recovery Window has closed; analyst bandwidth-bound).
Sopact Sense: Intelligent Column themes within hours (themes land inside the 48h Recovery Window; no analyst required).

03 — Deployment & Pricing · time to live · cost structure

Time to first NPS cycle (from "we need NPS" to first actionable report)
Generic survey tool: 2–6 weeks typical (form build + integration + analyst coding setup).
Sopact Sense: days (NPS templates, stakeholder schema, and theme extraction ship together — all NPS infrastructure: identity, theme extraction, Recovery Window workflow).
Generic survey tools produce forms. NPS-specific architecture produces decisions — the difference is identity, segmentation, follow-up design, and analysis speed.
Two questions. Identified responses. Intent-selected follow-up. Themes extracted before the Recovery Window closes. That is the architecture that turns an NPS score into a decision — and it is not what generic survey tools are built for.
The Follow-Up Void: why most NPS programs produce scores, not answers
The Follow-Up Void is the structural gap between collecting a score and collecting an explanation. It appears in three predictable forms. The generic-prompt failure: "What is the reason for your score?" is a placeholder, not a diagnostic. It generates open-ended text that requires weeks of manual coding and produces categories so broad — "poor service," "good product" — they cannot drive specific action. SurveyMonkey, Qualtrics, and Typeform ship this prompt as the default because it is technically an open-ended question; it is not a diagnostic instrument.
The anonymity failure: programs that show promoters one follow-up and detractors a different one have the right structural instinct, but without persistent stakeholder IDs the responses remain anonymous. You know a detractor exists. You cannot reach them. The Recovery Window — the brief period when a detractor relationship is still recoverable — closes while the data sits in a pivot table. The volume failure: programs that add five or six additional questions to "get more data" drive completion rates down 40–60%. The follow-up question budget is one or two questions — and those questions must target the diagnostic gap that matters most at that collection moment. Anything more is architectural self-harm.
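Closing the anonymity failure is a data-model decision made at collection time: each response row needs a persistent stakeholder ID and a timestamp, so a Recovery Window queue can be computed the moment a detractor lands. A minimal sketch with illustrative names:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

RECOVERY_WINDOW = timedelta(hours=48)

@dataclass
class NPSResponse:
    stakeholder_id: str      # persistent ID assigned at first contact
    score: int               # 0-10
    follow_up_text: str
    received_at: datetime    # timezone-aware timestamp

def recovery_queue(responses: list[NPSResponse]) -> list[NPSResponse]:
    """Detractors whose Recovery Window is still open, oldest first."""
    now = datetime.now(timezone.utc)
    open_detractors = [
        r for r in responses
        if r.score <= 6 and now - r.received_at <= RECOVERY_WINDOW
    ]
    return sorted(open_detractors, key=lambda r: r.received_at)
```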
Frequently Asked Questions
What is an NPS survey question?
An NPS survey question is the standardized 0–10 likelihood-to-recommend question developed by Fred Reichheld at Bain & Company. The canonical wording is: "On a scale of 0 to 10, how likely are you to recommend [organization/program/service] to a friend or colleague?" Respondents scoring 9–10 are Promoters, 7–8 are Passives, 0–6 are Detractors.
What is the standard NPS question?
The standard NPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization/program/service] to a friend or colleague?" with anchor labels "0 = Not at all likely" and "10 = Extremely likely." This is the validated wording from Reichheld's original research and the phrasing most industry benchmarks are built on.
How many questions should an NPS survey have?
An NPS survey should contain exactly 2 questions: the 0–10 likelihood-to-recommend question plus one targeted open-ended follow-up. Completion rates drop 40–60% when surveys exceed three questions. Demographic and segmentation data should come from the stakeholder record, not from adding more items to the survey.
What are good NPS follow-up questions?
The best NPS follow-up questions are selected by intent. For understanding score drivers: "What is the most important reason for the score you gave?" For detractor recovery: "What would need to change for you to recommend us in the future?" For promoter activation: "What would you tell a friend or colleague about us if they were considering us?" For milestone surveys: "Which part of the program most influenced your score?"
What are examples of NPS follow-up questions?
Example NPS follow-up questions by intent: "What is the most important reason for your score?" (diagnostic), "What would need to change for you to recommend us?" (detractor recovery), "What would you tell a friend about us?" (promoter activation), "Which part of the program most influenced your score?" (milestone), "What matters most to you in [category] right now?" (discovery), "Which one change would move your score from where it is today to a 10?" (action).
How do you word the NPS question?
Word the NPS question for the stakeholder, not the organization. Use "this program" or "our services" rather than "our company" in nonprofit contexts. Keep the scale at 0–10 numeric; anchor only the endpoints ("Not at all likely" / "Extremely likely"); use active voice present tense ("how likely are you to recommend"); name one object per question, not multiple.
What are NPS survey best practices?
NPS survey best practices: use the standard 0–10 scale with anchored endpoints; keep the survey to exactly 2 questions (score plus one follow-up); pull segmentation data from the stakeholder record, not additional questions; select the follow-up wording by decision intent; collect within the Recovery Window (under 48 hours post-event for transactional); maintain methodology consistency across cycles; analyze qualitative follow-ups in hours, not weeks.
When should you send an NPS survey?
Send transactional NPS surveys within 24–48 hours of the customer event (support close, program milestone, purchase) — the Recovery Window narrows quickly after that. Send relational NPS surveys quarterly for customer programs, twice yearly for employee NPS (eNPS), and at program completion plus 90 days for educational/training contexts. Consistency of timing across cycles matters more than exact cadence.
What is NPS+?
NPS+ is an extension of the standard NPS survey that pairs the 0–10 score with a structured follow-up asking what specifically drove the score, often combined with a second follow-up asking what would improve it. NPS+ attempts to close The Follow-Up Void by design, but without persistent stakeholder IDs and qualitative analysis infrastructure, NPS+ surveys still produce anonymous responses that can't be acted on.
What are employee NPS (eNPS) survey questions?
The standard eNPS question is: "On a scale of 0 to 10, how likely are you to recommend [organization] as a place to work?" Common follow-ups: "What would make you more likely to recommend us as an employer?" (diagnostic), "Which aspects of working here most influenced your score?" (milestone), "What would need to change for your score to be a 10?" (action). eNPS surveys tolerate 3–5 questions; customer NPS surveys should stay at 2.
How do you design an NPS survey?
Design an NPS survey in four decisions. (1) Define the output — what decision will this data inform? (2) Set the context — customer, employee, transactional, relational. (3) Write the core question using validated wording for that context. (4) Select one follow-up question targeting the diagnostic gap that matters at this collection moment. Skip multi-question survey bloat; pull segmentation from the stakeholder record instead.
What is The Follow-Up Void?
The Follow-Up Void is the structural gap between collecting an NPS score and collecting an explanation that can drive action. It appears in three forms: generic follow-up prompts that produce unusable responses, anonymous collection that prevents detractor outreach, and survey bloat (5+ questions) that crashes completion rates. The Follow-Up Void is why most NPS programs produce dashboards but don't produce decisions.
How much does NPS survey software cost?
NPS survey software ranges from free (Google Forms, basic SurveyMonkey) to enterprise ($30K–$150K/yr for Qualtrics XM, Medallia). The cost driver isn't the survey form — it's the qualitative analysis layer, persistent stakeholder IDs for detractor follow-up, and the intelligence that reads the follow-up responses. Sopact Sense includes all of that at $1,000/month — substantially below enterprise CX platforms.
What's the difference between an NPS survey and a CSAT survey?
An NPS survey measures relational loyalty ("likelihood to recommend") on a 0–10 scale collected quarterly or annually. A CSAT survey measures transactional satisfaction ("how satisfied were you?") on a 1–5 or 1–7 scale collected immediately after a service event. NPS is directional and strategic; CSAT is tactical and diagnostic. Many programs benefit from running both — see NPS vs CSAT for the full comparison.
Ship NPS That Actually Diagnoses
Two questions. Named responses. Themes in hours.
Close the Follow-Up Void with the three architectural choices that turn a score into a decision: one core question plus one intent-selected follow-up, persistent stakeholder IDs at first contact, and qualitative theme extraction before the Recovery Window closes.
Stage 01 · Design
Two-Question Discipline
One canonical 0–10 question plus one follow-up selected by intent — diagnostic, recovery, activation, or milestone. Everything else comes from the stakeholder record, not from adding survey items.
Stage 02 · Collect
Identified Collection
Persistent stakeholder IDs assigned at first contact — so every detractor response is immediately contactable within the 48-hour Recovery Window. Anonymous collection closes the window by architecture; identified collection keeps it open.
Stage 03 · Diagnose
Themed As They Arrive
Intelligent Column theme extraction within hours, not weeks — so diagnostic signal lands inside the 48-hour Recovery Window. Manual coding at quarter's end is architecturally too late; the signal is gone before anyone reads it.
Canonical NPS question plus one intent-selected follow-up — not five "because we might need it" extras.
Stakeholder IDs assigned at first contact — not anonymous by default, not an afterthought.
Follow-up themes extracted before the Recovery Window closes — not two weeks after the detractor has moved on.
One intelligence layer — powered by Claude, OpenAI, Gemini, watsonx. Survey design, identified collection, and qualitative theme extraction in the same infrastructure.