
NPS Survey Questions: Examples & Best Practices 2026

Get proven NPS survey question examples, follow-up templates, and design best practices for nonprofits. Learn how to collect scores linked to stakeholder context automatically.

TABLE OF CONTENTS

Author: Unmesh Sheth

Last Updated:

March 27, 2026

Founder & CEO of Sopact with 35 years of experience in data systems and AI

NPS Survey Questions: Design, Examples, and Best Practices (2026)

Your NPS score came back at +32. Leadership wants to know why it dropped four points from last quarter. You open the responses — 847 of them — and the follow-up field shows: "What's your reason for this score?" answered with "ok," "good," "nothing," and two hundred blanks. You have a number. You have no intelligence.

That's The Follow-Up Void: when an NPS program is built around the core question but treats the follow-up as an afterthought, leaving organizations with a score they can report but can't act on. The follow-up question isn't decoration — it is the diagnostic layer that tells you whether a +32 reflects pricing friction, onboarding confusion, or a product gap that a competitor is about to exploit.

This guide covers every NPS survey question decision: what to ask, how to sequence questions, what follow-up architecture looks like for different programs, and how Sopact Sense collects and analyzes both quantitative scores and qualitative responses in one linked system — so you never export a spreadsheet hoping to match a comment to a score again.

The Follow-Up Void
When an NPS program is built around the core question but treats the follow-up as an afterthought — producing a score you can report but cannot diagnose or act on.
1. Define output before questions — What decision will this data inform?
2. Get the core question right — Scale, wording, object of measurement
3. Design the follow-up layer — One targeted diagnostic question
4. Collect and analyze as one — Score + qualitative, linked by ID

Build Your NPS Survey in Sopact Sense →

Step 1: Define What Your NPS Survey Needs to Produce Before You Write a Single Question

The worst NPS surveys are written backward — someone opens a survey tool, types the core question, adds "why?" and launches. Three hundred responses later, the data tells them nothing except what they already suspected.

Effective NPS survey design starts with the output question: what decision will this data inform? Program teams managing member satisfaction need different follow-up architectures than SaaS companies tracking feature adoption. Nonprofits measuring participant experience after a workforce training cohort need follow-up questions tied to specific program milestones, not generic "tell us more" prompts.

Before selecting a single question, identify: Who is being surveyed — program participants, donors, clients, employees? At what moment in the relationship — post-enrollment, mid-program, post-completion, 90 days after exit? What segment breakdown matters for action — by location, by cohort, by program type? Sopact Sense assigns a unique stakeholder ID at first contact, so every NPS response collected inside the platform arrives already linked to the respondent's full program history — no manual matching required.

Describe your situation · What to bring · What Sopact Sense produces

High volume · No diagnostic data
"I collect NPS but can't explain why the score is what it is"
Who: Program directors · M&E leads · Nonprofit CEOs
I'm the program director at a workforce development nonprofit with 400+ participants per cohort. We run a quarterly NPS survey and report the score to our funder. Last quarter the score dropped six points and leadership asked me what changed. I opened 300 responses — 200 of them left the follow-up blank, and the rest said things like "ok" and "it was fine." I have a number I can't explain and a funder expecting answers.
Platform signal: Sopact Sense is the right tool. Collect the core NPS question and a targeted follow-up in one form linked to stakeholder IDs. Intelligent Column extracts themes automatically — no manual coding.

Survey design · Starting from scratch
"I'm launching an NPS program and need to know what questions to include"
Who: New program managers · Evaluation consultants · Data leads
I'm the evaluation lead at a community health organization launching our first NPS program across three service lines. I've read that we should ask one question, but our leadership wants to know why patients rate us the way they do, which cohorts have the most detractors, and what we can do about it. I need a survey design that answers all three without a six-question survey that nobody finishes.
Platform signal: Sopact Sense lets you build a two-question instrument (core + one targeted follow-up) and collect segmentation data automatically from the stakeholder record — no extra questions needed.

Small program · Simple needs
"I have fewer than 50 participants — do I need a platform for this?"
Who: Small nonprofits · Pilot programs · Single-cohort evaluations
I run a mentorship program with 35 participants. I want to collect an NPS after our program completion. I'm not sure I need a sophisticated platform — a Google Form might be enough. What I do know is that if I can't follow up with specific detractors, the data won't change anything.
Platform signal: For fewer than 50 participants with no longitudinal tracking needs, a simple form with a follow-up question may be sufficient. If you need to link scores to individual participant histories, track changes over cohorts, or follow up with specific detractors, Sopact Sense is the better fit even at small scale.
Decision to be made
What action will the data inform? Name the specific decision before writing a single question.
Collection moment
Post-enrollment, mid-program, post-completion, or 90-day follow-up? Timing determines the object of the question.
Stakeholder profiles
What demographic and program variables do you need to segment results? If they're in Sopact Sense, you don't need to ask.
Prior cycle data
Do you have previous NPS results to compare against? Historical score and theme data shape what your follow-up should target.
Delivery channel
Email, SMS, or in-portal? Delivery channel affects completion rates and determines how to keep the survey short.
Follow-up protocol
Who will contact detractors and when? If no follow-up process exists, identified collection is less valuable than it should be.
Multi-program consideration: If you run five or more program lines, consider whether a single NPS instrument works across all of them or whether you need program-specific follow-up questions with a shared core question.
From Sopact Sense
Linked score + qualitative response
Every NPS response collected in Sopact Sense arrives linked to the stakeholder's unique ID — score and open-ended follow-up in one record, no matching step.
Auto-segmented promoter / passive / detractor breakdown
Intelligent Grid segments the score distribution by any stakeholder variable — cohort, location, program type — without manual pivot tables.
Theme extraction from open-ended responses
Intelligent Column analyzes follow-up text in minutes — top themes per segment, no manual coding required for any volume of responses.
Detractor identification for follow-up
Named detractors (not anonymous scores) with their follow-up text, enabling targeted outreach within 48 hours of collection.
Longitudinal score trend
NPS tracked over multiple collection cycles per stakeholder — see whether individual scores improve over program milestones.
Shareable live dashboard
Funder-facing dashboard reflecting current NPS and theme data — no quarterly export, no manual deck preparation.
Follow-up prompt suggestions
Detractor recovery: "Show me the top three themes in detractor follow-up responses for our Q3 cohort and which program touchpoints they reference most."
Promoter activation: "Which promoters in the last cohort gave the most detailed follow-up responses? I'd like to reach out for testimonials."
Survey optimization: "What percentage of our follow-up responses were blank or fewer than five words? Help me rewrite the follow-up question."

The Follow-Up Void: Why Most NPS Programs Produce Scores, Not Answers

The Follow-Up Void is the structural gap between collecting a score and collecting an explanation. It appears in three predictable forms.

The generic prompt failure. "What is the reason for your score?" is not a follow-up question — it is a placeholder. It generates open-ended text that requires weeks of manual coding and produces categories so broad ("poor service," "good product") they cannot drive a specific action. SurveyMonkey and Qualtrics ship this prompt as a default because it is technically an open-ended question. It is not a diagnostic instrument.

The conditional branch failure. Programs that show promoters one follow-up and detractors a different one have the right structural instinct but the wrong execution. Without unique stakeholder IDs, conditional routing still produces anonymous responses that cannot be used for follow-up outreach. You know a detractor exists. You cannot reach them.

The volume failure. Programs that add five or six additional questions to "get more data" drive completion rates down 40-60%. The follow-up question budget for any NPS survey is one or two questions maximum — and those questions must be selected based on what segment or diagnostic gap matters most at that collection moment.

Sopact Sense collects the follow-up question response in the same form as the core NPS question, linked to the same stakeholder record. Intelligent Column analyzes qualitative responses automatically — themes emerge in minutes, not weeks. The result is a score that arrives with a diagnostic explanation already attached.

Step 2: The Core NPS Question — What to Get Right

The standard NPS question is: "On a scale of 0 to 10, how likely are you to recommend [Organization/Program/Service] to a friend or colleague?" Every word in that sentence is load-bearing.

Scale integrity. The 0-10 scale must not be compressed or reversed. A 1-5 scale breaks the promoter/passive/detractor segmentation — the scoring thresholds no longer apply, and your results will not be comparable to any external benchmark. If your form tool defaults to 1-10, override it to 0-10. If it defaults to stars or emoji, switch to numeric. Platforms like Google Forms and Typeform allow 0-10 numeric scales but require manual configuration.
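The segmentation the 0-10 scale supports can be sketched in a few lines. This is a minimal illustration of the standard thresholds (0-6 detractor, 7-8 passive, 9-10 promoter), not Sopact Sense code:

```python
def classify(score: int) -> str:
    """Standard NPS segmentation on the validated 0-10 scale."""
    if not 0 <= score <= 10:
        raise ValueError(f"NPS uses a 0-10 scale; got {score}")
    if score <= 6:
        return "detractor"   # 0-6
    if score <= 8:
        return "passive"     # 7-8
    return "promoter"        # 9-10
```

A compressed 1-5 scale has no defensible mapping onto these thresholds, which is why compressed scores cannot be compared against published benchmarks.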

Label placement. Anchor labels matter. "0 = Not at all likely / 10 = Extremely likely" is the validated phrasing; labels like "Very unlikely" and "Very likely" are acceptable functional equivalents. "Would never recommend" and "Would definitely recommend" over-frame the extremes and subtly inflate scores in prosocial contexts (nonprofits, education programs) where respondents feel social pressure to be positive.

Object of measurement. "Recommend us" is vague for multi-program organizations. "Recommend [specific program name] to a friend or colleague who might benefit from it" ties the score to a specific experience, which produces actionable data. Sopact's guide to measuring NPS covers how to scope the NPS question to specific programs and touchpoints.

Default template wording. Qualtrics templates default to "our company" as the object of measurement. For nonprofits and social sector organizations, this wording generates low response rates — participants don't think of themselves as customers recommending a company. Reframe to "our program," "our services," or "this training" based on context.

Step 3: NPS Follow-Up Question Design — The Diagnostic Layer

One follow-up question. Selected based on what decision it needs to inform. Written to be answered in one or two sentences without coaching.

For understanding score drivers across all respondents: "What is the most important reason for the score you gave?" This version outperforms "why did you give this score?" because "most important reason" focuses attention and produces shorter, more actionable responses.

For detractor recovery programs: "What would need to change for you to recommend [program] in the future?" This is forward-facing, which reduces defensiveness and generates improvement-oriented responses rather than complaints.

For promoter referral activation: "What would you tell a friend or colleague about [program] if they were considering it?" This produces testimonial-quality language that can be used in outreach with the respondent's permission.

For post-training or milestone surveys: "Which part of [training/program] most influenced your score?" This ties qualitative feedback directly to program components, enabling curriculum-level improvements rather than generic satisfaction tracking.

The NPS benchmarks page covers how to interpret follow-up themes across industry ranges — so you can identify whether your detractor feedback clusters match patterns typical for your sector or signal a unique program problem.

Sopact Sense collects these follow-up responses in the same instrument as the core NPS question and runs Intelligent Column analysis automatically. A program director running a 300-person workforce training cohort does not code responses manually β€” the platform identifies the top five themes within each promoter/passive/detractor group and surfaces them in the dashboard.

1. Generic follow-up produces unusable data — "Why did you give this score?" generates open text that takes weeks to manually analyze and rarely surfaces actionable program-level insights.
2. Anonymous responses prevent detractor follow-up — Without stakeholder IDs linked to responses, you know detractors exist but cannot reach them, eliminating the highest-value action NPS enables.
3. Too many questions collapse completion rates — Each additional question past two reduces completion by 10-20%. Programs that add five questions for "richer data" get 40% response rates on worse data.
4. Score arrives disconnected from program context — Without participant history attached, a score of 7 from a first-week participant and a score of 7 from a program completer look identical. They're not.
Design element | Generic survey tools (SurveyMonkey / Typeform) | Sopact Sense
Follow-up question | Default "why did you give this score?" — generic, requires manual coding | Targeted follow-up designed per collection moment; AI theme extraction built in
Stakeholder linking | Anonymous by default; manual ID matching requires exports and VLOOKUP | Unique stakeholder ID assigned at first contact — score arrives pre-linked
Segmentation | Requires additional questions or manual export to segment by cohort / location | Stakeholder profile variables auto-populate segmentation — no extra questions
Qualitative analysis | Manual read and code — 2-4 weeks for 300+ responses | Intelligent Column extracts themes in minutes at any volume
Detractor follow-up | Not possible without de-anonymizing responses post-collection | Named detractors with contact info and follow-up text — actionable immediately
Longitudinal tracking | Separate survey rounds; manual matching across time periods | Score history per stakeholder — trend visible at individual and cohort level
Linked NPS score + follow-up response per stakeholder record
Promoter / passive / detractor breakdown by any stakeholder segment
Top follow-up themes per segment — no manual coding
Named detractor list for targeted follow-up within 48 hours
Longitudinal score trend per participant across program milestones
Funder-shareable live dashboard — always current, no export required
See how NPS survey design connects to benchmarking: NPS Benchmarks by Industry →

Step 4: Additional NPS Survey Questions for Segmentation and Context

Beyond the core and follow-up questions, two categories of additional questions produce reliably high analytical value.

Segmentation variables. If Sopact Sense is your collection platform, demographic and program variables are already linked to the stakeholder record — you do not need to ask respondents their cohort, location, or program type. If you are using a standalone survey tool, include no more than two segmentation questions: the variables most critical for the decisions you need to make. For a workforce program, that might be "Which training track did you complete?" and "What is your current employment status?" For a scholarship program, it might be "Which cohort are you in?" and "What is your field of study?"

Milestone context questions. For transactional NPS surveys (collected after a specific program event rather than at the relationship level), add one question that anchors the score to that event: "How satisfied were you with [specific milestone] on a scale of 1-5?" This produces a correlation data point — you can see whether participants who rated the milestone poorly are more likely to become detractors, which lets you intervene before the program relationship deteriorates. The NPS vs CSAT comparison covers when to pair milestone satisfaction scores with NPS rather than running them independently.
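That correlation check needs nothing more than a cross-tab of milestone ratings against detractor status. A minimal sketch, using hypothetical linked records (a milestone rating 1-5 paired with the NPS score 0-10 for the same stakeholder):

```python
from collections import Counter

# Hypothetical (milestone_rating, nps_score) pairs for one cohort.
responses = [(1, 3), (2, 5), (2, 6), (3, 7), (4, 9), (5, 10), (5, 9), (1, 4)]

def detractor_rate_by_milestone(rows):
    """Share of detractors (NPS 0-6) within each milestone-rating bucket."""
    totals, detractors = Counter(), Counter()
    for rating, score in rows:
        totals[rating] += 1
        if score <= 6:
            detractors[rating] += 1
    return {r: detractors[r] / totals[r] for r in sorted(totals)}
```

In this toy data, every participant who rated the milestone 1 or 2 lands in the detractor range — exactly the early-warning signal the paragraph describes.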

What not to add. Do not add: multiple open-ended questions (response fatigue collapses completion), satisfaction rating scales on three or four dimensions (these belong in CSAT instruments, not NPS), or demographic questions already collectible from your data system. Every additional question reduces completion rate. If your stakeholders are already in Sopact Sense with complete profiles, the survey can stay at two questions — core NPS and one follow-up — and produce richer analysis than a six-question anonymous survey.

Step 5: NPS Survey Design Tips, Troubleshooting, and Common Mistakes

Send at the right moment, not on a quarterly schedule. Quarterly NPS surveys measure average satisfaction across the entire period, which washes out the specific events that drive scores up or down. Transactional NPS (sent within 24-48 hours of a specific milestone) captures the actual driver. Continuous collection in Sopact Sense — where the system sends follow-up instruments at program-defined intervals — produces a rolling score that reflects real experience, not calendar timing.

Test your question on five people before launch. Read their responses out loud. If you cannot identify a specific action from their answers, rewrite the follow-up question. "Great program!" and "Really helpful!" are not diagnostic responses β€” they mean your follow-up question is still too generic.

Match the completion mechanism to your population. Email-based surveys work for program alumni. SMS-based surveys produce higher completion for active participants. In-platform surveys (embedded in a portal or app) work best for digital-first programs. Sopact Sense supports all three delivery modes from a single form, with responses linking to the same stakeholder record regardless of delivery channel.

Never hide the scale endpoints. Some survey tools collapse the 0-10 scale visually, showing only the numbers without endpoint labels. Respondents interpret unlabeled scales inconsistently β€” some treat 10 as best, others treat 0 as best (reversing the scale). Always display "0 = Not at all likely" and "10 = Extremely likely" visible on the same screen as the question.

Analyze your passive cohort separately. Most NPS programs focus on detractors (recovery) and promoters (referral). Passives — scoring 7-8 — are overlooked, but they represent the largest conversion opportunity. A program that systematically analyzes what would move a passive to a promoter often finds one or two solvable friction points (a confusing onboarding step, a slow communication turnaround) that no detractor analysis would surface because detractors are signaling much larger dissatisfaction.

Video guide
Why NPS Programs Fail: The Data Lifecycle Gap in Survey Design
How the gap between data collection and analysis leaves organizations with scores they can't act on β€” and what a lifecycle-aware NPS design looks like.

Frequently Asked Questions About NPS Survey Questions

What are the best NPS survey questions?

The best NPS survey questions start with the validated core question — "On a scale of 0 to 10, how likely are you to recommend [program/organization] to a friend or colleague?" — followed by one targeted follow-up. The best follow-up for most programs is "What is the most important reason for the score you gave?" For detractor-focused programs, "What would need to change for you to recommend us in the future?" is more actionable. The single most important design rule: keep the total question count at two or three maximum to protect completion rates.

What is the NPS follow-up question?

The NPS follow-up question is an open-ended prompt that appears immediately after the 0-10 rating to capture the reason behind the score. Standard practice is one follow-up question per survey. The most common version is "What is the primary reason for your score?" but targeted versions — segmented by promoter/passive/detractor routing — produce more actionable data. In Sopact Sense, follow-up responses are automatically analyzed by Intelligent Column to extract themes without manual coding.

How many questions should an NPS survey have?

An NPS survey should have two to four questions maximum: the core NPS question, one open-ended follow-up, and optionally one or two segmentation variables if your platform cannot pre-populate them from existing stakeholder data. Research consistently shows completion rate drops 10-20% per additional question beyond the core. If you need more data, collect it in a separate instrument at a different point in the program cycle rather than extending the NPS survey.

What are NPS example questions for nonprofits?

Nonprofit-specific NPS question examples include: Core — "How likely are you to recommend [Program Name] to a friend or family member who might benefit from it?" Follow-up options — "What has been the most valuable part of the program for you?" (outcome-focused), "What would you change about your experience?" (improvement-focused), or "What would you tell someone else who is considering joining?" (promoter activation). Avoid corporate NPS question wording ("recommend our company"), which creates misalignment between question framing and participant context.

What is the best practice in NPS survey design?

NPS survey best practices include: (1) Use a 0-10 scale with labeled endpoints, never compressed to 1-5. (2) Keep the core question object-specific — "this training" or "our scholarship program" rather than "our organization." (3) Limit total questions to three maximum. (4) Send transactionally within 24-48 hours of a specific program event, not on a fixed quarterly calendar. (5) Collect responses in a system that links them to unique stakeholder IDs so follow-up is possible. (6) Analyze open-ended responses systematically, not by reading randomly selected comments.

How do you design an NPS survey for training programs?

NPS surveys for training programs perform best when the core question references the specific training: "How likely are you to recommend [Training Name] to a colleague who wants to develop similar skills?" The most effective follow-up for training context is "Which part of the training most influenced your score?" — this ties qualitative feedback directly to curriculum components. Collect transactionally (within 48 hours of training completion) and separately from any general program satisfaction survey. The training evaluation guide covers how to integrate NPS with Kirkpatrick Level 1-4 data.

What is the NPS survey question scale — 0 to 10 or 1 to 10?

The validated NPS scale is 0 to 10, not 1 to 10. The difference matters: the 0 option represents the "not at all likely" anchor and is part of the detractor range (scores 0-6). A 1-10 scale shifts the threshold calculation — a score of 6 on a 1-10 scale is not the same as a score of 6 on a 0-10 scale in terms of customer sentiment. Use 0-10 universally if you want your results comparable to industry benchmarks, which are all calculated on the 0-10 standard.

Should NPS surveys be anonymous?

Anonymous NPS surveys produce honest scores but eliminate the ability to follow up with specific detractors, activate specific promoters, or link scores to individual program histories. For most social sector programs, identified NPS collection is the right choice β€” participants understand their data is used for program improvement and are more willing to share when they feel the organization will actually act on feedback. Sopact Sense uses persistent unique IDs that allow longitudinal tracking and individual follow-up while managing data sensitivity at the program level.

What is the NPS question wording for employee engagement (eNPS)?

Employee NPS (eNPS) uses the same 0-10 scale but reframes the object: "How likely are you to recommend [Organization] as a place to work to a friend or colleague?" Follow-up questions for eNPS typically focus on: "What is the primary reason for your score?" and optionally "What would make [Organization] a better place to work?" The same benchmark logic applies — eNPS varies significantly by industry and organization size. The eNPS benchmarks page covers average eNPS by sector.

How do you increase NPS survey response rates?

Response rate improvements come from three design decisions: (1) Keep the survey to two questions maximum — every additional question reduces completion. (2) Send within 24 hours of a meaningful program moment — relevance drives completion. (3) Use the delivery channel that matches your population — SMS for active participants, email for alumni, in-portal for digital programs. Personalization also matters: a survey that opens with the participant's name and references their specific program generates 15-25% higher completion than generic survey links.

What happens after you collect NPS survey responses?

After collecting NPS responses, the standard workflow is: (1) Calculate the score (% promoters minus % detractors). (2) Segment the score by key variables — cohort, location, program type. (3) Analyze open-ended follow-up responses to identify themes in each promoter/passive/detractor group. (4) Act on detractors within 48-72 hours with targeted outreach. (5) Activate promoters with referral or testimonial requests. In Sopact Sense, steps 2-3 happen automatically — segmentation is built into the stakeholder record structure and Intelligent Column processes qualitative responses without manual coding.
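Steps 1 and 2 reduce to a short calculation. A minimal sketch of the arithmetic (segment labels and scores here are hypothetical, not Sopact Sense output):

```python
from collections import defaultdict

def nps(scores):
    """NPS = % promoters (9-10) minus % detractors (0-6), as a whole number."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_segment(rows):
    """rows: (segment, score) pairs -> NPS per segment (step 2)."""
    buckets = defaultdict(list)
    for segment, score in rows:
        buckets[segment].append(score)
    return {segment: nps(scores) for segment, scores in buckets.items()}
```

For example, `nps([10, 9, 7, 6, 0])` has two promoters and two detractors out of five responses, which cancel to a score of 0 — one reason a flat overall number can hide sharply different segment stories.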

Is NPS qualitative or quantitative?

NPS is a quantitative metric — the 0-10 score and the calculated NPS number are quantitative data. The follow-up open-ended question produces qualitative data. Effective NPS programs use both layers: the quantitative score for tracking and benchmarking, the qualitative follow-up for diagnosis and action. Programs that collect only the score without a follow-up question have quantitative data they cannot act on. The NPS vs CSAT guide covers how to pair NPS quantitative signals with CSAT and qualitative feedback for a complete customer intelligence picture.

Ready to close the Follow-Up Void?
Sopact Sense collects the core NPS question and follow-up in one linked instrument — so every score arrives with a diagnosis already attached.
Build With Sopact Sense →
Your NPS survey shouldn't end with a score you can't explain.
The Follow-Up Void is a design problem, not a data problem. Sopact Sense builds the follow-up architecture into the survey from the start — score, context, and stakeholder ID collected together, analyzed automatically.
Build With Sopact Sense → Or request a 30-minute demo