Get proven NPS survey question examples, follow-up templates, and design best practices for nonprofits. Learn how to collect scores linked to stakeholder context automatically.
Your NPS score came back at +32. Leadership wants to know why it dropped four points from last quarter. You open the responses, all 847 of them, and the follow-up field shows: "What's your reason for this score?" answered with "ok," "good," "nothing," and two hundred blanks. You have a number. You have no intelligence.
That's the Follow-Up Void: when an NPS program is built around the core question but treats the follow-up as an afterthought, leaving organizations with a score they can report but can't act on. The follow-up question isn't decoration; it is the diagnostic layer that tells you whether a +32 reflects pricing friction, onboarding confusion, or a product gap that a competitor is about to exploit.
This guide covers every NPS survey question decision: what to ask, how to sequence questions, what follow-up architecture looks like for different programs, and how Sopact Sense collects and analyzes both quantitative scores and qualitative responses in one linked system, so you never export a spreadsheet hoping to match a comment to a score again.
The worst NPS surveys are written backward: someone opens a survey tool, types the core question, adds "why?", and launches. Three hundred responses later, the data tells them nothing except what they already suspected.
Effective NPS survey design starts with the output question: what decision will this data inform? Program teams managing member satisfaction need different follow-up architectures than SaaS companies tracking feature adoption. Nonprofits measuring participant experience after a workforce training cohort need follow-up questions tied to specific program milestones, not generic "tell us more" prompts.
Before selecting a single question, identify: Who is being surveyed (program participants, donors, clients, employees)? At what moment in the relationship (post-enrollment, mid-program, post-completion, 90 days after exit)? What segment breakdown matters for action (by location, by cohort, by program type)? Sopact Sense assigns a unique stakeholder ID at first contact, so every NPS response collected inside the platform arrives already linked to the respondent's full program history; no manual matching required.
The Follow-Up Void is the structural gap between collecting a score and collecting an explanation. It appears in three predictable forms.
The generic prompt failure. "What is the reason for your score?" is not a follow-up question; it is a placeholder. It generates open-ended text that requires weeks of manual coding and produces categories so broad ("poor service," "good product") they cannot drive a specific action. SurveyMonkey and Qualtrics ship this prompt as a default because it is technically an open-ended question. It is not a diagnostic instrument.
The conditional branch failure. Programs that show promoters one follow-up and detractors a different one have the right structural instinct but the wrong execution. Without unique stakeholder IDs, conditional routing still produces anonymous responses that cannot be used for follow-up outreach. You know a detractor exists. You cannot reach them.
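To make the fix concrete, here is a minimal sketch (Python, with hypothetical field names; this is not Sopact Sense's internal API) of conditional routing that keeps the stakeholder ID attached to every response, so a detractor is always reachable:

```python
from dataclasses import dataclass

# Hypothetical sketch: conditional follow-up routing that keeps the
# stakeholder ID attached, so every detractor stays reachable.

@dataclass
class NPSResponse:
    stakeholder_id: str  # persistent ID assigned at first contact
    score: int           # 0-10 rating

def follow_up_prompt(r: NPSResponse) -> str:
    """Route each NPS group to a different follow-up question."""
    if r.score >= 9:  # promoter (9-10)
        return "What would you tell a friend or colleague about the program?"
    if r.score >= 7:  # passive (7-8)
        return "What would need to change for you to rate us a 9 or 10?"
    # detractor (0-6): the persistent ID makes recovery outreach possible
    return "What would need to change for you to recommend us in the future?"

resp = NPSResponse(stakeholder_id="stk-0042", score=4)
print(resp.stakeholder_id, "->", follow_up_prompt(resp))
```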
The volume failure. Programs that add five or six additional questions to "get more data" drive completion rates down 40-60%. The follow-up question budget for any NPS survey is one or two questions maximum, and those questions must be selected based on what segment or diagnostic gap matters most at that collection moment.
Sopact Sense collects the follow-up question response in the same form as the core NPS question, linked to the same stakeholder record. Intelligent Column analyzes qualitative responses automatically; themes emerge in minutes, not weeks. The result is a score that arrives with a diagnostic explanation already attached.
The standard NPS question is: "On a scale of 0 to 10, how likely are you to recommend [Organization/Program/Service] to a friend or colleague?" Every word in that sentence is load-bearing.
Scale integrity. The 0-10 scale must not be compressed or reversed. A 1-5 scale breaks the promoter/passive/detractor segmentation: the scoring thresholds no longer apply, and your results will not be comparable to any external benchmark. If your form tool defaults to 1-10, override it to 0-10. If it defaults to stars or emoji, switch to numeric. Platforms like Google Forms and Typeform allow 0-10 numeric scales but require manual configuration.
Label placement. Anchor labels matter. "0 = Not at all likely / 10 = Extremely likely" is the validated phrasing. Labels like "Very unlikely" and "Very likely" are functionally equivalent alternatives. "Would never recommend" and "Would definitely recommend" over-frame the extremes and subtly inflate scores in prosocial contexts (nonprofits, education programs) where respondents feel social pressure to be positive.
Object of measurement. "Recommend us" is vague for multi-program organizations. "Recommend [specific program name] to a friend or colleague who might benefit from it" ties the score to a specific experience, which produces actionable data. Sopact's guide to measuring NPS covers how to scope the NPS question to specific programs and touchpoints.
Named competitor contrast. Qualtrics templates default to "our company" as the object of measurement. For nonprofits and social sector organizations, this wording generates low response rates; participants don't think of themselves as customers recommending a company. Reframe to "our program," "our services," or "this training" based on context.
One follow-up question. Selected based on what decision it needs to inform. Written to be answered in one or two sentences without coaching.
For understanding score drivers across all respondents: "What is the most important reason for the score you gave?" This version outperforms "why did you give this score?" because "most important reason" focuses attention and produces shorter, more actionable responses.
For detractor recovery programs: "What would need to change for you to recommend [program] in the future?" This is forward-facing, which reduces defensiveness and generates improvement-oriented responses rather than complaints.
For promoter referral activation: "What would you tell a friend or colleague about [program] if they were considering it?" This produces testimonial-quality language that can be used in outreach with the respondent's permission.
For post-training or milestone surveys: "Which part of [training/program] most influenced your score?" This ties qualitative feedback directly to program components, enabling curriculum-level improvements rather than generic satisfaction tracking.
The NPS benchmarks page covers how to interpret follow-up themes across industry ranges, so you can identify whether your detractor feedback clusters match patterns typical for your sector or signal a unique program problem.
Sopact Sense collects these follow-up responses in the same instrument as the core NPS question and runs Intelligent Column analysis automatically. A program director running a 300-person workforce training cohort does not code responses manually; the platform identifies the top five themes within each promoter/passive/detractor group and surfaces them in the dashboard.
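Inside Sopact Sense that analysis is automatic. For teams working from a raw export instead, a crude keyword-tagging sketch shows the shape of the output; the theme list and matching rules below are illustrative only and far simpler than what Intelligent Column does:

```python
from collections import Counter

# Illustrative only: a crude keyword tagger that tallies themes within each
# NPS group. The theme list is hypothetical, and real qualitative analysis
# goes well beyond keyword matching; this just shows the output shape.
THEMES = {
    "onboarding": ["onboarding", "enrollment", "getting started"],
    "pricing": ["price", "cost", "fee"],
    "communication": ["email", "response time", "follow-up"],
}

def tag_themes(comment: str) -> list[str]:
    text = comment.lower()
    return [t for t, words in THEMES.items() if any(w in text for w in words)]

def theme_counts(responses: list[tuple[int, str]]) -> dict[str, Counter]:
    """responses: (score, comment) pairs; returns a theme tally per NPS group."""
    groups = {"promoter": Counter(), "passive": Counter(), "detractor": Counter()}
    for score, comment in responses:
        group = "promoter" if score >= 9 else "passive" if score >= 7 else "detractor"
        groups[group].update(tag_themes(comment))
    return groups

sample = [(3, "Enrollment was confusing"), (9, "Fast email response time")]
print(theme_counts(sample))
```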
Beyond the core and follow-up questions, two categories of additional questions produce reliably high analytical value.
Segmentation variables. If Sopact Sense is your collection platform, demographic and program variables are already linked to the stakeholder record, so you do not need to ask respondents their cohort, location, or program type. If you are using a standalone survey tool, include no more than two segmentation questions: the variables most critical for the decisions you need to make. For a workforce program, that might be "Which training track did you complete?" and "What is your current employment status?" For a scholarship program, it might be "Which cohort are you in?" and "What is your field of study?"
Milestone context questions. For transactional NPS surveys (collected after a specific program event rather than at the relationship level), add one question that anchors the score to that event: "How satisfied were you with [specific milestone] on a scale of 1-5?" This produces a correlation data point: you can see whether participants who rated the milestone poorly are more likely to become detractors, which lets you intervene before the program relationship deteriorates. The NPS vs CSAT comparison covers when to pair milestone satisfaction scores with NPS rather than running them independently.
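A minimal sketch of that correlation check, assuming a hypothetical exported response list that carries both fields:

```python
# Illustrative sketch: pair a 1-5 milestone rating with the later NPS group to
# see whether low milestone ratings predict detractors. Field names hypothetical.
responses = [
    {"milestone_rating": 2, "nps_score": 4},
    {"milestone_rating": 5, "nps_score": 10},
    {"milestone_rating": 3, "nps_score": 7},
    {"milestone_rating": 1, "nps_score": 2},
]

def nps_group(score: int) -> str:
    return "promoter" if score >= 9 else "passive" if score >= 7 else "detractor"

low = [r for r in responses if r["milestone_rating"] <= 2]   # rated milestone 1-2
high = [r for r in responses if r["milestone_rating"] >= 4]  # rated milestone 4-5
for label, bucket in [("low milestone ratings", low), ("high milestone ratings", high)]:
    detractors = sum(nps_group(r["nps_score"]) == "detractor" for r in bucket)
    print(f"{label}: {detractors}/{len(bucket)} became detractors")
```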
What not to add. Do not add multiple open-ended questions (response fatigue collapses completion), satisfaction rating scales on three or four dimensions (these belong in CSAT instruments, not NPS), or demographic questions already collectible from your data system. Every additional question reduces completion rate. If your stakeholders are already in Sopact Sense with complete profiles, the survey can stay at two questions (core NPS and one follow-up) and produce richer analysis than a six-question anonymous survey.
Send at the right moment, not on a quarterly schedule. Quarterly NPS surveys measure average satisfaction across the entire period, which washes out the specific events that drive scores up or down. Transactional NPS (sent within 24-48 hours of a specific milestone) captures the actual driver. Continuous collection in Sopact Sense, where the system sends follow-up instruments at program-defined intervals, produces a rolling score that reflects real experience, not calendar timing.
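As a sketch of what a transactional trigger looks like in practice (hypothetical field names, outside any particular platform):

```python
from datetime import datetime, timedelta, timezone

# Illustrative transactional trigger: queue an NPS survey for anyone whose
# milestone completed within the last 48 hours. Field names are hypothetical.
now = datetime.now(timezone.utc)
participants = [
    {"stakeholder_id": "stk-0001", "milestone_completed_at": now - timedelta(hours=20)},
    {"stakeholder_id": "stk-0002", "milestone_completed_at": now - timedelta(days=10)},
]

cutoff = now - timedelta(hours=48)
to_survey = [p["stakeholder_id"] for p in participants
             if p["milestone_completed_at"] >= cutoff]
print(to_survey)  # only stk-0001, whose milestone falls inside the 48-hour window
```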
Test your question on five people before launch. Read their responses out loud. If you cannot identify a specific action from their answers, rewrite the follow-up question. "Great program!" and "Really helpful!" are not diagnostic responses; they mean your follow-up question is still too generic.
Match the completion mechanism to your population. Email-based surveys work for program alumni. SMS-based surveys produce higher completion for active participants. In-platform surveys (embedded in a portal or app) work best for digital-first programs. Sopact Sense supports all three delivery modes from a single form, with responses linking to the same stakeholder record regardless of delivery channel.
Never hide the scale endpoints. Some survey tools collapse the 0-10 scale visually, showing only the numbers without endpoint labels. Respondents interpret unlabeled scales inconsistently; some treat 10 as best, others treat 0 as best (reversing the scale). Always display "0 = Not at all likely" and "10 = Extremely likely" on the same screen as the question.
Analyze your passive cohort separately. Most NPS programs focus on detractors (recovery) and promoters (referral). Passives (scoring 7-8) are overlooked, but they represent the largest conversion opportunity. A program that systematically analyzes what would move a passive to a promoter often finds one or two solvable friction points (a confusing onboarding step, a slow communication turnaround) that no detractor analysis would surface, because detractors are signaling much larger dissatisfaction.
The best NPS survey questions start with the validated core question, "On a scale of 0 to 10, how likely are you to recommend [program/organization] to a friend or colleague?", followed by one targeted follow-up. The best follow-up for most programs is "What is the most important reason for the score you gave?" For detractor-focused programs, "What would need to change for you to recommend us in the future?" is more actionable. The single most important design rule: keep the total question count at two or three maximum to protect completion rates.
The NPS follow-up question is an open-ended prompt that appears immediately after the 0-10 rating to capture the reason behind the score. Standard practice is one follow-up question per survey. The most common version is "What is the primary reason for your score?", but targeted versions, segmented by promoter/passive/detractor routing, produce more actionable data. In Sopact Sense, follow-up responses are automatically analyzed by Intelligent Column to extract themes without manual coding.
An NPS survey should have two to four questions maximum: the core NPS question, one open-ended follow-up, and optionally one or two segmentation variables if your platform cannot pre-populate them from existing stakeholder data. Research consistently shows completion rate drops 10-20% per additional question beyond the core. If you need more data, collect it in a separate instrument at a different point in the program cycle rather than extending the NPS survey.
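To see why the question budget is so tight, here is a worked example that assumes, purely for illustration, a 15% relative completion drop per question beyond the core and a hypothetical 60% base rate:

```python
# Worked arithmetic under an assumed 15% relative completion drop per question
# beyond the core; the 60% base completion rate is hypothetical.
base_completion = 0.60

for extra in range(6):
    rate = base_completion * (1 - 0.15) ** extra
    print(f"{1 + extra} question(s): {rate:.0%} completion")
# Under these assumptions, a six-question survey retains roughly half the
# completions of a two-question survey.
```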
Nonprofit-specific NPS question examples include: Core: "How likely are you to recommend [Program Name] to a friend or family member who might benefit from it?" Follow-up options: "What has been the most valuable part of the program for you?" (outcome-focused), "What would you change about your experience?" (improvement-focused), or "What would you tell someone else who is considering joining?" (promoter activation). Avoid corporate NPS question wording ("recommend our company"), which creates misalignment between question framing and participant context.
NPS survey best practices include: (1) Use a 0-10 scale with labeled endpoints, never compressed to 1-5. (2) Keep the core question object-specific: "this training" or "our scholarship program" rather than "our organization." (3) Limit total questions to three maximum. (4) Send transactionally within 24-48 hours of a specific program event, not on a fixed quarterly calendar. (5) Collect responses in a system that links them to unique stakeholder IDs so follow-up is possible. (6) Analyze open-ended responses systematically, not by reading randomly selected comments.
NPS surveys for training programs perform best when the core question references the specific training: "How likely are you to recommend [Training Name] to a colleague who wants to develop similar skills?" The most effective follow-up for training context is "Which part of the training most influenced your score?", which ties qualitative feedback directly to curriculum components. Collect transactionally (within 48 hours of training completion) and separately from any general program satisfaction survey. The training evaluation guide covers how to integrate NPS with Kirkpatrick Level 1-4 data.
The validated NPS scale is 0 to 10, not 1 to 10. The difference matters: the 0 option represents the "not at all likely" anchor and is part of the detractor range (scores 0-6). A 1-10 scale shifts the threshold calculation: a score of 6 on a 1-10 scale is not the same as a score of 6 on a 0-10 scale in terms of customer sentiment. Use 0-10 universally if you want your results comparable to industry benchmarks, which are all calculated on the 0-10 standard.
Anonymous NPS surveys produce honest scores but eliminate the ability to follow up with specific detractors, activate specific promoters, or link scores to individual program histories. For most social sector programs, identified NPS collection is the right choice β participants understand their data is used for program improvement and are more willing to share when they feel the organization will actually act on feedback. Sopact Sense uses persistent unique IDs that allow longitudinal tracking and individual follow-up while managing data sensitivity at the program level.
Employee NPS (eNPS) uses the same 0-10 scale but reframes the object: "How likely are you to recommend [Organization] as a place to work to a friend or colleague?" Follow-up questions for eNPS typically focus on "What is the primary reason for your score?" and, optionally, "What would make [Organization] a better place to work?" The same benchmark logic applies: eNPS varies significantly by industry and organization size. The eNPS benchmarks page covers average eNPS by sector.
Response rate improvements come from three design decisions: (1) Keep the survey to two questions maximum; every additional question reduces completion. (2) Send within 24 hours of a meaningful program moment; relevance drives completion. (3) Use the delivery channel that matches your population: SMS for active participants, email for alumni, in-portal for digital programs. Personalization also matters: a survey that opens with the participant's name and references their specific program generates 15-25% higher completion than generic survey links.
After collecting NPS responses, the standard workflow is: (1) Calculate the score (% promoters minus % detractors). (2) Segment the score by key variables: cohort, location, program type. (3) Analyze open-ended follow-up responses to identify themes in each promoter/passive/detractor group. (4) Act on detractors within 48-72 hours with targeted outreach. (5) Activate promoters with referral or testimonial requests. In Sopact Sense, steps 2-3 happen automatically: segmentation is built into the stakeholder record structure and Intelligent Column processes qualitative responses without manual coding.
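For teams computing the score outside the platform, a minimal sketch of steps 1 and 2 (the "cohort" field is hypothetical; in practice it comes from the linked stakeholder record):

```python
# Minimal sketch of steps 1-2: calculate NPS and segment it by cohort.
responses = [
    {"cohort": "2025-spring", "score": 9},
    {"cohort": "2025-spring", "score": 6},
    {"cohort": "2025-fall", "score": 10},
    {"cohort": "2025-fall", "score": 8},
]

def nps(scores: list[int]) -> float:
    """NPS = % promoters (9-10) minus % detractors (0-6)."""
    promoters = sum(s >= 9 for s in scores)
    detractors = sum(s <= 6 for s in scores)
    return 100 * (promoters - detractors) / len(scores)

for cohort in sorted({r["cohort"] for r in responses}):
    scores = [r["score"] for r in responses if r["cohort"] == cohort]
    print(f"{cohort}: NPS {nps(scores):+.0f}")
```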
NPS is a quantitative metric: the 0-10 score and the calculated NPS number are quantitative data. The follow-up open-ended question produces qualitative data. Effective NPS programs use both layers: the quantitative score for tracking and benchmarking, the qualitative follow-up for diagnosis and action. Programs that collect only the score without a follow-up question have quantitative data they cannot act on. The NPS vs CSAT guide covers how to pair NPS quantitative signals with CSAT and qualitative feedback for a complete customer intelligence picture.