Closed-ended questions: 6 types, 50+ stakeholder survey examples, pros and cons, and the Answer Architecture framework for better decisions.
Your evaluation team collects 800 survey responses. The data is clean, the spreadsheet is color-coded, and the executive summary is ready. Then your program officer asks a single question: "Which participants are actually changing their behavior six months out?" The spreadsheet has no answer. Every item was closed-ended, and closed-ended questions capture snapshots — not trajectories, not causes, not the story behind the numbers.
Closed-ended questions are the most widely used format in surveys, research instruments, and program evaluation. They constrain respondents to a defined set of answer options — yes/no, multiple choice, rating scales, ranked lists. That constraint is both their power and their limitation. This guide defines what closed-ended questions are, breaks down every major type with concrete examples, explains how they function in research, and surfaces the design problem most organizations never name: The Answer Architecture problem — questions built without a clear decision they're meant to support.
A closed-ended question is a survey or interview item that limits the respondent's answer to a predetermined set of options. Unlike open-ended questions, which invite narrative responses in the respondent's own words, closed-ended items require selection from a menu the designer defined before the first response arrived.
The most common closed-ended formats are yes/no questions, multiple-choice questions, Likert scale items (strongly agree to strongly disagree), rating scales (1–5 or 1–10), and ranked-order lists. Each format produces data that can be counted, averaged, or compared — which is why researchers and program teams reach for them first.
Tools like SurveyMonkey and Qualtrics make it easy to build closed-ended surveys in minutes. What they don't address is whether those questions will generate data that can answer the decisions your organization actually faces. That gap — between data collected and decisions supported — is the core of The Answer Architecture problem.
Most survey designers start with questions. They brainstorm what to ask, draft items, pilot-test for clarity, and launch. The data arrives clean and structured. Then the real problem surfaces: the questions produce answers, but not answers to the decisions that matter.
The Answer Architecture is the principle that a closed-ended question generates actionable data only when its response options precisely map to a decision the organization needs to make — and that mapping must happen before data collection begins, not after results arrive.
When organizations reverse this sequence — collecting first, then figuring out what the data might support — they produce structured noise. Aggregated numbers that look meaningful but can't drive action. A 4.1 out of 5 satisfaction average that no one knows how to improve. A 73% completion rate that doesn't explain why 27% didn't finish.
Unlike SurveyMonkey and Qualtrics, which hand you a blank survey builder, Sopact Sense structures forms around participant journeys from the first interaction. Every closed-ended response links to a persistent participant ID assigned at intake — not reconciled from exports later. The decision architecture is embedded in the collection system, not bolted on after.
The Answer Architecture also explains why closed-ended surveys often produce data that satisfies reporting requirements but fails program improvement. When the response options were built to match last year's grant template, not this year's program questions, the data confirms your template, not your impact.
Understanding the six major types helps survey designers match format to purpose. Each type produces a different data structure and supports different analytical operations.
Dichotomous questions offer exactly two options: yes/no, true/false, agree/disagree. They produce the cleanest data but also the least nuance. Use them for factual verification ("Did you attend all three sessions?") or gating logic ("Are you currently employed?"). SurveyMonkey defaults heavily toward dichotomous questions — which is fine for screening but insufficient for measuring change.
Multiple-choice questions (single-select) present three or more options with one answer selected. They support categorical analysis and cross-tabulation. The design risk: options that don't cover actual participant experiences. An "other" category mitigates this but produces uncodeable data at scale.
Multiple-select questions allow respondents to choose all applicable options. They reveal co-occurring factors ("Which barriers did you face? Select all that apply.") but complicate analysis because each option becomes its own variable. Use them when intersecting factors matter; avoid them when clean rankings are required.
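To make that "each option becomes its own variable" point concrete, here is a minimal pandas sketch of the expansion, assuming multi-select answers arrive as delimited strings; the column name and barrier options are hypothetical, not from any real dataset.

```python
# Sketch: expanding a multiple-select item into per-option indicator
# variables, assuming responses arrive as comma-separated strings.
import pandas as pd

responses = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03"],
    "barriers": ["Transportation, Cost", "Scheduling", "Cost, Language"],
})

# str.get_dummies splits on the separator and creates one 0/1 column
# per option -- each option becomes its own analyzable variable.
indicators = responses["barriers"].str.get_dummies(sep=", ")
expanded = pd.concat([responses[["participant_id"]], indicators], axis=1)

print(expanded)
# Each barrier column can now be cross-tabulated or summed independently.
print(indicators.sum().sort_values(ascending=False))  # barrier frequency
```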
Likert scale questions present a statement and ask for degree of agreement across a symmetric scale — typically 5 or 7 points. They're the workhorse of program evaluation: "I feel confident applying what I learned." They support parametric statistical analysis when assumptions are met. The design trap: Likert scales measure agreement with a statement, not lived experience of an outcome. Researchers often confuse the two.
Rating scales ask respondents to assign a numeric value to a concept — satisfaction, importance, likelihood. The NPS (Net Promoter Score) question, which asks likelihood to recommend on a 0–10 scale, is a rating scale. Rating scales work well for benchmarking and trend tracking. They break down when the construct being rated is ambiguous or means different things to different respondents.
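As a worked example of a rating-scale metric, here is a minimal sketch of the standard NPS calculation, following the published convention (promoters rate 9–10, detractors 0–6, and the score is the percentage-point difference); the ratings list is invented.

```python
# Minimal sketch of the standard NPS computation from 0-10 ratings.
def net_promoter_score(ratings: list[int]) -> float:
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

scores = [10, 9, 8, 7, 6, 10, 3, 9, 8, 10]  # made-up data
print(f"NPS: {net_promoter_score(scores):+.0f}")  # % promoters - % detractors
```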
Rank-order questions ask respondents to sequence options from most to least preferred or important. They reveal relative priorities but are cognitively demanding and difficult to analyze when options exceed five. Use them for prioritization exercises with clear stakes.
For organizations tracking change over time — pre-program, mid-program, post-program, follow-up — Sopact Sense maintains the same closed-ended item across collection points through persistent participant IDs. The Answer Architecture holds across the full data lifecycle without any manual reconciliation.
The difference between a question that generates insight and one that generates noise is often a single design decision. These examples show both.
Dichotomous examples:
Weak: "Was the training helpful?" (Yes/No) — tells you nothing about what helped or why.
Stronger: "Did you apply at least one skill from this training within 30 days of completion?" (Yes/No) — measures a specific behavioral outcome tied to a program decision.
Multiple-choice examples for nonprofits:
"What is your primary barrier to program participation?" with options: Transportation / Scheduling conflicts / Cost / Language / None of the above. This maps directly to program design decisions your team can act on.
"Which session format do you prefer?" with options: In-person / Virtual synchronous / Asynchronous / No preference. This informs delivery planning for the next cohort.
Likert scale examples for program evaluation:
"I feel better equipped to manage my household budget after completing this program." (Strongly disagree → Strongly agree.) "The program facilitator communicated expectations clearly." "I would describe my progress toward my employment goal as on track."
Rating scale examples:
"On a scale of 1–10, how confident are you applying the skills from Module 3?" — measures self-efficacy at a specific skill level. "How would you rate the overall quality of support you received?" (1 = Very poor, 5 = Excellent) — general satisfaction benchmark.
Rank-order examples:
"Rank the following program resources from most to least helpful: peer mentors, online materials, group workshops, one-on-one coaching, alumni network." "Order the following barriers from most to least significant: time, access, cost, confidence, family responsibilities."
The examples above share a design principle: each connects to a specific decision or analysis the organization needs to make. None are fishing expeditions. The Answer Architecture is visible in every item.
For social impact assessment and longitudinal research, Sopact Sense collects closed-ended responses across multiple time points linked to the same participant — without requiring export reconciliation. Every rating and scale response is stored against a persistent ID from enrollment forward.
In research methodology, closed-ended questions are structured data collection instruments — the foundation of quantitative designs where comparability across respondents is non-negotiable.
In quantitative research, closed-ended questions produce interval or ordinal data that supports statistical testing: frequency distributions, chi-square tests, ANOVA, regression. They enable researchers to make group comparisons, identify correlations, and test hypotheses with precision. When a study needs to compare outcomes across 20 sites or 10,000 participants, closed-ended questions are the only format that scales.
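For instance, a chi-square test of independence on a site-by-outcome contingency table takes a few lines with scipy; the sites and counts below are invented purely to show the mechanics.

```python
# Sketch: chi-square test of independence between program site and a
# categorical outcome. The counts are illustrative, not real data.
import pandas as pd
from scipy.stats import chi2_contingency

# Rows: sites; columns: completion outcome.
table = pd.DataFrame(
    [[120, 30], [95, 55], [80, 20]],
    index=["Site A", "Site B", "Site C"],
    columns=["Completed", "Did not complete"],
)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")
```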
In qualitative research, closed-ended questions appear less frequently but aren't absent. Structured interview protocols sometimes include closed-ended items to establish baseline facts before moving to narrative exploration. In mixed-methods designs, they provide the quantitative anchor that qualitative data contextualizes and explains.
In program evaluation, closed-ended questions serve three primary functions: measuring change over time (pre-post comparisons), enabling disaggregation by demographic or program variable, and satisfying funder reporting requirements that specify standard metrics. SurveyMonkey Apply and Submittable collect closed-ended application data but don't link it to program participation or outcome tracking — the data sits in the application system, disconnected from what happens downstream.
Research design and measurement levels. The type of closed-ended question must match the level of measurement the analysis requires. Nominal categories (race, program type, region) require multiple-choice. Ordinal rankings require Likert or rank-order. Interval constructs (confidence, self-efficacy) require rating scales with anchored endpoints. Using the wrong format produces data at the wrong measurement level — and statistical tests that are technically invalid.
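One way to operationalize that matching is a design-time lookup from question format to measurement level and permissible analyses, as in this sketch; note that treating anchored rating scales as interval is itself a methodological judgment call, flagged in the comments.

```python
# Design-time checklist pairing question format with measurement level
# and the analyses that level supports. A sketch restating the mapping
# above, not library code.
FORMAT_LEVELS = {
    "multiple_choice": ("nominal", ["frequencies", "mode", "chi-square"]),
    "rank_order":      ("ordinal", ["medians", "rank correlations"]),
    "likert":          ("ordinal", ["medians", "Mann-Whitney U"]),
    # Interval treatment assumes anchored endpoints and equal spacing.
    "rating_scale":    ("interval", ["means", "t-tests", "regression"]),
}

for fmt, (level, analyses) in FORMAT_LEVELS.items():
    print(f"{fmt:16s} {level:9s} -> {', '.join(analyses)}")
```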
For equity metrics measurement and DEI assessment, disaggregation is not optional. Closed-ended questions must be designed so that response options enable cross-tabulation by gender, race/ethnicity, age, location, or program type. Sopact Sense structures this at the point of collection — demographic fields are part of the participant record, not a separate export to merge later.
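A minimal pandas sketch of that cross-tabulation, using made-up demographic and outcome fields; in Sopact Sense these columns would come from the same participant record rather than a merged export.

```python
# Sketch: disaggregating a closed-ended outcome by a demographic field.
import pandas as pd

df = pd.DataFrame({
    "gender": ["F", "M", "F", "F", "M", "M"],
    "outcome": ["Improved", "Improved", "No change",
                "Improved", "No change", "Improved"],
})

# normalize="index" yields within-group percentages, the form most
# equity disaggregation reports need.
print(pd.crosstab(df["gender"], df["outcome"], normalize="index").round(2))
```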
Closed-ended questions are not inherently better or worse than open-ended alternatives. They're different tools with different trade-offs. The error is defaulting to one format without considering what the other provides.
Advantages of closed-ended questions.
Standardization enables comparison. When every respondent answers the same options, you can compare across cohorts, sites, time periods, and demographics. This is irreplaceable for trend tracking and benchmark reporting.
Analysis is immediate. Counts, averages, and distributions emerge without coding. A 500-person survey produces reportable data the same day collection closes — something open-ended questions cannot offer without qualitative analysis infrastructure.
Response burden is lower. Closed questions are faster to complete, which improves response rates — particularly for follow-up surveys where participant fatigue is a real risk. A five-item Likert scale takes under two minutes. An open-ended equivalent can take ten.
They support quantitative analysis. Likert scales and rating items enable statistical testing. Multiple-choice data supports cross-tabulation and regression. Without closed-ended questions, quantitative studies lack the structured data format that makes statistical inference possible.
Disadvantages of closed-ended questions.
They capture what the designer anticipated, not what participants experienced. If participants face barriers your options didn't include, you'll never know.
They measure correlation, not causation. A closed-ended survey can show that participants who attended more sessions scored higher — but it can't explain why. Did attendance drive improvement, or did improvement drive attendance? The closed format can't tell you.
They produce measurement artifacts. Acquiescence bias (tendency to agree), social desirability bias (tendency to give the "right" answer), and response set effects (marking the same number down a column) all corrupt closed-ended data in ways the structured format can't detect.
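Some of these artifacts are detectable after the fact. A common, if crude, screen for response-set effects flags respondents who gave the identical answer to every item, as in this hypothetical sketch.

```python
# Sketch: flagging possible "straight-lining" by checking whether a
# respondent gave the identical answer to every Likert item.
import pandas as pd

likert = pd.DataFrame({
    "q1": [4, 3, 5, 4],
    "q2": [4, 2, 5, 3],
    "q3": [4, 4, 5, 5],
}, index=["p01", "p02", "p03", "p04"])

# One unique value across a row means the same answer throughout.
straight_liners = likert.index[likert.nunique(axis=1) == 1]
print(list(straight_liners))  # -> ['p01', 'p03']
```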
They collapse nuance. Two participants who rate their confidence "3 out of 5" for fundamentally different reasons appear identical in your dataset. The Answer Architecture problem compounds at scale — the larger the dataset, the more the nuance disappears.
The practical answer: Use closed-ended questions where comparability, standardization, and statistical analysis are required. Layer open-ended questions where causation, context, and emergent insight matter. Sopact Sense collects both in the same instrument, linked to the same participant record, from the start.
For NPS measurement and survey analytics, Sopact Sense pairs the closed-ended rating item with a follow-up open-ended prompt — so you have the score and the story behind it in the same data collection cycle.
[embed: video]
Write response options that are mutually exclusive and exhaustive. Options that overlap create unreliable data. Options that don't cover all experiences force respondents into "other" — uncategorizable at scale. Test your options against 10 real participant experiences before launching.
Avoid double-barreled questions. "The program was helpful and well-organized" is two questions in one. Participants who found it helpful but disorganized can't answer accurately. Split every compound statement into separate items before the survey goes live.
Match scale direction to question direction. If a high score means better outcomes, make sure your question asks about outcomes, not deficits. "How much did you struggle?" scored 1–5 produces inverted data that corrupts pre-post comparisons.
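When a deficit-framed item has already shipped, the standard repair before pre-post comparison is reverse-coding: for a scale running 1 to k, the outcome-aligned score is (k + 1) minus the raw score. A minimal sketch:

```python
# Sketch: reverse-coding a negatively worded 1-5 item so that higher
# always means a better outcome. For a 1-to-k scale: (k + 1) - raw.
def reverse_code(raw: int, scale_max: int = 5) -> int:
    return (scale_max + 1) - raw

# "How much did you struggle?" answered 5 (struggled a lot) becomes 1
# on the outcome-aligned scale.
print([reverse_code(r) for r in [1, 2, 3, 4, 5]])  # -> [5, 4, 3, 2, 1]
```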
Don't ask about constructs participants can't directly observe. "How significant was the program's contribution to your income growth?" requires counterfactual reasoning most participants can't reliably perform. Ask about observable behaviors instead: "Have you applied for a new position since completing the program?"
Watch for leading questions. "How much did this excellent program improve your skills?" encodes a positive evaluation into the stem. Neutral framing — "How would you describe the change in your skills after participating?" — produces cleaner, more defensible data.
A closed-ended question is a survey or interview item that restricts the respondent's answer to a predefined set of options — yes/no, multiple-choice, rating scales, or ranked lists. Unlike open-ended questions, closed-ended items don't allow free-text responses, making data easier to aggregate and compare but less able to capture nuance, context, or emergent insights the survey designer didn't anticipate.
Closed-ended questions are structured data collection items where all possible responses are defined before the survey launches. Types include dichotomous (yes/no), multiple-choice, Likert scales, rating scales, and rank-order items. They are the foundation of quantitative research because they produce standardized, comparable data that supports statistical analysis — but they require careful design to avoid producing data that is clean but not actionable.
A closed questionnaire is a survey instrument composed entirely or primarily of closed-ended questions. All responses are pre-categorized by the designer. Closed questionnaires are efficient to complete and analyze, but can only confirm or disconfirm the designer's prior assumptions — they cannot surface unexpected findings. Most rigorous program evaluations use a mixed questionnaire that combines closed items for measurement with open items for context.
In research, a closed-ended question is a structured item that generates standardized, quantifiable data across all respondents. Researchers use them when comparability is required — to test hypotheses, compare groups, or track change over time. In mixed-methods designs, closed-ended questions provide the measurement anchor; open-ended questions provide the explanation.
The six main types are: (1) Dichotomous — yes/no or true/false; (2) Multiple-choice single-select — one answer from several options; (3) Multiple-select — choose all that apply; (4) Likert scale — degree of agreement with a statement; (5) Rating scale — numeric value assigned to a concept; (6) Rank-order — sequencing options by priority or preference. Each produces a different data structure and supports different analytical operations.
Examples include: "Did you attend all three sessions?" (dichotomous); "What is your primary barrier — transportation, scheduling, cost, or language?" (multiple-choice); "I feel confident applying what I learned. [Strongly disagree → Strongly agree]" (Likert); "Rate your satisfaction 1–5" (rating scale); "Rank these resources from most to least helpful" (rank-order). Each example maps to a specific program decision.
In research, examples include: "What is your highest level of education?" (multiple-choice, nominal); "How often did you attend program sessions?" (frequency scale, ordinal); "Rate your confidence in this skill before and after training" (pre-post rating scale, interval); "Which of the following factors influenced your decision?" (multiple-select). Each example produces data at a specific measurement level suited to the planned analysis.
Advantages include: standardization enabling comparison across groups and time; fast analysis without coding; lower response burden improving completion rates; and compatibility with statistical testing. For organizations tracking change across multiple program cohorts, closed-ended questions are the only format that produces comparable trend data at scale.
Disadvantages include: inability to capture experiences outside predefined options; correlation data without causal explanation; susceptibility to acquiescence bias and social desirability effects; and nuance collapse when meaningfully different experiences map to the same response. They also embed designer assumptions — if your options don't match participant reality, the data won't reveal that.
Open questions allow respondents to answer in their own words, producing narrative data that captures context, causation, and emergent insight. Closed questions restrict responses to predefined options, producing standardized data for statistical analysis. Effective surveys combine both: closed items for measurement, open items for explanation. Neither format alone is sufficient for rigorous program evaluation.
Closed-ended questions are quantitative. They produce numeric or categorical data that can be counted, averaged, and compared. Open-ended questions in the same survey produce qualitative data. Most rigorous program evaluations are mixed-method, combining both to produce measurement and explanation from the same data collection effort.
The Answer Architecture is the principle that a closed-ended question generates actionable data only when its response options precisely map to a decision the organization needs to make — and that mapping must happen before data collection begins. Organizations that design questions first and figure out decisions later produce structured noise: clean data that cannot drive action. Sopact Sense addresses this by building forms around participant journeys and decision points from the start.
Sopact Sense treats closed-ended responses as the origin of a participant data record, not a standalone survey transaction. Each response links to a persistent participant ID assigned at first contact, so closed-ended data from intake, mid-program check-ins, and outcome assessments connects longitudinally without manual reconciliation. Disaggregation by demographic or program variable is structured at collection — not retrofitted from an export after the fact.