
Closed-Ended Questions: Hidden Costs in Evaluation (2026)

Closed-ended questions: 6 types, 30+ examples across six contexts, pros and cons, and The Missing Option Problem framework for designing answer lists that lead to better decisions.


Closed-Ended Questions: Definition, Types, Examples, Advantages & How to Write Them

A nonprofit runs a post-program survey. The question: "What was the biggest barrier to completing the program?" The answer options are (a) cost, (b) time, (c) distance, or (d) other. Sixty-eight percent pick "time." The team builds next year's program around reducing time commitment. Six months later, qualitative interviews reveal the real top barrier — childcare. That word wasn't on the answer list, so respondents whose actual barrier was childcare picked the closest option rather than selecting "other" and typing it in. The survey data wasn't technically wrong. The answer list was.

This is The Missing Option Problem — when closed-ended answer lists don't include what respondents actually want to say, respondents pick the closest wrong option rather than skipping the question or selecting "other" with no explanation. The data looks clean. Everyone picked something. It's silently inaccurate because the real answers got buried in the wrong buckets. The fix isn't to abandon closed-ended questions. It's to design answer lists that actually cover respondent reality — and pair them with at least one open-ended follow-up that catches what the list misses.

Last updated: April 22, 2026

This is the closed-ended educational anchor in a five-page cluster on survey question design. For the open-ended counterpart, see open-ended questions. For the direct comparison of the two formats, see open-ended vs closed-ended questions. For survey-specific open-ended templates, see open-ended survey questions. For analyzing qualitative follow-ups at scale, see how to analyze open-ended survey responses.

Closed-Ended Questions Guide

Closed-ended questions — fast and countable, but only if the answer list covers reality

Closed-ended questions force a choice from a preset list. They produce structured data that analyzes itself — counts, averages, cross-tabs, pre-post tracking. Unless your answer list misses what respondents actually want to say. This guide shows you the 6 types, 30+ examples across contexts, and the rules for writing lists that cover the real ground.

Ownable Concept
The Missing Option Problem
When closed-ended answer lists don't include what respondents actually want to say, respondents pick the closest wrong option rather than skipping or selecting "other" with no detail. The data looks clean — everyone picked something from the list. It's silently inaccurate because the real answers got buried in the wrong buckets. The fix is better list design plus one open-ended follow-up to catch what the list misses.
  • 6 types — each produces different data
  • 30+ examples across six contexts
  • 3 moves that make any list work
  • 80:20 — pair closed with open, never alone
Six types, rendered
Closed-ended isn't one format — it's six, each with its own shape

Yes/no, multiple choice, multiple select, rating scale, Likert scale, ranking. Each produces a different kind of data. Pick by purpose — not by what's fastest to build.

01
Binary
Yes / No (Dichotomous)
Yes No

Use for: factual checks, screening, eligibility gates. Cleanest data, least nuance.

02
Categorical
Multiple choice (pick one)
Option A
Option B
Option C
Other…

Use for: category selection, role, plan, barrier. Always include "Other" with a text box.

03
Categorical-multi
Multiple select (pick several)
Feature one
Feature two
Feature three
Feature four

Use for: co-occurring behaviors or preferences. Reveals combination patterns across respondents.

04
Ordinal
Rating scale (0–10 / 1–5)
7 / 10

Use for: NPS, satisfaction, likelihood. Numeric scale that averages and tracks across waves.

05
Ordinal
Likert scale (agreement)
Strongly disagree → Strongly agree

Use for: attitudes and beliefs. 5 or 7 points is standard; an even-point variant drops the "neutral" middle to discourage satisficing.

06
Ordinal
Ranking (put in order)
1. First choice
2. Second choice
3. Third choice

Use for: preference ordering. Reveals priorities you can't get from independent ratings.

The single rule across all six types: design the answer list before writing the question, pilot-test it with real respondents, and always pair closed-ended with one open-ended follow-up. That combination catches The Missing Option Problem before the data ships to the board deck.

What are closed-ended questions?

Closed-ended questions are survey, form, or interview items where respondents choose from a preset list of answer options — they cannot write in their own words. Common formats include yes/no, multiple choice, rating scales, Likert scales, and ranking questions. The defining feature is the fixed answer list: every response falls into one of the predefined options, which makes the data immediately countable and comparable across groups or time.

The preset answer list is both closed-ended's greatest strength and its signature weakness. When the list fully covers respondent reality, the data is fast, clean, and statistical. When the list misses what respondents actually want to say — The Missing Option Problem — respondents pick the closest wrong option, and the data becomes silently inaccurate without anyone noticing.

What do closed-ended questions mean?

Closed-ended questions mean questions that can only be answered by selecting from a predetermined list of options — the respondent cannot write their own answer. This is the opposite of open-ended questions, where the respondent answers in their own words. Closed-ended produces countable data (numbers, percentages, averages). Open-ended produces narrative data (reasoning, stories, specific detail).

The word "closed" refers to the answer space being closed off — limited to the options the designer chose in advance. Every closed-ended question commits to an answer list before the first response arrives. That commitment is what makes the data analyzable immediately, and also what creates the core design challenge: anticipating what respondents will actually want to say.

What is an example of a closed-ended question?

An example of a closed-ended question is "On a scale of 1 to 5, how satisfied were you with the program?" The respondent must pick a number between 1 and 5 — they cannot write anything. Every response falls into one of exactly five predefined buckets, which makes the data immediately countable, averageable, and comparable across groups.

Other common examples of closed-ended questions include:

  • "Did you complete the full program?" (Yes / No)
  • "Which of these barriers affected you most?" (cost / time / distance / childcare / transportation / other)
  • "Rate how strongly you agree with this statement." (Strongly disagree → Strongly agree)
  • "How likely are you to recommend us to a friend?" (0–10 scale — the NPS standard)
  • "Which of these features do you use regularly?" (select all that apply)

Each one specifies the answer list up front. The respondent's job is to pick the option that best matches their situation — and hope that one of the options actually does.

Closed-ended questions examples across 6 contexts

Closed-ended questions show up in far more places than surveys. Below is a library of 30+ closed-ended question examples grouped by where they appear in practice. Adapt the wording for your situation.

In surveys (5 examples)

  1. How satisfied were you with the program? (1 = Not at all · 5 = Extremely)
  2. How likely are you to recommend us to a friend? (0–10 scale)
  3. Did you complete the full program? (Yes / No)
  4. Which of the following best describes your role? (pick from list)
  5. In the past month, how often have you used the service? (Never / Rarely / Sometimes / Often / Daily)

In intake forms and applications (5 examples)

  1. What is your current employment status? (Employed / Unemployed / Student / Retired / Other)
  2. Which program are you applying for? (pick from list)
  3. Do you require any accommodations? (Yes / No)
  4. How did you hear about us? (Website / Social media / Friend / Partner org / Other)
  5. Are you currently enrolled in another similar program? (Yes / No)

In polls and voting (5 examples)

  1. Which of these candidates do you support? (single choice from list)
  2. Do you approve of the current policy? (Approve / Disapprove / Unsure)
  3. Rank these priorities in order of importance to you. (ranked list)
  4. Will you vote in the upcoming election? (Yes / No / Unsure)
  5. Which issues matter most to you? (select all that apply)

In quizzes and assessments (5 examples)

  1. Which of these is the correct answer? (multiple choice)
  2. True or false: the formula shown is correct.
  3. Rate your confidence in this topic. (1–5 scale)
  4. Put these steps in the correct order. (ranking)
  5. Which of the following apply to this situation? (select all that apply)

In customer feedback and market research (5 examples)

  1. How would you rate your recent experience? (1–5 stars)
  2. How likely are you to purchase again? (0–10 likelihood scale)
  3. Which features do you use regularly? (select all that apply)
  4. How easy was it to find what you were looking for? (Very difficult → Very easy)
  5. Which plan are you on? (pick from list)

In screening and eligibility (5 examples)

  1. Are you currently over the age of 18? (Yes / No)
  2. Do you live in one of the following states? (list)
  3. Have you participated in a similar program in the past year? (Yes / No)
  4. Which income range best describes your household? (bracketed ranges)
  5. Do you have any of the following conditions? (checklist)

Notice the pattern across all six contexts: every closed-ended question requires a preset answer list that covers respondent reality. When the list is good, the data is fast and comparable. When the list is incomplete — The Missing Option Problem — the data is silently inaccurate.

Best Practices

Six rules for writing closed-ended questions that cover the real ground

The hero shows the six types. These six rules are what turn a preset answer list into data you can actually use — instead of The Missing Option Problem waiting to surface in your next board meeting.

01
Rule 01
Design the answer list before writing the question

Most closed-ended design runs in the wrong order — the question stem gets written, then options get brainstormed last. Flip it. Start with what respondent answers might actually look like, build the answer list that covers that range, then write the stem to match. This single reordering prevents most of The Missing Option Problem before it starts.

In practice
Bad sequence: stem → think of options → ship. Good sequence: anticipate the range of real answers → build list → write stem.
02
Rule 02
Pilot-test the list with real respondents

The answer options that feel comprehensive in a design meeting often miss the top signal in respondents' actual lives. Pilot-test your closed-ended questions with 10–15 real respondents before fielding at scale. Look for "other" picks over 10% — that's the signal your list is incomplete. Promote the write-ins that show up repeatedly to named options in the live version.

Test
If "other" gets picked by more than 1 in 10 respondents in the pilot, your list is missing something. Read the write-ins, add what's frequent, re-pilot.
03
Rule 03
Always include "Other" with a text field

For any closed-ended question where the answer space isn't strictly bounded (barriers, reasons, categories, descriptors), include "Other" with a text box — and actually read the responses. This is the escape valve that catches the Missing Option Problem in flight. A closed-ended question without an "Other" option forces every respondent whose real answer isn't on your list into a wrong bucket.

Exception
For bounded questions (age ranges, yes/no facts, standardized scales like NPS), "Other" isn't needed. For anything else, include it.
04
Rule 04
Keep options symmetrical and neutral

Leading options — "Very wonderful / Wonderful / Somewhat wonderful" — bias the answer distribution toward one direction. Every option level must be symmetrical and neutrally worded. "Strongly disagree" mirrors "Strongly agree." "Very dissatisfied" mirrors "Very satisfied." Asymmetry inflates the score you wanted to see and buries the signal you actually needed.

In practice
Bad: "Excellent / Great / Good / OK / Not bad" (all positive). Good: "Excellent / Good / Neither good nor bad / Poor / Very poor."
05
Rule 05
Split double-barreled items into separate questions

"How satisfied are you with the cost AND convenience of the program?" bundles two things into one rating. A respondent who loves the cost but hates the convenience has no clean answer — they pick the middle and the data is meaningless. Split every double-barreled item into two separate questions. One topic per question, every time.

In practice
Bad: "Rate the cost and convenience." Good: "Rate the cost." + "Rate the convenience." Two scores, cleanly separable.
06
Rule 06
Lock scale formats across survey waves

Once your study starts, the scale is locked. Swapping a 1–5 rating at baseline for a 1–10 at follow-up breaks comparability. Changing "Strongly agree" to "Completely agree" changes the measurement. If you need more resolution later, add questions alongside the locked ones — never replace. Scale stability is the foundation of longitudinal tracking.

Test
If you can't answer "is this exactly the same scale we used last wave?", you shouldn't be comparing the data. Lock before fielding.
Sopact Sense enforces all six rules by default — answer lists are built before question stems, "Other" write-ins get surfaced automatically for review, scale formats lock between waves, and every closed-ended question can be paired with an open-ended follow-up that analyzes into themes instantly.
See the 80:20 mix →

Types of closed-ended questions

There are six main types of closed-ended questions, each producing a different kind of data and supporting different analytical operations. Pick the type by what you want to learn, not by what's fastest to build.

1. Yes/No (dichotomous) questions offer exactly two options — yes or no, true or false, completed or didn't. They produce the cleanest possible data and the least nuance. Use them for factual verification ("Did you attend?") or gating logic ("Are you currently employed?"). Don't use yes/no for anything where the honest answer is "it depends."

2. Multiple-choice (single-select) questions present three or more options with one answer chosen. They produce categorical data — counts per category, cross-tabs across groups. The main design risk is The Missing Option Problem: if your list doesn't cover respondent reality, people pick the closest wrong option. Always include "other" with a text box when the answer space is open-ended.

3. Multiple-select (checkbox) questions let respondents pick more than one answer from a list. They reveal co-occurring behaviors or preferences — "Which features do you use regularly?" gets you not just one answer but the combination pattern across respondents. Use sparingly; too many checkbox questions tire respondents out.

4. Rating scale questions ask respondents to pick a number on a scale, typically 1–5, 1–7, or 0–10. They produce ordinal data that can be averaged and tracked over time. The classic version is NPS ("How likely are you to recommend us, 0–10"). Scale design matters: a 5-point scale compresses signal; a 10-point scale sometimes overwhelms. (A minimal NPS scoring sketch follows this list.)

5. Likert scale questions ask respondents to agree or disagree with a statement on a symmetrical scale — strongly disagree, disagree, neutral, agree, strongly agree. They measure attitudes and beliefs. Named after psychologist Rensis Likert, who developed the format in 1932. Five and seven points are the standard lengths. (Yes, a Likert scale is a closed-ended question — a specific ordinal subtype.)

6. Ranking questions ask respondents to put items in order — most important to least important, favorite to least favorite. They reveal preferences you can't get from independent ratings. The trade-off: ranking is harder cognitively. Limit to five or fewer items or completion rates drop.
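
The rating-scale type above is where NPS comes from. As a minimal sketch of how that closed-ended item becomes a single trackable number, the snippet below applies the standard NPS cut-points (promoters 9–10, detractors 0–6) to a hypothetical list of 0–10 responses; the data is illustrative only.

```python
# Minimal sketch (illustrative data): scoring a 0-10 rating-scale item into NPS.
# Promoters are 9-10, passives 7-8, detractors 0-6; NPS = %promoters - %detractors.
ratings = [10, 9, 8, 7, 9, 6, 10, 3, 8, 9, 5, 10]

promoters = sum(1 for r in ratings if r >= 9)
detractors = sum(1 for r in ratings if r <= 6)

nps = 100 * (promoters - detractors) / len(ratings)
print(f"NPS: {nps:.0f}")  # here: 25, on the usual -100 to +100 range
```

The same pattern (bucket, count, score) applies to any numeric rating item, which is why rating scales track so cleanly across waves.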

The six types compared

Six types of closed-ended questions — each produces a different kind of data

Match the type to the purpose. Picking by what's easiest to build — not what fits the analysis — is where The Missing Option Problem usually starts.

Types of closed-ended questions
All six types compared across purpose, example, context, and data level
  • Yes / No (dichotomous) — Purpose: factual check or gate; two options, cleanest data, least nuance. Example: "Did you complete all three sessions? (Yes / No)". Best context: screening and eligibility — attendance, employment, enrollment gates. Data level: nominal (counts per category).
  • Multiple choice (pick one) — Purpose: category selection; 3+ options, one answer, cross-tab-ready. Example: "Which barrier affected you most? (cost / time / distance / other)". Best context: demographics, roles, barriers — always include "Other" with a text box. Data level: nominal (counts and percentages).
  • Multiple select (pick several) — Purpose: co-occurring choices; multiple answers allowed, reveals combinations. Example: "Which features do you use regularly? (check all that apply)". Best context: behavior patterns, preferences — use sparingly, too many tire respondents out. Data level: nominal (co-occurrence counts).
  • Rating scale (numeric) — Purpose: measure intensity; 1–5, 1–7, 0–10, averageable and trackable. Example: "How likely are you to recommend us? (0–10 scale)". Best context: NPS, satisfaction, likelihood — 10-point for resolution, 5-point compresses signal. Data level: ordinal / interval (means, medians, distributions).
  • Likert scale (agreement) — Purpose: measure attitudes; symmetrical agree/disagree scale, usually 5 or 7 points. Example: "I feel confident I can apply what I learned. (Strongly disagree → Strongly agree)". Best context: beliefs, confidence, attitudes — even-point scales force a direction, odd-point scales allow a neutral middle. Data level: ordinal (frequency distributions, pre-post shift).
  • Ranking (order the list) — Purpose: surface priorities; order items by preference, reveals tradeoffs. Example: "Rank these five priorities from most to least important." Best context: feature votes, priority exercises — keep to 5 items or fewer, ranking fatigue kills completion. Data level: ordinal (mean rank, top-2 share).
Strong surveys mix multiple types — yes/no for gates, multiple choice for demographics, rating or Likert for measurement, open-ended for the why behind each number. Never all one type.
See the 80:20 mix →
All six types plus an open-ended follow-up per section — that's the survey architecture Sopact Sense enforces by default. Counts and themes live in the same dashboard; the Missing Option Problem gets caught by the open-ended field before the data ships anywhere.
Explore Sopact Sense →

Closed-ended questions in quantitative research

Closed-ended questions in quantitative research are the workhorse of the survey instrument — the part that produces data you can run formal statistics on. Survey studies, clinical trials, market research, program evaluation, and government census work all rely on closed-ended formats for the same reason: structured data enables inferential testing, group comparison, and longitudinal tracking.

In quantitative research contexts, closed-ended questions serve five specific jobs:

  1. Demographic capture — age bracket, education level, income range, location. Needed for disaggregation in every downstream analysis.
  2. Outcome measurement — standardized scales (PHQ-9 for depression, NPS for loyalty, validated domain-specific instruments) that produce comparable scores across studies.
  3. Pre-post comparison — the same closed-ended item at baseline and follow-up, measuring change over time on the same scale.
  4. Group comparison — closed-ended items asked identically across cohorts or sites, enabling chi-square tests, ANOVA, or regression analysis (a minimal sketch at the end of this section shows the idea).
  5. Eligibility screening — yes/no gates that route respondents to the right survey branches and exclude ineligible participants.

Closed-ended questions in quantitative research require more design rigor than casual survey questions because downstream analysis depends on measurement quality. A 5-point Likert scale with a poorly anchored middle option ("neither agree nor disagree") invites satisficing — respondents park there to skip the effort of deciding. For serious quantitative research, a 7-point scale with clear anchors at each position produces better data, and for some study designs an even-numbered scale forces a directional choice. Related reading: baseline survey and survey metrics and KPIs.
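
To make job 4 above concrete, here is a minimal sketch of a group comparison on one closed-ended item: build the cohort-by-option contingency table, then run a chi-square test of independence. It assumes pandas and SciPy; the column names and responses are hypothetical, not a prescribed pipeline.

```python
import pandas as pd
from scipy.stats import chi2_contingency

# Hypothetical data: one closed-ended "barrier" item asked identically in two cohorts.
df = pd.DataFrame({
    "cohort":  ["A"] * 6 + ["B"] * 6,
    "barrier": ["time", "cost", "time", "childcare", "time", "other",
                "cost", "cost", "childcare", "cost", "time", "cost"],
})

table = pd.crosstab(df["cohort"], df["barrier"])  # cohort x option counts
chi2, p, dof, expected = chi2_contingency(table)  # test of independence

print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```

With a sample this small the expected cell counts are far too thin for a real test; the point is the structure: identical closed-ended items across groups drop straight into standard inferential tools.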

Advantages of closed-ended questions

Closed-ended questions have six main advantages that explain why they're the default format in most surveys, forms, and research instruments:

1. They're fast to answer. Respondents pick from a list instead of composing written responses. A well-designed 20-item closed-ended survey takes three to five minutes. The same 20 questions as open-ended would take 30+ minutes — and completion rates collapse.

2. They're fast to analyze. Structured data counts itself. The moment responses arrive, counts, percentages, averages, and cross-tabs are available. No coding step, no human review for each response, no weeks of qualitative analysis. (A short sketch after this list shows what that looks like in practice.)

3. They allow researchers to collect and analyze large volumes of data efficiently. Closed-ended items scale to thousands of respondents without added analysis effort. This is why closed-ended dominates large market research studies, national surveys, and longitudinal panels — the structure makes volume tractable.

4. They produce directly comparable data across groups and time. When every respondent picks from the same five options, group differences and pre-post shifts are immediately visible. Open-ended answers require coding before comparison is possible; closed-ended answers are comparable by construction.

5. They're easy to chart and report. Percentages, bar charts, distribution graphs, and heatmaps all come naturally from closed-ended data. The visual language of survey reporting is built on closed-ended output.

6. They have higher completion rates. Respondents who would abandon a long open-ended question will pick from a list. For surveys sent to large or distracted populations, closed-ended questions often double or triple completion rates versus open-ended equivalents.
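
A minimal sketch of advantages 2 and 4 in practice: closed-ended responses tabulate themselves as option shares, and the same locked 1–5 item compares directly across waves. The snippet assumes pandas; field names and values are illustrative.

```python
import pandas as pd

# Hypothetical responses to one locked 1-5 satisfaction item at two waves.
df = pd.DataFrame({
    "wave":   ["baseline"] * 5 + ["followup"] * 5,
    "rating": [3, 2, 4, 3, 2, 4, 5, 4, 3, 5],
})

# Advantage 2: structured data counts itself -- option shares in one line.
print((df["rating"].value_counts(normalize=True).sort_index() * 100).round(1))

# Advantage 4: directly comparable across waves because the scale never changed.
print(df.groupby("wave")["rating"].mean())
```

No coding step sits between collection and these numbers, which is the whole efficiency argument.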

Advantages & Disadvantages

Closed-ended questions — the full pros and cons, side by side

Six advantages that explain why closed-ended dominates survey design. Seven disadvantages that explain why closed-ended alone is never enough.

Advantages
Why closed-ended dominates surveys
6 strengths
  • 01
    Fast to answer

    Respondents pick from a list instead of composing written responses. A 20-item closed-ended survey takes 3–5 minutes; the same 20 as open-ended takes 30+ minutes.

  • 02
    Fast to analyze

    Structured data counts itself — no coding step, no per-response human review. Counts, percentages, averages and cross-tabs are available the moment responses arrive.

  • 03
    Efficient for large data volumes

    Closed-ended items allow researchers to collect and analyze large volumes of data efficiently — the reason they dominate market research panels and national surveys.

  • 04
    Directly comparable across groups and time

    When every respondent picks from the same 5 options, group differences and pre-post shifts are visible immediately. Open-ended answers require coding before comparison is possible.

  • 05
    Easy to chart and report

    Percentages, bar charts, distribution graphs, heatmaps — the visual language of survey reporting is built entirely on closed-ended output.

  • 06
    Higher completion rates

    Respondents who'd abandon a long open-ended question will pick from a list. For large or distracted populations, closed-ended questions often double or triple completion rates.

Disadvantages
Why closed-ended alone isn't enough
7 limitations
  • 01
    Limit respondents from elaborating

    The most common disadvantage. Preset options constrain what can be said — respondents with specific reasons or nuanced situations have no way to express them.

  • 02
    Can't capture reasoning

    A 4-out-of-5 rating tells you the level, not the cause. Without an open-ended follow-up, the "why" behind every number stays invisible.

  • 03
    Risk of The Missing Option Problem

    When answer lists don't cover respondent reality, respondents pick the closest wrong option. The data looks clean and is silently inaccurate.

  • 04
    Poor scales compress signal

    A 5-point scale where 4 and 5 mean practically the same thing collapses real differences into the top two categories. Scale design shapes data as much as the stem does.

  • 05
    No quotes for reports

    Funders and boards respond to participant voices. Closed-ended produces numbers, not quotes. Reports built only on closed-ended data read like spreadsheets.

  • 06
    Leading options inflate bias

    Options with subtle positive or negative framing — "Excellent / Great / Good / OK / Not bad" — skew results toward the intended direction. Every level must be symmetrical.

  • 07
    Scale changes break comparability

    Swapping a 1–5 scale at baseline for a 1–10 at follow-up makes the two waves non-comparable. Once chosen, a scale is locked for the duration of the study.


The 80:20 mix resolves the trade-off

Strong surveys run a closed-ended backbone with an open-ended spine — roughly 80% closed for the countable data, 20% open-ended to catch the reasoning and the options the list missed. You keep every advantage of closed-ended and neutralize every disadvantage. That's the mix Sopact Sense enforces by default, with AI theme-coding on the open responses.

The first disadvantage — limiting respondents from elaborating — is by far the most cited. Every survey should pair closed-ended sections with at least one open-ended follow-up to preserve elaboration space.
See the full comparison →

Disadvantages of closed-ended questions

Closed-ended questions also have real disadvantages that justify pairing them with at least one open-ended follow-up per survey:

1. They limit respondents from elaborating on their thoughts. This is the most common disadvantage of closed-ended questions. A respondent who has a specific reason, a complicated situation, or a nuanced opinion has no way to express it — they pick the closest option and the nuance disappears.

2. They can't capture reasoning. A 4-out-of-5 rating tells you the level of satisfaction, not what drove it. Without an open-ended follow-up, the "why" behind every number is invisible.

3. They risk The Missing Option Problem. When answer lists don't include what respondents actually want to say, respondents pick the closest wrong option. The data looks clean but is silently inaccurate. This is the specific failure mode closed-ended questions are uniquely prone to.

4. Poorly designed scales compress signal. A 5-point scale where 4 and 5 mean practically the same thing collapses real differences into the top two categories. Scale design choices shape the data as much as the question stem does.

5. They produce no quotes for reports. Funders, boards, and stakeholders respond to real participant voices. Closed-ended questions produce numbers, not quotes. Reports built only on closed-ended data read like spreadsheets — accurate but unmemorable.

6. Leading options bias results. Options written with subtle positive or negative framing — "Excellent / Great / Good / OK / Not bad" — inflate scores toward the intended direction. Every option level must be symmetrical and neutrally worded; asymmetry creates systematic bias.

7. Changing scale formats breaks comparability. Swapping a 1–5 scale at baseline for a 1–10 scale at follow-up makes the two waves non-comparable. Once a scale is chosen for a study, it's locked until the study ends.

How to write a closed-ended question that actually works

Writing a closed-ended question that actually works comes down to three moves. Every strong closed-ended question across every context uses them.

Move 1 — Design the answer list before you write the question. Most closed-ended design goes in the wrong order. The question gets written, then options get brainstormed last-minute. Better: start with what respondent answers might look like, build the answer list that covers that range, then write the question stem to match. This single reordering prevents most of The Missing Option Problem before it starts.

Move 2 — Test the list against reality, not a conference room. The answer options that feel comprehensive in a design meeting often miss the top signal in respondents' actual lives. Pilot-test your closed-ended questions with 10–15 real respondents before fielding at scale. Look for "other" picks over 10% — that's the signal your list is incomplete.

Move 3 — Add "other" with a text field, and read what comes in. Closed-ended questions should always offer an "other" escape valve when the answer space is open-ended (like barriers, reasons, or categories the designer can't fully anticipate). Then actually read the "other" responses — that's where the Missing Option Problem shows itself. Promote frequent "other" write-ins to named options in the next wave.
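
As a minimal sketch of Moves 2 and 3 together, the snippet below computes the "other" share from a hypothetical pilot and surfaces the most frequent write-ins as candidates to promote to named options in the next wave. The data structure and threshold are illustrative; adapt both to your instrument.

```python
from collections import Counter

# Hypothetical pilot: each response is (selected_option, other_write_in_or_None).
pilot = [
    ("time", None), ("other", "childcare"), ("cost", None),
    ("other", "childcare"), ("time", None), ("other", "transportation"),
    ("distance", None), ("other", "childcare"), ("cost", None), ("time", None),
]

other_share = sum(1 for option, _ in pilot if option == "other") / len(pilot)
print(f"'Other' share: {other_share:.0%}")  # over 10% signals an incomplete list

if other_share > 0.10:
    write_ins = Counter(text for option, text in pilot if option == "other" and text)
    print(write_ins.most_common(3))  # frequent write-ins -> promote to named options
```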

For a deeper comparison of when to use closed-ended versus open-ended formats, and the 80:20 mix that strong surveys settle into, see open-ended vs closed-ended questions. For writing open-ended questions that produce usable answers, see open-ended survey questions. For making sense of open-ended responses at scale, see how to analyze open-ended survey responses.

Frequently Asked Questions

What are closed-ended questions?

Closed-ended questions are survey, form, or interview items that limit the respondent's answer to a preset list of options. Common formats include yes/no, multiple choice, rating scales, Likert scales, and ranking questions. They produce structured data — counts, percentages, averages — that can be analyzed immediately. The preset answer list is both their strength and their weakness.

What is an example of a closed-ended question?

An example of a closed-ended question is "On a scale of 1 to 5, how satisfied were you with the program?" The respondent must pick a number between 1 and 5 — they can't write their own answer. Other examples include "Did you complete the program? Yes or No" and "Which barrier affected you most: cost, time, distance, or other?"

What are the types of closed-ended questions?

The six main types of closed-ended questions are yes/no (dichotomous), multiple choice with one answer, multiple select with multiple answers, rating scale (numeric like 1–5 or 0–10), Likert scale (agreement from strongly disagree to strongly agree), and ranking. Each type produces a different data structure and supports different analytical operations.

What do closed-ended questions mean?

Closed-ended questions mean questions that can only be answered by selecting from a predetermined list of options. The respondent cannot write their own answer. This is the opposite of open-ended questions, where the respondent answers in their own words. Closed-ended produces countable data; open-ended produces words and reasoning.

What is one advantage of using closed-ended questions in market research?

One key advantage of closed-ended questions in market research is they allow researchers to collect and analyze large volumes of data efficiently. Structured responses count themselves — no coding step, no per-response human review. This is why closed-ended dominates large market research studies, national panels, and tracking surveys where hundreds of thousands of responses need quick analysis.

What is a common disadvantage of closed-ended questions?

A common disadvantage of closed-ended questions is that they limit respondents from elaborating on their thoughts. The preset options constrain what can be said — respondents with specific reasons or complicated situations have no way to express them. The fix is to pair closed-ended questions with at least one open-ended follow-up per survey section.

What are closed-ended questions in quantitative research?

Closed-ended questions in quantitative research are the structured items in a survey or assessment that produce data you can run statistics on. They handle demographics, outcome measurement with standardized scales, pre-post comparison, group comparison, and eligibility screening. Quantitative research requires careful scale design because downstream analysis depends directly on measurement quality.

What is a Likert scale question?

A Likert scale question is a closed-ended question that asks the respondent to agree or disagree with a statement on a symmetrical scale — typically strongly disagree, disagree, neutral, agree, strongly agree. Likert scales measure attitudes and beliefs. Named after psychologist Rensis Likert, who developed the format in 1932. Five and seven points are the standard scale lengths.

What are the advantages of closed-ended questions?

Closed-ended questions are fast to answer, fast to analyze, directly comparable across groups and time, easy to chart, and friendly to high-volume surveys. They produce countable data that generates averages, percentages, and cross-tabs automatically. They also enable large-sample analysis that would be impractical with uncoded open-ended data. Completion rates run higher than open-ended equivalents.

What are the disadvantages of closed-ended questions?

The main disadvantages of closed-ended questions are that they limit respondents from elaborating, can't capture reasoning behind answers, risk The Missing Option Problem when lists don't cover respondent reality, and produce no quotes for reports. Poorly designed scales compress signal; leading options inflate bias. The fix is to pair closed-ended with one open-ended follow-up per section.

How do you write a good closed-ended question?

Write a good closed-ended question with three moves: design the answer list before writing the question stem, pilot-test the list with real respondents to check it covers their reality, and always include "Other" with a text field. Then actually read the "Other" responses — that's where incomplete lists show themselves. Promote frequent write-ins to named options next wave.

What is The Missing Option Problem?

The Missing Option Problem is when closed-ended answer lists don't include what respondents actually want to say, so respondents pick the closest wrong option rather than skipping or selecting "other" with no detail. The data looks clean — everyone picked something — but it's silently inaccurate because real answers got buried in wrong buckets.

Next step

Run the full 80:20 mix — closed backbone, open spine, one pipeline

Closed-ended questions give you the countable backbone — rates, averages, cross-tabs, pre-post shifts. Open-ended questions give you the reasoning. Sopact Sense runs both in the same survey with the same participant IDs across every wave — so you see the what and the why on one dashboard, and the Missing Option Problem gets caught by the open-ended follow-up before the data ships.

  • All six closed-ended types supported out of the box
  • "Other" write-ins surfaced automatically for list improvement
  • Scale formats locked across waves — longitudinal tracking stays valid