
Open-ended questions: examples, types, and how to score them

Plain-language guide to open-ended questions. Examples, the six writing rules, and how a rubric turns long responses into decisions in hours.

Updated May 2, 2026
Use case: Open-ended questions

An open-ended question gathers a story. A rubric turns the story into a decision. Most programs stop at the story.

This guide explains what an open-ended question actually does, the six rules for writing one that produces useful answers, and how to score the responses with a rubric so the answer turns into a decision instead of a backlog. Examples come from rapid-relief grant programs, workforce training, and student feedback. No prior background needed.

Open-ended question
In two or three sentences, describe what brought you here today.
From a rapid-relief intake form. One open prompt. No checkboxes.
Rubric, 3 criteria
Urgency of need: 3 / 3
Specificity of situation: 2 / 3
Stability factors: 2 / 3
Total: 7 / 9
Decision
Approved. $1,500 rapid-relief disbursed.
Returned to applicant in two hours. Manual review path: three weeks.
What this guide covers
  • 01 · The five-step pathway from question to decision
  • 02 · Definition, types, and worked examples
  • 03 · Six rules for writing a question that produces useful answers
  • 04 · The choices that decide whether the system shortens response time
  • 05 · A worked example from rapid-relief grant intake
  • 06 · Common questions about types, surveys, and analysis
The pathway

From an open-ended question to a decision in five steps

Most programs treat open-ended questions as a collection task. The work is not the collection. The work is what happens between the response and the decision. Five steps, in order. The middle three are where rubric-based scoring removes the manual bottleneck that decides whether response time is hours or weeks.

From question to decision
01
Question

Phrase the prompt so a yes-or-no answer is impossible. Anchor it to a specific moment.

02
Response

A respondent writes or speaks an answer in their own words. Length is theirs to choose.

03
Rubric

Three to five criteria, written before responses arrive. Each criterion scored 0 to 3.

04
Score

AI applies the rubric to the response. Borderline scores route to a human reviewer.

05
Decision

Approve. Route to review. Follow up. The score is what shortens response time.

Where the bottleneck lives
Response · step 02
Quick to collect. Hundreds of paragraphs sitting in a spreadsheet column.
Rubric · step 03
Often skipped. Reviewers code on instinct. Two reviewers disagree on the same response.
Score · step 04
Manual coding takes weeks. Decision waits. The applicant waits longer.

Steps 03 and 04 are where rubric-based AI scoring matters. Write the rubric before the question goes out. Score against it as responses arrive. Surface borderline cases to human reviewers. The decision in step 05 returns in hours instead of the next quarterly review.
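
A minimal sketch of what steps 03 and 04 look like in code, with illustrative criteria, an assumed borderline band, and a placeholder standing in for the AI scorer:

RUBRIC = {  # step 03: locked before the question goes out, each criterion scored 0 to 3
    "urgency_of_need": "How immediate is the need described?",
    "specificity_of_situation": "Does the response name a concrete moment or event?",
    "stability_factors": "Does the response mention resources or a backup plan?",
}

def score_response(text: str) -> dict[str, int]:
    """Step 04: apply the locked rubric to one response (placeholder scores here)."""
    return {criterion: 2 for criterion in RUBRIC}  # a real system calls the AI scorer

review_queue = []
for response in ["Lost hours at work this month and rent is due Friday."]:
    scores = score_response(response)
    if 4 <= sum(scores.values()) <= 6:            # assumed borderline band
        review_queue.append((response, scores))   # routed to a human reviewer for step 05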

The pathway above describes a single open-ended question. A typical rapid-relief intake or post-program feedback form runs five to seven open-ended questions in parallel, each with its own rubric. The number of questions is not the constraint. The presence of a rubric for each one is.

Definitions

Open-ended questions, defined and distinguished

Five questions worth answering before writing any prompt. The answers also form the working vocabulary for the rest of this guide.

What is an open-ended question?

An open-ended question is a question phrased so the respondent has to write or speak an answer in their own words instead of picking from a list. There is no fixed answer set. The wording usually starts with what, how, why, describe, or tell me about.

Open-ended questions surface reasons, situations, and stories that closed multiple-choice questions cannot capture. They produce richer answers and require more analysis work, which is why a rubric matters.

What is an open-ended question example?

A working example reads as a single sentence that invites a paragraph in return. Each one starts with a verb that opens a story, anchors the answer to a specific moment, and cannot be answered with yes or no.

Open-ended question examples
  • In two or three sentences, describe what brought you here today.
  • What changed for you between the start of the program and now?
  • Describe the last time you used the skill we covered in week three.
  • Walk me through what happened when you tried to apply for the credential.
  • What was the most useful part of the program, and why?
  • Tell me about the moment you realized this approach would work.

What is the difference between open-ended and closed-ended questions?

A closed-ended question gives the respondent a fixed set of answers to choose from: yes or no, a Likert scale, multiple choice. An open-ended question asks the respondent to answer in their own words.

Closed questions are fast to score and comparable across people. Open questions surface reasons and stories that closed questions miss. Most useful surveys use both, with open-ended questions placed where the program needs context that no fixed-answer set could anticipate.

What are the types of open-ended questions?

Five common types appear across surveys, interviews, and feedback forms. The right type depends on what decision the answer has to inform.

Five types of open-ended questions
  • Descriptive. Describe a situation or behavior. "Describe what brought you here today."
  • Reflective. Think back on an experience. "What changed for you between week one and now?"
  • Comparative. Contrast two states. "How does this differ from what you tried before?"
  • Hypothetical. What would you do in a situation. "If you had two more weeks, what would you do?"
  • Evaluative. Judge or rate in your own words. "What worked, and what would you change?"

Open-ended questions: meaning and characteristics

The phrase open-ended means the answer is not constrained by a fixed answer set. The respondent decides the length, the angle, and the level of detail. The shape is open. The format is open.

Five characteristics show up across every working open-ended question. The prompt cannot be answered yes or no. It is anchored to a specific moment or situation. It asks one thing, not three. It uses plain language. And it has a rubric written for it before any responses arrive. Without the fifth, the answers pile up and never reach a decision.

Related but different

Open-ended vs closed-ended

Open invites the respondent's own words. Closed offers a fixed set to pick from. Most surveys use both. The question is which goes where, and which one carries the decision.

Open-ended vs leading

A leading question implies the answer in the wording. "Why did you find this program helpful?" is leading. "What part of the program, if any, was useful?" is open. Open-ended questions invite the answer the respondent actually wants to give.

Open-ended vs probing

A probing question follows up an answer to deepen it. "Tell me more about that." "What happened next?" Probes are tools used in interviews after an open-ended opener. Both are open in form. The probe is reactive. The opener is planned.

Open-ended vs hypothetical

A hypothetical question is one type of open-ended question, asking what the respondent would do in a scenario. Hypotheticals are useful for design work. They are weaker for outcome measurement, where descriptive or reflective open-ended questions land harder.

Design principles

Six rules for writing a question that produces useful answers

Most open-ended questions fail not because the topic is wrong but because the wording lets the respondent off the hook. These six rules show up in every survey, intake form, and interview script that produces decision-grade answers.

01 · Wording

Start with a verb that opens a story

No yes-or-no triggers. No "do you" or "are you" openers.

Open-ended questions start with describe, walk me through, what, how, or tell me about. The wording removes yes-or-no as a possible answer. "Did the program help you?" closes the door. "Describe one thing that changed for you during the program" opens it.

Why it matters: the verb at the front decides whether the response is one word or one paragraph.

02 · Specificity

Anchor to a specific moment

Time, place, or event the respondent can locate in memory.

A vague prompt produces a vague answer. "How did the training go?" gets generic praise. "Describe the last time you used the framework we covered in week three" gets a real story. Anchors give the respondent something to recall, not opinions to assemble.

Why it matters: anchored prompts produce responses you can actually score against a rubric.

03 · Single intent

Ask one thing, not three

One verb. One topic. One time horizon.

A double-barreled question forces the respondent to pick which half to answer. "What worked, what did not, and what would you change?" reads as a paragraph but acts as three prompts. Split it into three questions or pick the one that drives the decision.

Why it matters: one-intent prompts let two reviewers score the same response and land within one point.

04 · Order

Place the heaviest question last

Lightest first. Reflective and evaluative at the end.

Respondents warm up. The first open-ended question gets a shorter answer than the third. Lead with a descriptive question that lets the respondent tell what happened. Save evaluative or reflective prompts for after they have already started writing.

Why it matters: ordering decides whether the most important answer arrives or gets skipped.

05 · Length

Short questions, long answers

A thirty-word prompt produces a five-word reply.

Long prompts read as work. Respondents skim, miss the actual ask, and write a sentence that does not answer it. Cut the question to one sentence under twenty words. The space saved goes to the response, where it belongs.

Why it matters: short prompts raise response length and quality at the same time.

06 · Pre-rubric

Write the rubric before the question goes out

Three to five criteria. Score 0 to 3 each. Drafted first.

A rubric written after responses arrive is a rationalization. A rubric written first is a design. Two reviewers using the same pre-written rubric will land within one point on the same response. That is the quality bar that turns open-ended responses into decisions.

Why it matters: the rubric is what shortens response time from weeks to hours.
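
A minimal sketch of that pre-launch check, with hypothetical sample scores: two reviewers score the same five sample responses against the draft rubric, and any pair more than one point apart points to a criterion that needs tighter wording before launch.

reviewer_a = {"sample_1": 7, "sample_2": 4, "sample_3": 8, "sample_4": 5, "sample_5": 6}
reviewer_b = {"sample_1": 6, "sample_2": 4, "sample_3": 8, "sample_4": 6, "sample_5": 6}

disagreements = {
    sample: (reviewer_a[sample], reviewer_b[sample])
    for sample in reviewer_a
    if abs(reviewer_a[sample] - reviewer_b[sample]) > 1
}

print(disagreements or "Rubric ready to lock")  # empty means every pair landed within one point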

Method choices

The choices that decide whether the system shortens response time

Six decisions a program makes when designing an open-ended question. The first one cascades into the rest. Get the rubric decision wrong and the others stop mattering.

Each choice below names the broken way, the working way, and what the choice decides.
When you write the rubric

The first decision. Cascades into all the others.

Broken

Responses arrive. Reviewers read them. Themes get coded on instinct. Two reviewers tag the same response differently. Categories drift across the cohort.

Working

Write the rubric before the question goes out. Three to five criteria, 0 to 3 each. Test it on five sample responses. Lock the criteria before launch.

Whether two reviewers agree on the same response within one point.

How responses get scored

Manual coding versus rubric-based AI.

Broken

A team member reads each response, tags it, counts themes. Coding two hundred responses takes three weeks. Decisions wait. Applicants wait longer.

Working

AI scores against the locked rubric as responses arrive. Borderline scores route to a human reviewer. Hundreds of responses scored in the time it took to read ten.

Time to respond: hours instead of weeks.

How outliers are handled

Average them out, or surface them.

Broken

High-stakes outlier responses get rolled into an average. The applicant in crisis reads the same as the applicant with a stable backup plan. The average hides both.

Working

Outliers route to a human reviewer. The system flags responses that score at the extremes or that match a watch-list pattern. Reviewers see them within the hour.

Whether high-stakes cases get human eyes on day one or day twenty.

How rubric ties to action

Reporting only, or the score routes the decision.

Broken

Rubric scores live in a quarterly report. The grant decision happens in a different meeting on a different timeline. The score informs nothing in real time.

Working

The score routes the decision. Above a threshold, approve. Below it, route to review. Edge cases queue for a reviewer. The action is in the same workflow as the score.

Whether the system actually shortens response time or only reports on it.

How questions evolve

Re-ask same wording, or iterate.

Broken

The same prompt runs every cohort, every quarter. Response patterns hint that the wording is unclear. The team notices but cannot rewrite without losing the historical comparison.

Working

Iterate the wording when the response data signals it. Keep the rubric stable across versions so scores remain comparable. Document each version.

Quality of next-cycle data versus a permanent dent in this one.

How many open-ended questions you ask

Maximize the count, or score every one.

Broken

Ten open-ended questions in the survey, none with a rubric. Six months later the team has thousands of paragraphs and no scoring. The data sits and ages.

Working

Ask fewer questions. Score every one. Three to five open-ended prompts, each with its own rubric, beat ten prompts with no scoring plan every time.

Whether the responses become evidence or remain a backlog.

Compounding effect

The first row controls the rest. Write the rubric before the question goes out and the other five choices follow naturally. Skip that one, and AI scoring has no rubric to apply, outliers have no thresholds to flag, and the scores have nothing to tie back to a decision. The whole system collapses into manual coding with extra steps.

A worked example

Rapid-relief grant intake: one open prompt, scored in hours

A grant program disbursing everything from $200 gas cards to $2,000 eviction-support payments. One open-ended prompt on the application. What the program builds between collection and disbursement is what shortens response time from weeks to hours.

We run rapid-relief micro-grants for families in our county: gas cards through eviction-support payments, $200 to $2,000. Volume runs 80 to 120 applications a week. Every application has one open prompt: describe in two or three sentences what brought you here today. The prompt is the right ask. The problem is what happens next. A reviewer reads the response, copies a few notes into a case-management tool, opens a second tab to check eligibility, makes a call. Fifteen to twenty minutes a case on a good day. We have eight reviewers. The math says two weeks of backlog before applicants hear anything. The applicant who needed gas money to get to a job interview heard back after the interview already happened.

Workforce-and-relief program lead, mid-cohort cycle.

Quantitative axis

Response time

Hours from application submitted to applicant notified. The metric every program manager tracks. The metric every applicant feels.

Bottlenecked at the scoring step

Qualitative axis

Evidence depth

How much of the applicant's situation the decision actually accounts for. The thing the open-ended prompt was asked to surface in the first place.

What rubric-scored intake produces

Locked rubric, written first

Three criteria: urgency of need, specificity of situation, stability factors. Each scored 0 to 3. Tested on five sample responses before the form goes live.

AI scores every response on arrival

Ninety-second turnaround per response. The same rubric applied identically across every applicant. Two reviewers re-checking ten responses land within one point.

Borderline scores route to a reviewer

Total score in the middle band, or any single criterion at zero, queues for human review within the hour. The reviewer sees the rubric breakdown alongside the response.

Decision returns to the applicant in hours

Above-threshold scores trigger approval and disbursement. The applicant gets a decision the same day, not at the next case-review meeting two weeks out.
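
The routing rule above, written as a minimal sketch. The approval cutoff and the borderline band are assumptions for illustration, not fixed values from the program:

APPROVE_AT = 7         # assumed approval threshold out of a 9-point maximum
BORDERLINE = (4, 6)    # assumed middle band routed to a reviewer

def route_application(scores: dict[str, int]) -> str:
    total = sum(scores.values())
    if any(value == 0 for value in scores.values()):
        return "human_review"                # a zero on any criterion always gets human eyes
    if total >= APPROVE_AT:
        return "approve_and_disburse"
    if BORDERLINE[0] <= total <= BORDERLINE[1]:
        return "human_review"                # middle band, reviewed within the hour
    return "follow_up"                       # low total: ask the applicant for more detail

print(route_application({"urgency_of_need": 3, "specificity_of_situation": 2, "stability_factors": 2}))
# approve_and_disburse: the 7 / 9 response from the top of this guide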

Why manual case review fails this volume

Reviewers code on instinct, not a rubric

No locked criteria written before launch. Each reviewer applies their own threshold for urgency. Two reviewers reading the same response disagree on the disbursement amount.

Each case takes fifteen to twenty minutes

Read the response. Switch to the case-management tool. Copy notes. Check eligibility. Type a decision rationale. Eight reviewers cannot keep up with 100 weekly applications.

Outliers and crisis cases sit in the same queue

The applicant facing eviction next week and the applicant filing as a precaution land in the same backlog. There is no signal to surface the urgent one earlier.

Response time stretches to weeks

By the time a decision returns, the moment the relief was needed has often passed. The data the program produces describes activity, not the outcomes the funding actually changed.

Why this is structural, not procedural

The rapid-relief program is not slow because reviewers are slow. It is slow because the open-ended response and the scoring step live in different tools. Putting the rubric and the response in the same workflow is what removes the bottleneck. The fix is structural to how intake is built, not a process tweak applied on top of an existing case-management stack.

Applications

Three program contexts where rubric-scored open-ended responses change the workflow

Three different organizational shapes. Same architecture: write the rubric first, score the response on arrival, route the decision in hours. The shape of the decision changes per context. The structure does not.

01

Workforce training, essay scoring

Cohort-based programs scoring open-ended assessments against a competency rubric.

Workforce training programs ask trainees to write reflective essays at the midpoint and end of the program. The prompt asks the trainee to describe a moment when they applied a specific competency. Volume runs 30 to 80 essays per cohort, and a typical organization runs four to six cohorts a year.

What breaks. The instructor reads each essay, highlights themes, types comments. End-of-cohort scoring takes a full week of administrative time. By the time a trainee receives feedback, they have already started the next module. The signal arrives after the moment to act on it has passed.

What works. The competency rubric is written before the prompt goes out, with three to four criteria scored 0 to 3 each. AI applies the rubric to every essay on arrival. Borderline scores and any zero on a single criterion route to the instructor. Trainees see scored feedback within the same day. The instructor focuses time on the cases where their judgment matters most.

A specific shape

A workforce-development nonprofit running 200-trainee cohorts. End-of-module essay scoring went from one week to one afternoon. Instructors now spend that reclaimed time on the 15 to 20 essays the rubric flagged for human review.

02

Education, student reflection

Schools and after-school programs scoring open-ended reflections against learning outcomes.

Education programs ask students for open-ended reflections after lessons, projects, or units. The questions read as variations on "describe what you learned" and "describe a moment you got stuck." The volume per teacher per week is in the hundreds when the question goes out across several classes.

What breaks. Teachers read what they can, score against an internal sense of "looks like a strong reflection," and skip what they run out of time to read. Students who needed feedback the most get the least of it. Patterns across classes go unseen because no one is scoring at scale.

What works. The learning rubric pairs three criteria with student-friendly language: depth of reflection, specificity of example, evidence of next-step thinking. AI scores each reflection. Teachers see the score distribution per class and the responses flagged for follow-up. The teacher's time goes to conversations with students whose responses signaled a gap, not to scoring every paper.
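
A minimal sketch of that per-class view, with hypothetical reflection scores and an assumed follow-up cutoff:

from collections import Counter

reflections = [
    {"student": "A", "class": "Period 2", "score": 6},
    {"student": "B", "class": "Period 2", "score": 2},
    {"student": "C", "class": "Period 4", "score": 8},
]

FOLLOW_UP_BELOW = 4  # assumed cutoff for a teacher conversation

by_class: dict[str, Counter] = {}
for r in reflections:
    by_class.setdefault(r["class"], Counter())[r["score"]] += 1

flagged = [r["student"] for r in reflections if r["score"] < FOLLOW_UP_BELOW]
print(by_class)  # score distribution per class
print(flagged)   # students whose reflections signaled a gap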

A specific shape

A youth-development organization running afterschool programs across twelve sites. Weekly reflection scoring shifted from teacher-by-teacher inconsistency to a shared rubric. Site directors now compare scoring distributions across cohorts and surface the patterns that need program-level attention.

03

Multi-question intake forms

Programs collecting five to seven open prompts on intake, each with its own rubric.

Some programs need more than one open-ended question on intake. A family-resource center asks about current situation, prior services, immediate need, and stability factors. A scholarship program asks about goals, current obstacles, and academic history. A coaching service asks about three different domains.

What breaks. Each open-ended question gets a column in a spreadsheet. Five questions across 200 applicants is 1,000 paragraphs of free text. Reviewers attempt to read everything, give up halfway, and revert to scanning. Decisions get made on the closed-ended fields the form also collected, while the open-ended responses go unused.

What works. Each open prompt has its own rubric, written before launch. Each rubric has its own scoring lane. The form treats five open prompts as five scored variables, not as five bodies of text to read. Reviewers see the score breakdown across criteria for the applicant they are deciding on, not 1,000 paragraphs.
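
A minimal sketch of that shape, with hypothetical prompt names and scores: each open prompt carries its own rubric, and the reviewer-facing profile is one total per prompt rather than five bodies of text.

applicant_scores = {
    "current_situation": {"urgency": 3, "specificity": 2, "stability": 2},
    "prior_services": {"relevance": 1, "specificity": 2},
    "immediate_need": {"urgency": 3, "specificity": 3},
    # one rubric per prompt, each written before launch
}

profile = {prompt: sum(scores.values()) for prompt, scores in applicant_scores.items()}
print(profile)  # {'current_situation': 7, 'prior_services': 3, 'immediate_need': 6}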

A specific shape

A scholarship program with a six-question intake form, 800 applicants per cycle. Decisions that used to wait on a reviewer reading every essay now happen as the rubric scores arrive. Reviewers see a profile, not an unread inbox.

A note on tools


Google Forms, SurveyMonkey, Typeform, and Qualtrics handle the collection step well. They let respondents type free-text answers to an open-ended prompt and they store those answers in a column. The architectural gap is what happens between the response and the decision. Each of these tools exports the open-ended responses for a separate analyst, in a separate tool, on a separate timeline. The rubric, if one exists, lives in a document somewhere outside the workflow.

Sopact Sense closes the gap by putting the rubric, the response, and the routing in the same workflow. Rubrics are written before the form goes live, AI scores each open-ended response on arrival against the locked rubric, borderline scores route to a human reviewer, and the resulting score ties to action in the same system. The fix is not a better text-coding tool. It is making the rubric a structural part of how intake is built.

FAQ

Open-ended questions: common questions, answered

Q.01

What is an open-ended question?

An open-ended question is a question phrased so the respondent has to write or speak an answer in their own words instead of picking from a list. There is no fixed answer set. The wording usually starts with what, how, why, describe, or tell me about. Open-ended questions surface reasons, situations, and stories that closed multiple-choice questions cannot capture. They produce richer answers and require more analysis work, which is why a rubric matters.

Q.02

What is an open-ended question example?

A working example: "In two or three sentences, describe what brought you here today." Another: "What changed for you between the start of the program and now?" Another: "Describe the last time you used the skill we covered in week three." Each one starts with a verb that invites a story, anchors the answer to a specific moment, and cannot be answered with yes or no. The page includes more examples for surveys, interviews, and feedback forms.

Q.03

What is the difference between open-ended and closed-ended questions?

A closed-ended question gives the respondent a fixed set of answers to choose from: yes or no, a Likert scale, multiple choice. An open-ended question asks the respondent to answer in their own words. Closed questions are fast to score and comparable across people. Open questions surface reasons and stories that closed questions miss. Most useful surveys use both, with open-ended questions placed where the program needs context that no fixed-answer set could anticipate.

Q.04

What are the types of open-ended questions?

Five common types appear across surveys, interviews, and feedback forms. Descriptive questions ask the respondent to describe a situation or behavior. Reflective questions ask them to think back on an experience. Comparative questions ask them to contrast two states. Hypothetical questions ask what they would do in a situation. Evaluative questions ask them to judge or rate something in their own words. The right type depends on what decision the answer has to inform.

Q.05

Why use open-ended questions in research?

Open-ended questions in research surface the reasons behind a number. A closed question can tell you what people did. An open question can tell you why. They are useful when the response set cannot be fully predicted, when the program needs to learn how participants describe their own experience, or when the program wants quotes that travel into a report. They produce more analysis work, but the depth is often the point of asking in the first place.

Q.06

How do you analyze open-ended responses?

The traditional method is manual coding: a researcher reads each response, tags themes, and counts. This works for small samples and takes weeks for large ones. The faster method is rubric-based scoring. The team writes a short rubric, often three to five criteria with a 0-3 score each, before any responses arrive. AI applies the rubric to each response. Borderline scores route to a human reviewer. The team gets comparable scores across hundreds of responses without losing the qualitative depth.

Q.07

What is an open-ended questionnaire?

An open-ended questionnaire is a survey instrument made up mostly or entirely of open-ended questions. Each question asks for a written answer rather than a multiple-choice selection. Open-ended questionnaires are common in qualitative research, intake forms for grant programs, and post-program feedback. They produce rich data and demand more analysis time. Pairing the questionnaire with a rubric, written before responses arrive, is what keeps the analysis from becoming a backlog.

Q.08

What are open-ended interview questions for research?

Open-ended interview questions are prompts a researcher uses to invite a long answer in conversation. Common openers: "Walk me through what happened." "What made you decide?" "Describe the moment you realized." "How did that change what you did next?" These prompts are paired with quiet listening and follow-up probes. The point is to let the participant lead the answer, then code the recording or transcript against a rubric so insights are comparable across interviews.

Q.09

How do open-ended questions work in quantitative research?

Quantitative research mostly uses closed questions because the answers need to be counted. Open-ended questions still appear in two places. First, as one or two prompts at the end to surface reasons the multiple-choice items missed. Second, as a coded variable: each response gets a rubric score that is treated as a quantitative variable in the dataset. The rubric is what turns a written paragraph into something the analyst can put in a regression alongside age, income, or program type.
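
A minimal sketch of that coded-variable idea, assuming pandas and illustrative column names: the rubric total for each open-ended response sits in the dataset like any other numeric field.

import pandas as pd

df = pd.DataFrame({
    "age": [34, 51, 27],
    "program_type": ["workforce", "relief", "workforce"],
    "rubric_score": [7, 4, 8],  # 0-9 total from a three-criterion, 0-3 rubric
})

# The scored column feeds the same summaries or models as any closed-ended field.
print(df.groupby("program_type")["rubric_score"].mean())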

Q.10

What does open-ended response mean?

An open-ended response is the answer a respondent writes or speaks to an open-ended question. It is not constrained by a fixed answer set. Open-ended responses range from a single sentence to several paragraphs. They are the raw input for thematic coding or rubric-based scoring. The phrase is often used interchangeably with open text response, free-text response, or narrative answer. The handling is the same in each case: the program needs a method to turn the words into a decision.

Q.11

What are open-ended feedback questions?

Open-ended feedback questions invite a participant to describe what worked, what did not, and what changed. Common forms: "What was the most useful part of the program?" "What would you change?" "Describe one thing you applied this week." Feedback questions are the bridge between activity and outcome. Scored against a short rubric, they tell the program team where to invest the next cycle and which moments produced the most change.

Q.12

How many open-ended questions should a survey have?

Two or three for a short feedback survey. Five to seven for a program intake form where the answers drive a decision. Above seven, response quality drops and abandonment rises. The number that matters is not the maximum: it is whether the program has a rubric ready to score every open-ended question it asks. Asking ten open-ended questions and scoring none of them produces a backlog and slow response times. Ask fewer, score every one.

Q.13

How do rubrics make open-ended responses decidable?

A rubric is a short scoring guide written before responses arrive. It lists three to five criteria with a 0-3 score on each. Two reviewers using the same rubric on the same response should land within one point. That consistency is what turns a long written answer into a number the program can act on. AI scoring scales the rubric across hundreds of responses while keeping borderline cases on a human reviewer's queue. The result: rapid-relief decisions in hours instead of manual coding over weeks.

Q.14

Can I use Google Forms or SurveyMonkey for open-ended questions?

Yes for the collection step. Both let respondents type a free-text answer to a prompt. The gap is what happens next. Google Forms and SurveyMonkey export the responses as a column of text and leave the analysis to the team. For a small program with a few dozen responses, that is workable. For a program reviewing hundreds of intake responses every week, the team needs a layer that scores each response against a rubric so the decision happens in hours rather than the next quarterly review.

Open-ended, made decidable

Bring three open-ended questions. Leave with three rubrics.

A working session, not a demo. We sit with three open prompts you already use, draft a rubric for each, run sample responses through the rubric, and you leave with a working scoring kit you can take back to your program. No procurement decision required.

Format

Sixty-minute video call with Unmesh Sheth, founder of Sopact and author of this guide.

What to bring

Three open-ended prompts you currently use on intake, post-program feedback, or training assessment.

What you leave with

A drafted rubric per prompt, sample responses scored against each rubric, and a working setup you can run yourself.