
Qualitative and Quantitative Survey: Examples & Questions

Are surveys qualitative or quantitative? Direct answer plus question examples, Likert scale classification, and the Survey Question Pairing Principle

Updated May 3, 2026

A score tells you how much. A story tells you why. Most surveys need both.

The question of whether a survey is qualitative or quantitative has the same answer in almost every real program: both, in different ratios. This guide is for impact practitioners who run surveys for grants, applications, and training programs. It covers what each type does, when to use which, and how three real program shapes (small grants, larger grant evaluation, workforce pre-post) mix the two. Live reports from each shape are linked throughout.

What this page covers
  • 01 · Three program shapes that mix the two
  • 02 · Definitions: qualitative, quantitative, mixed
  • 03 · Six rules for designing a mixed survey
  • 04 · Six method choices that decide the result
  • 05 · Workforce training: confidence vs score
  • 06 · Paired examples and FAQs
Three program shapes

Start with what you want to learn. The mix follows.

Three real programs, three different learning goals, three different mixes of qualitative and quantitative questions. All three are surveys. All three are mixed. The ratio depends on what each program is trying to learn. Each column links to a live Sopact report from a real cohort with the same shape.

01 · Use case

Small grants screening

200 applicants. One week to make a shortlist. Limited reviewer time.

What you want to learn

Who passes the eligibility bar, who is a strong fit, and which 30 to triage forward.

80% CLOSED
20% OPEN

Heavy quantitative: eligibility checks, demographics, budget bands, yes/no fit. One short narrative.

Role of the closed

Sort and screen at scale. Eligible? Region match? Budget band? Are key fields filled out?

Role of the open

A 200-word pitch: what would the grant fund, and what would change because of it?

Analysis output

A ranked triage table. Closed scores filter to the eligible pool; the pitch is read for fit by reviewers in a second pass.

Live report · Sopact · See a triage report
02 · Use case

Larger grant evaluation

30 finalists. Long applications. Panel reviews each one in depth.

What you want to learn

Which 5 to fund. Each finalist must be understood as a project, not a checklist score.

30% CLOSED
70% OPEN

Heavy qualitative: project narrative, team capacity, theory of change. Closed for budget and metrics.

Role of the closed

Structured budget, beneficiary counts, timeline milestones. Things that need to be comparable across applications.

Role of the open

The project narrative. Five to seven prompts on the problem, the approach, the team, the risk, and the change being claimed.

Analysis output

A ranked finalist panel deck. Each finalist scored on rubric dimensions extracted from the narrative, alongside the structured budget.

Live report · Sopact · See a scholarship panel report
03 · Use case

Workforce pre and post

One cohort. A pre survey, a post survey, a six-month follow-up.

What you want to learn

What changed for participants on the job, and why. Confidence is the working construct, but the story behind the score is what guides the next cohort.

50% CLOSED
50% OPEN

Even mix. Every Likert item has a paired open-ended follow-up on the same construct.

Role of the closed

Five Likert items measuring confidence, skill, and fit, scored at week 0 and again at week 12. The pre-post delta is what the funder reports on.

Role of the open

Three to four prompts about specific moments in the program. The same prompt at week 12 captures what the participant did with the training on the job.

Analysis output

A confidence-vs-score correlation report: whose scores went up, what they wrote about, and which moments come up most among the participants whose scores changed most.

Live report · Sopact · See the Girls Code cohort report

Three shapes, one architecture. What changes is the ratio and the role of each question type. What stays the same is that the survey closes the loop from learning goal to report.

Definitions

Six questions readers ask when they arrive

Each of the six questions below shows up tens of thousands of times a month in search. Plain-language answers, written for impact practitioners who have to design a survey by next week. Cross-references point to the relevant section of the page below each answer.

Is a survey qualitative or quantitative?

A survey is neither qualitative nor quantitative on its own. The question types decide. A survey of Likert ratings and yes/no answers is quantitative. A survey of open-ended prompts is qualitative.

Most real surveys are mixed: a few closed-ended questions for the numbers, a few open-ended ones for the why. The useful question is not which kind, but what you are trying to learn, and the mix follows from there. The three program shapes at the top of the page show three different mixes.

What is a qualitative survey?

A qualitative survey is one designed to collect open-ended written responses, in the respondent's own words. The questions are prompts: describe a moment, tell us what happened, walk us through your decision. The data is text.

The work of analysis is reading across the responses to find recurring themes and to pull representative quotes. Qualitative surveys answer the why behind a number that another survey would only count.

What is a quantitative survey?

A quantitative survey is one designed to produce numbers and categories. It uses fixed-option questions: rate this on a 5-point scale, choose one, yes or no. The data can be aggregated, averaged, compared across cohorts, and charted.

A quantitative survey is what most funder reports run on. The limit is that it tells you how much something changed, not why. Pairing the closed-ended items with open-ended follow-ups closes that gap.

Is a questionnaire qualitative or quantitative?

A questionnaire is the form. The questions on the form decide whether the data is qualitative or quantitative. A questionnaire with Likert scales and multiple choice produces quantitative data. One with open-ended prompts produces qualitative data.

Most program-evaluation questionnaires include both, which makes them mixed-method instruments. The terms survey and questionnaire are often used interchangeably; the questionnaire is what you build, the survey is the whole process.

Is the Likert scale qualitative or quantitative?

The Likert scale is quantitative. It captures perception (agreement, confidence, satisfaction), but the answer comes back as a number on an ordered scale: 1 through 5, or 1 through 7. Numbers can be averaged, charted, and compared across groups.

The confusion comes from what the scale measures (a feeling) being mistaken for the format of the data (a number). If you also want the language behind the rating, you pair the Likert with an open-ended follow-up.

Can a survey be both qualitative and quantitative?

Yes, and most strong surveys are. Pair every closed-ended question on a topic with an open-ended follow-up on the same topic. The closed gives you the number you can compare across the cohort. The open gives you the language behind the number.

When the analysis joins the two at the respondent, you get something neither one produces alone: which kinds of stories show up at high scores, and which show up at low scores. The worked example below walks through a workforce-training survey with exactly this kind of pairing.

Four neighboring terms, kept distinct

Useful when readers arrive from a query that conflates these.

Survey vs questionnaire

A questionnaire is the form, the set of written questions. A survey is the whole process: designing, distributing, collecting, and analyzing. Most people use the words interchangeably, and that is fine in casual use.

Open-ended vs qualitative

Open-ended is about the response format (no fixed choices). Qualitative is about the kind of evidence (texture, reasoning). Most open-ended responses are qualitative, but a free-text field that asks for a number is still open-ended in format and quantitative in content.

Closed-ended vs quantitative

Closed-ended is about the response format (fixed choices). Quantitative is about the kind of data (countable). Closed-ended questions almost always produce quantitative data: rating scales, multiple choice, yes/no, ranked lists.

Mixed-method vs multi-method

Mixed-method means combining qualitative and quantitative data on the same topic, joined at the respondent. Multi-method means using more than one collection method (survey plus interview, say). A survey can be mixed-method on its own; multi-method usually requires multiple instruments.

Design rules

Six rules for mixing qualitative and quantitative

Six rules that turn the abstract idea of "mixed-method" into a survey a respondent will finish and a team will actually read. Apply them in the order shown; rules one and two carry the most weight.

01 · Pair

Pair every score with a story

A score asks how much. A story asks what shape.

For every Likert item on a topic that matters, write one open-ended follow-up on the same topic. Confidence rated 1 to 5. Confidence described in a sentence. The pair joins at the respondent and produces what neither one does alone.


Why it matters. Without the pair, the score has no explanation.
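The pairing rule is easy to sketch in code. A minimal illustration in Python with pandas; the column names and responses below are invented for illustration, not data from any real cohort:

```python
import pandas as pd

# Hypothetical example: one Likert score and one open-ended story
# per respondent, on the same construct (confidence).
responses = pd.DataFrame({
    "respondent_id": ["r01", "r02", "r03", "r04"],
    "confidence_score": [5, 2, 4, 1],
    "confidence_story": [
        "The week-4 case study made it click.",
        "I missed sessions due to childcare conflicts.",
        "Pair programming helped me most.",
        "I never felt caught up after week 2.",
    ],
})

# Bucket the scores into tiers so stories can be read by tier.
responses["tier"] = pd.cut(
    responses["confidence_score"],
    bins=[0, 2, 3, 5],
    labels=["low", "mid", "high"],
)

# Which stories show up at high scores, and which at low scores?
for tier, group in responses.groupby("tier", observed=True):
    print(tier, group["confidence_story"].tolist())
```

Because score and story live on the same row, no manual matching step is needed; the pair joins at the respondent by construction.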

02 · Match

Match the mix to the learning goal

Triage, evaluation, and pre-post each call for a different ratio.

A small-grants triage runs heavy quantitative. A larger grant evaluation runs heavy qualitative. A workforce pre-post runs a balanced pair. The three program shapes at the top of the page show them side by side. The mix is a consequence of the goal, not a preference.


Why it matters. A wrong-mix survey collects the right data for the wrong question.

03 · Simple

Keep the survey short enough to finish

Length is the biggest predictor of completion.

Drop-off climbs sharply past twelve items in a non-incentivized survey. Two to five open-ended prompts is the working ceiling for most cohorts. Cut every item that does not earn its place. If you need more, split into two waves rather than asking everything at once.


Why it matters. Half-finished surveys produce data nobody trusts.

04 · Anchor

Anchor every open-ended prompt to a moment

Concrete prompts produce codable answers.

"What did you think?" produces "It was good." "Describe a moment when something clicked" produces a paragraph you can read for themes. Every open prompt should point at a specific time, decision, or scene the respondent can recall.


Why it matters. Vague prompts produce filler that nobody reads.

05 · Plan

Plan the analysis before the survey goes out

Decide the cross-tabs you want before writing the questions.

If your report needs to compare new participants to returning participants, the survey must collect that field. If your report needs to correlate confidence with the moments named in the open response, the open prompt must ask for those moments. Reverse-engineer the questions from the report shape.


Why it matters. Data you cannot disaggregate cannot answer the funder's follow-up question.

06 · Identity

Track the same respondent across waves

A persistent contact ID makes pre-post analysis possible.

Anonymous data shows trends across the cohort. Persistent ID data shows what changed inside the same person. Pre-post analysis, six-month follow-up, and longitudinal tracking all depend on the same ID surviving across waves. Built in at intake, not patched later.


Why it matters. Without identity, "the cohort improved" is the strongest claim you can make.
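What a persistent ID buys at analysis time can be shown in a few lines. A hypothetical sketch in Python with pandas; the IDs, scores, and column names are invented for illustration:

```python
import pandas as pd

# Hypothetical wave data, keyed by the same persistent contact ID.
pre = pd.DataFrame({
    "contact_id": ["c1", "c2", "c3"],
    "confidence": [2, 3, 2],
})
post = pd.DataFrame({
    "contact_id": ["c1", "c2", "c3"],
    "confidence": [4, 3, 5],
})

# With a shared ID, the pre-post join is lossless and automatic.
paired = pre.merge(post, on="contact_id", suffixes=("_pre", "_post"))
paired["delta"] = paired["confidence_post"] - paired["confidence_pre"]

print(paired[["contact_id", "delta"]])
print("cohort mean delta:", paired["delta"].mean())
```

Strip the ID and this merge becomes fuzzy matching on names or emails, which is exactly where the 15 to 25 percent match loss comes from.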

Method choices

Six choices that decide whether the survey works

Each row below is a decision the survey designer makes for every survey. The first column names the choice; the next two contrast the failure mode and the working approach; the last names what the choice decides downstream. Every row teaches a design principle.

The choice
Broken way
Working way
What this decides

Where to start

Question type or learning goal.

Broken

Start by deciding "we want a quantitative survey" or "we want a qualitative survey." Pick a tool. Write questions. The survey reflects the format the team picked, not the question the team is trying to answer.

Working

Start with the learning goal. Name the decision the data will inform. The mix and the question types follow from the goal. A small-grants triage and a workforce pre-post should not look alike.

Whether the survey answers your question. Format-first surveys answer the format's question, not yours.

How to mix

All closed, all open, or paired.

Broken

Run all closed-ended questions on one survey and all open-ended questions on another. Or run open-ended questions with no related closed-ended counterpart. The score and the reason live apart.

Working

For every topic that matters, pair a closed-ended item with an open-ended follow-up on the same topic. The closed gives the magnitude; the open gives the why. Analysis joins them at the respondent.

Whether the why connects to the score. Without pairing, low scorers and high scorers blur into one bucket of feedback.

Open-ended wording

Vague prompt or anchored prompt.

Broken

Use vague prompts: "tell us about your experience" or "any feedback?" The respondent reaches for a generic word and moves on. The data is filler.

Working

Anchor to a specific moment, decision, or change. "Describe a moment when something clicked" produces a paragraph the respondent can recall. Anchored prompts produce codable answers.

Whether the open data is usable. Vague prompts produce text nobody reads.

Survey length

All-in-one or split into waves.

Broken

Stack twenty-five items in a single survey to "get everything covered." Drop-off climbs past twelve items, and answers shorten to a phrase by item twenty.

Working

Eight to twelve closed-ended items, two to five open-ended prompts. If the topic genuinely needs more, split into two waves. Each item earns its place.

Whether respondents finish the survey. Length is the biggest predictor of completion.

Identity continuity

Anonymous, named, or persistent ID.

Broken

Strip identifiers in the name of privacy. Pre and post answers cannot be matched. Themes from one wave have no link to themes in the next. Email-based matching loses 15 to 25 percent to typos and address changes.

Working

Persistent contact ID with named consent. The same person is recognizable across every wave. Pre and post answers join automatically. Six-month follow-up still finds the same respondent.

Whether you can study change at the person level. Anonymous data shows trends but never inside a person.

Analysis approach

Read by hand, or read continuously.

Broken

Export to a spreadsheet at the end of the survey. Numbers go to a chart tool. Open responses go to an analyst who reads and codes by hand. Two weeks of work; report comes out after the next cohort starts.

Working

Themes extracted as responses arrive, joined to the closed-ended scores at the respondent ID. Quotes linked back to the source response. Cross-tabulation runs continuously. Report is ready the day the survey closes.

Whether the data informs the next cohort. Late analysis arrives after the decision has already been made.

Worked example

Workforce training: confidence vs score, paired at the participant

A 47-person workforce training cohort runs a pre survey at week 0 and a post survey at week 12. The four blocks below walk through how the same survey produced both the funder-facing pre-post number and the program-team-facing explanation, joined at the same respondent across both waves.

"Our pre-post confidence scores improved 35 percent on average across the cohort. The funder report was clear: the average went up. But I needed to know which participants gained confidence, what they wrote about, and whether the high-rate-of-change cohort all pointed to the same moments. Without that, I cannot tell the next cohort what worked. The closed-ended scores live in one tab; the open-ended reflections live in another. The link between them is what I needed."

Workforce training program lead · Post-cohort funder debrief

Quantitative axis

Five Likert items, week 0 and week 12

  • Confidence on the job (1 to 5)
  • Skill self-rating, six dimensions
  • Likelihood to apply training
  • Curriculum relevance to my work
  • Likelihood to recommend the program
Qualitative axis

Three open prompts on the same constructs

  • Describe a moment in the training when your confidence changed.
  • Walk through the part you found hardest to use on the job.
  • What changed in how you approach your work since week one?
What Sopact Sense produces

Pre-post confidence delta

Average rating moved from 2.6 to 3.5 across the cohort. Each participant's individual delta is recorded against their persistent ID, ready for cohort comparison and funder reporting.

Theme card across the open responses

Seven recurring themes extracted across all 47 post-survey reflections. Each theme card shows count, share of respondents, and a one-line summary.

Paired score and theme correlation

Participants whose confidence rating jumped by two or more points all wrote about the same week-4 case study. Participants whose ratings did not move cited childcare conflicts during program hours.
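The correlation step is mechanically simple once score and theme share a respondent ID. A hypothetical sketch in Python with pandas, with invented deltas and coded themes:

```python
import pandas as pd

# Hypothetical joined table: each participant's pre-post delta and
# the theme coded from their open-ended reflection.
joined = pd.DataFrame({
    "contact_id": ["c1", "c2", "c3", "c4", "c5"],
    "delta": [3, 2, 0, 2, 0],
    "theme": [
        "week-4 case study",
        "week-4 case study",
        "childcare conflict",
        "week-4 case study",
        "childcare conflict",
    ],
})

# Split into high-change (delta >= 2) and low-change groups, then
# count which themes dominate each group.
joined["group"] = joined["delta"].map(lambda d: "high" if d >= 2 else "low")
counts = joined.groupby(["group", "theme"]).size()
print(counts)
```

This cross-tab is the answer the program lead in the quote above was missing: not just that the average moved, but which stories cluster under the movement.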

Quote card linked to the score

Three to five representative quotes per theme, attributed to participant role and rating tier. Click a high-delta participant; the quote card jumps to their reflection.

Why traditional tools fail

Closed and open live in separate exports

Likert data exports as a CSV. Open responses export as a separate column or a separate file. Joining them requires manual matching by name or email, which breaks at typos and address changes.

Pre-post matching is rebuilt every wave

Without a persistent ID, matching week-0 to week-12 responses for the same participant means a manual join. Twenty percent of participants typically lose their match to typos or rotation.

Themes are coded by hand, after the fact

An analyst reads all 47 open responses and tags themes in a spreadsheet. Two weeks of work for one wave. By the time the codes are ready, the next cohort has started.

Score and story never connect

The funder report shows the average. The program report shows the themes. Nobody can answer: which participants drove the score change, and what did they write about? That answer is not in either output.

See the live report

The same architecture produced the live Girls Code cohort report on Sopact: skill movement across six dimensions, confidence change pre to post, demographic breakdown, and themes from participant reflections. All produced from one survey, with one persistent ID per participant, in the time it took the cohort to finish the program.

Open the live report library
Paired examples

The same outcome, asked two ways, across five program areas

One row per program area. Each row shows the same outcome captured both ways: a closed-ended quantitative item paired with an open-ended qualitative prompt on the same construct. Drop these into your next survey and adjust the wording for your context.

Program area
Quantitative (closed-ended)
Qualitative (open-ended)
01 · Area

Workforce and employment

Confidence and skill on the job.

"On a scale of 1 to 5, how confident are you in your ability to find employment in your field?"

Format: 5-point Likert. Output: a number you can chart.

"Describe the most significant change in your job-search skills since starting the program."

Format: open-ended prompt. Output: a paragraph you can read for themes.

02 · Area

Financial capability

Behavior and self-perception around money.

"Did you create a monthly budget in the past 30 days? Yes / No." Plus: "Rate your financial stress: 1 (very high) to 5 (very low)."

Format: yes/no plus 5-point Likert. Output: two countable items.

"In your own words, describe how your relationship with money has changed since beginning the program."

Format: open-ended prompt. Output: a participant-voice narrative for a funder report.

03 · Area

Youth and education

Engagement and belonging in a program.

"How many days did you attend program activities this month?" Plus: "Rate your sense of belonging: 1 to 5."

Format: numeric plus 5-point Likert. Output: attendance and a self-rating.

"Tell us about a moment in the program when you felt most supported. What made it meaningful?"

Format: moment-anchored prompt. Output: a story for the board update.

04 · Area

Health and wellbeing

Self-rated wellbeing change over time.

"How would you rate your overall wellbeing today compared to when you started? (Much worse / Worse / Same / Better / Much better)"

Format: 5-point ordered category. Output: a directional change indicator.

"What specific changes have you made to your daily routine since participating in the program?"

Format: behavior-anchored prompt. Output: concrete actions you can theme.

05 · Area

Community development

Resource access and lived experience.

"How many community resources or services did you access in the past 90 days?"

Format: numeric. Output: a usage count.

"Describe how your family's access to support and resources has changed since the program began."

Format: change-anchored prompt. Output: a description that explains the count.

A note on tools
SurveyMonkey · Qualtrics · Google Forms · Typeform · Microsoft Forms · Sopact Sense

Most of the tools above collect mixed-method survey data cleanly: the Likert items go into the response table, and the open-ended responses go into the same table. The architectural gap is not collection. It is reading and joining. Reading across hundreds of free-text answers, surfacing recurring themes, joining those themes to the closed-ended scores collected on the same form, and tracking the same respondent across multiple waves are not things any of these tools do in their analysis layer.

Sopact Sense closes the gap by treating the reading and the join as part of the survey workflow rather than as export steps. Theme extraction, paired-score correlation, and cohort comparison run as responses arrive, against a persistent contact ID that links the same respondent across every wave. The live report examples show what that looks like in practice across the program shapes covered above.

FAQ

Qualitative and quantitative surveys, answered

Q.01

Is a survey qualitative or quantitative?

A survey is neither qualitative nor quantitative on its own. The question types decide. A survey of Likert ratings and yes/no answers is quantitative. A survey of open-ended prompts is qualitative. Most real surveys are mixed: a few closed-ended questions for the numbers, a few open-ended ones for the why. The useful question is not which kind, but what you are trying to learn, and the mix follows from there.

Q.02

Are surveys qualitative or quantitative?

Surveys can be either, and most are both. A survey is a way of asking questions; the kind of questions you put in it decides the kind of data you get out. Closed-ended questions produce numbers. Open-ended questions produce text. A program that needs to count outcomes and explain them uses both, paired on the same topics, and analyzes them together.

Q.03

Is a questionnaire qualitative or quantitative?

A questionnaire is the form. The questions on the form decide whether the data is qualitative or quantitative. A questionnaire with Likert scales and multiple choice produces quantitative data. One with open-ended prompts produces qualitative data. Most program-evaluation questionnaires include both, which makes them mixed-method instruments. The terms survey and questionnaire are often used interchangeably; the questionnaire is what you build, the survey is the whole process.

Q.04

What is a qualitative survey?

A qualitative survey is one designed to collect open-ended written responses, in the respondent's own words. The questions are prompts: describe a moment, tell us what happened, walk us through your decision. The data is text. The work of analysis is reading across the responses to find recurring themes and to pull representative quotes. Qualitative surveys answer the why behind a number that another survey would only count.

Q.05

What is a quantitative survey?

A quantitative survey is one designed to produce numbers and categories. It uses fixed-option questions: rate this on a 5-point scale, choose one, yes or no. The data can be aggregated, averaged, compared across cohorts, and charted. A quantitative survey is what most funder reports run on. The limit is that it tells you how much something changed, not why.

Q.06

What's the difference between a qualitative and a quantitative survey?

The difference is what each one captures. A quantitative survey captures the magnitude of an experience: how much, how many, how often. A qualitative survey captures the shape of an experience: what happened, why it mattered, what the respondent did next. They are complementary, not competing. The choice between them is a question of what you need to know, not which one is better.

Q.07

Is the Likert scale qualitative or quantitative?

The Likert scale is quantitative. It captures perception (agreement, confidence, satisfaction), but the answer comes back as a number on an ordered scale: 1 through 5, or 1 through 7. Numbers can be averaged, charted, and compared across groups. The confusion comes from what the scale measures (a feeling) being mistaken for the format of the data (a number). If you also want the language behind the rating, you pair the Likert with an open-ended follow-up.

Q.08

Can a survey be both qualitative and quantitative?

Yes, and most strong surveys are. Pair every closed-ended question on a topic with an open-ended follow-up on the same topic. The closed gives you the number you can compare across the cohort. The open gives you the language behind the number. When the analysis joins the two at the respondent, you get something neither one produces alone: which kinds of stories show up at high scores, and which show up at low scores.

Q.09

How many questions should a qualitative survey have?

Two to five well-anchored open-ended questions. Each one costs the respondent thirty to ninety seconds of typing, and drop-off begins after question three. If your survey is mostly open-ended (an exit reflection, a deep grant evaluation), eight is the upper limit. More than that and answers shorten to one phrase. Quality of anchoring matters more than count: a single specific prompt produces more usable data than four vague ones.

Q.10

How many questions should a quantitative survey have?

Most quantitative surveys run eight to twenty closed-ended items, depending on the topic. Funder-reporting surveys lean longer (fifteen to twenty Likert and demographic items). Pulse surveys lean shorter (five to eight). The cap is respondent fatigue: drop-off climbs sharply past twenty items. If your survey needs more than twenty closed-ended items, split it into two waves rather than asking everything at once.

Q.11

Are open-ended questions qualitative or quantitative?

Open-ended questions produce qualitative data. The respondent writes in their own words, and the answer is text. Once those responses are read and grouped into recurring themes, the themes can be counted, which adds a quantitative layer. So open-ended questions start qualitative and become quantifiable after coding. Strong analysis pipelines preserve both: the theme counts give the pattern across hundreds of respondents; the raw quotes give the texture.

Q.12

Are yes/no questions qualitative or quantitative?

Yes/no questions are quantitative. The answer is a category, and categories can be counted: how many participants said yes, how many said no, what share each represents. Yes/no questions are common in eligibility screens, attendance tracking, and binary outcome reporting (employed yes/no). They cannot capture nuance, which is why a yes/no on a sensitive topic should usually be followed by an open-ended prompt asking the respondent to describe what was behind the answer.

Q.13

Is a survey research method qualitative or quantitative?

Survey research is most often used as a quantitative method, because surveys can reach hundreds of respondents and produce comparable numerical data. But survey research can also be qualitative when the survey is designed around open-ended prompts read for themes rather than aggregated. Many programs use survey research as a mixed-method tool: closed-ended items satisfy the funder's reporting needs, open-ended items surface the explanation the program team needs to act on.

Q.14

How does Sopact Sense analyze qualitative and quantitative survey data together?

Sopact Sense reads every open-ended response as it arrives, extracts recurring themes across the corpus, and joins those themes to whatever closed-ended scores live on the same survey. The same respondent's score and language stay linked across every wave through a persistent contact ID. The output is a set of analysis cards: theme prevalence, representative quotes, paired-score correlations, and cohort comparison. The reporting use cases on /use-case/survey-report-examples show what the cards look like in practice.

Q.15

Can I use Google Forms or SurveyMonkey for mixed-method surveys?

Both tools collect mixed-method data cleanly: closed-ended items go into the response table, and open-ended items go into the same table. The break point is the analysis. Neither tool reads across hundreds of free-text answers, surfaces themes, or joins those themes to the closed-ended scores at the respondent level. Teams using these tools usually export to a spreadsheet and code by hand, which works at small scale and breaks above fifty responses. The architectural gap is in the reading layer, not the form field.

Working session

Bring your survey. Leave with the analysis preview.

A 60-minute working session with the Sopact team. Walk through your survey, your three use cases, and what you need the report to say. Leave with a paired-question structure you can deploy and a preview of how the analysis will look on real responses.

Format

60 minutes, video call. Working session, not a sales demo. Bring your draft survey, your reporting needs, and the decision the survey is meant to inform.

What to bring

Your current survey or draft. The funder questions you have to answer. One or two example responses if you have them, paired or not.

What you leave with

A paired-question structure. A preview of the analysis output on a sample of your real responses. A clear next step regardless of whether you use Sopact.