Sopact is a technology-based social enterprise committed to helping organizations measure impact by directly involving their stakeholders.
Copyright 2015-2025 © sopact. All rights reserved.

Are surveys qualitative or quantitative? Direct answer plus question examples, Likert scale classification, and the Survey Question Pairing Principle
A program evaluator spends three weeks designing a participant survey. She agonizes over every question: should she ask "How many job interviews did you complete?" or "How did the program change how you think about employment?" Both are legitimate research questions. Both belong in the same survey. When results arrive, she has 200 rows of numbers she can report and 200 open-ended paragraphs she cannot analyze — because her survey tool exported them to separate columns with no connection between them, and she has no system to process either alongside the other.
She didn't make the wrong design choice. She answered the wrong question. The question "should our survey be qualitative or quantitative?" is The False Binary — the assumption that a survey must choose one data type, when the real question is "what analysis does this survey need to support, and how do we design both types to work together from the moment of collection?"
Most survey tools — SurveyMonkey, Typeform, Google Forms — make this worse by treating qualitative and quantitative questions as separate outputs. You get charts for your scales and an exported text file for your open-ended responses. Neither the tool nor the workflow is designed to connect the two. Sopact Sense is built on the premise that the confidence score means nothing without the narrative that explains it — so both are collected in the same instrument, linked to the same participant ID, and analyzed in the same system.
The direct answer: A survey is neither inherently qualitative nor quantitative. It produces whatever type of data its questions are designed to generate.
A survey asking "On a scale of 1–5, how confident do you feel about your job search skills?" generates quantitative data: a number that can be averaged, trended, and compared across cohorts. A survey asking "Describe the most significant change in your job search skills since starting the program" generates qualitative data: a narrative that requires interpretation to produce insight.
The same survey can — and should — contain both. The scale question tells you what changed. The open-ended question tells you what caused it. Treating them as alternatives rather than as a paired system is the False Binary that makes most program surveys produce evidence that is either credible but shallow, or rich but unscalable.
Survey type by question structure:
A survey with only closed-ended, scaled, or multiple-choice questions produces quantitative data. A survey with only open-ended, narrative, or descriptive questions produces qualitative data. A mixed-method survey — the format most nonprofit and social sector programs should be using — produces both, with paired questions designed to complement each other analytically from the start.
For questionnaire templates, pairing frameworks, and sample instruments by design type, see mixed method surveys. This page covers the foundational question of survey type and question design. The mixed-method surveys page covers how to build the full instrument.
The False Binary appears in grant proposals ("we will use qualitative surveys to capture participant voice"), in evaluation plans ("our quantitative survey will track outcomes"), and in program team discussions ("we ran the qualitative interviews separately from the survey"). Each formulation treats the two data types as alternatives — when their value comes precisely from their connection.
Programs that choose only quantitative surveys can report that outcomes improved. They cannot explain what drove the improvement. Programs that choose only qualitative surveys can describe what participants experienced. They cannot demonstrate scale. The funder who asks "what were your outcomes?" followed immediately by "what drove those outcomes?" is asking a question that only integrated surveys can answer — with the number and the explanation from the same participant at the same program point.
The False Binary also obscures a practical design error: collecting both data types in separate instruments at separate points in the program lifecycle. A program runs monthly quantitative surveys and conducts exit interviews six months later. The quantitative data describes what happened in real time. The qualitative data describes a retrospective memory of a six-month experience. These are not the same thing, and they cannot be correlated to produce causal evidence — they are two separate studies dressed up as mixed-methods.
The design fix is pairing at the collection point: a confidence rating and an explanation of what built or blocked that confidence, collected from the same participant in the same survey, at the same program moment. When both data types live in the same record, linked by a persistent participant ID, Sopact Sense's Intelligent Column can answer "do participants who describe transportation barriers in their responses show lower confidence scores?" as a query — not a six-week reconciliation project.
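The value of co-location can be sketched as a simple data structure. This is a hypothetical illustration in plain Python — the record fields and example data are invented for the sketch, not Sopact Sense's actual schema or API:

```python
# Hypothetical paired records: each row holds both the rating and the
# narrative for the same participant, collected at the same moment.
records = [
    {"participant_id": "P001", "confidence": 4, "narrative": "Mock interviews built my confidence."},
    {"participant_id": "P002", "confidence": 2, "narrative": "Transportation to sessions was a constant barrier."},
    {"participant_id": "P003", "confidence": 5, "narrative": "Resume feedback changed my whole approach."},
    {"participant_id": "P004", "confidence": 2, "narrative": "I missed weeks because of transportation problems."},
]

def mean_confidence(rows):
    """Average the quantitative score over a set of participant records."""
    return sum(r["confidence"] for r in rows) / len(rows)

# Because score and narrative share one record, a theme-versus-score
# question is a filter, not a reconciliation project.
barrier = [r for r in records if "transportation" in r["narrative"].lower()]
others = [r for r in records if "transportation" not in r["narrative"].lower()]
print(mean_confidence(barrier))  # 2.0
print(mean_confidence(others))   # 4.5
```

When the two data types live in separate exports instead, answering the same question requires matching rows across files by hand before any comparison can start.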
A qualitative survey is a data collection instrument designed primarily to generate open-ended, narrative, and descriptive responses that require interpretation to produce insight. Qualitative survey questions cannot be answered with a number, a rating, or a selection from a predefined list.
"Describe what financial confidence means to you" is a qualitative survey question. "What specific barrier almost prevented you from completing the program?" is a qualitative survey question. "In your own words, how has your relationship with employment changed since joining the program?" is a qualitative survey question.
What distinguishes a strong qualitative survey question from a weak one is behavioral specificity. "How did you feel?" invites vague responses that cannot be coded consistently. "Describe a specific moment in the program when you felt most supported" produces a behavioral narrative with analyzable detail. "What would you tell a friend who was considering joining this program?" produces a revealed-preference answer that is both qualitative and highly predictive of outcomes.
Qualitative surveys are appropriate when: you need to understand the mechanism behind a quantitative finding, you are in an Exploratory Sequential design phase discovering what indicators matter before building a quantitative instrument, or you are capturing stakeholder voice for a funder report that requires qualitative evidence alongside outcome metrics.
Qualitative surveys are insufficient alone when: the program needs to demonstrate scale of impact, funder reporting requires comparable metrics across cohorts, or the research question asks "how many" or "by how much."
Every qualitative survey question should be paired with a corresponding quantitative question covering the same outcome domain — so both can be analyzed together for the same participant. The pairing is the design unit, not the individual question.
Placing the qualitative question immediately after the quantitative question in the form produces the most analytically useful responses — participants naturally explain the experience they just rated, making the qualitative response a direct explanation of the quantitative score. For program evaluation instruments, this pairing structure is more reliable than conducting qualitative interviews weeks after quantitative surveys close.
A quantitative survey is an instrument designed primarily to generate numeric, structured, and comparable responses — Likert scales, binary yes/no responses, multiple-choice selections, ranked lists, and numeric fill-in questions that can be aggregated, trended, and statistically analyzed.
"On a scale of 1–10, how confident are you in your ability to find employment in your field?" is a quantitative survey question. "Did you complete the program? Yes / No" is a quantitative survey question. "Rate the following program elements on a scale of 1–5" is a quantitative survey block. "How many job applications did you submit in the past 30 days?" is a quantitative survey question.
Quantitative surveys are necessary when: funder reporting requires comparable metrics across cohorts and time periods, program leadership needs to track trend lines across multiple collection cycles, outcomes must be benchmarked against sector standards, or the research question asks "how many," "how much," or "compared to when."
The structural limitation of quantitative survey data is the constraint that makes pairing essential: it cannot explain itself. A satisfaction score of 3.2 is a precise measurement. It tells leadership that something is wrong. It does not tell them what is wrong, who is experiencing it, or what would fix it. That information lives in the qualitative question the quantitative question should be paired with.
For survey analytics that produces actionable intelligence rather than dashboards that raise more questions than they answer, the quantitative instrument is the scale layer and the qualitative instrument is the explanation layer — designed together, collected together, and analyzed together.
Is a questionnaire qualitative or quantitative? A questionnaire follows the same logic as a survey: the data type is determined by question structure, not instrument name. Narrative open-ended questions produce qualitative data. Scaled and structured questions produce quantitative data. A questionnaire containing both types is a mixed-method instrument. "Questionnaire" and "survey" are structural equivalents.
Is the Likert scale qualitative or quantitative? The Likert scale is a quantitative measurement instrument. It generates ordinal numeric data — a position on a numbered scale (1 through 5, 1 through 7) — that can be averaged, trended, and compared across cohorts. The response labels ("strongly agree," "somewhat agree") look like qualitative categories, but the data produced is numeric and analyzed using quantitative methods.
A persistent source of confusion: because Likert scales measure attitudinal constructs (confidence, satisfaction, agreement), some researchers classify them as qualitative. The construct is attitudinal — but the measurement instrument is quantitative. The data produced is a number, not a narrative. For impact assessment purposes, Likert scales are quantitative instruments that should be paired with qualitative follow-up questions to explain what the attitudinal shift actually represents.
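The classification is easy to see in practice: Likert labels are mapped to ordinal numbers before analysis, and everything downstream is numeric. A minimal sketch, with an invented label-to-score mapping and sample responses:

```python
# Likert labels map to ordinal positions; the analysis is quantitative.
LIKERT = {
    "strongly disagree": 1,
    "disagree": 2,
    "neutral": 3,
    "agree": 4,
    "strongly agree": 5,
}

responses = ["agree", "strongly agree", "neutral", "agree"]
scores = [LIKERT[r] for r in responses]

# The output is a number that can be averaged, trended, and compared
# across cohorts -- not a narrative that requires interpretation.
mean_score = sum(scores) / len(scores)
print(mean_score)  # 4.0
```

The labels read like categories, but the moment they are coded, the data behaves like any other numeric measurement.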
Is survey research qualitative or quantitative? Survey research is most commonly classified as a quantitative method — large samples, structured responses, statistical analysis. It can be qualitative when designed with open-ended questions and interpretive analysis. It is mixed-method when both types are combined in a single instrument designed for integrated analysis. The classification follows the instrument design and analysis approach.
The question pairing principle is the design rule that resolves The False Binary: for every critical outcome your program tracks quantitatively, design a corresponding qualitative question that captures the mechanism, barrier, or experience that explains the quantitative result.
The pairing must happen before data collection begins — not added when quantitative results raise unexplained questions. A qualitative question added after quantitative collection closes produces data from a different program moment — retrospective recall rather than contemporaneous experience.
The pairing structure: The quantitative question establishes scale ("On a scale of 1–5, how confident do you feel about your ability to find employment?"). The qualitative question immediately following establishes mechanism ("What specifically has changed in your job search approach since the program began?"). Both collected at the same program point, from the same participant, in the same form.
Why adjacent placement matters: When the qualitative question immediately follows the quantitative question, participants naturally reference the experience they just rated. The qualitative response becomes an explanation of the score rather than a separate reflection. This is the instrument-design equivalent of asking "rate this, then tell me why" — and the resulting data is analyzable as a pair, not as two separate outputs requiring reconciliation.
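The pairing principle treats the pair, not the individual question, as the design unit. A minimal sketch of how such a pair might be represented — the field names and validation helper are hypothetical, illustrating the structure rather than any platform's actual schema:

```python
# Hypothetical representation of a question pair as one design unit:
# the scale question establishes scale, the open-ended question
# establishes mechanism, and both target the same outcome domain.
pair = {
    "outcome_domain": "job_search_confidence",
    "quantitative": {
        "format": "scale_1_5",
        "text": "On a scale of 1-5, how confident do you feel about "
                "your ability to find employment?",
    },
    "qualitative": {
        "format": "open_ended",
        "text": "What specifically has changed in your job search "
                "approach since the program began?",
    },
}

def is_complete_pair(p):
    """A pair is complete only if both the scale half and the
    narrative half are present."""
    return "quantitative" in p and "qualitative" in p

print(is_complete_pair(pair))  # True
```

Validating pairs before collection begins is the structural check against the False Binary: an instrument that tracks an outcome with only one half of a pair cannot explain its own results later.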
For the complete questionnaire structure, templates for each of the three research designs, and sample instruments by program type, see mixed method surveys. For choosing which research design governs the instrument sequence before building it, see mixed method design.
Surveys are neither inherently qualitative nor quantitative. A survey with only scaled or multiple-choice questions produces quantitative data. A survey with only open-ended narrative questions produces qualitative data. A mixed-method survey — the most effective format for program evaluation — combines both types in paired questions designed to be analyzed together. The data type is determined by question structure, not by the word "survey."
Survey research is most commonly used as a quantitative method — large samples, structured responses, statistical analysis. But it can be qualitative when designed with open-ended questions and interpretive analysis, or mixed-method when both types are combined in a single instrument. The classification is determined by instrument design and analysis approach.
A qualitative survey is an instrument designed primarily to generate open-ended narrative responses requiring interpretation. Qualitative survey questions use open-ended formats that cannot be answered with a number or a predefined selection. They answer "why" and "how" questions — explaining the mechanisms and experiences that quantitative scores can detect but not describe.
Qualitative survey question examples include: "Describe the most significant change in your professional confidence since starting the program," "What specific barrier almost prevented you from completing the program, and how did you manage it?" and "In your own words, how has your relationship with employment changed?" Strong qualitative questions are behaviorally specific, not open-ended to the point of producing unusable vague responses.
A quantitative survey is an instrument designed to generate numeric, structured, comparable responses — Likert scales, binary responses, multiple-choice selections, and numeric fill-ins. Quantitative surveys are used when programs need to demonstrate scale of impact, track trends across cohorts, or satisfy funder reporting requirements for comparable metrics. Their structural limitation is that they show what changed but not why.
A questionnaire follows the same logic as a survey: the data type is determined by question structure, not instrument name. Open-ended questions produce qualitative data. Scaled and structured questions produce quantitative data. A questionnaire with both types is a mixed-method instrument. The word "questionnaire" does not determine the data type.
The Likert scale is a quantitative measurement instrument. It generates ordinal numeric data — a position on a numbered scale — that can be averaged, trended, and compared across cohorts. The response labels ("strongly agree," "somewhat agree") look qualitative, but the data produced is numeric and analyzed using quantitative methods. Likert scales are the most common form of quantitative measurement in program evaluation surveys.
Survey research is typically classified as quantitative — but this applies when surveys use structured closed-ended questions with large samples and statistical analysis. When surveys use open-ended questions with interpretive analysis, they function as qualitative instruments. When both question types are combined in a single instrument designed for integrated analysis, survey research is mixed-method.
Yes — a survey can be both qualitative and quantitative. A mixed-method survey combines both types in paired questions designed to be analyzed together. The key design principle is question pairing: for every quantitative outcome metric, a corresponding qualitative question captures the mechanism or barrier that explains it. Sopact Sense collects both in the same form, links them via persistent participant IDs, and analyzes them together without manual export cycles.
The False Binary is the assumption that a survey must choose between qualitative and quantitative questions — when the real question is "what analysis does this survey need to support, and how do both types work together from the point of collection?" Most program surveys should contain both types in paired format. The binary choice produces either credible-but-shallow quantitative data or rich-but-unscalable qualitative data. Integration produces both.
Survey platforms like SurveyMonkey, Typeform, and Google Forms export qualitative and quantitative responses to separate outputs — charts for scaled questions, text exports for open-ended ones. Neither connects both under a shared participant identifier. Analyzing what a participant's open-ended response says in relation to their rating score requires manual export-import cycles that introduce errors. Sopact Sense co-locates both in the same participant record, enabling AI-powered correlation without reconciliation.
Integrated analysis requires three conditions: shared participant identity, co-located storage, and pre-designed pairing. Sopact Sense's Intelligent Column correlates open-ended response themes with quantitative scores across all participants — answering "do participants who describe transportation barriers show lower attendance rates?" as a live query, not a weeks-long reconciliation project.