Qualitative Data Analysis Method: 6 Types and How to Choose
A program officer has 80 interview transcripts on her desk and a deadline in three weeks. She knows thematic analysis. Her colleague swears by grounded theory. Her advisor once suggested framework analysis. The decision she makes next — which qualitative data analysis method to apply — will shape every insight she surfaces from those interviews. This is the Method Inheritance problem: most researchers pick a method because it's the one they were trained on, not because it fits the question they're trying to answer.
A qualitative data analysis method is the structured approach a researcher uses to turn unstructured text — interview transcripts, open-ended survey responses, field notes, documents — into defensible findings. There are six dominant methods in wide use, and the choice between them is not a matter of taste. It's a matter of fit between the research question, the data available, and the claims you intend to make.
Last updated: April 2026
This page sits in Sopact's qualitative data analysis guide cluster alongside qualitative data analysis software and thematic analysis. Where the hub defines what qualitative analysis is, this page focuses on the decision: which method fits, and how to apply it without falling into Method Inheritance.
Qualitative Data Analysis Method
A structured approach to finding meaning in interviews, open-ended responses, and field notes. There are six widely recognized methods — and picking between them is not a matter of taste. It's a matter of fit between the research question, the data, and the claims you intend to make.

Method Inheritance
The failure mode where researchers apply the qualitative data analysis method they were trained in — rather than the one that fits the current research question. The analysis looks rigorous, but the method-question fit is weak, and the conclusions drift.

At a glance: 6 core methods in widespread use · 2 axes for selecting the right one · 7 steps in every QDA process · 3 diagnostic questions before you start.
The six methods map onto two axes: inductive (build theory from the data) vs. deductive (apply an existing framework), and semantic (surface content) vs. latent (deeper meaning). Crossed, the axes yield four quadrants:

Inductive · Latent: build theory from implied meaning
- Grounded Theory — generate new theory through open → axial → selective coding
- Thematic Analysis (latent) — identify underlying patterns via Braun & Clarke's six phases

Deductive · Latent: apply theory to hidden meaning
- Discourse Analysis — examine language, power, framing, and what silence reveals
- Narrative Analysis — study stories, sequence, identity arcs, and plot structure

Inductive · Semantic: generate categories from surface content
- Content Analysis — count presence, frequency, and co-occurrence of categories
- Descriptive Thematic — surface themes at face value, close to participants' words

Deductive · Semantic: apply framework to surface content
- Framework Analysis — code against a pre-set framework; dominant in applied policy
- Directed Content Analysis — apply existing codes, extend when data demands it
The single rule across all six methods: start from the research question, not the method you know best. If you can't justify the method against the question in one sentence, the method is probably wrong — that is Method Inheritance at work.
What is a qualitative data analysis method?
A qualitative data analysis method is a structured procedure for finding meaning in non-numeric data such as interviews, open-ended responses, field notes, and documents. It turns raw text into codes, codes into themes, and themes into claims that answer a research question. Without a method, analysis becomes pattern-matching colored by the researcher's first impressions.
Every qualitative data analysis method shares three moves: reduction (compressing the data), display (organizing what remains), and conclusion drawing (stating what the data means). Methods differ in what they prioritize at each step — some stay close to participants' words, others look for hidden structure, others build new theory from the data up.
A method is not the same as a tool. Qualitative data analysis software supports a method but does not substitute for it. You can run thematic analysis in a spreadsheet or in a purpose-built platform; the method is what determines whether the output holds up.
What are the types of qualitative data analysis methods?
There are six widely recognized qualitative data analysis methods: thematic analysis, content analysis, grounded theory, narrative analysis, discourse analysis, and framework analysis. Interpretative Phenomenological Analysis (IPA) is sometimes listed as a seventh. Each method answers a different type of research question and treats data differently.
Thematic analysis identifies recurring patterns of meaning across a dataset. It is the most flexible and the most commonly taught, following Braun and Clarke's six phases: familiarization, initial coding, generating themes, reviewing themes, defining themes, and writing up. Thematic analysis is method-agnostic — it can serve descriptive, interpretive, or critical research questions.
Content analysis counts the presence, frequency, or co-occurrence of categories in text. It bridges qualitative and quantitative work: a coder assigns categories, and the categories are tallied. Content analysis fits descriptive questions like "How often does cost appear as a barrier in program feedback?"
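The tallying step is mechanical enough to sketch in code. A minimal Python example, using hypothetical coded responses — the category assignments would come from a human coder, not from the script:

```python
from collections import Counter

# Hypothetical coded responses: each entry lists the barrier
# categories a human coder assigned to one open-ended answer.
coded_responses = [
    ["cost", "time"],
    ["cost"],
    ["transportation"],
    ["time", "cost"],
    [],  # this response mentioned no barrier
]

# Tally how many responses mention each category at least once
# (set() prevents double-counting within a single response).
counts = Counter(cat for resp in coded_responses for cat in set(resp))
total = len(coded_responses)

for category, n in counts.most_common():
    print(f"{category}: {n}/{total} responses ({n / total:.0%})")
```

The numeric summary this produces ("cost: 3/5 responses") is exactly the kind of output content analysis is chosen for — and exactly what thematic analysis is not.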
Grounded theory builds new theory from data through iterative coding and constant comparison. The researcher begins without a framework, generates open codes, groups them into axial categories, and eventually identifies a core category that organizes the theory. It is the right choice when no existing theory explains the phenomenon — and the wrong choice when one already does.
Narrative analysis examines the stories participants tell: their structure, sequence, characters, and plot. It fits questions about identity, experience over time, and how people make sense of events. A story about recovering from illness is analyzed as a narrative — not broken into codes that strip away its arc.
Discourse analysis studies language as a site of power and meaning-making. Analysts look at word choice, framing, silences, and how categories like beneficiary or at-risk shape institutional practice. Discourse analysis treats text as action, not description.
Framework analysis applies a predefined coding framework to data — often derived from a policy question, a literature review, or prior research. It is the dominant method in applied health and policy research because it delivers defensible findings on a deadline and makes the analytical logic transparent to stakeholders who did not do the coding.
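The matrix at the heart of framework analysis — one row per participant, one column per framework dimension — is simple enough to sketch. A minimal Python example, with hypothetical dimensions and coded excerpts:

```python
# Hypothetical framework dimensions (in practice these come from
# the policy question, literature review, or prior research).
dimensions = ["access", "communication", "trust"]

# Each coded excerpt: (participant_id, dimension, segment summary).
coded = [
    ("P01", "access", "waited three weeks for an appointment"),
    ("P01", "trust", "felt staff followed through on promises"),
    ("P02", "communication", "unclear discharge instructions"),
]

# Build the framework matrix: one row per participant,
# one cell per dimension, holding the coded summaries.
matrix = {}
for pid, dim, summary in coded:
    matrix.setdefault(pid, {d: [] for d in dimensions})[dim].append(summary)

for pid, row in sorted(matrix.items()):
    cells = [f"{d}: {'; '.join(row[d]) or '-'}" for d in dimensions]
    print(pid, "|", " | ".join(cells))
```

The empty cells are part of the method's transparency: a stakeholder can see at a glance which participants said nothing about a given dimension.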
Best practices
Six rules that beat Method Inheritance
Every rule below is a check against the default of applying whichever qualitative data analysis method you were trained in. Run them before you code a single transcript.
Full QDA guide →
01. Start with the research question, not the method
Write one sentence that justifies the method against the research question before coding begins. If the sentence is hard to write, the method is wrong. This is the single highest-leverage move against Method Inheritance.
Example
"Framework analysis fits because we need to test the Five Dimensions against 120 interviews on a 6-week deadline."
02. Pilot the method on 3–5 transcripts before committing
Code a small sample using the chosen method and check whether the output answers the research question. If the codes feel forced or the themes don't connect to the question, switch methods now — not after 80 transcripts are coded.
Example
Pilot thematic analysis on 5 interviews. If every theme reads as a count, content analysis likely fits better.
03. Keep a codebook that evolves with version history
Codes drift as analysis deepens. If early transcripts are not re-coded against the evolved codebook, the dataset becomes inconsistent. Track every code change — what was added, merged, split, or retired — and re-code accordingly.
Example
Codebook v1: 14 codes. v2: 'cost barrier' split into 'upfront cost' and 'hidden cost' — re-code transcripts 1–12.
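A versioned codebook can be as simple as a structure that refuses to change silently. A minimal Python sketch — the class name and fields are illustrative, not a prescribed schema:

```python
from dataclasses import dataclass, field

@dataclass
class Codebook:
    """A codebook whose every change is logged, so the audit
    trail from v1 to the current version can be reconstructed."""
    codes: set = field(default_factory=set)
    version: int = 1
    history: list = field(default_factory=list)

    def split(self, old: str, new: list):
        """Split one code into finer codes and log the change."""
        self.codes.discard(old)
        self.codes.update(new)
        self.version += 1
        self.history.append(f"v{self.version}: split '{old}' into {new}")

cb = Codebook(codes={"cost barrier", "time barrier"})
cb.split("cost barrier", ["upfront cost", "hidden cost"])
print(cb.version)   # 2
print(cb.history)
```

Add `merge`, `add`, and `retire` operations the same way; the point is that no code changes without an entry in `history` that tells you which transcripts to re-code.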
04. Code with at least one other reviewer
Solo coding produces confident analyses of questionable fit. Inter-coder reliability — independent coding of the same transcripts, then reconciling disagreements — surfaces the assumptions a single coder never questions. Aim for a Cohen's kappa above 0.75.
Example
Two coders independently code 10 transcripts; discrepancies reveal one coder's unstated category for 'indirect feedback.'
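Cohen's kappa itself is a short computation: observed agreement corrected for the agreement two coders would reach by chance. A self-contained Python sketch, with hypothetical labels:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same segments."""
    assert len(coder_a) == len(coder_b), "coders must label the same segments"
    n = len(coder_a)
    # Observed agreement: fraction of segments labeled identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: probability both coders pick the same label
    # if each labeled at random with their own marginal frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[lab] * freq_b[lab] for lab in freq_a) / n**2
    return (observed - expected) / (1 - expected)

a = ["trust", "cost", "cost", "trust", "access", "cost"]
b = ["trust", "cost", "access", "trust", "access", "cost"]
print(round(cohens_kappa(a, b), 2))  # 0.75 — right at the threshold
```

Raw percent agreement here is 83%, yet kappa is only 0.75 — the correction for chance is exactly why kappa, not percent agreement, is the reported statistic.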
05. Preserve every quote with its context and source
When a theme is reported, every supporting quote should link back to the transcript, the participant ID, and the exact segment — including the question that prompted it. Context-stripped quotes are the most common source of misleading qualitative findings.
Example
Quote 'They never listen' is meaningless without the question asked — and which 'they' the participant meant.
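One way to enforce this rule is to make provenance part of the quote's data structure, so a bare string can never enter a report. A minimal Python sketch — the field names are illustrative:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quote:
    """A quote that cannot be separated from its provenance."""
    text: str
    transcript_id: str
    participant_id: str
    question: str        # the prompt that elicited the answer
    segment: tuple       # (start_char, end_char) in the transcript

q = Quote(
    text="They never listen",
    transcript_id="T-014",
    participant_id="P07",
    question="How did staff respond to your feedback?",
    segment=(1042, 1059),
)
print(f'"{q.text}" ({q.participant_id}, {q.transcript_id}, re: {q.question})')
```

With the prompting question attached, the quote above is no longer ambiguous: "they" refers to staff responding to feedback, not to some unnamed institution.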
06. Don't mix methods without a unifying logic
Running thematic analysis alongside content analysis on the same dataset is defensible — if each answers a different research question and the boundary is stated explicitly. Mixing methods because the analysis is stuck produces incoherence dressed up as triangulation.
Example
"Content analysis for frequency of barriers; thematic analysis for how participants describe them" — two questions, clear logic.
The qualitative data analysis process
Every qualitative data analysis method follows a version of the same seven-step process: prepare the data, familiarize yourself with it, develop an initial coding scheme, code systematically, identify themes, interpret the themes, and write up the findings. The methods differ in how much structure each step carries.
Step 1 — Prepare. Transcribe recordings, anonymize identifiers, and organize files so every transcript carries a unique ID linked back to the data collection instrument. If identifiers drift at this stage, every later claim becomes contested.
Step 2 — Familiarize. Read each transcript at least twice before coding. Write memos about impressions, contradictions, and surprises. Familiarization is where you notice what the dataset is about — coding without it produces codes that miss the point.
Step 3 — Initial coding. Assign short labels to segments of text. In thematic analysis these are often descriptive; in grounded theory they use participants' own language (in vivo coding); in framework analysis they come from the pre-set framework.
Step 4 — Develop themes. Group codes that share meaning into candidate themes. Themes are not codes that repeat often — they are patterns that answer something about the research question.
Step 5 — Review themes. Check themes against the coded data and the full dataset. Refine, merge, or split until each theme has a clear boundary and internal coherence.
Step 6 — Interpret. Ask what the themes mean in the context of the research question. Link back to prior literature, to the conceptual framework, or to the theory being built.
Step 7 — Write up. Present themes with supporting evidence: short quotes, frequencies where appropriate, and a clear audit trail from raw data to claim.
| Method | Purpose | Example output | Best fit | Data type |
|---|---|---|---|---|
| Thematic Analysis (Braun & Clarke, six phases) | Identify recurring patterns of meaning across the dataset. The most flexible and widely taught method. | "Participants described trust as 'showing up consistently' — theme: reliability-as-trust." | Most studies — exploratory or applied | Interviews, focus groups, open-ended survey responses |
| Content Analysis (quantitative-qualitative bridge) | Count the presence, frequency, or co-occurrence of predefined categories in text. Produces numeric summaries. | "42% of responses mentioned cost as a barrier; 28% mentioned time; 19% mentioned transportation." | Descriptive & comparative — large datasets, reporting | Documents, media, open-ended responses at scale |
| Grounded Theory (open → axial → selective coding) | Build new theory from data through iterative coding and constant comparison. Starts without a framework. | "A theory of how small nonprofits sustain volunteer retention through informal recognition rituals." | New phenomena — no existing theory, long timeline | Interviews, field notes, iterative data collection |
| Narrative Analysis (stories, sequence, arc) | Analyze the stories participants tell — structure, plot, characters, temporal flow — without fragmenting them into codes. | "How a borrower describes her path from debt to stability as a three-act arc of crisis, resolve, and steady recovery." | Identity & experience — life histories, trajectories | Interview narratives, journals, long-form accounts |
| Discourse Analysis (language, power, framing) | Examine how language constructs meaning and enacts power — what gets said, how it's framed, what remains silent. | "How the word 'beneficiary' shapes the design of programs — positioning recipients as passive rather than active participants." | Institutional talk — policy, media, critical work | Speeches, documents, recorded dialogue |
| Framework Analysis (pre-set framework, matrix-based) | Apply a predefined coding framework to the data. Produces transparent, deadline-friendly findings with clear logic. | "Coding 120 interviews against the NHS Patient Experience framework — one row per participant, one column per dimension." | Applied & policy — fixed timeline, stakeholder reporting | Interviews tied to structured research questions |
Want to see how one method stays connected to the data collection that feeds it? That linkage — not the method itself — is what makes the analysis defensible. Sopact Sense keeps every code tied to the question that collected the response.
See the workflow →
How to choose a qualitative data analysis method
Choosing a qualitative data analysis method starts with the research question, not with the method you know best. Three diagnostic questions narrow the field fast.
First: are you building theory, or applying one? If no existing framework explains the phenomenon, grounded theory fits. If a framework already exists and you need to test or extend it, framework analysis or thematic analysis with a pre-set coding scheme is the right choice. This is the inductive vs. deductive axis.
Second: is the meaning at the surface, or underneath? If you care about what people explicitly say — how often they mention cost, which services they name — content analysis or descriptive thematic analysis works. If you care about what's implied, performed, or concealed in the language itself — power dynamics, identity claims, meaning-making — move to discourse analysis, narrative analysis, or interpretive thematic analysis. This is the semantic vs. latent axis.
Third: what can your timeline actually support? Grounded theory and IPA demand long timelines and deep immersion. Framework analysis is structured for deadlines. Thematic analysis sits in the middle and scales with resources. A mismatch here is where Method Inheritance does the most damage — researchers apply grounded theory on a three-week deadline and produce shallow work under a famous name.
Two further considerations sharpen the decision. How large is the dataset? Content analysis and framework analysis handle volume well; narrative analysis and IPA do not. And what claims will the output support? Counts and frequencies need content analysis; thick description needs thematic or narrative; new theory needs grounded theory.
Method Inheritance: the hidden bias in qualitative research
Method Inheritance is the tendency to apply the qualitative data analysis method you were trained in — regardless of whether it fits the current question. A PhD advisor used IPA; the student uses IPA. A research group publishes in thematic analysis; every new project uses thematic analysis. A policy team has always run framework analysis; framework analysis it is.
The cost is quiet but severe. A question that would be answered cleanly by content analysis gets dressed up in thematic coding because the team doesn't count categories. A phenomenon that genuinely needs grounded theory gets forced into a pre-set framework because the deadline demands it. The analysis looks rigorous — the method has a name, the coding is documented — but the fit is wrong, and the conclusions drift.
The escape is to name the inheritance. Before coding a single transcript, write one sentence that justifies the method against the research question, not against the researcher's training. If the sentence is hard to write, the method is probably wrong. This discipline is cheap — and it is the single highest-leverage move in any qualitative study.
Sopact Sense supports this discipline by linking every piece of qualitative data — every open-ended response, every interview transcript — to the research question it was collected against, so the method-question fit stays visible throughout the analysis. The goal is not to automate the method away but to make the inheritance problem impossible to hide.
Qualitative data analysis in applied research
In applied contexts — nonprofit programs, training evaluation, impact funds — the qualitative data analysis method has to answer stakeholders, not just committees. That constraint favors methods with visible logic: framework analysis for policy reviews, descriptive thematic analysis for program feedback, content analysis for structured reporting against indicators.
Applied research also deals with volume. A training provider with 600 open-ended responses per cohort cannot run IPA. An impact fund monitoring a portfolio of 40 investees cannot run pure grounded theory across every one. The methods that scale in applied contexts are the ones that combine structure with speed: framework analysis, deductive thematic analysis, and content analysis — often augmented by AI-assisted coding that surfaces initial categories for human review. Grant programs facing the same volume problem at the front end use similar logic for application review.
For nonprofits and funders, the durable workflow is simple. Collect qualitative data against a clear research question with stable identifiers. Pick the method that matches the question. Pilot the method on 5–10 responses before scaling. Keep a codebook with version history so the audit trail holds up. Report themes with supporting quotes and, where useful, frequencies. This is how nonprofit programs turn feedback into evidence without drowning the analysis team.
Frequently Asked Questions
What is a qualitative data analysis method?
A qualitative data analysis method is a structured approach to finding meaning in non-numeric data such as interviews, open-ended responses, and field notes. It specifies how a researcher moves from raw text to codes, themes, and defensible claims. The six main methods are thematic, content, grounded theory, narrative, discourse, and framework analysis.
What is an example of a qualitative data analysis method?
Thematic analysis is the most common example. A researcher reads 40 interview transcripts, assigns short codes to meaningful segments (trust, showing up, consistency), groups related codes into candidate themes, refines them against the full dataset, and writes up each theme with supporting quotes. The process follows Braun and Clarke's six phases.
What are the types of qualitative data analysis methods?
The six main types are thematic analysis, content analysis, grounded theory, narrative analysis, discourse analysis, and framework analysis. Each fits a different research question. Thematic analysis is the most flexible and commonly taught; framework analysis dominates applied policy research; grounded theory is reserved for building new theory.
What is Method Inheritance?
Method Inheritance is the failure mode where researchers apply the qualitative data analysis method they were trained in rather than the one that fits the current research question. It produces work that looks rigorous but whose method-question fit is weak, leading to conclusions that drift from the data. Naming the inheritance is the first escape.
Which is a qualitative approach to analyzing generated data but is inefficient?
The classic answer is manual line-by-line coding without software: researchers read every transcript multiple times and hand-tag each segment, which can take weeks for large datasets. Computer-assisted qualitative data analysis (CAQDAS) and AI-assisted coding reduce this cost without replacing human interpretation.
What is the process of categorization and coding in qualitative data analysis called?
In grounded theory, the process of categorization and coding of data as part of theory development is called open coding. It is the first stage of coding, followed by axial coding (grouping categories by relationship) and selective coding (identifying the core category that organizes the theory). Thematic analysis uses a similar but less formalized sequence.
How do I choose a qualitative data analysis method?
Start with three diagnostic questions. Are you building theory or applying one? Is meaning at the surface or underneath? What timeline can you support? Grounded theory fits theory-building with long timelines; framework analysis fits deductive work on deadlines; thematic analysis sits in the middle. Match the method to the question, not to the researcher's training.
What is the difference between qualitative and quantitative data analysis?
Quantitative data analysis works with numeric data and uses statistical methods to test hypotheses, measure relationships, and generate estimates. Qualitative data analysis works with text, audio, and images, using interpretive methods to build meaning, identify patterns, or generate theory. Mixed-methods research combines both, typically using one to inform the other.
How long does qualitative data analysis take?
It depends on method and dataset size. Manual thematic analysis of 20–30 transcripts typically takes four to eight weeks including coding, theme development, and write-up. Grounded theory on a similar dataset can take three to six months. AI-assisted coding reduces initial coding time substantially, but theme refinement and interpretation remain human work.
Can AI help with qualitative data analysis?
AI can surface initial codes, cluster similar responses, and generate candidate themes for human review — speeding up the most time-consuming early steps. It does not replace interpretation, context, or the researcher's judgment about method-question fit. The most defensible workflow uses AI to accelerate coding and humans to validate every theme before it enters a finding.
What software supports qualitative data analysis methods?
Dedicated CAQDAS tools include NVivo, ATLAS.ti, MAXQDA, and Dedoose, each supporting multiple methods through coding, memoing, and query features. Newer AI-enabled platforms like Sopact Sense connect qualitative coding to the original collection instrument so every code stays linked to its source. The right tool depends on method, team size, and whether the platform must also handle data collection.
Does the qualitative data analysis method need to be decided before data collection?
Ideally yes. The method shapes what data to collect, how to phrase questions, and what sample size to target. Grounded theory collects iteratively and stops at saturation; framework analysis collects against predefined themes. Deciding the method after collection is possible but constrains the analysis to whatever the data happens to support — a common source of weak findings.
Put this into practice
The right method is half the work.
The other half is keeping every code, quote, and theme linked back to the question it came from. Sopact Sense collects qualitative data with unique participant IDs and persistent context, so the method you chose still holds up six months later — when the report gets written, the board asks a follow-up, or the next cohort arrives.