
Quantitative Data Analysis: Methods, Tools, Examples

Quantitative data analysis explained: method picker, five method families with worked examples, the seven-step process, and an honest tools comparison.

Updated
May 4, 2026

Methods hub

Quantitative data analysis turns numerical measurements into answers.

Pick the right method by matching what your data is to what you want to know.

Five method families. Worked numerical examples for each. The seven-step process for moving from question to interpretation. A tools comparison covering Excel, SPSS, R, Python, visualization platforms, and survey-data platforms. The interactive picker below recommends the right method for your data in three questions. Examples come from research datasets, survey responses, and program evaluation. No prior statistical background needed.

On this page

  • The interactive method picker
  • What quantitative data analysis is
  • Five method families and worked examples
  • The seven-step process
  • Choosing the right tool
  • Applied program examples

The method picker

Three questions. The right method for your data.

The picker recommends a method based on the shape of your data and the question you want to answer. After the recommendation, jump to the worked example for that method family below.

Step 1 of 3

How many variables are you analyzing?

A variable is one column in your dataset: a score, a category, a count, a date.

Definitions

What quantitative data analysis is, in plain terms.

Five definitions cover the vocabulary you need before working with the methods. Each answer matches the language people actually search for.

What is quantitative data?

Quantitative data is information expressed as numbers. A test score, an age, a dollar amount, the count of completed sessions, a temperature, a rating on a 1-to-7 scale: every one of these is a quantitative datum.

Quantitative data divides into two types. Discrete data takes whole-number values you can count: number of participants in a cohort, number of follow-up surveys completed, number of grants awarded. Continuous data takes any value within a range: age in years, household income, rainfall in millimeters.

A useful test: if it makes sense to add two values together and divide by two, the data is quantitative. The average of two ages is a meaningful number; the average of two zip codes is not. Zip codes look numerical but behave categorically.

What is quantitative data analysis?

Quantitative data analysis is the process of examining numerical measurements to find patterns, compare groups, measure change, or predict outcomes. The work has two layers: descriptive (summarizing the data you have) and inferential (drawing conclusions about a population from a sample).

In a research or program context, quantitative data analysis answers questions like: did average reading scores improve from intake to exit; did employment rates differ between cohorts; what predicts program completion. The methods that answer these questions form five families: descriptive statistics, comparison, change, relationship, and prediction.

Reliable analysis depends as much on data quality as on statistical sophistication. The most sophisticated method on poor data produces unreliable findings; the simplest method on clean data often produces sufficient ones.

Quantitative data meaning

The word "quantitative" comes from "quantity," meaning a measurable amount. Anything you can count, weigh, score, or rate is quantitative. The term is fundamentally about measurement: the value carries arithmetic meaning, not just a label.

This contrasts with qualitative data, which carries meaning that cannot be reduced to a number without losing something. A participant's narrative explanation of why their confidence shifted is qualitative; the score of 6 (instead of 4) is quantitative. Both are valid forms of evidence; each requires its own method.

What is quantitative analysis?

Quantitative analysis is the broader discipline that includes quantitative data analysis as its working method. In finance, "quantitative analysis" means modeling securities and risk with mathematical methods. In research, it means using numerical evidence to test hypotheses or describe populations. In program evaluation, it means measuring whether programs produced their intended outcomes.

What unites these uses is the appeal to numerical evidence as the basis for decision-making. The methods on this page are the toolset that makes that appeal credible.

Quantitative data examples

Common quantitative data examples come in three groups.

Counts. Number of program graduates, number of survey responses, number of households reached, number of repeat visits.

Continuous measurements. Age in years, annual household income, hours of training completed, score on a standardized assessment.

Ratings on numeric scales. A 1-to-5 satisfaction rating, a 1-to-10 likelihood-to-recommend score, a Likert agreement scale.

Each kind of quantitative data behaves differently in analysis. Counts often follow Poisson distributions. Continuous variables can be analyzed with means, t-tests, and regression. Ratings on small ordered scales work better with median, mode, and rank-based methods than with the mean.

Methods taxonomy

Five method families. One worked example each.

Most quantitative analysis methods fit into five families based on the question they answer. Each family below names its core methods and walks through a worked example with real-looking program data.

01 · Family

Descriptive statistics

When you need to summarize what is in your data: shape, center, spread.

Descriptive statistics summarize one variable at a time. The mean reports the arithmetic average. The median is the middle value when data is sorted. The mode is the most common value. The standard deviation measures how much values typically deviate from the mean.

For numerical variables, all four measures combine to describe the distribution. For categorical variables, frequencies and proportions replace mean and standard deviation. For ordinal variables (Likert, ranks), median and mode are more honest than mean. Descriptive statistics are always the first analysis step regardless of what comes next.

Methods in this family

Mean, median, mode · Standard deviation, variance, range · Frequency distribution · Histogram, boxplot, bar chart · Quartiles and percentiles

Worked example

Thirty participants exited a workforce training program. Each rated their confidence to apply for a target role on a 1-to-10 scale.

Confidence rating    Participants    Percent
2                    1               3.3%
3                    1               3.3%
5                    3               10.0%
6                    5               16.7%
7                    8               26.7%
8                    6               20.0%
9                    4               13.3%
10                   2               6.7%
Total                30              100.0%

The mode is 7 (eight participants chose it). The median is also 7 (the average of the 15th and 16th values when sorted). The mean works out to 7.0: the two participants who rated themselves at 2 and 3 pull the average down, but the cluster of ratings from 8 to 10 offsets them. The standard deviation is about 1.8.

Interpretation: most participants finished confident, with a small tail of less-confident participants that widens the spread without moving the center. Reporting median and mode alongside the mean captures the typical experience without letting that tail dominate the summary.
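The summary statistics in this example can be reproduced with Python's standard library. A minimal sketch; the counts dictionary simply transcribes the frequency table above:

```python
import statistics

# Frequency table from the worked example: rating -> number of participants.
counts = {2: 1, 3: 1, 5: 3, 6: 5, 7: 8, 8: 6, 9: 4, 10: 2}

# Expand the table into one value per participant.
ratings = [rating for rating, n in counts.items() for _ in range(n)]

mean = statistics.mean(ratings)      # arithmetic average
median = statistics.median(ratings)  # middle value when sorted
mode = statistics.mode(ratings)      # most common value
sd = statistics.stdev(ratings)       # sample standard deviation

print(mean, median, mode, round(sd, 1))
```

With small ordered scales like this one, `statistics.median` and `statistics.mode` are often the more honest summaries, for the reasons discussed above.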

02 · Family

Comparison methods

When you need to know whether two or more groups differ on the same outcome.

Comparison methods test whether observed differences between groups are likely to reflect a real underlying difference or could plausibly have arisen from random variation. The right method depends on the type of outcome and the number of groups.

For two groups on a numerical outcome, the independent t-test is the standard. For three or more groups, one-way ANOVA. For categorical outcomes, the chi-square test. For ordinal outcomes that violate t-test assumptions, the Mann-Whitney U test. All produce a test statistic and a p-value indicating how unusual the observed difference is.

Methods in this family

Independent t-test · One-way ANOVA · Chi-square test of independence · Mann-Whitney U · Kruskal-Wallis · Factorial ANOVA

Worked example

Two cohorts of a job-readiness training program. Cohort A used a revised curriculum; Cohort B used the prior version. Question: did exit confidence ratings differ?

Cohort                    N     Mean    SD
A (revised curriculum)    42    7.4     1.6
B (prior curriculum)      38    6.8     1.9

An independent t-test on these two means yields t = 1.54, p = 0.13. The mean difference is 0.6 points; the 95% confidence interval for the difference runs from -0.18 to +1.38. Cohen's d (a standardized effect size) is 0.34, which is a small-to-moderate effect.

Interpretation: the difference does not reach conventional significance (p > 0.05). The effect size suggests there may be a real but modest difference; a larger sample would be needed to detect it reliably.
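Given only the summary table, the t statistic and Cohen's d can be reproduced by hand with the classic pooled-variance formulas. A minimal stdlib sketch (a statistics package would add the p-value from the t distribution with 78 degrees of freedom; expect rounding differences of about 0.01 against the figures in the text):

```python
import math

# Summary statistics from the worked example.
n_a, mean_a, sd_a = 42, 7.4, 1.6   # Cohort A (revised curriculum)
n_b, mean_b, sd_b = 38, 6.8, 1.9   # Cohort B (prior curriculum)

# Pooled standard deviation across the two cohorts.
pooled_var = ((n_a - 1) * sd_a**2 + (n_b - 1) * sd_b**2) / (n_a + n_b - 2)
pooled_sd = math.sqrt(pooled_var)

# t statistic for the difference in means, and Cohen's d.
se = pooled_sd * math.sqrt(1 / n_a + 1 / n_b)
t = (mean_a - mean_b) / se
d = (mean_a - mean_b) / pooled_sd

print(round(t, 2), round(d, 2))
```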

03 · Family

Change methods

When you need to measure whether the same individuals changed across timepoints.

Change methods compare the same people to themselves at two or more timepoints. Each participant acts as their own control, isolating change from baseline differences between people. This is structurally different from comparing two separate groups.

For two timepoints on a numerical outcome, the paired t-test calculates each individual's change and tests whether the mean change differs from zero. For three or more timepoints, repeated measures ANOVA. For categorical outcomes at two timepoints, the McNemar test. For ordinal data, the Wilcoxon signed-rank test. All require linked records: the same identifier present at every timepoint.

Methods in this family

Paired t-test · Repeated measures ANOVA · McNemar test · Wilcoxon signed-rank · Friedman test · Mixed-effects models

Worked example

Twenty participants completed a financial-literacy program. Confidence ratings were collected at intake and at exit on a 1-to-10 scale. Question: did confidence change?

Participant    Intake    Exit    Change
P-001          4         7       +3
P-002          5         8       +3
P-003          6         6       0
P-004          3         7       +4
P-005          7         8       +1
P-006          5         9       +4
P-007          6         7       +1
P-008          4         6       +2
… (12 more)
Mean           5.1       7.4     +2.3

The mean individual change is +2.3 points. A paired t-test on the 20 difference scores yields t = 6.91, p < 0.001. The 95% confidence interval for the mean change runs from +1.6 to +3.0. Cohen's d on the paired differences is 1.55, a large effect.

Interpretation: confidence ratings improved substantially and consistently. The paired structure (each participant compared to themselves) is what makes this conclusion valid; comparing intake and exit means as if they were two separate groups would understate the effect.
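The paired structure is easy to see in code. A sketch using only the eight rows shown above (the full analysis uses all 20 participants, so this subset's t statistic differs from the reported t = 6.91):

```python
import math
import statistics

# The eight rows shown in the table (the full dataset has 20 participants).
intake = [4, 5, 6, 3, 7, 5, 6, 4]
exit_ = [7, 8, 6, 7, 8, 9, 7, 6]

# Paired analysis: compute each participant's change, then test the changes.
changes = [post - pre for pre, post in zip(intake, exit_)]
mean_change = statistics.mean(changes)
sd_change = statistics.stdev(changes)

# t statistic for the mean change against zero.
t = mean_change / (sd_change / math.sqrt(len(changes)))

print(changes, mean_change, round(t, 2))
```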

04 · Family

Relationship methods

When you need to know how two variables move together.

Relationship methods quantify how two (or more) variables vary together. Pearson correlation reports a single number between -1 and +1: positive means they rise together, negative means one rises as the other falls, zero means no linear relationship.

Simple linear regression goes further: it produces an equation that predicts one variable from the other. Spearman correlation is the rank-based counterpart, used when one or both variables are ordinal or non-normal. None of these methods proves causation: a strong correlation between training hours and exam scores is consistent with many causal stories, including a third variable driving both.

Methods in this family

Pearson correlation · Spearman rank correlation · Simple linear regression · Point-biserial correlation · Correlation matrix

Worked example

Thirty participants in a coding bootcamp. Recorded for each: hours of self-study during the program, and final assessment score. Question: are study hours related to score?

Participant    Study hours    Assessment score
P-001          42             78
P-002          28             64
P-003          55             88
P-004          18             52
P-005          37             71
P-006          48             82
P-007          22             59
… (23 more)
Mean           36.4           71.2

Pearson r = 0.78, p < 0.001. The simple regression equation is: score = 38.5 + 0.90 × hours. R-squared = 0.61, meaning study hours account for about 61% of the variance in assessment scores.

Interpretation: there is a strong positive relationship. Each additional study hour is associated with about a 0.9-point increase in score. Whether more study hours cause higher scores cannot be settled by correlation alone, but the association is strong enough to investigate further.

05 · Family

Prediction and segmentation

When you need to predict an outcome from many predictors, or find natural groups in your data.

Multiple regression extends simple regression to many predictors. Logistic regression handles binary outcomes (yes/no, completed/not completed). Multinomial logistic regression handles categorical outcomes with three or more levels. Each method reports how much each predictor contributes when other predictors are held constant.

Segmentation methods take a different approach: rather than predicting a known outcome, they look for natural groups in the data itself. K-means cluster analysis partitions individuals based on similarity across many variables. Factor analysis reduces many correlated items to a smaller set of underlying dimensions, common in Likert-battery analysis.

Methods in this family

Multiple linear regression · Logistic regression · Multinomial logistic regression · K-means cluster analysis · Factor analysis · Principal component analysis · Latent class analysis

Worked example

One hundred and fifty participants in a digital-literacy program. Outcome: completed (yes/no). Predictors: prior education level, training hours attended, age. Question: which predictors influence completion?

Predictor                                  Odds ratio    95% CI         p-value
Training hours attended                    1.18          1.09 - 1.28    < 0.001
Prior education (post-secondary vs not)    2.14          1.05 - 4.38    0.037
Age (per year)                             0.98          0.95 - 1.01    0.21

The model achieves an AUC of 0.79, meaning it correctly orders a random completer above a random non-completer about 79% of the time. Training hours and prior education are statistically significant predictors of completion; age is not, after the other variables are accounted for.

Interpretation: each additional training hour increases the odds of completion by about 18%. Participants with post-secondary education have roughly twice the odds of completing. Age does not independently predict completion once attendance and education are accounted for.
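Odds ratios compound multiplicatively, which is easy to get wrong when projecting across several hours. A small sketch (the 50% baseline probability is an illustrative assumption, not a figure from the example):

```python
# Interpreting the logistic-regression odds ratio for training hours.
or_per_hour = 1.18  # odds ratio per additional training hour

# Odds ratios multiply: five extra hours scale the odds of completion
# by 1.18 ** 5, not by 5 * 0.18.
five_hour_or = or_per_hour ** 5

# Converting odds back to a probability: assume a baseline odds of 1.0
# (a 50% chance of completing), then apply the five-hour multiplier.
new_odds = 1.0 * five_hour_or
new_prob = new_odds / (1 + new_odds)

print(round(five_hour_or, 2), round(new_prob, 2))
```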

The process

How to do quantitative data analysis in seven steps.

The seven-step process applies to data analysis in quantitative research, program evaluation, and organizational reporting. Skipping any step shifts the cost: skipped integrity checks make later results unreliable; skipped descriptives lead to wrong method choice; skipped disaggregation hides the patterns that matter most.

01 · Question

Define the question

State exactly what your analysis must answer. Vague questions produce vague analyses. Examples: did mean confidence improve from intake to exit; did employment outcomes differ between cohorts; what predicts completion.

What this prevents: methods that produce numbers without producing answers.

02 · Integrity

Verify dataset integrity

Confirm unique participant identifiers across timepoints. Check for duplicate rows. Validate that response options stayed consistent across instrument versions. Identify missing values and decide handling.

What this prevents: structurally unreliable results that no later method can rescue.
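These checks are mechanical and worth scripting rather than eyeballing. A minimal sketch with a toy dataset (the records, the duplicate, and the missing value are invented for illustration):

```python
from collections import Counter

# Toy intake and exit records; the duplicate ID and the missing value
# are deliberate, to show what the checks catch.
intake = [
    {"id": "P-001", "score": 4},
    {"id": "P-002", "score": 5},
    {"id": "P-002", "score": 5},     # duplicate row
    {"id": "P-003", "score": None},  # missing value
]
exit_ = [
    {"id": "P-001", "score": 7},
    {"id": "P-003", "score": 6},
]

# Duplicate identifiers within a wave.
dupes = [pid for pid, n in Counter(r["id"] for r in intake).items() if n > 1]

# Missing values.
missing = [r["id"] for r in intake if r["score"] is None]

# Participants who cannot be linked across timepoints.
unlinked = {r["id"] for r in intake} - {r["id"] for r in exit_}

print(dupes, missing, sorted(unlinked))
```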

03 · Describe

Run descriptive statistics first

Calculate mean, median, mode, standard deviation, and visualize the distribution for every variable involved. Descriptive statistics surface data quality issues and inform method choice for later steps.

What this prevents: applying parametric tests to data that violates their assumptions.

04 · Method

Choose the right method

Match data shape and question type to method family: descriptive, comparison, change, relationship, or prediction. The method picker on this page recommends the right method in three questions.

What this prevents: a t-test on data that called for ANOVA, or correlation on a question that needed regression.

05 · Run

Run the analysis

Execute the chosen statistical method on the prepared dataset. Use a tool appropriate to the method and the dataset size: a spreadsheet for descriptives, dedicated statistical software for hypothesis testing.

What this prevents: results that cannot be reproduced or audited.

06 · Interpret

Interpret with effect size

Report effect size alongside p-value. A statistically significant result with a tiny effect size has limited practical meaning. Confidence intervals show the range of plausible values for the true effect.

What this prevents: overclaiming significance that has no practical magnitude.

07 · Disaggregate

Disaggregate by subgroups

Break results down by demographic, cohort, or site variables that matter for accountability. Aggregate findings hide subgroup patterns; disaggregated findings reveal who the program serves well and who it does not.

What this prevents: average results that hide harmful subgroup patterns.

Tools comparison

Quantitative data analysis tools, compared.

No tool is best at every method on every dataset for every team. The right tool depends on what you are collecting, who runs the analysis, and what the output has to be. Six categories cover the working population.

Excel and Google Sheets

Spreadsheet

Best for: small datasets, descriptive statistics, basic charts, quick exploration of survey exports.
Limitation: no formal hypothesis testing at scale; breaks down past about 100,000 rows; a multi-step analysis is hard to audit.
Typical user: anyone, no training needed. Most program staff start here.

SPSS, SAS, Stata

Statistical software

Best for: traditional hypothesis testing with point-and-click interfaces; standard in academic and government research.
Limitation: per-seat license cost; statistical training required; the workflow lives separately from data collection.
Typical user: academic researchers; established M&E teams with statistical training.

R and Python

Programming environment

Best for: custom analyses, large datasets, reproducible pipelines, and anything statistical software cannot do out of the box.
Limitation: steep learning curve; code review is the only audit trail; reproducibility depends on environment management.
Typical user: quantitative analysts, data scientists, research teams with engineering capacity.

Tableau, Power BI, Looker

Visualization platform

Best for: dashboards, interactive exploration, and visualization at scale across many variables and audiences.
Limitation: limited inferential statistics; not built for hypothesis testing or model fitting, so usually paired with another tool upstream.
Typical user: program managers, BI analysts, funder-facing reporting teams.

Qualtrics, SurveyMonkey, Typeform

Survey platform

Best for: form design, data collection, and basic descriptives and crosstabs out of the box.
Limitation: manual reconciliation across waves; identifying the same individual across pre and post surveys is the user's responsibility, not the platform's.
Typical user: survey-driven research teams, customer feedback teams.

Sopact Sense

Stakeholder data platform

Best for: linked longitudinal datasets, same-individual identification at first contact, demographic variables embedded at design, and mixed-method records where quantitative and qualitative data live on the same row.
Limitation: built for program evaluation, not pure research; frequentist hypothesis testing is available but not the primary use case.
Typical user: program evaluators, M&E leads, impact funders running portfolios.

The tool depends on what you are collecting and who has to use it. The methods stay the same. Most working teams use two or three of these together: a survey platform for collection, a spreadsheet or statistical environment for analysis, and a dashboard tool for reporting.

Applied examples

What quantitative analysis looks like in three program contexts.

The methods above are the same in every context. What changes is the data shape: how many participants, how many timepoints, how many sites. Three working examples.

Context 01

Multi-cohort workforce evaluation

Three or more cohorts run per year. Three timepoints per participant: intake, exit, six-month follow-up. Demographic variables required for funder reporting.

The defining challenge is keeping individuals identifiable across timepoints. Intake collects demographics and baseline measures. Exit captures program completion and end-of-program scores. The six-month follow-up captures sustained outcomes (employment, retention, income). Linking these three records to the same person is the difference between a real longitudinal analysis and three cross-sectional snapshots.

The working analysis is paired t-tests on confidence and skill scores from intake to exit, repeated measures ANOVA across all three timepoints, and logistic regression predicting six-month employment from baseline, demographic, and program-completion variables. Disaggregation by cohort and demographic group is required at every step. Aggregate findings hide which subgroups the program serves well.

Methods used

  • Descriptive statistics by cohort
  • Paired t-test (intake to exit)
  • Repeated measures ANOVA across three timepoints
  • Logistic regression for follow-up outcomes
  • Subgroup disaggregation at every step

Context 02

Single-cycle community program

One cohort, 30 to 120 participants, two timepoints (intake and exit). Local funder reporting with demographic disaggregation.

Smaller programs have smaller analytical needs but the same integrity requirements. Sample sizes under 100 limit the methods that work reliably. With 30 participants per group, a t-test can detect a moderate effect, but a smaller effect often will not reach statistical significance even when it is present.

The working analysis stays close to descriptive statistics: means, medians, distributions, and visualizations. Pre-post change is best reported with paired t-tests and effect size (Cohen's d) rather than p-values alone, because at small sample sizes the effect size is the more honest signal. Disaggregation matters as much here as in larger programs but requires careful interpretation: a subgroup of seven participants cannot support strong claims on its own.

Methods used

  • Descriptive statistics with distributions
  • Paired t-test with Cohen's d
  • Confidence intervals reported alongside p-values
  • Cautious subgroup disaggregation

Context 03

Portfolio funder view

Fifteen or more grantees, each running a different program. Standardized indicators required across the portfolio for cross-grantee comparison and learning.

The defining challenge is comparison across grantees that collect data differently. Without a shared instrument or shared option lists, every cross-grantee comparison requires manual reconciliation. With shared instruments at the indicator level, the portfolio analysis becomes structurally similar to a multi-cohort program: descriptive statistics by grantee, comparison tests across grantees, regression to identify which program features predict outcomes.

The working analysis is one-way ANOVA on shared indicators across grantees, post-hoc tests to identify which grantees differ from which, and regression linking grantee-level features (program duration, cohort size, target population) to participant-level outcomes. Disaggregation happens at two levels: across grantees in the portfolio and within each grantee's own participants.

Methods used

  • Descriptive statistics by grantee
  • One-way ANOVA across grantees
  • Post-hoc pairwise comparison
  • Multi-level regression
  • Two-level disaggregation

A note on tools

Where standard tools end, and what Sopact Sense adds.


Standard tools handle quantitative data analysis well at the row level. Excel produces descriptives. SPSS runs hypothesis tests. R fits any model that can be specified in code. Tableau visualizes. Qualtrics collects. The architectural gap appears across cycles: same-individual identification across pre, mid, and post; locked option lists across instrument versions; demographic structure embedded at design rather than reconciled afterward.

Sopact Sense is the data origin. Persistent participant identifiers are assigned at first contact. Demographic variables are structured at design. Pre, mid, and follow-up instruments link automatically to the same record. The methods on this page run against datasets that are clean, linked, and disaggregated from collection rather than reconciled afterward.

Frequently asked

Sixteen questions about quantitative data analysis.

Definitions, methods, statistics, tools. Each answer is calibrated to be useful as a standalone reference.

Q01

What is quantitative data analysis?

Quantitative data analysis examines numerical measurements (scores, counts, rates, dollars) to find patterns, compare groups, and measure change. It uses statistical methods like descriptive statistics, t-tests, regression, and chi-square tests. The right method depends on how many variables you have, whether they are numerical or categorical, and what question you want to answer.

Q02

What is quantitative data?

Quantitative data is information expressed as numbers. Examples include test scores, ages, dollar amounts, completion rates, and counts of events. Quantitative data divides into two types. Discrete data takes whole-number values (number of participants, number of completed sessions). Continuous data takes any value within a range (age in years, income, temperature).

Q03

What are the steps in quantitative data analysis?

The seven steps are: define the question; verify dataset integrity by checking unique IDs, completeness, and option consistency; run descriptive statistics first; choose the comparison, change, or relationship method appropriate to your data; run the analysis; interpret with effect size and confidence intervals; and disaggregate by relevant subgroups. Skipping the integrity step is the most common cause of unreliable results downstream.

Q04

What are quantitative data analysis methods?

The five method families are descriptive statistics (mean, median, distribution), comparison methods (t-test, ANOVA, chi-square), change methods (paired t-test, repeated measures ANOVA), relationship methods (correlation, simple regression), and prediction or segmentation methods (multiple regression, logistic regression, cluster analysis). Each family answers a different kind of question and requires data shaped accordingly.

Q05

What are quantitative data analysis techniques?

Techniques span the full pipeline. Data preparation includes deduplication, missing-value handling, and option-list standardization. Statistical techniques include descriptive summarization, hypothesis testing, correlation, regression modeling, and multivariate analysis. Reporting techniques include effect size calculation, confidence interval reporting, and disaggregation by demographic subgroups. The technique you select depends on the question, the data shape, and the audience.

Q06

What is the difference between quantitative and qualitative data analysis?

Quantitative analysis works with numbers and uses statistical methods to test hypotheses or summarize patterns. Qualitative analysis works with text, audio, or images and uses thematic coding, narrative analysis, or grounded theory to surface meaning. Robust program evaluation usually combines both: quantitative scores describe what changed, qualitative responses explain why. Each method answers questions the other cannot.

Q07

What is the difference between descriptive and inferential statistics?

Descriptive statistics summarize the data you have (mean, median, distribution shape). Inferential statistics use the data you have to draw conclusions about a larger population or test whether observed differences could have arisen by chance. A frequency distribution is descriptive. A t-test is inferential. Most quantitative analysis combines both: describe first, then test specific questions.

Q08

When should you use a t-test versus ANOVA?

Use a t-test when comparing means between exactly two groups. Use one-way ANOVA when comparing means across three or more groups. Both rely on similar assumptions about the data. Running multiple t-tests instead of one ANOVA inflates the chance of a false-positive finding, so when you have three or more groups, ANOVA is the right starting point and post-hoc tests identify which specific pairs differ.
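The inflation is easy to quantify. Treating the pairwise tests as independent gives a back-of-envelope bound on the family-wise error rate:

```python
# Why running multiple t-tests inflates false positives: with k tests at
# alpha = 0.05, and treating the tests as independent for a rough bound,
# the chance of at least one false positive is 1 - (1 - alpha) ** k.
alpha = 0.05

# 3 groups need 3 pairwise t-tests; 4 groups need 6.
family_error = {k: 1 - (1 - alpha) ** k for k in (1, 3, 6)}

for k, err in family_error.items():
    print(k, round(err, 3))
```

With three pairwise tests the family-wise error rate already approaches 15%, which is why ANOVA followed by post-hoc tests is the right structure.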

Q09

What sample size do you need for quantitative data analysis?

Sample size depends on what you want to detect. To detect a moderate difference between two groups with reasonable confidence, you typically need around 30 to 60 participants per group. To detect smaller effects, you need substantially more. For descriptive analysis, smaller samples can be acceptable if you are not generalizing to a larger population. A power analysis before data collection sets the right target.
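A quick way to get a rough target without power-analysis software is Lehr's rule of thumb, which approximates the per-group sample size for 80% power at a two-sided alpha of 0.05; a proper power analysis is still the right tool for a final number:

```python
import math

# Lehr's rule of thumb for a two-group comparison: n per group is roughly
# 16 / d**2 for 80% power at a two-sided alpha of 0.05, where d is the
# standardized effect size (Cohen's d).
def n_per_group(d: float) -> int:
    return math.ceil(16 / d ** 2)

# Cohen's conventional benchmarks: large, moderate, small effects.
for d in (0.8, 0.5, 0.2):
    print(d, n_per_group(d))
```

The rule makes the scaling visible: halving the effect size you want to detect quadruples the sample you need.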

Q10

What is a p-value, and how do you interpret it?

A p-value is the probability of observing a result at least as extreme as yours, assuming there is no real effect. A p-value below 0.05 is conventionally treated as evidence against the no-effect assumption. A p-value is not the probability that your hypothesis is true, and it is not a measure of effect size. Always pair a p-value with an effect size (like Cohen's d) and a confidence interval.

Q11

What is statistical significance?

Statistical significance means that the observed result is unlikely to have occurred by chance alone, given a chosen threshold (usually 0.05). It does not mean the result is large or important. A statistically significant difference of 0.1 points on a 100-point scale may have no practical meaning. Always report effect size alongside significance.

Q12

What are the best tools for quantitative data analysis?

The best tool depends on the dataset and the user. Excel and Google Sheets handle small datasets and basic descriptives. SPSS, SAS, and Stata serve traditional hypothesis testing with point-and-click interfaces. R and Python serve custom analyses, large datasets, and reproducible pipelines. Tableau and Power BI handle dashboards and exploration. Sopact Sense is the right fit when collection plus analysis-ready longitudinal data with embedded demographics is the requirement. Each is wrong for the use cases the others handle well.

Q13

How do you analyze quantitative data in research?

Begin by verifying dataset integrity: confirm unique participant identifiers, check for missing values, and validate that response options remained consistent across instrument versions. Then produce descriptive statistics. Apply the inferential method that matches the question (comparison, change, relationship, or prediction). Disaggregate findings by relevant subgroups. Pair quantitative findings with qualitative context where available.

Q14

What are quantitative data analysis examples?

Examples include: comparing average reading scores between two cohorts using a t-test; measuring change in self-reported confidence from program intake to exit using a paired t-test; testing whether employment outcomes differ by demographic group using chi-square; predicting graduation likelihood from prior academic and demographic variables using logistic regression; and identifying response-pattern segments in a satisfaction survey using cluster analysis. Each example pairs a method with the type of question it answers.

Q15

Can ChatGPT or Excel do quantitative data analysis reliably?

ChatGPT and similar tools cannot reliably analyze quantitative data for evaluation purposes because results are not reproducible: the same dataset processed in two sessions can produce different statistics and different category labels. Excel handles descriptive statistics and basic charts well, but it lacks rigorous tests for many comparison and prediction methods at scale. For research and program evaluation that will be cited, use a dedicated statistical environment or a platform that produces consistent, auditable analysis from a live dataset.

Q16

How does Sopact Sense support quantitative data analysis?

Sopact Sense is the data origin: forms and surveys are designed and collected inside the platform, with unique participant identifiers assigned at first contact and demographic variables embedded at design. Pre-program, mid-program, and exit instruments link automatically to the same record. Descriptive statistics, pre-post comparison, and equity disaggregation run against datasets that are clean, linked, and disaggregated from collection rather than reconciled afterward.

Working session

Bring your dataset. Run your method.

Forty minutes. We pick the right method for your data, run it on a real dataset, and walk through the interpretation. Bring a CSV, a survey export, or a research dataset. No procurement decision required.

Format

One person from your team, one from ours. Forty minutes. Live screen-share with your dataset.

What to bring

A CSV, survey export, or research dataset. Plus the question you want it to answer.

What you leave with

A method recommendation, the actual analysis run, and a clear interpretation of what the numbers mean.