
Every enterprise affects people and the planet — some effects are positive, others negative; some intended, others unintended. But how do you decide which effects truly matter? What data should you collect to understand your real impact?
The five dimensions of impact provide the answer. Developed by the Impact Management Project (IMP), these five dimensions — What, Who, How Much, Contribution, and Risk — give every organization a universal structure for defining, measuring, and communicating genuine impact on stakeholders.
This article explains each dimension, shows why most organizations only measure two or three of the five, and demonstrates how AI-native analysis makes all five dimensions operationally practical for the first time.
Organizations spend 80% of their time cleaning data instead of analyzing it. Most use only 5% of available context for decisions. And while 76% of nonprofits say impact measurement is a priority, only 29% do it effectively.
The problem is not the framework. The IMP five dimensions are conceptually clear. The problem is that most organizations adopt the five dimensions as a reporting taxonomy — a way to organize their annual report — without connecting them to actual data collection instruments and analysis workflows.
The result is a predictable pattern: collect outputs, skip Contribution and Risk entirely because they require qualitative evidence nobody has time to code manually, produce an annual report that arrives too late to inform decisions, and call it "impact measurement."
Three structural flaws drive this failure. First, frameworks without data architecture produce compliance theater — organizations can name all five dimensions but cannot operationalize them. Second, manual coding of qualitative evidence is impossible at scale, which means the two most important dimensions (Contribution and Risk) get skipped. Third, annual reporting cycles deliver insights after programs have already moved on, so the evidence never reaches decision-makers while there is still time to act.
The five dimensions of impact are a classification system created by the Impact Management Project that organizes impact evidence into five questions every organization must answer. Together they transform vague impact claims into structured, comparable, and decision-useful evidence.
The five dimensions are: What outcome occurred, Who experienced it, How Much change happened (scale, depth, and duration), what was the organization's Contribution to that change, and what is the Risk that impact differs from expectation.
The "What" dimension asks: what outcome is your enterprise contributing to, and how important is that outcome to the stakeholders who experience it?
Most organizations handle this dimension reasonably well because it maps to traditional output and outcome tracking. The mistake is confusing outputs (workshops delivered, meals served, loans disbursed) with outcomes (increased knowledge, improved nutrition, economic stability). The What dimension focuses exclusively on outcomes — the actual changes experienced by stakeholders.
The critical nuance: an outcome that seems significant to your organization might not be what your beneficiaries actually value most. A youth employment program might track job placement rates, but participants might value confidence, independence, and career direction more than the job itself.
Making it operational: Define 3-5 specific outcomes per program. Each needs at least one quantitative indicator and one qualitative evidence source. Sopact's Cell-level AI classifies outcomes from open-ended survey responses and auto-aligns with SDG categories — turning unstructured stakeholder language into structured outcome evidence.
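To make the idea of turning open-ended responses into structured outcome evidence concrete, here is a minimal keyword-matching sketch. Sopact's actual Cell-level AI is not keyword-based; the categories, keywords, and sample responses below are hypothetical illustrations of the input-to-output shape.

```python
# Illustrative sketch: mapping open-ended survey responses to outcome
# categories. The categories and keyword lists are hypothetical examples;
# production systems use language models rather than keyword matching.

OUTCOME_KEYWORDS = {
    "increased_confidence": ["confident", "confidence", "believe in myself"],
    "job_placement": ["hired", "job", "employed"],
    "skill_gain": ["learned", "skill", "training helped"],
}

def classify_response(text: str) -> list[str]:
    """Return every outcome category whose keywords appear in the response."""
    text_lower = text.lower()
    return [
        outcome
        for outcome, keywords in OUTCOME_KEYWORDS.items()
        if any(kw in text_lower for kw in keywords)
    ]

responses = [
    "I feel much more confident speaking in interviews.",
    "I was hired two weeks after the program ended.",
]
for r in responses:
    print(r, "->", classify_response(r))
```

The point is the data structure, not the matching logic: each unstructured response becomes one or more structured outcome labels that can be counted, segmented, and aligned to external taxonomies such as the SDGs.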
The "Who" dimension asks: who experiences the outcome, and how underserved were they before your intervention?
This dimension recognizes that impact matters more when it reaches those who need it most. A job training program placing 100 college graduates creates different impact than one placing 100 formerly incarcerated individuals facing systemic barriers.
Making it operational: Collect demographic and contextual data at intake, linked to outcome data through persistent unique IDs. Use AI to segment outcomes by stakeholder characteristics. Sopact's Row-level analysis tracks each stakeholder as a unique person across every touchpoint — from application through outcome through follow-up — revealing equity patterns that aggregate data hides.
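The mechanics of linking intake demographics to outcomes through a persistent ID can be sketched in a few lines. The field names and records below are hypothetical; the pattern is the join-then-segment step that aggregate reporting skips.

```python
# Sketch: joining intake demographics to outcome records on a persistent
# unique ID, then segmenting the outcome rate by a demographic field.
# All IDs, fields, and values are hypothetical.
from collections import defaultdict

intake = {
    "P001": {"prior_incarceration": True},
    "P002": {"prior_incarceration": False},
    "P003": {"prior_incarceration": True},
}
outcomes = {
    "P001": {"placed": True},
    "P002": {"placed": True},
    "P003": {"placed": False},
}

def placement_rate_by_segment(intake, outcomes, field):
    """Compute the outcome rate per demographic segment, joined on unique ID."""
    totals, successes = defaultdict(int), defaultdict(int)
    for pid, demo in intake.items():
        if pid not in outcomes:
            continue  # no outcome recorded yet for this stakeholder
        segment = demo[field]
        totals[segment] += 1
        successes[segment] += outcomes[pid]["placed"]  # bool counts as 0/1
    return {seg: successes[seg] / totals[seg] for seg in totals}

print(placement_rate_by_segment(intake, outcomes, "prior_incarceration"))
# → {True: 0.5, False: 1.0}
```

An aggregate placement rate of 67% would hide exactly the equity pattern this join reveals: the two segments experience very different outcomes.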
The "How Much" dimension measures three sub-elements: scale (how many people experience the outcome), depth (how much change each person experiences), and duration (how long the change lasts).
This is the dimension organizations handle worst because it requires longitudinal tracking. Without persistent unique IDs that follow stakeholders from intake through post-program follow-up, "how much" becomes a guess.
Making it operational: Design data collection to capture scale at every stage, depth through both quantitative scores and qualitative evidence, and duration through post-program follow-up. Sopact's Column-level analysis aggregates across cohorts — comparing pre/post measurements, segmenting by demographics, and correlating qualitative themes with quantitative scores.
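The three sub-elements of How Much reduce to simple computations once longitudinal records exist per stakeholder. The scores and retention proxy below are hypothetical; the point is that scale, depth, and duration each require a different slice of the same ID-linked data.

```python
# Sketch: computing scale, depth, and a duration proxy from longitudinal
# records keyed by a persistent ID. Scores and field names are hypothetical.

records = [
    {"id": "P001", "pre": 3, "post": 7, "followup": 6},
    {"id": "P002", "pre": 5, "post": 8, "followup": 8},
    {"id": "P003", "pre": 4, "post": 6, "followup": None},  # no follow-up yet
]

# Scale: how many people have outcome data at all.
scale = len(records)

# Depth: average change per person between baseline and post-program.
depth = sum(r["post"] - r["pre"] for r in records) / scale

# Duration proxy: share of follow-up respondents who held their post score.
followed = [r for r in records if r["followup"] is not None]
retained = sum(r["followup"] >= r["post"] for r in followed) / len(followed)

print(scale, depth, retained)  # → 3 3.0 0.5
```

Note that the duration calculation only includes stakeholders who responded at follow-up, which is precisely why follow-up drop-off rates belong in the Risk dimension as well.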
The "Contribution" dimension asks: what is your enterprise's contribution to the outcome versus what would have happened anyway?
This is the most technically challenging dimension. Full counterfactual analysis (randomized controlled trials) is expensive and impractical for most organizations. But contribution evidence can still be gathered through stakeholder attribution (asking participants what they believe caused the change), comparison groups, and theory-based evaluation.
Why it gets skipped: Contribution requires qualitative evidence — interview transcripts, open-ended responses, stakeholder reflections. Organizations that rely on manual coding cannot process this evidence at scale, so they skip the dimension entirely.
Making it operational: At minimum, collect stakeholder attribution data — direct questions about what participants believe drove changes. Sopact's Row and Grid-level AI analyzes open-ended attribution responses at scale, identifying common causal narratives across cohorts and testing whether your theory of change mechanisms actually appear in stakeholder evidence.
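Testing whether theory-of-change mechanisms actually appear in attribution responses can be sketched as a tally over classified narratives. The mechanism names and keyword lists here are hypothetical stand-ins for model-based classification.

```python
# Sketch: tallying which causal mechanisms stakeholders credit in open-ended
# attribution responses. Mechanisms and keywords are hypothetical; production
# systems classify with language models rather than keywords.
from collections import Counter

MECHANISMS = {
    "mentorship": ["mentor", "coach"],
    "peer_network": ["cohort", "other participants"],
    "curriculum": ["course", "class", "curriculum"],
}

def attributed_mechanisms(response: str) -> set[str]:
    """Return every mechanism a single response credits for the change."""
    text = response.lower()
    return {m for m, kws in MECHANISMS.items() if any(k in text for k in kws)}

responses = [
    "My mentor pushed me to apply for jobs I would have skipped.",
    "Honestly, the other participants kept me going.",
    "The course content was fine, but my mentor made the difference.",
]
counts = Counter(m for r in responses for m in attributed_mechanisms(r))
print(counts.most_common())  # mentorship cited most often in this sample
```

Comparing these tallies against the mechanisms your theory of change predicts is the minimum viable Contribution test: if stakeholders never mention the mechanism you fund, the causal story needs revisiting.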
The "Risk" dimension asks: what is the risk that impact is different from expected?
This dimension forces honest assessment: risk that outcomes do not materialize, risk that unintended negative consequences occur, risk that impact is not sustained. Most organizations treat risk as a one-time assessment during program design. The IMP framework treats it as ongoing monitoring.
Why it gets skipped: Like Contribution, Risk requires qualitative evidence — sentiment analysis, emerging themes, early warning signals. Organizations focused on quarterly metrics miss the qualitative signals that predict problems before they appear in the numbers.
Making it operational: Build risk indicators into regular data collection. Use mid-program check-ins and stakeholder pulse surveys to detect emerging risks before they appear in quantitative metrics. Sopact's Grid-level AI flags anomalies in qualitative data — detecting dropping sentiment, emerging complaints, or participation patterns that indicate brewing problems — and surfaces them in real-time.
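A minimal version of sentiment-based risk flagging is a rolling-window comparison over pulse-survey scores. The scores, window size, and threshold below are hypothetical; the pattern is detecting a sustained drop rather than reacting to single noisy readings.

```python
# Sketch: flagging a sustained sentiment drop in pulse-survey data before it
# surfaces in quarterly metrics. Scores (-1..1), window, and threshold are
# hypothetical parameters.

def flag_sentiment_risk(scores, window=3, threshold=-0.2):
    """Return indices where the rolling mean falls by more than |threshold|
    relative to the preceding window."""
    flags = []
    for i in range(window, len(scores) - window + 1):
        prev = sum(scores[i - window:i]) / window
        curr = sum(scores[i:i + window]) / window
        if curr - prev < threshold:
            flags.append(i)
    return flags

weekly_sentiment = [0.6, 0.55, 0.6, 0.5, 0.2, 0.1, 0.05]
print(flag_sentiment_risk(weekly_sentiment))  # → [3, 4]
```

Flags at weeks 3 and 4 mean the drop began mid-program, months before an annual report would have surfaced it, which is the whole argument for treating Risk as ongoing monitoring rather than a design-time checkbox.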
The five dimensions are universal, but emphasis shifts depending on organizational context.
Impact investors use all five dimensions to score portfolio companies and compare across investments. Contribution and Risk matter most for due diligence — understanding whether the investee's activities actually cause the reported outcomes and whether those outcomes face material threats.
Nonprofits typically focus on What, Who, and How Much for program reporting, but increasingly need Contribution evidence to differentiate themselves in competitive funding environments. The organizations that can demonstrate their additive effect — not just that outcomes occurred, but that their intervention caused them — secure more funding.
Foundations and grantmakers need portfolio-level views across all five dimensions. They compare grantees not just on What outcomes they report, but on Who they reach (equity patterns), How Much change per dollar invested, Contribution evidence, and Risk profiles.
Accelerators apply the five dimensions across a uniquely complex lifecycle — from application screening through cohort delivery through alumni tracking. Each dimension maps to different program stages: What maps to outcome goals, Who maps to cohort selection criteria, How Much maps to program intensity and follow-up, Contribution maps to what the accelerator uniquely provided, and Risk maps to startup failure rates and external market conditions.
The five dimensions of impact are a framework created by the Impact Management Project (IMP) that classifies impact evidence into five questions: What outcome occurred, Who experienced it, How Much change happened (scale, depth, duration), what was the organization's Contribution to that change, and what is the Risk that impact differs from expectation. Together they provide a universal language for measuring and comparing impact.
The IMP (Impact Management Project) framework is a global consensus on how to measure and manage impact, developed through consultation with over 2,000 organizations starting in 2016. Its core component is the five dimensions of impact. The IMP's work is now housed at the IFRS Foundation, integrating impact thinking with broader sustainability disclosure standards.
Design your data collection instruments — surveys, intake forms, interview guides — to explicitly address each dimension. Collect baseline and outcome data for What and How Much, track demographics for Who, gather qualitative evidence through open-ended questions and interviews for Contribution, and monitor drop-off rates and sentiment for Risk. AI-powered platforms can now extract Contribution and Risk evidence automatically from text data.
A theory of change maps the causal pathway from activities to outcomes — it tells you what should happen and why. The five dimensions measure what actually happened across five evidence categories. They are complementary: the theory of change guides what to measure, and the five dimensions structure how you collect and organize the evidence.
Contribution and Risk require qualitative evidence — interview transcripts, open-ended responses, stakeholder narratives — rather than simple quantitative metrics. Traditional approaches relied on manual coding of this evidence, which is time-consuming and doesn't scale. AI-native analysis platforms now extract Contribution and Risk evidence automatically from text data, making these dimensions practically measurable for organizations of any size.
Impact investors use the five dimensions to evaluate portfolio companies during due diligence and ongoing monitoring. What and Who define the investment thesis. How Much tracks whether outcomes match expectations. Contribution assesses whether the investee's activities actually cause reported outcomes. Risk identifies threats to impact delivery. Portfolio-level analysis across all five dimensions enables investors to compare investments, allocate capital toward highest-impact opportunities, and report to LPs with structured evidence.



