
Master IMP's Five Dimensions of impact — What, Who, How Much, Contribution, and Risk — and turn your measurement framework into decisions funders trust.
Every impact fund in 2026 references the Five Dimensions of Impact in its reporting deck. Most label content against the dimensions like section headers — What over here, Who over there, How Much in the metrics page, Contribution and Risk in a narrative paragraph near the end. Very few actually score investees against the dimensions, compound those scores forward from due diligence through portfolio monitoring and LP reporting, or generate LP-ready dimension analysis from accumulated evidence. The distance between the framework as a taxonomy and the framework as a scoring architecture is what we call The Taxonomy Trap — and it's the structural failure point the rest of this guide is about.
Last updated: April 2026
This article covers the Impact Management Project (IMP) Five Dimensions framework — origin, definitions for each of the five, evidence requirements, and the data infrastructure required to operationalize them. It is a complement to Impact Measurement and Management: that page shows the architecture; this page shows the framework that runs through it.
The Five Dimensions of Impact are a universal structure for assessing the impact of any enterprise, investment, or program. They answer five questions: What outcome occurs, Who experiences it, How Much of it happens, what the enterprise's Contribution is relative to what would have happened otherwise, and what Risk there is that the impact differs from expectations. The framework was developed by the Impact Management Project between 2016 and 2020 and is now referenced in IRIS+, the Operating Principles for Impact Management (OPIM), SDG-aligned reporting, and virtually every major impact standard in use today.
The power of the Five Dimensions is not in their novelty — they codify questions any thoughtful investor was already asking — but in their consensus. Before the IMP, an impact fund's scoring methodology was typically idiosyncratic and non-comparable across portfolio companies or across funds. The Five Dimensions gave the field a shared question structure so that different funds, investees, and reporters could describe impact in commensurable terms.
The Impact Management Project was a time-bound forum (2016–2020) that convened more than 2,000 organizations — asset owners, asset managers, enterprises, development finance institutions, and standard-setters — to build consensus on how to measure, manage, and report impact. Its core output was the Five Dimensions framework. When the IMP completed its mandate in 2020, its work streams were stewarded by successor organizations, most notably Impact Frontiers, which continues to refine the Five Dimensions and publish implementation guidance.
In 2026 the Five Dimensions are the default language for impact assessment — invoked in LP reporting, due diligence memos, theory-of-change documents, and regulatory filings. The challenge is no longer adoption. It is operationalization.
The What dimension asks: what outcome does the enterprise produce for the people or planet it serves, and how important is that outcome to them? This is not the output (training delivered, product sold) but the outcome (skills gained, livelihood improved, emissions avoided). At due diligence, the What signal is the clarity and specificity of the investee's outcome definition and whether it distinguishes between outputs and genuine behavior or wellbeing change.
Funds vary in how they weight What. A climate-tech fund may prioritize SDG alignment and carbon outcome specificity; a financial-inclusion fund may weight outcome specificity and the quality of the behavioral-change definition more heavily. The What dimension becomes the benchmark against which all subsequent narrative submissions are reconciled: if quarterly reports describe outcomes that do not match the What established at DD, the mismatch can be surfaced.
The Who dimension asks: which stakeholders experience the outcome, and how underserved are they relative to the outcome in question? Evidence at DD is demographic data at intake, a stated theory of equity, and mechanisms for verifying Who actually benefits versus Who was intended to benefit. A common failure pattern is investees who describe their target Who in theory but lack data infrastructure to verify Who actually received the outcome in practice.
The Who dimension establishes the cohort definition that cascades through every subsequent instrument. Surveys, follow-ups, and outcome reports all segment against Who established at DD. Without this discipline, equity gaps become invisible — an investee may deliver strong aggregate outcomes while systematically underserving the Who the fund originally invested for.
The How Much dimension is actually three questions in one. Scale: how many people or how much of the planet experience the outcome. Depth: how significant the outcome is for each person or place affected. Duration: how long the outcome lasts.
Most funds cover Scale reasonably well — it's the headline number in pitch decks and annual reports. Depth and Duration are where evidence thins out. Depth requires per-beneficiary measurement, not aggregate. Duration requires longitudinal tracking, which means persistent investee and participant IDs from first measurement forward. Without that infrastructure, funds report Scale confidently and hand-wave Depth and Duration — which is how impact claims become indistinguishable from output claims.
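The three sub-questions can be made concrete with per-beneficiary records. Below is a minimal sketch, not a Sopact Sense API: the record shape (`participant_id`, `baseline`, `followups` keyed by months since intake) and the numbers are hypothetical, but they show why Depth needs per-person measurement and Duration needs persistent IDs.

```python
from statistics import mean

# Hypothetical per-beneficiary records: persistent IDs plus baseline and
# follow-up measurements of one outcome indicator (e.g. monthly income).
records = [
    {"participant_id": "P001", "baseline": 40, "followups": {6: 55, 12: 58}},
    {"participant_id": "P002", "baseline": 50, "followups": {6: 52, 12: 51}},
    {"participant_id": "P003", "baseline": 35, "followups": {6: 49}},
]

# Scale: how many people have any measured outcome at all.
scale = sum(1 for r in records if r["followups"])

# Depth: average per-person change at the 6-month follow-up --
# a distribution-aware figure, not an aggregate headline total.
depth = mean(r["followups"][6] - r["baseline"]
             for r in records if 6 in r["followups"])

# Duration: share of the cohort whose gain persists at 12 months.
# Only possible because the same participant_id links both measurements.
persisted = [r for r in records
             if 12 in r["followups"] and r["followups"][12] > r["baseline"]]
duration_rate = len(persisted) / len(records)

print(scale, round(depth, 2), round(duration_rate, 2))
```

Note that P003 simply drops out of the Duration calculation: without a 12-month record tied to the same ID, the gain cannot be claimed as durable.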
The Contribution dimension asks: what would have happened without this enterprise? If the outcome would have occurred anyway — through a market process, a government program, or a competitor's intervention — the enterprise's contribution is small. If the outcome would not have occurred without the specific mechanism the enterprise provides, contribution is high. Evidence for Contribution is counterfactual reasoning: comparison-group data, waitlist analysis, or at minimum rigorous narrative attribution grounded in stakeholder voice.
Contribution is simultaneously the most important and most under-evidenced dimension. It is important because it distinguishes impact-generating investments from investments that happen to land in socially valued sectors. It is under-evidenced because counterfactual data is costly to collect and rarely available in DD documents. The opportunity — and where AI-native analysis changes the economics — is that much of the counterfactual reasoning funds need already exists in DD interview transcripts. It has never been systematically extracted and scored.
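The comparison-group arithmetic behind Contribution can be sketched in a few lines. The figures below are illustrative, not fund data, and a real analysis would also test whether the two groups are actually comparable (which is itself an evidence-risk question).

```python
# Illustrative counterfactual estimate via a simple difference-in-differences.
# Hypothetical averages for one outcome indicator across two cohorts.
treated_before, treated_after = 40.0, 58.0        # served cohort
comparison_before, comparison_after = 41.0, 47.0  # waitlist cohort

gross_change = treated_after - treated_before                  # observed change
counterfactual_change = comparison_after - comparison_before   # would have happened anyway
contribution = gross_change - counterfactual_change            # attributable change

print(contribution)
```

The point of the sketch is the subtraction itself: the waitlist cohort improved too, so the enterprise can only claim the difference, not the gross change.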
The Risk dimension asks: what are the factors that could cause the impact to differ from expectations? Categories typically include evidence risk (is the evidence weak enough that the claim itself might be wrong), external risk (market, political, regulatory factors), and participation risk (stakeholders might not engage as expected, or might stop engaging). At DD, the Risk signal is completeness of risk identification, mitigation plan quality, and historical risk-management evidence.
In most funds' data, Risk shows up as a paragraph of narrative in the DD memo and then never appears structurally again. The failure mode is that risk flags identified at DD do not become early-warning criteria in ongoing monitoring. A well-designed impact data infrastructure surfaces signals in qualitative quarterly data that match the Risk categories flagged at DD — before they appear in quantitative metric shortfalls.
The Taxonomy Trap is the structural failure where a fund adopts the Five Dimensions as labels for report sections rather than as a scoring architecture that drives decisions. The symptom is that the dimensions organize how impact content is presented in LP reports but do not change how investees are selected, monitored, or compared. Under the trap, dimensions are narrative scaffolding, not analytical infrastructure.
Operationalizing the Five Dimensions as a scoring architecture requires three properties most funds lack. First, a fund-specific rubric anchored to all five dimensions — not just What and Scale. Second, evidence linkage so every proposed score cites the specific document passage that supports it. Third, score compounding — DD scores become the baseline against which portfolio monitoring data is reconciled, and monitoring data becomes the evidence base for LP-ready dimension analysis.
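The three properties can be expressed as a small data model. This is a hypothetical sketch, not Sopact Sense's actual schema: the point is that every score carries its rubric anchor and a citation into the source document, so an unevidenced score is detectable by construction.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    """Pointer into a source document supporting a score."""
    document: str   # e.g. a DD memo or interview transcript filename
    passage: str    # the quoted span that justifies the score

@dataclass
class DimensionScore:
    dimension: str          # "What", "Who", "How Much", "Contribution", "Risk"
    score: int              # position on the fund-specific rubric, e.g. 1-5
    rubric_anchor: str      # rubric language this score maps to
    evidence: list[Evidence] = field(default_factory=list)

# A due-diligence score that later phases reconcile against, with its citation.
dd_contribution = DimensionScore(
    dimension="Contribution",
    score=4,
    rubric_anchor="Outcome unlikely without enterprise's specific mechanism",
    evidence=[Evidence("founder-interview.txt",
                       "No comparable program operates in the region")],
)

# Evidence linkage enforced as an invariant, not a convention.
assert dd_contribution.evidence, "every proposed score must cite a passage"
print(dd_contribution.dimension, dd_contribution.score)
```

Score compounding then means persisting these objects past the IC meeting so that monitoring and LP reporting read from them instead of re-scoring from scratch.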
Sopact Sense is built around this architecture. The Five Dimensions are the scoring frame; the investee's DD documents, quarterly submissions, and stakeholder data are the evidence; and the intelligence layer produces dimension-level intelligence that compounds across the investment lifecycle.
Each of the Five Dimensions has a distinct evidence profile, which is why operationalizing them requires data infrastructure designed at the dimension level — not a single "impact report" pipeline.
What and Scale are primarily quantitative and output-adjacent. They rely on well-defined outcome indicators, consistent metric definitions across the portfolio, and reliable reporting cadence. These are the dimensions most funds already cover adequately.
Who and Depth require stakeholder-voice infrastructure. Demographic segmentation at intake, cohort-preserving longitudinal tracking, and qualitative analysis of stakeholder experience are the baseline. This is where AI-native analysis changes the economics — Depth in particular has historically been inferred from narrative and is now directly scorable when AI reads stakeholder interviews at portfolio scale.
Duration requires persistent participant and investee IDs from first measurement through long-term follow-up. Without persistent IDs, Duration is either reconstructed (expensively, with gaps) or skipped entirely.
Contribution requires counterfactual reasoning infrastructure — comparison-group data where available, structured attribution analysis from stakeholder voice, and disciplined application of the fund's additionality criteria to DD documents. Much of this lives in interview transcripts at DD and has historically been unstructured.
Risk requires a risk-category taxonomy applied at DD and revisited in ongoing monitoring. The signal is pattern matching: ongoing narrative data surfaces language that matches DD risk categories — sometimes months before metrics confirm the risk has materialized.
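The pattern-matching idea can be sketched with hypothetical signal phrases per DD risk category. A production system would use semantic matching over narrative rather than keyword lookup, but the structure is the same: quarterly text is screened against the taxonomy fixed at diligence.

```python
# Hypothetical DD risk taxonomy: category -> signal phrases flagged at diligence.
dd_risk_categories = {
    "participation": ["dropout", "attendance", "stopped engaging", "no-show"],
    "external": ["regulation", "permit", "election", "price shock"],
    "evidence": ["self-reported", "small sample", "no baseline"],
}

def match_risk_signals(narrative: str) -> list[str]:
    """Return DD risk categories whose signal phrases appear in quarterly narrative."""
    text = narrative.lower()
    return [category for category, phrases in dd_risk_categories.items()
            if any(phrase in text for phrase in phrases)]

q3_update = ("Attendance dipped in August and a new permit requirement "
             "delayed the second site.")
print(match_risk_signals(q3_update))
```

Here the narrative trips two DD categories (participation and external) even though no metric has yet moved, which is exactly the early-warning behavior described above.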
Five-dimension scoring becomes operationally powerful when scores compound across phases rather than reset. At Phase 1 — due diligence — the fund scores the investee across all five dimensions from DD documents, with citations. At Phase 2 — living theory of change — those scores become the baseline against which quarterly submissions are automatically reconciled. At Phase 3 — LP reporting — dimension-level intelligence is aggregated across the portfolio and rolled up into LP-ready analysis.
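Compounding can be sketched as reconciliation against a stored baseline rather than re-scoring from scratch each phase. The rubric scale and numbers below are hypothetical.

```python
# Phase 1 output: DD baseline scores per dimension (hypothetical 1-5 rubric).
dd_baseline = {"What": 4, "Who": 3, "How Much": 3, "Contribution": 4, "Risk": 2}

# Phase 2 input: scores re-derived from a quarterly submission's evidence.
q2_scores = {"What": 4, "Who": 2, "How Much": 3, "Contribution": 3, "Risk": 2}

def reconcile(baseline: dict, current: dict, threshold: int = 1) -> dict:
    """Flag dimensions that drifted from the DD baseline by >= threshold."""
    return {dim: current[dim] - baseline[dim]
            for dim in baseline
            if abs(current[dim] - baseline[dim]) >= threshold}

# Only flagged dimensions need analyst attention; the rest roll forward
# unchanged into Phase 3 LP reporting.
print(reconcile(dd_baseline, q2_scores))
```

The output is a drift report, not a fresh assessment: Who and Contribution have slipped against diligence, and everything else carries forward without re-reading the underlying documents.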
This is the architecture the Impact Measurement and Management workflow describes in full. The Five Dimensions are the scoring lens; the three-phase architecture is how that lens stays focused across the entire investment lifecycle.
Without this compounding, every phase re-reads the same documents. The DD risk flag is not surfaced in Q3 monitoring. The contribution claim from the founder interview is not compared to the Q2 attribution narrative. LP reports get assembled by hand, at speed, from scratch — and most of what was learned at DD never makes it into the LP story.
The Five Dimensions are not in competition with IRIS+, OPIM, or any of the major reporting standards — they are the structural layer underneath. IRIS+ provides metric definitions that populate What and How Much. OPIM principles describe management practices that address Risk and institutional governance. SDG alignment sits as a tagging layer over What. UN Guiding Principles and the Five Dimensions reinforce one another around Who and stakeholder voice.
In practice, this means a fund's impact measurement system should let the Five Dimensions frame the questions, IRIS+ (or equivalent) provide the metrics, and OPIM (or the fund's own principles) govern the management practices. A unified data infrastructure captures evidence once and rolls it up into whichever framework or standard an LP requires — rather than running multiple parallel reporting pipelines.
Related reading: impact reporting, theory of change, logframe, donor impact report.
Mistake 1: Using the Five Dimensions as report section headers only. The Taxonomy Trap in its purest form. Labels are free; scoring is infrastructure. A fund whose Five Dimensions appear only in report layout has not operationalized the framework.
Mistake 2: Scoring only What and Scale, hand-waving the rest. These are the dimensions with the most available data and the least analytical value for distinguishing impact-generating investments. The investments worth making are typically the ones where Contribution is high and Risk is manageably addressed — the two dimensions funds most often skip.
Mistake 3: Treating DD scoring as a one-time event. DD scoring that does not persist into monitoring and LP reporting is decorative. The scoring's value is in what it enables downstream — not in what it signals at the IC meeting.
Mistake 4: Collecting Duration data without persistent participant IDs. Duration requires that the Who at intake is the same Who at follow-up. Without persistent IDs at first contact, Duration becomes a reconstruction project.
Mistake 5: Requiring investees to fill out new portals. Most investees already produce the data funds need — in board decks, interview transcripts, existing CRM systems, surveys they already run. Infrastructure that requires investees to adopt the fund's new tool invariably produces compliance data, not honest data. The better design reads what investees already produce.
The Five Dimensions of Impact are a universal structure for assessing impact, developed by the Impact Management Project between 2016 and 2020. They are: What (the outcome the enterprise produces), Who (the stakeholders experiencing it), How Much (scale, depth, and duration of the outcome), Contribution (what would have happened without the enterprise), and Risk (factors that could cause the impact to differ from expectations). The framework is now embedded in IRIS+, OPIM, and most major impact reporting standards.
The Impact Management Project was a time-bound forum that ran from 2016 to 2020, convening more than 2,000 organizations to build consensus on impact measurement. Its core output was the Five Dimensions framework. When the IMP concluded, successor organizations — most notably Impact Frontiers — took on stewardship of the Five Dimensions and continue to publish implementation guidance.
Each dimension answers a core question about an enterprise's impact. What: what outcome does the enterprise produce, and how important is it. Who: which stakeholders experience the outcome, and how underserved are they. How Much: scale (how many), depth (how significant per person), and duration (how long-lasting). Contribution: what would have happened without the enterprise. Risk: what factors could prevent the expected impact. Together they produce a structured, comparable assessment of impact across different enterprises and sectors.
The Taxonomy Trap is adopting the Five Dimensions as section headers in reports rather than as a scoring architecture that drives decisions. Under the trap, dimensions organize how impact content is presented but do not change how investees are selected, monitored, or compared. The fix is a fund-specific rubric anchored to all five dimensions, evidence citation on every proposed score, and score compounding across due diligence, portfolio monitoring, and LP reporting.
The Five Dimensions framework (from the IMP) is a question structure — it defines the categories of evidence needed for impact assessment. IRIS+ is a taxonomy of specific metric definitions. They are complementary: the Five Dimensions frame what to ask; IRIS+ provides standardized metrics that populate the What and How Much answers. Most rigorous fund impact systems use both, with the Five Dimensions as the structure and IRIS+ as the metric dictionary.
What and How Much (Scale) are primarily quantitative outcome data — indicators, definitions, reporting cadence. Who and Depth require stakeholder voice — demographic segmentation, qualitative analysis, cohort tracking. Duration requires persistent participant and investee IDs. Contribution requires counterfactual reasoning — comparison-group data where available, structured attribution from stakeholder narrative otherwise. Risk requires a risk-category taxonomy applied at DD and revisited in monitoring. Each dimension has a distinct evidence profile, which is why operationalizing the framework requires dimension-aware data infrastructure.
Contribution and attribution are closely related but not identical. Attribution asks: "Did this specific intervention cause this specific outcome?" — a causal claim. Contribution asks: "What would have happened without the intervention?" — a counterfactual claim. Contribution is typically more practical to evidence because it does not require randomized experimental design; structured stakeholder voice and comparison-group data are usually sufficient. The Five Dimensions framework prefers Contribution precisely because it is operationally tractable.
Risk is included because the probability that actual impact will differ from expectations is itself a material part of the impact profile. A high-scoring impact claim with high evidence risk (the evidence might be wrong), high external risk (market or regulatory factors), or high participation risk (stakeholders might not engage as expected) is not the same as a moderate claim with low risk. The Risk dimension forces funds to assess impact as probability-weighted, not just point-estimated.
The Impact Management Project itself completed its mandate in 2020. The work streams continue through successor organizations. Impact Frontiers is the primary steward of the Five Dimensions framework today and publishes ongoing implementation guidance. The framework itself is widely adopted and actively maintained through these successor structures.
Sopact Sense applies the Five Dimensions as a scoring architecture, not a taxonomy. DD documents are read end-to-end by AI and scored against a fund-specific rubric anchored to all five dimensions, with citation trails on every proposed score. Those scores become the baseline for portfolio monitoring — quarterly submissions are auto-reconciled against the DD baseline, gaps and anomalies surfaced. Dimension-level intelligence rolls up into LP-ready reports automatically. The same evidence infrastructure serves DD, monitoring, and LP reporting without re-keying data. Built for impact-fund use cases covered in Impact Measurement and Management.
This page covers the Five Dimensions framework itself — what the dimensions are, what evidence each requires, and how they are scored. The Impact Measurement and Management page covers the architecture that operationalizes the framework — the three-phase workflow (due diligence → living theory of change → LP reporting) and the data sources that feed the intelligence layer. Together the two pages describe the framework and the architecture required to apply it at portfolio scale.
Sopact Sense pricing scales with fund size and portfolio complexity. Most impact funds operating in the 10–50 investee range see dramatic reduction in reporting cycle time and analyst hours within the first two quarterly cycles. Request a walkthrough for pricing specific to your fund's portfolio size and reporting cadence.