Five Dimensions of Impact: Complete Guide to IMP Framework
Master IMP's Five Dimensions of impact — What, Who, How Much, Contribution, and Risk — and turn your measurement framework into decisions funders trust.
The Five Dimensions of Impact in 2026: What, Who, How Much, Contribution, and Risk — Applied, Not Just Labeled
Every impact fund in 2026 references the Five Dimensions of Impact in its reporting deck. Most label content against the dimensions like section headers — What over here, Who over there, How Much in the metrics page, Contribution and Risk in a narrative paragraph near the end. Very few actually score investees against the dimensions, compound those scores forward from due diligence through portfolio monitoring and LP reporting, or generate LP-ready dimension analysis from accumulated evidence. The distance between the framework as a taxonomy and the framework as a scoring architecture is what we call The Taxonomy Trap — and it's the structural failure point the rest of this guide is about.
Last updated: April 2026
This article covers the Impact Management Project (IMP) Five Dimensions framework — origin, definitions for each of the five, evidence requirements, and the data infrastructure required to operationalize them. It is a complement to Impact Measurement and Management: that page shows the architecture; this page shows the framework that runs through it.
Use Case · Five Dimensions of Impact
The Impact Management Project's Five Dimensions — and the data infrastructure that actually applies them.
The IMP framework is the most widely referenced impact measurement scheme in 2026. Yet almost no fund has the data infrastructure to genuinely score against all five dimensions with cross-portfolio consistency. The gap between framework adoption and framework operation is where Sopact Sense lives.
The Five Dimensions of Impact are a universal structure for assessing the impact of any enterprise, investment, or program. They answer five questions: What outcome occurs, Who experiences it, How Much of it happens, what the enterprise's Contribution is relative to what would have happened otherwise, and what Risk there is that the impact differs from expectations. The framework was developed by the Impact Management Project between 2016 and 2020 and is now referenced in IRIS+, the Operating Principles for Impact Management (OPIM), SDG-aligned reporting, and virtually every major impact standard in use today.
The power of the Five Dimensions is not in their novelty — they codify questions any thoughtful investor was already asking — but in their consensus. Before the IMP, an impact fund's scoring methodology was typically idiosyncratic and non-comparable across portfolio companies or across funds. The Five Dimensions gave the field a shared question structure so that different funds, investees, and reporters could describe impact in commensurable terms.
What is the Impact Management Project (IMP)?
The Impact Management Project was a time-bound forum (2016–2020) that convened more than 2,000 organizations — asset owners, asset managers, enterprises, development finance institutions, and standard-setters — to build consensus on how to measure, manage, and report impact. Its core output was the Five Dimensions framework. When the IMP completed its mandate in 2020, its work streams were stewarded by successor organizations, most notably Impact Frontiers, which continues to refine the Five Dimensions and publish implementation guidance.
In 2026 the Five Dimensions are the default language for impact assessment — invoked in LP reporting, due diligence memos, theory-of-change documents, and regulatory filings. The challenge is no longer adoption. It is operationalization.
IMP Operationalization Principles · 2026
Six principles that separate IMP-applied from IMP-labeled
What changes when the Five Dimensions move from report section headers to a scoring architecture that drives DD, monitoring, and LP reporting decisions.
01
Rubric Scoring
Treat the Five Dimensions as a rubric, not a taxonomy
The framework's power is in scoring, not labeling. A fund-specific rubric anchored to all five dimensions — with weights that reflect the fund's thesis — is what distinguishes impact-generating investments from investments that land in socially valued sectors.
△Section headers are free. Rubrics are infrastructure.
02
Evidence Citation
Every proposed dimension score must cite its source passage
"Why did you score Contribution a 3?" should return a sentence from the founder interview transcript, not a number someone typed in a spreadsheet. Citation discipline turns dimension scoring from subjective to defensible — and makes LP conversations evidence-based.
△A score without a citation is a guess wearing a number.
03
Persistent IDs
Persistent investee and participant IDs from first measurement
The Duration dimension only works if the Who at intake is the same Who at follow-up. Persistent IDs at first contact are the architectural requirement. Without them, Duration is either reconstructed expensively or quietly skipped.
△New IDs per measurement cycle erase Duration evidence permanently.
04
Counterfactual Evidence
Mine Contribution evidence from stakeholder voice at DD
Most funds skip Contribution because counterfactual data is "hard to collect." In reality, much of it already exists in DD interview transcripts — stakeholder attribution language, market-gap assertions, comparison-group hints. AI analysis at portfolio scale makes this evidence systematically scorable for the first time.
△Contribution data you already have is worth more than a survey you'll never field.
05
Risk Monitoring
Risk flags at DD must become monitoring early-warning criteria
The Risk dimension fails operationally when DD risk identification is a paragraph in the memo that never appears again. Infrastructure closes the loop: ongoing narrative data is pattern-matched against DD risk categories — before the risk shows up in quantitative shortfalls.
△Risk identification without risk monitoring is compliance theater.
06
Compounding
Dimension scores compound across DD, monitoring, and LP reporting
Scores set at DD become the baseline. Quarterly submissions auto-reconcile against that baseline. LP reports roll up from accumulated dimension evidence — not reconstructed from scratch each cycle. This is the architectural difference between IMP as decoration and IMP as intelligence.
△Scoring that resets each phase produces reports, not intelligence.
Impact Management Project framework: the Five Dimensions explained
What — outcomes the enterprise produces
The What dimension asks: what outcome does the enterprise produce for the people or planet it serves, and how important is that outcome to them? This is not the output (training delivered, product sold) but the outcome (skills gained, livelihood improved, emissions avoided). At due diligence, the What signal is the clarity and specificity of the investee's outcome definition and whether it distinguishes between outputs and genuine behavior or wellbeing change.
Funds vary in how they weight What. A climate-tech fund may prioritize SDG alignment and carbon outcome specificity; a financial-inclusion fund may weight outcome specificity and the quality of the behavioral-change definition. The What dimension becomes the benchmark against which all subsequent narrative submissions are reconciled — if quarterly reports describe outcomes that do not match the What established at DD, the mismatch can be surfaced.
Who — stakeholders experiencing outcomes
The Who dimension asks: which stakeholders experience the outcome, and how underserved are they relative to the outcome in question? Evidence at DD is demographic data at intake, a stated theory of equity, and mechanisms for verifying Who actually benefits versus Who was intended to benefit. A common failure pattern is investees who describe their target Who in theory but lack data infrastructure to verify Who actually received the outcome in practice.
The Who dimension establishes the cohort definition that cascades through every subsequent instrument. Surveys, follow-ups, and outcome reports all segment against Who established at DD. Without this discipline, equity gaps become invisible — an investee may deliver strong aggregate outcomes while systematically underserving the Who the fund originally invested for.
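As a concrete illustration of cohort-preserving segmentation, here is a minimal Python sketch. The field names and scores are hypothetical, not Sopact Sense's data model; the point is that every outcome record carries the Who attributes captured at intake, so an aggregate average can never quietly hide an equity gap.

```python
# Illustrative sketch only: field names and values are hypothetical.
from collections import defaultdict
from statistics import mean

# Each outcome record carries the Who attributes captured at intake,
# so results can always be segmented against the cohort the fund invested for.
records = [
    {"participant_id": "p-001", "underserved": True,  "outcome_score": 2.1},
    {"participant_id": "p-002", "underserved": False, "outcome_score": 3.4},
    {"participant_id": "p-003", "underserved": True,  "outcome_score": 1.8},
]

by_segment = defaultdict(list)
for r in records:
    by_segment[r["underserved"]].append(r["outcome_score"])

# The aggregate average can look healthy while segmentation exposes the gap.
overall = mean(r["outcome_score"] for r in records)
gap = mean(by_segment[False]) - mean(by_segment[True])
print(f"overall={overall:.2f}, equity_gap={gap:.2f}")
```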
How Much — scale, depth, and duration
The How Much dimension is actually three questions in one. Scale: how many people or how much of the planet experience the outcome. Depth: how significant the outcome is for each person or place affected. Duration: how long the outcome lasts.
Most funds cover Scale reasonably well — it's the headline number in pitch decks and annual reports. Depth and Duration are where evidence thins out. Depth requires per-beneficiary measurement, not aggregate. Duration requires longitudinal tracking, which means persistent investee and participant IDs from first measurement forward. Without that infrastructure, funds report Scale confidently and hand-wave Depth and Duration — which is how impact claims become indistinguishable from output claims.
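To make the persistent-ID requirement concrete, here is a minimal sketch, assuming hypothetical field names. Duration is only computable when the follow-up record joins back to the intake record on the same ID; a record with a fresh ID contributes nothing.

```python
# Hedged sketch: field names are illustrative. The point is the join key --
# the same persistent ID at intake and at follow-up is what makes Duration computable.
from datetime import date

intake = {"p-001": date(2024, 3, 1), "p-002": date(2024, 3, 15)}
followups = [
    {"participant_id": "p-001", "observed": date(2025, 3, 1), "outcome_present": True},
    {"participant_id": "p-002", "observed": date(2024, 9, 1), "outcome_present": False},
    {"participant_id": "p-999", "observed": date(2025, 1, 1), "outcome_present": True},
]

for f in followups:
    start = intake.get(f["participant_id"])
    if start is None:
        # A new ID per measurement cycle lands here: the Duration evidence is lost.
        continue
    if f["outcome_present"]:
        days = (f["observed"] - start).days
        print(f'{f["participant_id"]}: outcome persisted {days} days')
```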
Contribution — what would have happened anyway
The Contribution dimension asks: what would have happened without this enterprise? If the outcome would have occurred anyway — through a market process, a government program, or a competitor's intervention — the enterprise's contribution is small. If the outcome would not have occurred without the specific mechanism the enterprise provides, contribution is high. Evidence for Contribution is counterfactual reasoning: comparison-group data, waitlist analysis, or at minimum rigorous narrative attribution grounded in stakeholder voice.
Contribution is simultaneously the most important and most under-evidenced dimension. It is important because it distinguishes impact-generating investments from investments that happen to land in socially valued sectors. It is under-evidenced because counterfactual data is costly to collect and rarely available in DD documents. The opportunity — and where AI-native analysis changes the economics — is that much of the counterfactual reasoning funds need already exists in DD interview transcripts. It has never been systematically extracted and scored.
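A deliberately simplified sketch of what "systematically extracted" can mean in practice. A production system would use LLM analysis with citation trails rather than keyword rules; the cue list and function below are illustrative assumptions only.

```python
# Simplified sketch of mining attribution language from DD transcripts.
# Cue patterns are made up for illustration; real systems would use
# model-based extraction with source citations.
import re

COUNTERFACTUAL_CUES = [
    r"would not have\b", r"no other (option|provider|lender)\b",
    r"only (way|program|product)\b", r"before (them|this program)\b",
]

def contribution_passages(transcript: str) -> list[str]:
    """Return sentences containing counterfactual cues, as citable evidence."""
    sentences = re.split(r"(?<=[.!?])\s+", transcript)
    return [s for s in sentences
            if any(re.search(cue, s, re.IGNORECASE) for cue in COUNTERFACTUAL_CUES)]

transcript = ("We would not have gotten a loan anywhere else. "
              "The bank branch closed in 2019. Before this program, we paid cash.")
for passage in contribution_passages(transcript):
    print("evidence:", passage)
```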
Risk — what could prevent expected impact
The Risk dimension asks: what factors could cause the impact to differ from expectations? Categories typically include evidence risk (the evidence may be too weak to support the claim), external risk (market, political, or regulatory factors), and participation risk (stakeholders may not engage as expected, or may stop engaging). At DD, the Risk signal is the completeness of risk identification, the quality of mitigation plans, and historical risk-management evidence.
In most funds' data, Risk shows up as a paragraph of narrative in the DD memo and then never appears structurally again. The failure mode is that risk flags identified at DD do not become early-warning criteria in ongoing monitoring. A well-designed impact data infrastructure surfaces signals in qualitative quarterly data that match the Risk categories flagged at DD — before they appear in quantitative metric shortfalls.
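One way to close that loop, sketched below with hypothetical categories and cue phrases: the DD risk taxonomy is stored as monitoring patterns and run against each quarterly narrative, so a match surfaces as an early warning instead of waiting for a metric shortfall.

```python
# Illustrative sketch: categories and cues are hypothetical, not a real taxonomy.
DD_RISK_FLAGS = {
    "participation": ["drop-off", "attendance fell", "stopped engaging"],
    "external": ["regulation", "license", "subsidy cut"],
    "evidence": ["survey response rate", "could not verify"],
}

def early_warnings(narrative: str) -> list[tuple[str, str]]:
    """Match quarterly narrative text against risk categories flagged at DD."""
    text = narrative.lower()
    return [(category, cue)
            for category, cues in DD_RISK_FLAGS.items()
            for cue in cues if cue in text]

q3 = "Attendance fell in the northern cohort after the subsidy cut was announced."
for category, cue in early_warnings(q3):
    print(f"early warning [{category}]: matched '{cue}'")
```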
Step 1: The Taxonomy Trap — why most funds label instead of score
The Taxonomy Trap is the structural failure where a fund adopts the Five Dimensions as labels for report sections rather than as a scoring architecture that drives decisions. The symptom is that the dimensions organize how impact content is presented in LP reports but do not change how investees are selected, monitored, or compared. Under the trap, dimensions are narrative scaffolding, not analytical infrastructure.
Operationalizing the Five Dimensions as a scoring architecture requires three properties most funds lack. First, a fund-specific rubric anchored to all five dimensions — not just What and Scale. Second, evidence linkage so every proposed score cites the specific document passage that supports it. Third, score compounding — DD scores become the baseline against which portfolio monitoring data is reconciled, and monitoring data becomes the evidence base for LP-ready dimension analysis.
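The three properties can be made concrete in a few lines. The structures below are a hedged sketch with invented names, not Sopact Sense's actual data model: a thesis-aligned weight table, a score object that carries its citation, and a composite that rejects uncited scores.

```python
# Minimal sketch of rubric + evidence linkage + a compoundable baseline.
from dataclasses import dataclass

@dataclass
class DimensionScore:
    dimension: str      # "what" | "who" | "how_much" | "contribution" | "risk"
    score: float        # rubric scale, e.g. 1-5
    citation: str       # the source passage that justifies the score
    source_doc: str     # where the passage came from

# Thesis-aligned weights (illustrative); they should sum to 1.
RUBRIC_WEIGHTS = {"what": 0.2, "who": 0.2, "how_much": 0.2,
                  "contribution": 0.25, "risk": 0.15}

def composite(scores: list[DimensionScore]) -> float:
    assert all(s.citation for s in scores), "a score without a citation is a guess"
    return sum(RUBRIC_WEIGHTS[s.dimension] * s.score for s in scores)

dd_baseline = [
    DimensionScore("contribution", 3.0,
                   "We would not have gotten a loan anywhere else.",
                   "founder_interview.pdf"),
    # ...scores for the other four dimensions would complete the baseline
]
```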
Sopact Sense is built around this architecture. The Five Dimensions are the scoring frame; the investee's DD documents, quarterly submissions, and stakeholder data are the evidence; and the intelligence layer produces dimension-level intelligence that compounds across the investment lifecycle.
Five Dimensions · Grouped by Evidence Type
Each dimension group requires a different kind of evidence infrastructure
Not all five dimensions break in the same way. Stakeholder-voice dimensions fail for one reason. Outcome dimensions fail for a different reason. Analytical dimensions fail for a third. This is why a single "impact reporting" pipeline cannot operationalize IMP — the infrastructure has to be dimension-aware.
Who and Depth both require stakeholder voice. Who asks: which people experience the outcome and how underserved are they. Depth asks: how significant is the outcome for each person affected. Both require demographic segmentation at intake, qualitative analysis of lived experience, and cohort-preserving tracking. At portfolio scale these dimensions historically break — most funds capture Who demographically and leave Depth as narrative paragraphs that never get systematically analyzed.
Moment01
Voice collected
Surveys · interviews · stakeholder calls
Moment02
Who segmentation
Demographic layers · underserved status
Moment03
Depth scored
Per-person significance · citation trail
Traditional Stack
Who demographic-only, Depth in narrative
Who captured demographically — never verified in practice
Depth appears as anecdotes in PDFs
Qualitative data sits in Excel, never analyzed at scale
Equity gaps invisible until the cohort is lost
Each fund does bespoke, non-comparable analysis
With Sopact Sense
Structured Who + scorable Depth at portfolio scale
Structured Who at intake + verification in practice
Depth scored from stakeholder voice with source citations
AI analyzes qualitative responses across entire portfolio
Cohort-level equity signals surface early — not post-mortem
Cross-fund comparable evidence structures
What and Scale are the dimensions most funds already cover adequately — they live in indicators and headline metrics. Duration is the underestimated challenge: it requires persistent participant IDs from first measurement through long-term follow-up, and that infrastructure rarely exists in CRM-style portfolio systems. The result is reasonable Scale data, weak Duration data, and a reporting layer that hand-waves through outcome persistence claims.
Moment01
Outcome defined
Indicators · IRIS+ mapping
Moment02
Measured at scale
Portfolio-wide consistency
Moment03
Longitudinal follow-up
Same ID · same cohort · multi-year
Traditional Stack
Strong Scale. Duration is a reconstruction project.
Indicators tracked in scattered systems
Scale is the headline; Duration is inferred from vibes
No persistent IDs = cohort breaks at every measurement cycle
Commitment vs. actual delivery reconciled by hand
Depth sized only after the fact — never during collection
With Sopact Sense
Durable outcomes tied to persistent participant IDs
IRIS+ indicators mapped at DD, enforced portfolio-wide
Portfolio-wide measurement structure for comparability
Persistent IDs from first application through long-term follow-up
Auto-reconciliation of commitment vs. actual — quarterly
Depth captured alongside Scale, not inferred afterward
Contribution and Risk are the dimensions funds most often skip — not because they don't matter, but because the evidence requirements are harder. Contribution requires counterfactual reasoning. Risk requires a risk-category taxonomy applied at DD and actively monitored. These are also the dimensions where AI-native analysis changes the economics most dramatically — both are answerable from evidence funds already collect in DD interviews and quarterly narratives.
Moment01
Counterfactual at DD
Attribution from founder interviews
Moment02
Risk taxonomy applied
Categories · mitigation · historical evidence
Moment03
Continuous monitoring
Narrative signals pattern-matched to DD categories
Traditional Stack
Contribution and Risk are paragraphs that never return
Contribution = narrative paragraph in IC memo
Risk = bullet list in DD memo that never appears again
Counterfactual evidence skipped as "too hard to collect"
LP question "why additional?" returns a story, not analysis
With Sopact Sense
Contribution and Risk scored from evidence funds already have
Contribution scored from DD interview transcripts
Risk taxonomy active from DD through every quarterly submission
AI surfaces counterfactual language systematically
Narrative signals pattern-matched to DD risk categories early
Contribution claims cross-checked against stakeholder voice
Step 2: Evidence infrastructure for each dimension
Each of the Five Dimensions has a distinct evidence profile, which is why operationalizing them requires data infrastructure designed at the dimension level — not a single "impact report" pipeline.
What and Scale are primarily quantitative and output-adjacent. They rely on well-defined outcome indicators, consistent metric definitions across the portfolio, and reliable reporting cadence. These are the dimensions most funds already cover adequately.
Who and Depth require stakeholder-voice infrastructure. Demographic segmentation at intake, cohort-preserving longitudinal tracking, and qualitative analysis of stakeholder experience are the baseline. This is where AI-native analysis changes the economics — Depth in particular has historically been inferred from narrative and is now directly scorable when AI reads stakeholder interviews at portfolio scale.
Duration requires persistent participant and investee IDs from first measurement through long-term follow-up. Without persistent IDs, Duration is either reconstructed (expensively, with gaps) or skipped entirely.
Contribution requires counterfactual reasoning infrastructure — comparison-group data where available, structured attribution analysis from stakeholder voice, and disciplined application of the fund's additionality criteria to DD documents. Much of this lives in interview transcripts at DD and has historically been unstructured.
Risk requires a risk-category taxonomy applied at DD and revisited in ongoing monitoring. The signal is pattern matching: ongoing narrative data surfaces language that matches DD risk categories — sometimes months before metrics confirm the risk has materialized.
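A compact way to express "dimension-aware infrastructure" is a registry that declares, per dimension, the evidence type and instrument required, and validates a data plan against it. Everything below is an illustrative sketch with made-up labels.

```python
# Sketch of a dimension-aware evidence registry: each dimension declares what
# it needs, so a single generic "impact report" pipeline can't be silently
# substituted for five distinct evidence profiles.
EVIDENCE_PROFILES = {
    "what":         {"evidence": "outcome indicators",   "instrument": "metric definitions + cadence"},
    "who":          {"evidence": "stakeholder voice",    "instrument": "intake demographics + interviews"},
    "how_much":     {"evidence": "scale/depth/duration", "instrument": "persistent-ID longitudinal tracking"},
    "contribution": {"evidence": "counterfactual",       "instrument": "DD transcripts + comparison groups"},
    "risk":         {"evidence": "risk taxonomy",        "instrument": "DD flags + narrative monitoring"},
}

def validate(collected: dict[str, bool]) -> list[str]:
    """Return dimensions whose required evidence is missing from a data plan."""
    return [d for d in EVIDENCE_PROFILES if not collected.get(d, False)]

print(validate({"what": True, "how_much": True}))  # Who, Contribution, Risk gaps
```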
Step 3: How five-dimension scoring compounds through the investment lifecycle
Five-dimension scoring becomes operationally powerful when scores compound across phases rather than reset. At Phase 1 — due diligence — the fund scores the investee across all five dimensions from DD documents, with citations. At Phase 2 — living theory of change — those scores become the baseline against which quarterly submissions are automatically reconciled. At Phase 3 — LP reporting — dimension-level intelligence is aggregated across the portfolio and rolled up into LP-ready analysis.
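A minimal sketch of what auto-reconciliation against a DD baseline looks like, assuming hypothetical metrics and a simple tolerance band. A real system would reconcile narrative evidence too; this shows only the quantitative spine.

```python
# Sketch of Phase 2 reconciliation: the DD baseline is written once, and each
# quarterly submission is classified against it instead of re-read from scratch.
def reconcile(baseline: dict[str, float], quarterly: dict[str, float],
              tolerance: float = 0.1) -> dict[str, str]:
    """Classify each metric as on track, below, or above its DD commitment."""
    status = {}
    for metric, committed in baseline.items():
        actual = quarterly.get(metric)
        if actual is None:
            status[metric] = "missing"
        elif actual < committed * (1 - tolerance):
            status[metric] = "below commitment"
        elif actual > committed * (1 + tolerance):
            status[metric] = "above commitment"
        else:
            status[metric] = "on track"
    return status

dd_commitments = {"loans_disbursed": 1200, "avg_income_gain_pct": 15.0}
q2_actuals = {"loans_disbursed": 980}
print(reconcile(dd_commitments, q2_actuals))
```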
This is the architecture the Impact Measurement and Management workflow describes in full. The Five Dimensions are the scoring lens; the three-phase architecture is how that lens stays focused across the entire investment lifecycle.
Without this compounding, every phase re-reads the same documents. The DD risk flag is not surfaced in Q3 monitoring. The contribution claim from the founder interview is not compared to the Q2 attribution narrative. LP reports get assembled by hand, at speed, from scratch — and most of what was learned at DD never makes it into the LP story.
IMP as Taxonomy vs. IMP as Scoring Architecture
What changes when the Five Dimensions drive decisions instead of organizing report sections
Dimension-by-dimension comparison across the way most funds apply IMP today and the way an operational scoring architecture handles each dimension.
Risk 01
Labels without scoring
Dimensions organize LP report sections but do not change how investees are selected, monitored, or compared. Section headers are free.
This is The Taxonomy Trap in its purest form.
Risk 02
DD scoring that doesn't compound
Scoring set at IC meeting disappears by Q2 monitoring. Every phase re-reads the same documents. Intelligence resets instead of accumulating.
One-time DD scoring is decorative. Compounded DD scoring is intelligence.
Risk 03
Contribution and Risk under-evidenced
The two dimensions that most distinguish impact-generating investments are the two most often left as narrative. The analytical layer never arrives.
Skipping Contribution is skipping the impact case itself.
Risk 04
LP reports rebuilt every cycle
Four to six days of analyst time per investee, per quarter, reconstructing narrative from scratch. Nothing accumulated. Nothing reused.
The hand-assembly cost is the Taxonomy Trap's invoice.
Dimension-by-Dimension
Five Dimensions as labels vs. Five Dimensions as a scoring architecture
Dimension
As Taxonomy (typical fund)
As Scoring Architecture (Sopact Sense)
D1
What
The outcome the enterprise produces
A section header in the LP report
Outcome description pasted from pitch deck. No distinction between outputs and outcomes. No alignment check against the fund's impact thesis.
Scored per fund rubric — outcome specificity, output vs. outcome, thesis alignment, SDG/IRIS+ category match
Every proposed score cites the DD passage that supports it. Becomes the benchmark against which every quarterly narrative is auto-reconciled.
D2
Who
Stakeholders experiencing the outcome
Demographic categorization — unverified in practice
Target stakeholder described at DD. No mechanism to verify Who actually benefits vs. Who was intended. Equity gaps invisible.
Structured Who at intake + underserved-status multiplier + cohort-preserving tracking through every subsequent instrument
Stakeholder profile becomes cohort definition for every survey, follow-up, and outcome report. Equity gaps surface before they compound.
D3
How Much
Scale · Depth · Duration
Scale reported. Depth and Duration inferred from narrative.
No persistent participant IDs = Duration is reconstruction. Depth sized per-cohort, not per-beneficiary. Commitment vs. actual reconciled by hand.
Scale + Depth + Duration captured structurally with persistent IDs from first measurement through long-term follow-up
DD commitments become quarterly monitoring baseline. Every metric submission auto-scored: on track · below · above commitment.
D4
Contribution
What would have happened anyway
A narrative paragraph in the IC memo, when it appears at all
Counterfactual evidence skipped as "too hard." LP question "why additional?" returns a story, not analysis.
Scored from DD interview transcripts using fund's additionality criteria — counterfactual language extracted systematically
Contribution claims cross-referenced against quarterly stakeholder narratives. Causal language flagged and scored automatically.
D5
Risk
What could reduce expected impact
A bullet list in the DD memo — never appears downstream
Risk flags don't trigger monitoring alerts. Risk materializes in metric shortfalls before anyone connects it to the original identification.
Risk taxonomy active from DD through every quarterly submission — pattern matching on narrative signals
Narrative signals that match DD risk categories surface as early warnings — before quantitative metrics confirm the risk has materialized.
The Five Dimensions are a question structure. The answer structure — the data infrastructure, the compounding, the citation trails — is where Sopact Sense lives. The framework is widely adopted. The architecture that applies it is rare.
Step 4: Applying IMP with IRIS+, OPIM, and standard reporting frameworks
The Five Dimensions are not in competition with IRIS+, OPIM, or any of the major reporting standards — they are the structural layer underneath. IRIS+ provides metric definitions that populate What and How Much. OPIM principles describe management practices that address Risk and institutional governance. SDG alignment sits as a tagging layer over What. UN Guiding Principles and the Five Dimensions reinforce one another around Who and stakeholder voice.
In practice, this means a fund's impact measurement system should let the Five Dimensions frame the questions, IRIS+ (or equivalent) provide the metrics, and OPIM (or the fund's own principles) govern the management practices. A unified data infrastructure captures evidence once and rolls it up into whichever framework or standard an LP requires — rather than running multiple parallel reporting pipelines.
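The "capture once, roll up anywhere" pattern can be sketched as a single evidence record carrying framework tags, with each standard a projection over the same data. The tag values below are placeholders, not real IRIS+ metric IDs or official code formats.

```python
# Sketch of capture-once, report-many: the record is stored once; each
# framework view is a projection over its tags. Identifiers are placeholders.
record = {
    "investee_id": "inv-014",
    "dimension": "how_much",
    "metric": "clients_reached",
    "value": 4200,
    "tags": {"iris_plus": "PLACEHOLDER-METRIC-ID", "sdg": "SDG-1.4", "opim": "principle-4"},
}

def rollup(records: list[dict], framework: str) -> dict[str, float]:
    """Aggregate the same evidence under whichever framework an LP requires."""
    totals: dict[str, float] = {}
    for r in records:
        key = r["tags"].get(framework)
        if key:
            totals[key] = totals.get(key, 0) + r["value"]
    return totals

print(rollup([record], "sdg"))  # -> {'SDG-1.4': 4200}
```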
Step 5: Common mistakes that kill IMP-aligned measurement
Mistake 1: Using the Five Dimensions as report section headers only. The Taxonomy Trap in its purest form. Labels are free; scoring is infrastructure. A fund whose Five Dimensions appear only in report layout has not operationalized the framework.
Mistake 2: Scoring only What and Scale, hand-waving the rest. These are the dimensions with the most available data and the least analytical value for distinguishing impact-generating investments. The investments worth making are typically the ones where Contribution is high and Risk is manageably addressed — the two dimensions funds most often skip.
Mistake 3: Treating DD scoring as a one-time event. DD scoring that does not persist into monitoring and LP reporting is decorative. The scoring's value is in what it enables downstream — not in what it signals at IC meeting.
Mistake 4: Collecting Duration data without persistent participant IDs. Duration requires that the Who at intake is the same Who at follow-up. Without persistent IDs at first contact, Duration becomes a reconstruction project.
Mistake 5: Requiring investees to fill out new portals. Most investees already produce the data funds need — in board decks, interview transcripts, existing CRM systems, surveys they already run. Infrastructure that requires investees to adopt the fund's new tool invariably produces compliance data, not honest data. The better design reads what investees already produce.
Masterclass
The Five Dimensions of Impact — IMP framework, applied end-to-end
The Five Dimensions of Impact are a universal structure for assessing impact, developed by the Impact Management Project between 2016 and 2020. They are: What (the outcome the enterprise produces), Who (the stakeholders experiencing it), How Much (scale, depth, and duration of the outcome), Contribution (what would have happened without the enterprise), and Risk (factors that could cause the impact to differ from expectations). The framework is now embedded in IRIS+, OPIM, and most major impact reporting standards.
What is the Impact Management Project (IMP)?
The Impact Management Project was a time-bound forum that ran from 2016 to 2020, convening more than 2,000 organizations to build consensus on impact measurement. Its core output was the Five Dimensions framework. When the IMP concluded, successor organizations — most notably Impact Frontiers — took on stewardship of the Five Dimensions and continue to publish implementation guidance.
What do What, Who, How Much, Contribution, and Risk mean in the IMP framework?
Each dimension answers a core question about an enterprise's impact. What: what outcome does the enterprise produce, and how important is it. Who: which stakeholders experience the outcome, and how underserved are they. How Much: scale (how many), depth (how significant per person), and duration (how long-lasting). Contribution: what would have happened without the enterprise. Risk: what factors could prevent the expected impact. Together they produce a structured, comparable assessment of impact across different enterprises and sectors.
What is The Taxonomy Trap?
The Taxonomy Trap is adopting the Five Dimensions as section headers in reports rather than as a scoring architecture that drives decisions. Under the trap, dimensions organize how impact content is presented but do not change how investees are selected, monitored, or compared. The fix is a fund-specific rubric anchored to all five dimensions, evidence citation on every proposed score, and score compounding across due diligence, portfolio monitoring, and LP reporting.
What is the difference between IMP and IRIS+?
The Five Dimensions framework (from the IMP) is a question structure — it defines the categories of evidence needed for impact assessment. IRIS+ is a taxonomy of specific metric definitions. They are complementary: the Five Dimensions frame what to ask; IRIS+ provides standardized metrics that populate the What and How Much answers. Most rigorous fund impact systems use both, with the Five Dimensions as the structure and IRIS+ as the metric dictionary.
What kind of data do I need for each dimension?
What and How Much (Scale) are primarily quantitative outcome data — indicators, definitions, reporting cadence. Who and Depth require stakeholder voice — demographic segmentation, qualitative analysis, cohort tracking. Duration requires persistent participant and investee IDs. Contribution requires counterfactual reasoning — comparison-group data where available, structured attribution from stakeholder narrative otherwise. Risk requires a risk-category taxonomy applied at DD and revisited in monitoring. Each dimension has a distinct evidence profile, which is why operationalizing the framework requires dimension-aware data infrastructure.
How does Contribution differ from impact attribution?
Contribution and attribution are closely related but not identical. Attribution asks: "Did this specific intervention cause this specific outcome?" — a causal claim. Contribution asks: "What would have happened without the intervention?" — a counterfactual claim. Contribution is typically more practical to evidence because it does not require randomized experimental design; structured stakeholder voice and comparison-group data are usually sufficient. The Five Dimensions framework prefers Contribution precisely because it is operationally tractable.
Why is Risk included as an impact dimension?
Risk is included because the probability that actual outcomes differ from expectations is itself a material part of the impact profile. A high-scoring impact claim with high evidence risk (the evidence might be wrong), high external risk (market or regulatory factors), or high participation risk (stakeholders might not engage as expected) is not the same as a moderate claim with low risk. The Risk dimension forces funds to assess impact as probability-weighted, not just point-estimated.
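One illustrative way to see why this matters is simple arithmetic; the weighting scheme below is an assumption for exposition, not a formula the IMP prescribes.

```python
# Illustrative only: probability-weighting an impact claim.
claim_score = 4.0      # rubric score for the expected impact
p_materialize = 0.6    # combined evidence/external/participation confidence
risk_adjusted = claim_score * p_materialize
print(risk_adjusted)   # 2.4 -- a strong claim held with low confidence ranks
                       # below a moderate claim held with high confidence
                       # (e.g. 3.0 * 0.9 = 2.7)
```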
Is the Impact Management Project still active?
The Impact Management Project itself completed its mandate in 2020. The work streams continue through successor organizations. Impact Frontiers is the primary steward of the Five Dimensions framework today and publishes ongoing implementation guidance. The framework itself is widely adopted and actively maintained through these successor structures.
How does Sopact Sense operationalize the IMP Five Dimensions?
Sopact Sense applies the Five Dimensions as a scoring architecture, not a taxonomy. DD documents are read end-to-end by AI and scored against a fund-specific rubric anchored to all five dimensions, with citation trails on every proposed score. Those scores become the baseline for portfolio monitoring — quarterly submissions are auto-reconciled against the DD baseline, gaps and anomalies surfaced. Dimension-level intelligence rolls up into LP-ready reports automatically. The same evidence infrastructure serves DD, monitoring, and LP reporting without re-keying data. Built for impact-fund use cases covered in Impact Measurement and Management.
How does this relate to Sopact's Impact Measurement and Management workflow?
This page covers the Five Dimensions framework itself — what the dimensions are, what evidence each requires, and how they are scored. The Impact Measurement and Management page covers the architecture that operationalizes the framework — the three-phase workflow (due diligence → living theory of change → LP reporting) and the data sources that feed the intelligence layer. Together the two pages describe the framework and the architecture required to apply it at portfolio scale.
How much does it cost to operationalize the Five Dimensions with Sopact Sense?
Sopact Sense pricing scales with fund size and portfolio complexity. Most impact funds operating in the 10–50 investee range see a dramatic reduction in reporting cycle time and analyst hours within the first two quarterly cycles. Request a walkthrough for pricing specific to your fund's portfolio size and reporting cadence.
Close the Taxonomy Trap
Operationalize the IMP Five Dimensions — as scoring architecture, not section headers.
Fund-specific rubric anchored to all five dimensions. Citation trails on every proposed score. Scores that compound across due diligence, portfolio monitoring, and LP reporting — not reset every cycle. Built for climate, financial inclusion, workforce, and any impact fund aligned with Impact Frontiers guidance.
Fund-specific rubric across all five dimensions with flexible, thesis-aligned weights
Evidence citations on every AI-proposed score — audit-defensible by default
Scores compound across DD, quarterly monitoring, and LP reporting — no resets