
Five Dimensions of Impact: Complete Guide to IMP Framework

Learn how to measure impact using IMP's Five Dimensions framework—What, Who, How Much, Contribution, and Risk


Author: Unmesh Sheth

Last Updated: March 12, 2026

Founder & CEO of Sopact with 35 years of experience in data systems and AI

Five Dimensions of Impact: Due Diligence Scoring, Portfolio Aggregation, and LP Intelligence

By Unmesh Sheth, Founder & CEO, Sopact

Why the Five Dimensions Fail Without a Scoring Architecture

The five dimensions of impact — What, Who, How Much, Contribution, and Risk — are the most widely adopted framework for structuring impact evidence. Developed by the Impact Management Project and now embedded in IRIS+, OPIM, and virtually every major reporting standard, they give every organization a universal language for defining, measuring, and comparing impact.

But there is a gap between adopting the five dimensions as a taxonomy and deploying them as a working intelligence system. Most funds use the five dimensions to label their annual report sections. The highest-performing funds use them to score investees at due diligence, monitor against those scores every quarter, and aggregate those scores across a portfolio of twenty to two hundred companies — automatically.

The Five Dimensions of Impact — IMP Framework Explained

Masterclass · Unmesh Sheth, Founder & CEO, Sopact

Watch: 5 Dimensions of Impact (IMP Framework)
  • 5 universal dimensions: What, Who, How Much, Contribution, Risk
  • 2,000+ organizations consulted to build the IMP framework
  • 3 phases where scoring compounds: DD → TOC → LP Report
The gap: Most funds adopt the five dimensions as a reporting taxonomy. The highest-performing funds use them to score investees at due diligence — and carry that intelligence forward automatically through every LP quarterly report.

This article is about that gap. It covers how proprietary scoring aligned with the five dimensions works at each stage of the investment lifecycle, how context from due diligence compounds rather than resets, and how Sopact's AI infrastructure makes portfolio-level intelligence operationally practical for the first time without requiring investees to fill out new portals or adopt new systems.

Proprietary Scoring Aligned with the Five Dimensions

The five dimensions give you the question structure. They do not tell you how to weight answers, what evidence to accept as sufficient, or how to compare a climate-tech company against a workforce development nonprofit in the same portfolio.

That is where proprietary scoring comes in. A fund's impact rubric — its internal scoring methodology — defines the specific criteria, weight distributions, and evidence thresholds that translate five-dimension questions into actionable scores. This is what makes the framework fund-specific without making it incomparable.

The most powerful rubrics share three properties. First, they are anchored to the five dimensions — every criterion maps back to one of the five questions so scoring remains coherent across portfolio companies and across reporting periods. Second, they are flexible at the criterion level — different investee sectors, impact theses, and stages carry different weights, and the rubric should reflect that. Third, they are evidence-linked — every score cites the specific document, passage, or survey response that supports it, so when an analyst or LP asks "why did you score Contribution a 3?", the answer is a sentence from the DD interview transcript, not a number someone typed in a spreadsheet.
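To make the evidence-linked property concrete, here is a minimal Python sketch of how a weighted, citation-carrying rubric score might be computed. The data model (`CriterionScore`, `dimension_scores`) and the example criteria, weights, and citations are illustrative assumptions, not Sopact's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical data model for an evidence-linked rubric: each criterion score
# carries its fund-specific weight and a citation to the supporting passage.
@dataclass
class CriterionScore:
    criterion: str   # e.g. "Counterfactual rigor"
    dimension: str   # one of: What, Who, How Much, Contribution, Risk
    score: float     # 0-5 rating from analyst or AI review
    weight: float    # fund-specific weight for this criterion
    evidence: str    # citation to the source document or passage

def dimension_scores(criteria: list) -> dict:
    """Weighted average per dimension; every input traces back to evidence."""
    totals = {}
    for c in criteria:
        num, den = totals.get(c.dimension, (0.0, 0.0))
        totals[c.dimension] = (num + c.score * c.weight, den + c.weight)
    return {dim: round(num / den, 2) for dim, (num, den) in totals.items()}

# Illustrative scoring of the Contribution dimension for one investee.
scores = [
    CriterionScore("Counterfactual rigor", "Contribution", 3.0, 0.6,
                   "DD interview transcript, p. 7"),
    CriterionScore("Uniqueness of mechanism", "Contribution", 4.0, 0.4,
                   "Impact thesis memo, section 2"),
]
print(dimension_scores(scores))  # {'Contribution': 3.4}
```

Because each score object keeps its citation, the answer to "why did Contribution get a 3.4?" is always a retrievable passage, not a bare number.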

Proprietary Five-Dimensions Scoring Rubric

How impact funds translate the IMP framework into actionable DD scoring — with flexibility at the criterion level and coherence at the portfolio level

D1 · What
Core questions at DD: What outcome does the investee claim to create? Is it an output or a genuine behavior/wellbeing change? Does evidence in the DD package support the claim?
Criteria (fund-adjustable): Specificity of outcome definition · Distinction between outputs and outcomes · Alignment with fund's impact thesis · SDG / IRIS+ category match. Climate funds may weight SDG alignment higher; financial inclusion funds weight outcome specificity higher.
What compounds forward: The outcome definition becomes the benchmark against which all quarterly narrative submissions are reconciled. New outcome claims are validated against this baseline.

D2 · Who
Core questions at DD: Who are the target beneficiaries? How underserved are they? Does the investee have demographic data at intake that tracks who actually benefits versus who was intended?
Criteria (fund-adjustable): Underserved status of target group · Depth of equity lens · Demographic data quality at intake · Mechanism for verifying "who" in practice versus in theory. DEI-mandate funds apply an underserved-status multiplier; general funds weight data quality more equally.
What compounds forward: The stakeholder profile becomes the cohort definition for every subsequent survey, follow-up, and outcome report. Equity gaps are trackable across the full investment lifecycle.

D3 · How Much
Core questions at DD: What are the specific outcome commitments — scale, depth, duration? Are they measurable? What is the track record showing actual delivery versus stated projections?
Criteria (fund-adjustable): Commitment specificity · Quantitative track record · Follow-up measurement plan · Depth per beneficiary vs. headline scale. Early-stage funds accept lower track-record evidence; growth-stage funds require a demonstrated delivery history.
What compounds forward: DD commitments become the quarterly monitoring baseline. Every metrics submission is auto-scored: on track, below commitment, or above commitment.

D4 · Contribution
Core questions at DD: What would happen without this investee? Is the intervention unique or substitutable? What evidence in the DD package demonstrates an additive effect rather than a background trend?
Criteria (fund-adjustable): Counterfactual rigor · Uniqueness of mechanism · Stakeholder attribution evidence · Comparison group or waitlist data availability. This is the highest-weight dimension for additionality-focused funds, and often the least evidenced — AI analysis of interview transcripts closes this gap.
What compounds forward: Contribution claims from DD are cross-referenced against qualitative narrative submissions each quarter. Causal language in reports is flagged and scored automatically.

D5 · Risk
Core questions at DD: What are the primary risks to impact delivery — evidence risk, external risk, participation risk? How has the investee managed similar risks previously?
Criteria (fund-adjustable): Risk identification completeness · Mitigation plan quality · Historical risk management evidence · Sector-specific risk flags (regulatory, political, supply chain). Risk weight adjusts by sector maturity and geography; emerging-market funds typically apply higher risk multipliers.
What compounds forward: DD risk flags become the early-warning criteria for ongoing monitoring. Sopact surfaces signals in qualitative data that match established risk categories — before they appear in quantitative metrics.
→ FLEXIBILITY PRINCIPLE: Criteria and weights are fund-specific. The five-dimension structure is universal. Both are necessary — flexibility without structure produces incomparable scores; structure without flexibility produces irrelevant ones.
Key insight: A well-designed rubric surfaces the Contribution and Risk evidence that DD teams already collect — in interview transcripts and narrative documents — but have never had the infrastructure to score consistently. Sopact's AI reads every document and applies your criteria, not generic ones.

Most funds have already done the hard intellectual work here. They have frameworks, investment theses, and impact criteria embedded in their IC memos and DD templates. What they lack is the infrastructure to apply those criteria consistently across hundreds of documents, carry scores forward through the investment lifecycle, and surface anomalies when new evidence contradicts established assessments.

How Context Compounds: From Due Diligence to LP Report

The reason most portfolio impact reporting is expensive, slow, and unreliable is not missing data. It is context that resets.

At due diligence, a team reads fifty to two hundred documents — pitch decks, impact theses, founder interviews, financial models — and builds a picture of what the investee claims to do and what evidence supports those claims. Then the investment closes, the analyst moves on, and fourteen months later someone opens that folder to write the Q3 LP narrative.

The context has reset to zero. Nobody remembers which version of the deck was final. The theory of change that was validated at DD is not connected to the Q2 outcomes report. The risk flag that appeared on page seven of the Q2 narrative was never connected to the risk dimension score from the DD interview. The LP narrative gets assembled by hand, from scratch, in a week.

This is not a workflow problem. It is an architecture problem. And the three-phase architecture that solves it is the same architecture that makes five-dimensions scoring genuinely useful rather than decorative.

Three-Phase Architecture: Where Context Compounds

How five-dimensions intelligence accumulates from first DD document through final LP narrative — nothing resets

Phase 01
DD Intelligence
Every document in the due diligence package, read and scored before the first IC meeting
  • 50–200 DD docs read and scored automatically
  • Five-dimensions rubric applied to every claim
  • Inconsistencies flagged with source citations
  • Investee profile built before IC review
  • Every finding linked to source passage
Context established: 20% — Baseline set
Phase 02
Living Theory of Change
DD profile becomes the baseline. Every quarterly submission reconciled against it automatically
  • Commitments extracted, IRIS+ indicators mapped
  • Shared data dictionary built with investee
  • New claims validated against DD baseline
  • Gaps auto-flagged before IC review
  • Risk signals connected to DD risk flags
Context established: 55% — Intelligence building
Phase 03
Quarterly Loop + LP Report
LP narratives generated from accumulated intelligence — six reports per investee, overnight
  • Each quarterly submission auto-scored vs. commitments
  • Longitudinal trend data across all five dimensions
  • Early warning flags surface before LP calls
  • Six LP-ready reports generated per investee
  • Every narrative claim traced to source document
Context established: 90%+ — Full picture across lifecycle
Phase 01 output: Scored DD Assessment. Five-dimensions score plus citation trail, delivered before the IC meeting.
Phase 02 output: Living TOC + Data Dictionary. Both parties aligned before the Q1 data submission.
Phase 03 output: 6 LP-Ready Reports per Investee. Generated overnight every quarter; zero manual assembly.
Without this architecture
  • Context resets at every stage
  • DD findings disconnected from monitoring
  • LP reports assembled by hand in a week
  • Risk signals caught after LP calls
  • Analyst departure erases institutional memory
With five-dimensions architecture
  • Every stage inherits the full prior record
  • DD commitments drive all quarterly scoring
  • LP reports generated before you open your laptop
  • Risk flags surface the day they appear in data
  • Intelligence outlasts any individual analyst
→ SOPACT PRINCIPLE: Intelligence should compound, not reset. The fund that builds this architecture in year one has a data advantage in year four that no reporting template can replicate.

Phase one is DD Intelligence. Every document in the due diligence package gets read, scored against the five-dimensions rubric, and stored as a queryable investee profile. The output is not just a score — it is a living intelligence layer that carries the full evidentiary basis for every finding.

Phase two is the Living Theory of Change. The moment investment closes, the DD profile becomes the baseline against which all future submissions are measured. When the Q1 narrative arrives, Sopact reconciles it against DD commitments automatically. Gaps are flagged. Progress is tracked against original commitments, not just against last quarter. The investee does not need to adopt new software — they keep sending PDFs, narrative updates, and survey data through normal channels.

Phase three is the Quarterly Loop. Each reporting period, Sopact reads every submission, scores it against the established rubric, updates the living investee profile, and generates six LP-ready reports per investee — overnight. Risk signals detected in qualitative data are flagged the day they appear, not six weeks later when someone finally gets to page seven.
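The auto-scoring step described above, where each metrics submission is classified as on track, below commitment, or above commitment, can be sketched as follows. The function name, the 5% tolerance band, and the sample commitments are hypothetical assumptions for illustration, not Sopact's actual logic:

```python
# Hypothetical sketch: classify each reported metric against the commitment
# captured at due diligence, using an assumed +/-5% tolerance band.
def score_vs_commitment(committed: float, reported: float,
                        tolerance: float = 0.05) -> str:
    """Return 'on track', 'below commitment', or 'above commitment'."""
    if reported < committed * (1 - tolerance):
        return "below commitment"
    if reported > committed * (1 + tolerance):
        return "above commitment"
    return "on track"

# Illustrative DD commitments and a Q3 submission (made-up numbers).
dd_commitments = {"households_served": 10_000, "jobs_created": 250}
q3_submission = {"households_served": 11_800, "jobs_created": 231}

flags = {metric: score_vs_commitment(dd_commitments[metric], reported)
         for metric, reported in q3_submission.items()}
print(flags)
# {'households_served': 'above commitment', 'jobs_created': 'below commitment'}
```

The point of the comparison is the baseline: each quarter is scored against the original DD commitment, not merely against the previous quarter's figures.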

Portfolio Aggregation: Reporting Across 20 to 200 Companies

Individual investee scoring is valuable. Portfolio-level aggregation is where five-dimensions infrastructure becomes a competitive capability.

LP reporting requires synthesizing outcomes across a portfolio — not just listing what each company reported, but constructing a coherent story about portfolio-wide impact. Most funds do this by reading every investee narrative and manually assembling a portfolio summary. The result is a document that takes weeks to produce, contains inconsistencies, and is already outdated when it lands with LPs.

Five-dimensions aggregation changes the unit of analysis. Instead of comparing investee narratives, you compare investee scores across a standardized rubric. You can answer questions like: which companies in the portfolio are delivering the strongest How Much evidence? Which ones show deteriorating Risk scores in their last two quarters? Across the clean energy cohort, what is the portfolio-wide beneficiary reach versus commitments?

Portfolio Aggregation: Five Dimensions Across the Fund

How individual investee scores roll up to portfolio-level intelligence — enabling LP reporting, capital allocation, and strategic decisions grounded in structured evidence

Portfolio scorecard — illustrative (5 of 44 investees shown) — Q3 2025
Investee (sector · stage) | What (D1) | Who (D2) | How Much (D3) | Contribution (D4) | Risk (D5) | QoQ Trend
Greenbridge Capital (Clean energy · Series B) | 4.2 / 5 | 4.0 / 5 | 3.8 / 5 | 2.9 / 5 | 3.7 / 5 | ↑ +0.4 vs Q2
Solaris Ventures (Financial inclusion · Seed) | 3.9 / 5 | 4.5 / 5 | 2.7 / 5 | 3.2 / 5 | 2.1 / 5 | ⚑ ↓ −0.6 vs Q2
Terra Fund (Agri-tech · Series A) | 4.1 / 5 | 3.4 / 5 | 4.2 / 5 | 3.8 / 5 | 4.0 / 5 | ↑ +0.7 vs Q2
Brightline Energy (Clean energy · Pre-seed) | 3.0 / 5 | 2.8 / 5 | 1.9 / 5 | 2.0 / 5 | 3.1 / 5 | → flat vs Q2
Uplift Health (Health equity · Series B) | 4.6 / 5 | 4.8 / 5 | 4.3 / 5 | 4.1 / 5 | 4.2 / 5 | ↑ +0.3 vs Q2
Portfolio average (44 investees) | 3.8 / 5 | 3.7 / 5 | 3.3 / 5 | 3.0 / 5 | 3.4 / 5 | ↑ +0.2 vs Q2
Portfolio finding: Contribution (D4) is the portfolio's weakest dimension. Average 3.0 — below all other dimensions. This signals that investees report outcomes but do not consistently provide additive-effect evidence; the LP narrative should address this gap directly.
Risk signal: Solaris Ventures' Risk score fell 0.6 points this quarter. A D5 flag was detected in the Q3 narrative — "delayed regulatory approval" language cross-references a risk flag from the original DD interview. Flagged for IC review before the LP call.
Capital allocation insight: the clean energy cohort leads on How Much delivery. Greenbridge and Terra Fund are outperforming Q3 commitments on D3 by an average of 18%, so the follow-on allocation decision has five-dimensions data backing it rather than narrative judgement.
→ LP REPORTING: Portfolio aggregation across five dimensions replaces manual narrative assembly. Every comparison is backed by structured scores — traceable to source documents — not analyst memory.

This is not about reducing impact to a single number. It is about having a structured vocabulary — the five dimensions — that makes comparison meaningful. A clean energy company and a workforce nonprofit are not comparable on most metrics, but they are comparable on Contribution quality, Risk management discipline, and the rigor of their Who evidence. Those are the dimensions where the rubric enables honest portfolio-level conversation.
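As a rough sketch of the roll-up logic, the scorecard's per-investee scores can be aggregated per dimension in a few lines of Python. The dict-of-lists structure is a hypothetical simplification (a real aggregation would carry weights and evidence links), with scores taken from the five illustrative investees shown earlier:

```python
from statistics import mean

# Scores copied from the five illustrative investees in the scorecard above;
# the data structure itself is a hypothetical simplification.
DIMENSIONS = ["What", "Who", "How Much", "Contribution", "Risk"]

portfolio = {
    "Greenbridge Capital": [4.2, 4.0, 3.8, 2.9, 3.7],
    "Solaris Ventures":    [3.9, 4.5, 2.7, 3.2, 2.1],
    "Terra Fund":          [4.1, 3.4, 4.2, 3.8, 4.0],
    "Brightline Energy":   [3.0, 2.8, 1.9, 2.0, 3.1],
    "Uplift Health":       [4.6, 4.8, 4.3, 4.1, 4.2],
}

# Portfolio average per dimension, then the weakest dimension fund-wide.
averages = {dim: round(mean(scores[i] for scores in portfolio.values()), 2)
            for i, dim in enumerate(DIMENSIONS)}
weakest = min(averages, key=averages.get)
print(averages)
print("Weakest dimension:", weakest)  # Contribution (average 3.2)
```

On these five rows the weakest dimension comes out as Contribution, consistent with the portfolio finding in the scorecard: the rubric makes that kind of cross-sector comparison a one-line query rather than a manual read of every narrative.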

The Intelligence Layer That Sits Alongside Your Systems

Sopact's approach is deliberately non-disruptive. It does not require investees to adopt new portals, change their reporting formats, or learn new tools. It reads the documents they already send — PDFs, spreadsheets, narrative reports, survey exports — and layers intelligence on top of your existing CRM and portfolio management systems.

Your Attio, Salesforce, HubSpot, or DealCloud instance stays your system of record. Sopact pulls context from it, enriches it with document intelligence, and returns scored assessments. No write permissions, no data migration, no IT project.

This matters because the barrier to five-dimensions scoring has never been the framework — it has been the operational cost of applying it consistently across hundreds of documents, every quarter, for every investee. When that cost approaches zero, the framework stops being a reporting taxonomy and becomes a genuine intelligence system.

Why Your Impact Portfolio Analysis Is Breaking

Your LP report is due in a week. Your team is opening due diligence folders last touched 14 months ago. The analyst who built them left in March. This masterclass shows the three-phase architecture that eliminates this problem — for funds managing 20 to 200+ portfolio companies.

Masterclass · Portfolio Intelligence Architecture · Unmesh Sheth, Sopact

"This is not about better spreadsheets or faster dashboards. This is about building a system where every piece of intelligence you create at due diligence compounds — automatically — all the way through to your LP quarterly packet."

— Unmesh Sheth, Founder & CEO, Sopact

The funds that build this infrastructure now are not just making LP reporting easier. They are building a longitudinal dataset — five-dimensions scores, evidence trails, risk signals, outcome trajectories — that compounds in value with every passing quarter. By year three, the intelligence gap between funds that did this and funds that did not is not a reporting efficiency difference. It is a data advantage that shapes investment decisions, fund strategy, and LP relationships.

Frequently Asked Questions

What is the five dimensions of impact framework for due diligence?

The five dimensions of impact — What, Who, How Much, Contribution, and Risk — provide the question structure for DD scoring in impact investing. At due diligence, each dimension maps to specific evidence types: What defines the impact thesis, Who identifies target stakeholders and their underserved status, How Much establishes outcome commitments and scale expectations, Contribution assesses whether the investee's activities genuinely cause the reported outcomes rather than riding background trends, and Risk flags threats to impact delivery. A fund's proprietary rubric translates these five questions into scored criteria, weighted to reflect the fund's investment thesis and sector focus.

How do impact funds build proprietary scoring aligned with the five dimensions?

Proprietary scoring starts by mapping the fund's existing IC criteria to the five dimensions — most funds have already done the intellectual work; it just exists in narrative IC memos rather than structured rubrics. Each dimension gets a set of criteria, evidence thresholds, and weights appropriate to the fund's portfolio. A climate-tech fund might weight How Much and Risk most heavily. A financial inclusion fund might weight Who and Contribution more. The rubric stays anchored to the five dimensions so portfolio-level aggregation remains coherent, while individual criteria flex to match investee context.

What does "context that compounds" mean for impact fund reporting?

Most portfolio reporting resets context at each stage — DD findings are not connected to quarterly narratives, which are not connected to LP reports. Context that compounds means every finding, score, and evidence trail from due diligence carries forward automatically into ongoing monitoring and LP reporting. When the Q3 narrative arrives, it is reconciled against DD commitments and Q1-Q2 baselines automatically. Risk signals are connected to risk flags from the original DD interview. LP narratives are generated from the full longitudinal record, not assembled from scratch each quarter.

How does Sopact score impact at the due diligence stage?

Sopact reads every document in the due diligence package — pitch decks, impact theses, financials, founder interviews, theory-of-change documents — and applies the fund's five-dimensions rubric to extract scored evidence. Every score is linked to the specific source passage that supports it. The output is a queryable investee profile with citation trails, not just a number. This profile becomes the baseline against which all future submissions are measured throughout the investment lifecycle.

How does portfolio aggregation work across the five dimensions?

Portfolio aggregation uses the five-dimensions rubric as a common scoring vocabulary across all investees, regardless of sector or geography. Sopact aggregates investee scores to produce portfolio-level views: which dimension shows the widest evidence gaps across the portfolio, which investees are tracking above or below How Much commitments, how Risk scores have trended over the past four quarters. This enables LP reporting that compares investments on standardized criteria rather than narrative descriptions, and allows the fund to make capital allocation decisions grounded in longitudinal intelligence rather than most-recent-quarter data.

What is the difference between IRIS+ metrics and five-dimensions scoring?

IRIS+ is a metric catalogue — it defines what to measure (employment rate, clean energy capacity, financial accounts opened). The five dimensions define how to classify and evaluate impact evidence — they provide the framework that determines whether a given metric is the right one to measure for a given investee, who experiences that metric, how much change it represents, whether the investee caused it, and how confident you should be in the projection. Sopact maps IRIS+ metrics to the five-dimensions rubric so funds can use standardized metric definitions while applying fund-specific scoring weights and evidence thresholds.

How long does it take to set up five-dimensions scoring for a portfolio?

A first-pass five-dimensions rubric can be built from an existing fund's IC memos and due diligence templates in two to three weeks. Sopact's onboarding process starts by extracting the criteria already embedded in the fund's existing documents, mapping them to the five dimensions, and building a scoring template that the fund's team validates before it is applied to portfolio companies. Initial DD intelligence runs — reading and scoring existing investee documents — typically complete within twenty minutes per company.