
New webinar on 3rd March 2026 | 9:00 am PT
In this webinar, discover how Sopact Sense revolutionizes data collection and analysis.
Learn how to measure impact using IMP's Five Dimensions framework: What, Who, How Much, Contribution, and Risk.
By Unmesh Sheth, Founder & CEO, Sopact
The five dimensions of impact — What, Who, How Much, Contribution, and Risk — are the most widely adopted framework for structuring impact evidence. Developed by the Impact Management Project and now embedded in IRIS+, OPIM, and virtually every major reporting standard, they give every organization a universal language for defining, measuring, and comparing impact.
But there is a gap between adopting the five dimensions as a taxonomy and deploying them as a working intelligence system. Most funds use the five dimensions to label their annual report sections. The highest-performing funds use them to score investees at due diligence, monitor against those scores every quarter, and aggregate those scores across a portfolio of twenty to two hundred companies — automatically.
This article is about that gap. It covers how proprietary scoring aligned with the five dimensions works at each stage of the investment lifecycle, how context from due diligence compounds rather than resets, and how Sopact's AI infrastructure makes portfolio-level intelligence operationally practical for the first time without requiring investees to fill out new portals or adopt new systems.
The five dimensions give you the question structure. They do not tell you how to weight answers, what evidence to accept as sufficient, or how to compare a climate-tech company against a workforce development nonprofit in the same portfolio.
That is where proprietary scoring comes in. A fund's impact rubric — its internal scoring methodology — defines the specific criteria, weight distributions, and evidence thresholds that translate five-dimension questions into actionable scores. This is what makes the framework fund-specific without making it incomparable.
The most powerful rubrics share three properties. First, they are anchored to the five dimensions — every criterion maps back to one of the five questions so scoring remains coherent across portfolio companies and across reporting periods. Second, they are flexible at the criterion level — different investee sectors, impact theses, and stages carry different weights, and the rubric should reflect that. Third, they are evidence-linked — every score cites the specific document, passage, or survey response that supports it, so when an analyst or LP asks "why did you score Contribution a 3?", the answer is a sentence from the DD interview transcript, not a number someone typed in a spreadsheet.
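To make those three properties concrete, here is a minimal sketch of what such a rubric could look like as a data structure. The class names, criteria, weights, and evidence thresholds are illustrative assumptions, not Sopact's internal schema.

```python
# Illustrative sketch only: field names, criteria, and weights are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    dimension: str           # one of: What, Who, How Much, Contribution, Risk
    name: str                # fund-specific criterion
    weight: float            # share of the total score this criterion carries
    evidence_threshold: str  # what counts as sufficient evidence
    evidence: list = field(default_factory=list)  # citations: (document, passage)

@dataclass
class Rubric:
    fund: str
    criteria: list

    def dimension_weight(self, dimension: str) -> float:
        """Total weight assigned to one of the five dimensions."""
        return sum(c.weight for c in self.criteria if c.dimension == dimension)

rubric = Rubric(
    fund="Example Climate Fund",
    criteria=[
        Criterion("Who", "depth of underserved reach", 0.20,
                  "third-party demographic data or primary survey"),
        Criterion("How Much", "outcome scale vs. commitment", 0.30,
                  "independently verifiable outcome counts"),
        Criterion("Contribution", "counterfactual reasoning quality", 0.25,
                  "explicit comparison to background trend"),
        Criterion("Risk", "impact delivery risk controls", 0.25,
                  "named risks with mitigation owners"),
    ],
)
print(rubric.dimension_weight("How Much"))  # 0.3
```

Because every criterion carries a dimension tag, scores stay comparable across investees and reporting periods even when the criteria themselves differ by sector.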
Most funds have already done the hard intellectual work here. They have frameworks, investment theses, and impact criteria embedded in their IC memos and DD templates. What they lack is the infrastructure to apply those criteria consistently across hundreds of documents, carry scores forward through the investment lifecycle, and surface anomalies when new evidence contradicts established assessments.
The reason most portfolio impact reporting is expensive, slow, and unreliable is not missing data. It is context that resets.
At due diligence, a team reads fifty to two hundred documents — pitch decks, impact theses, founder interviews, financial models — and builds a picture of what the investee claims to do and what evidence supports those claims. Then the investment closes, the analyst moves on, and fourteen months later someone opens that folder to write the Q3 LP narrative.
The context has reset to zero. Nobody remembers which version of the deck was final. The theory of change that was validated at DD is not connected to the Q2 outcomes report. The risk flag that appeared on page seven of the Q2 narrative was never connected to the risk dimension score from the DD interview. The LP narrative gets assembled by hand, from scratch, in a week.
This is not a workflow problem. It is an architecture problem. And the three-phase architecture that solves it is the same architecture that makes five-dimensions scoring genuinely useful rather than decorative.
Phase one is DD Intelligence. Every document in the due diligence package gets read, scored against the five-dimensions rubric, and stored as a queryable investee profile. The output is not just a score — it is a living intelligence layer that carries the full evidentiary basis for every finding.
Phase two is the Living Theory of Change. The moment investment closes, the DD profile becomes the baseline against which all future submissions are measured. When the Q1 narrative arrives, Sopact reconciles it against DD commitments automatically. Gaps are flagged. Progress is tracked against original commitments, not just against last quarter. The investee does not need to adopt new software — they keep sending PDFs, narrative updates, and survey data through normal channels.
Phase three is the Quarterly Loop. Each reporting period, Sopact reads every submission, scores it against the established rubric, updates the living investee profile, and generates six LP-ready reports per investee — overnight. Risk signals detected in qualitative data are flagged the day they appear, not six weeks later when someone finally gets to page seven.
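For a mental model of how the quarterly loop keeps context from resetting, the sketch below reconciles one quarter's dimension scores against the DD baseline and flags regressions. The dictionary shape, field names, and scores are hypothetical; this illustrates the reconciliation step, not Sopact's actual pipeline.

```python
# Minimal sketch of quarterly reconciliation against the DD baseline.
def reconcile(profile, quarterly_scores):
    """Compare a new quarter's per-dimension scores against the DD baseline,
    flag regressions, and append the quarter to the longitudinal record."""
    flags = []
    for dimension, new_score in quarterly_scores.items():
        baseline = profile["dd_baseline"][dimension]
        if new_score < baseline:
            flags.append({"dimension": dimension,
                          "baseline": baseline,
                          "current": new_score})
    profile["history"].append(quarterly_scores)
    return flags

profile = {
    "investee": "Example Co",
    "dd_baseline": {"What": 4, "Who": 4, "How Much": 3, "Contribution": 3, "Risk": 4},
    "history": [],
}
q3 = {"What": 4, "Who": 4, "How Much": 3, "Contribution": 3, "Risk": 2}
print(reconcile(profile, q3))
# [{'dimension': 'Risk', 'baseline': 4, 'current': 2}]
# The Risk regression surfaces the quarter it appears, not on page seven months later.
```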
Individual investee scoring is valuable. Portfolio-level aggregation is where five-dimensions infrastructure becomes a competitive capability.
LP reporting requires synthesizing outcomes across a portfolio — not just listing what each company reported, but constructing a coherent story about portfolio-wide impact. Most funds do this by reading every investee narrative and manually assembling a portfolio summary. The result is a document that takes weeks to produce, contains inconsistencies, and is already outdated when it lands with LPs.
Five-dimensions aggregation changes the unit of analysis. Instead of comparing investee narratives, you compare investee scores across a standardized rubric. You can answer questions like: which companies in the portfolio are delivering the strongest How Much evidence? Which ones show deteriorating Risk scores in their last two quarters? Across the clean energy cohort, what is the portfolio-wide beneficiary reach versus commitments?
This is not about reducing impact to a single number. It is about having a structured vocabulary — the five dimensions — that makes comparison meaningful. A clean energy company and a workforce nonprofit are not comparable on most metrics, but they are comparable on Contribution quality, Risk management discipline, and the rigor of their Who evidence. Those are the dimensions where the rubric enables honest portfolio-level conversation.
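As a rough sketch of what those portfolio-level questions look like once scores are standardized, the example below computes a portfolio average for How Much and flags investees whose Risk score dropped between the last two quarters. The company names and scores are invented for illustration.

```python
# Hypothetical portfolio roll-up over standardized dimension scores.
from statistics import mean

portfolio = {
    "SolarCo":      {"Q2": {"How Much": 4, "Risk": 4}, "Q3": {"How Much": 4, "Risk": 3}},
    "WorkforceOrg": {"Q2": {"How Much": 3, "Risk": 3}, "Q3": {"How Much": 4, "Risk": 3}},
    "FinAccess":    {"Q2": {"How Much": 3, "Risk": 4}, "Q3": {"How Much": 3, "Risk": 2}},
}

# Portfolio-wide average for one dimension in the latest quarter.
avg_how_much = mean(scores["Q3"]["How Much"] for scores in portfolio.values())

# Investees whose Risk score deteriorated between the last two quarters.
deteriorating_risk = [name for name, scores in portfolio.items()
                      if scores["Q3"]["Risk"] < scores["Q2"]["Risk"]]

print(round(avg_how_much, 2))  # 3.67
print(deteriorating_risk)      # ['SolarCo', 'FinAccess']
```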
Sopact's approach is deliberately non-disruptive. It does not require investees to adopt new portals, change their reporting formats, or learn new tools. It reads the documents they already send — PDFs, spreadsheets, narrative reports, survey exports — and layers intelligence on top of your existing CRM and portfolio management systems.
Your Attio, Salesforce, HubSpot, or DealCloud instance stays your system of record. Sopact pulls context from it, enriches it with document intelligence, and returns scored assessments. No write permissions, no data migration, no IT project.
This matters because the barrier to five-dimensions scoring has never been the framework — it has been the operational cost of applying it consistently across hundreds of documents, every quarter, for every investee. When that cost approaches zero, the framework stops being a reporting taxonomy and becomes a genuine intelligence system.
The funds that build this infrastructure now are not just making LP reporting easier. They are building a longitudinal dataset — five-dimensions scores, evidence trails, risk signals, outcome trajectories — that compounds in value with every passing quarter. By year three, the intelligence gap between funds that did this and funds that did not is not a reporting efficiency difference. It is a data advantage that shapes investment decisions, fund strategy, and LP relationships.
The five dimensions of impact — What, Who, How Much, Contribution, and Risk — provide the question structure for DD scoring in impact investing. At due diligence, each dimension maps to specific evidence types: What defines the impact thesis, Who identifies target stakeholders and their underserved status, How Much establishes outcome commitments and scale expectations, Contribution assesses whether the investee's activities genuinely cause the reported outcomes rather than riding background trends, and Risk flags threats to impact delivery. A fund's proprietary rubric translates these five questions into scored criteria, weighted to reflect the fund's investment thesis and sector focus.
Proprietary scoring starts by mapping the fund's existing IC criteria to the five dimensions — most funds have already done the intellectual work; it just exists in narrative IC memos rather than structured rubrics. Each dimension gets a set of criteria, evidence thresholds, and weights appropriate to the fund's portfolio. A climate-tech fund might weight How Much and Risk most heavily. A financial inclusion fund might weight Who and Contribution more. The rubric stays anchored to the five dimensions so portfolio-level aggregation remains coherent, while individual criteria flex to match investee context.
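A quick worked example makes the weighting point tangible: the same dimension scores produce different totals under different fund weight profiles. The scores and weights below are invented for illustration, not recommended allocations.

```python
# Fund-specific weights applied to shared dimension scores (illustrative values).
dimension_scores = {"What": 4, "Who": 3, "How Much": 5, "Contribution": 3, "Risk": 4}

climate_weights   = {"What": 0.15, "Who": 0.10, "How Much": 0.35, "Contribution": 0.10, "Risk": 0.30}
inclusion_weights = {"What": 0.15, "Who": 0.30, "How Much": 0.15, "Contribution": 0.30, "Risk": 0.10}

def weighted_score(scores, weights):
    return sum(scores[d] * weights[d] for d in scores)

print(round(weighted_score(dimension_scores, climate_weights), 2))    # 4.15
print(round(weighted_score(dimension_scores, inclusion_weights), 2))  # 3.55
```

Both rubrics stay anchored to the same five dimensions, so the totals remain comparable across the portfolio even though each fund emphasizes different questions.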
Most portfolio reporting resets context at each stage — DD findings are not connected to quarterly narratives, which are not connected to LP reports. Context that compounds means every finding, score, and evidence trail from due diligence carries forward automatically into ongoing monitoring and LP reporting. When the Q3 narrative arrives, it is reconciled against DD commitments and Q1-Q2 baselines automatically. Risk signals are connected to risk flags from the original DD interview. LP narratives are generated from the full longitudinal record, not assembled from scratch each quarter.
Sopact reads every document in the due diligence package — pitch decks, impact theses, financials, founder interviews, theory-of-change documents — and applies the fund's five-dimensions rubric to extract scored evidence. Every score is linked to the specific source passage that supports it. The output is a queryable investee profile with citation trails, not just a number. This profile becomes the baseline against which all future submissions are measured throughout the investment lifecycle.
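To picture what an evidence-linked score could look like, here is a toy record with a citation trail and a lookup that answers the "why did you score Contribution a 3?" question with a cited passage rather than a bare number. The record shape and the quoted passage are hypothetical.

```python
# Toy evidence-linked finding; field names and passage are invented for illustration.
dd_profile = [
    {"dimension": "Contribution", "score": 3,
     "source": "founder_interview.pdf",
     "passage": "Roughly half of placed trainees report they would have found "
                "comparable roles within a year without the program."},
]

def why(profile, dimension):
    """Return the cited passages behind a dimension score."""
    return [(f["source"], f["passage"]) for f in profile if f["dimension"] == dimension]

print(why(dd_profile, "Contribution"))
```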
Portfolio aggregation uses the five-dimensions rubric as a common scoring vocabulary across all investees, regardless of sector or geography. Sopact aggregates investee scores to produce portfolio-level views: which dimension shows the widest evidence gaps across the portfolio, which investees are tracking above or below How Much commitments, how Risk scores have trended over the past four quarters. This enables LP reporting that compares investments on standardized criteria rather than narrative descriptions, and allows the fund to make capital allocation decisions grounded in longitudinal intelligence rather than most-recent-quarter data.
IRIS+ is a metric catalogue — it defines what to measure (employment rate, clean energy capacity, financial accounts opened). The five dimensions define how to classify and evaluate impact evidence — they provide the framework that determines whether a given metric is the right one to measure for a given investee, who experiences that metric, how much change it represents, whether the investee caused it, and how confident you should be in the projection. Sopact maps IRIS+ metrics to the five-dimensions rubric so funds can use standardized metric definitions while applying fund-specific scoring weights and evidence thresholds.
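One way to picture that mapping, using the metric names from the paragraph above, is to attach a dimension, a fund-specific weight, and an evidence threshold to each metric. The dimension assignments, weights, and thresholds here are assumptions for illustration, not an official IRIS+ crosswalk.

```python
# Illustrative only: not an official IRIS+ mapping.
metric_rubric = {
    "employment rate": {
        "dimension": "How Much",
        "weight": 0.30,
        "evidence_threshold": "verified placement data, not self-reported intent",
    },
    "clean energy capacity": {
        "dimension": "How Much",
        "weight": 0.35,
        "evidence_threshold": "metered generation, not nameplate capacity",
    },
    "financial accounts opened": {
        "dimension": "How Much",
        "weight": 0.20,
        "evidence_threshold": "accounts still active after 90 days",
    },
}
# Who, Contribution, and Risk typically need evidence beyond the metric itself,
# which is what the fund's rubric criteria capture.

print(metric_rubric["clean energy capacity"]["dimension"])  # How Much
```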
A first-pass five-dimensions rubric can be built from a fund's existing IC memos and due diligence templates in two to three weeks. Sopact's onboarding process starts by extracting the criteria already embedded in the fund's existing documents, mapping them to the five dimensions, and building a scoring template that the fund's team validates before it is applied to portfolio companies. Initial DD intelligence runs — reading and scoring existing investee documents — typically complete within twenty minutes per company.