
Five Dimensions of Impact: IMP Framework Practical Guide

The IMP framework — What, Who, How Much, Contribution, Risk — operationalized at outcome, enterprise, and portfolio level. By Sopact.

Updated May 4, 2026
Use Case
Impact Management Project · IMP Framework
The Five Dimensions of Impact in practice — operating the Impact Management Project framework, not just referencing it.

The Impact Management Project (IMP) brought more than 2,000 organizations together to converge on a single language for measuring impact. The Five Dimensions of Impact — What, Who, How Much, Contribution, and Risk — became its centerpiece. Almost every serious fund now references the framework. Almost none has the data infrastructure to score against all five dimensions with cross-portfolio consistency. The gap between adopting the IMP framework and operating it is where Sopact Sense lives.

Evidence Coverage Across the Five Dimensions
[Chart: coverage by dimension, traditional vs. Sopact Sense, across D1 What, D2 Who, D3 How Much, D4 Contribution, D5 Risk. Sopact Sense shows consistent evidence across all five; the traditional approach is strong on What and scale, thin elsewhere.]

5 / 5: dimensions with structured evidence
2 / 5: dimensions covered by the typical fund today
2,000+: organizations shaped the IMP
3 phases: where scores compound forward

Practical Guide · Dimension by Dimension

What each dimension actually requires as evidence

A framework is a question structure. The questions only become operational when the evidence requirement and the common failure mode behind each are made explicit.

Dimension 01 · D1

What

Which outcome is the enterprise contributing to, and how important is that outcome to the people experiencing it?

Evidence required
A current Theory of Change with named outcomes; baseline measurement of those outcomes among affected stakeholders; stakeholder-voice confirmation that the outcome matters as framed.
Common failure mode
Conflating output with outcome: counting trainings delivered rather than skill change among trainees. The ToC slide is two years old and nobody references it.
How Sopact handles it
Captures the ToC as a structured object at due diligence (DD); carries the named outcomes forward as the field structure for every subsequent stakeholder interaction.
Dimension 02 · D2

Who

Who experiences the outcome, and how underserved are they relative to it?

Evidence required
Disaggregated demographic data tied to outcome data through a persistent stakeholder ID — gender, race, geography, prior service access, intersectional categories where they affect the outcome.
Common failure mode
Reporting beneficiary count without disaggregation. Equity analysis becomes impossible because demographic data and outcome data live in different systems with different IDs.
How Sopact handles it
Assigns a persistent contact ID at first interaction; joins demographic data to outcome data through that ID for the life of the relationship.
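The join that makes disaggregation possible can be sketched in plain Python. All field names and IDs here (`stakeholder_id`, `gender`, `geography`) are illustrative assumptions, not Sopact's actual schema:

```python
# Hypothetical records keyed by a persistent stakeholder ID.
# Field names are illustrative, not Sopact's actual schema.
demographics = {
    "S-001": {"gender": "F", "geography": "rural"},
    "S-002": {"gender": "M", "geography": "urban"},
}
outcomes = [
    {"stakeholder_id": "S-001", "outcome": "income_change_pct", "value": 34},
    {"stakeholder_id": "S-002", "outcome": "income_change_pct", "value": 12},
]

# Join outcome rows to demographics through the persistent ID,
# so equity analysis is a lookup, not a cross-system reconciliation.
joined = [
    {**row, **demographics[row["stakeholder_id"]]}
    for row in outcomes
    if row["stakeholder_id"] in demographics
]

# Disaggregate the outcome by gender.
by_gender = {}
for row in joined:
    by_gender.setdefault(row["gender"], []).append(row["value"])
```

When demographic and outcome data share one ID from first contact, the disaggregation D2 asks for is a single pass; when they live in different systems with different IDs, it is a matching project.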
Dimension 03 · D3

How Much

How many people experienced the outcome (scale), what magnitude of change did they experience (depth), and how durable was the change (duration)?

Evidence required
Paired baseline-and-endline measurement at minimum, with longitudinal follow-up where duration is part of the claim. Same instrument run twice against the same stakeholder ID.
Common failure mode
Reporting scale alone — beneficiary headcount — without depth or duration. The number is uninterpretable because it does not say how much each person changed or how long it lasted.
How Sopact handles it
Enforces baseline-endline pairing structurally. Same instrument runs at intake and at follow-up against the same ID — depth and duration become queries, not reconciliations.
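The scale/depth/duration distinction is easy to make concrete. A minimal sketch, assuming hypothetical intake and follow-up scores from the same instrument keyed by the same ID:

```python
from statistics import mean

# Hypothetical survey runs: the same instrument at intake and at
# follow-up, keyed by the same stakeholder ID. Values are illustrative.
baseline = {"S-001": 40, "S-002": 55, "S-003": 48}   # intake scores
endline = {"S-001": 62, "S-002": 71}                 # follow-up scores

# Depth: magnitude of change per stakeholder with a paired measurement.
paired = {sid: endline[sid] - baseline[sid] for sid in baseline if sid in endline}

# Headcount alone (len(baseline)) hides both depth (mean change)
# and drop-off (stakeholders with no endline measurement).
depth = mean(paired.values())
drop_off = len(baseline) - len(paired)
```

Reporting `len(baseline)` as the impact number is exactly the "scale alone" failure mode: it says nothing about `depth` or about the stakeholder who never produced an endline.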
Dimension 04 · D4

Contribution

Did the enterprise's effort produce outcomes that were likely better than what would have occurred otherwise?

Evidence required
Some form of counterfactual reasoning — comparison group, propensity matching, contribution analysis from process tracing, or stakeholder attribution of change to the program.
Common failure mode
Treating Contribution as a narrative paragraph rather than a structured claim. The claim "we contributed" appears in the LP report, but no document passage in the underlying evidence supports it.
How Sopact handles it
Extracts contribution claims from DD documents and stakeholder interviews as structured statements, links each to its evidence source, and flags claims with no evidence backing.
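The shape of a structured contribution claim, as opposed to a narrative paragraph, can be sketched simply. The claim texts and source labels below are invented for illustration:

```python
# Hypothetical contribution claims extracted from DD documents and
# stakeholder interviews. `evidence_source` is None when no underlying
# passage backs the claim.
claims = [
    {"claim": "Financing expanded access beyond what existed before",
     "evidence_source": "interview-batch-07"},
    {"claim": "We contributed to regional income growth",
     "evidence_source": None},
]

# Flag claims that appear in reporting but have no evidence backing.
unbacked = [c["claim"] for c in claims if c["evidence_source"] is None]
```

Once each claim is a record with an evidence pointer, "the LP report says we contributed but nothing supports it" becomes a query over `unbacked` rather than a manual audit.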
Dimension 05 · D5

Risk

What is the likelihood that the impact will be different from expectations, and across which categories of risk?

Evidence required
A risk-category register applied at DD — evidence, external, stakeholder participation, drop-off, unexpected impact, execution, alignment, endurance — with monitoring signals defined for each category.
Common failure mode
Naming risks once at DD and never monitoring against them. The risk register becomes a one-time deliverable rather than an active surveillance object.
How Sopact handles it
Carries the risk register from DD into quarterly monitoring as a structured object. Quarterly check-in narratives are scanned against the register for early warning signals.
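What "scanning narratives against the register" means mechanically can be illustrated with a toy keyword scan. This is a deliberately naive sketch (a real system would use semantic matching, not substrings), and every category, signal, and narrative below is invented:

```python
# Hypothetical risk register from DD: each category carries monitoring
# signals. Keyword matching here is a stand-in for richer text analysis.
risk_register = {
    "drop-off": ["attrition", "stopped attending", "no-show"],
    "execution": ["delay", "understaffed", "missed milestone"],
    "external": ["drought", "policy change", "currency"],
}

quarterly_narrative = (
    "Cohort attrition rose in Q3 after a policy change affected transport subsidies."
)

# Scan the quarterly narrative against each category's signals
# and surface only the categories with an early-warning hit.
text = quarterly_narrative.lower()
warnings = {
    category: [s for s in signals if s in text]
    for category, signals in risk_register.items()
    if any(s in text for s in signals)
}
```

The point is structural: because the register persists as an object from DD into monitoring, each quarter's narrative is checked against the same named categories instead of the register expiring as a one-time deliverable.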

The Operating Layer

Three connected pillars beneath every operational Five Dimensions framework

A framework PDF gets you a question structure. Operating it requires three artifacts, kept current and connected. Most funds maintain all three — in disconnected forms.

PILLAR I

Theory of Change

THE CAUSAL LOGIC

What activities produce what outputs, what outputs lead to what outcomes, which outcomes the enterprise is accountable for. Anchors the What dimension.

Without it
The What dimension floats. The ToC slide is two years stale.
In Sopact Sense
Living model. Pulled from pitch at DD, confirmed at onboarding, updated each quarter — drift surfaces automatically.
PILLAR II

Data Dictionary

THE SHARED SCHEMA

Field-by-field definition of what counts as evidence for each outcome. Every field has a definition, a unit, an evidence requirement. Anchors Who and How Much.

Without it
The same outcome is measured three different ways across three investees. Portfolio comparison breaks.
In Sopact Sense
Enforced at intake. Two investees on the same outcome use the same field structure or the inconsistency surfaces as a warning, not a silent gap.
PILLAR III

Five Dimensions Rubric

THE SCORING ARCHITECTURE

Mapping from data dictionary fields and ToC outcomes to a fund-specific 0–3 score on each dimension. Closes Contribution and Risk.

Without it
Dimension scores are opinions. No citation behind any score; nothing compounds forward.
In Sopact Sense
Every proposed score cites the document passage that supports it. DD scores become baselines for monitoring; monitoring evidence assembles LP reports.

The three pillars only work as a system. Sopact Sense holds ToC, Data Dictionary, and the Five Dimensions rubric as connected layers in one infrastructure — not three documents stored in three places.

The Hard Part · Normalization

The framework only works at the level you can normalize

Most funds can apply the Five Dimensions to a single outcome. Few can normalize across an enterprise. Almost none can normalize across a portfolio. Each level requires the same data dictionary, ToC structure, and rubric to run consistently.

Shared Infrastructure

Runs across all three levels

The three pillars must be the same artifact at outcome, enterprise, and portfolio level — or the higher levels are averaging noise.

PILLAR I
Theory of Change
Outcome structure inherited at every level
PILLAR II
Data Dictionary
Same field definitions everywhere
PILLAR III
5D Rubric
Same scoring criteria at every level
LEVEL 03
Portfolio
Unit of Analysis
A fund holding 20 enterprises across themes — capital deployed against ABC categories, dimension-level evidence quality across the book.
What the level produces
An LP-ready portfolio view: where the dimension-level evidence is strong vs. thin, where Contribution evidence is missing, where Risk monitoring has lapsed.
ABC mix: A 20% · B 55% · C 25%
LEVEL 02
Enterprise
Unit of Analysis
A single investee or grantee — say, a smallholder finance company with three programmatic outcomes aggregated into one impact profile.
What the level produces
A single ABC classification for the enterprise: Act to Avoid Harm, Benefit Stakeholders, or Contribute to Solutions — with a 5D profile underneath.
C · Contribute to Solutions
LEVEL 01
Outcome
Unit of Analysis
A single intended outcome — for example, "smallholder farmer income increased by ≥30%" — scored on each of the five dimensions with evidence citations.
What the level produces
A 0–3 score per dimension, each citing the document passage or quantitative metric behind it. The atomic unit on which everything above is built.
D1·3 D2·2 D3·3 D4·2 D5·2

If outcome-level scoring is inconsistent across investees, nothing above it is reliable. The portfolio view becomes an aggregation of noise. Most funds discover this when an LP asks for a cross-portfolio comparison and the underlying scores were generated against twenty different implicit rubrics.
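Why a shared rubric matters falls out of the arithmetic: enterprise and portfolio views are only aggregations of outcome-level scores, so every score must come from the same 0–3 criteria. A minimal sketch, in which all enterprise names, outcome names, and scores are invented:

```python
# Illustrative outcome-level 5D scores, all produced by one shared
# 0-3 rubric. Keys are "<enterprise>/<outcome>"; values are invented.
outcome_scores = {
    "ent-A/income+30pct":  {"D1": 3, "D2": 2, "D3": 3, "D4": 2, "D5": 2},
    "ent-A/credit-access": {"D1": 3, "D2": 3, "D3": 2, "D4": 1, "D5": 2},
    "ent-B/yield+20pct":   {"D1": 2, "D2": 2, "D3": 3, "D4": 2, "D5": 1},
}

DIMENSIONS = ("D1", "D2", "D3", "D4", "D5")

def enterprise_profile(enterprise: str) -> dict:
    """Average each dimension over an enterprise's outcome-level scores."""
    rows = [s for key, s in outcome_scores.items()
            if key.startswith(enterprise + "/")]
    return {d: sum(r[d] for r in rows) / len(rows) for d in DIMENSIONS}

profile_a = enterprise_profile("ent-A")  # e.g. D4 averages (2 + 1) / 2
```

If "ent-A" and "ent-B" had been scored against different implicit rubrics, `enterprise_profile` would still run and still return tidy numbers; the averaging is what makes the inconsistency invisible, which is exactly the failure the warning above describes.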

The Taxonomy Trap

Five Dimensions as section headers, or as scoring architecture

The same framework can read two ways. One labels content. The other drives decisions. The difference is what the underlying evidence actually supports.

The Trap

Dimensions as labels

LP Annual Report · Investee XYZ
What
Improves financial inclusion for underbanked smallholder farmers in East Africa.
Who
Smallholder farmers, predominantly in rural communities.
How Much
42,000 farmers reached this year.
Contribution
[narrative paragraph — no evidence] "We believe our financing has been instrumental in expanding access where it would not have otherwise existed."
Risk
[same paragraph as last year] "We continue to monitor execution risk."
Dimensions organize the report. They do not change which investees are selected, what monitoring questions get asked, or how performance is compared. The framework is decoration.
The Architecture

Dimensions as scores

5D Scorecard · Investee XYZ · 2026 Q3
D1 · What
3
DD-04 · ToC outcome confirmed by 84% of borrower interviews.
D2 · Who
2
Q3-DEM · Gender disaggregation present; income-tier data partial.
D3 · How Much
3
12-MO · Baseline-endline pair on income; 12-month follow-up complete.
D4 · Contribution
2
VOICE · Stakeholder attribution captured; comparison group missing.
D5 · Risk
2
REG · Drop-off and external risk monitored; alignment risk new.
Every score cites the specific evidence behind it. DD scores set baselines; quarterly evidence updates them; LP report assembles itself. The framework runs the system.

From Framework to Operation

If you reference the Five Dimensions but can't score against them, the gap isn't the framework.

It's the data infrastructure underneath. Impact Intelligence is Sopact's solution for funds, foundations, and accelerators that score investees against the Five Dimensions at DD and carry those scores forward through monitoring and LP reporting — without resetting context each cycle.