
Impact Measurement & Management (IMM): Framework to Intelligence

IMM framework, Five Dimensions, and AI platform. From due diligence through quarterly monitoring to LP reports — without resetting context each cycle.

Pioneering the best AI-native application & portfolio intelligence platform
Updated
April 28, 2026

Most funds use a small fraction of what they already know about each investee. The pitch deck is read once at IC and filed. The founder interview is summarized into a memo and reopened never. Quarterly updates land as disconnected spreadsheets that nobody connects back to the investment thesis. The framework is not the problem. What sits underneath the framework is.

Sopact Sense gives every investee one ID at the first DD document. Every form, survey, and submission after that uses the same ID: onboarding, quarterly check-in, stakeholder voice, exit. The record gets richer at each stage instead of starting over.
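The one-ID idea can be sketched in a few lines: because every later form reuses the ID issued at the first DD document, submissions append to an existing record instead of creating a new row per cycle. The class name, stage labels, and fields below are illustrative, not Sopact's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class InvesteeRecord:
    investee_id: str                          # issued at the first DD document
    submissions: list = field(default_factory=list)

    def add(self, stage: str, payload: dict) -> None:
        # Every later form reuses the same ID, so evidence accumulates
        # on one record instead of starting over each cycle.
        self.submissions.append({"stage": stage, **payload})

rec = InvesteeRecord("INV-0042")
rec.add("due_diligence", {"doc": "pitch_deck.pdf"})
rec.add("onboarding", {"baseline_confirmed": True})
rec.add("quarterly", {"period": "Q3", "risk_flag": "regulatory"})

# The Q3 check-in and the original DD document live on the same record.
stages = [s["stage"] for s in rec.submissions]
```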

The portfolio lifecycle, with one investee ID

Figure: Four-stage portfolio lifecycle with one investee ID. A continuous line labeled ONE INVESTEE ID runs through all four stages, signaling that the same record carries across every stage: Stage 01 · Due Diligence (Five Dimensions scored; Theory of Change pulled from pitch language), Stage 02 · Onboarding (baseline confirmed; commitments tracked against the DD record), Stage 03 · Quarterly Loop (Q3 checks against DD; living ToC; risk signals; stakeholder voice), Stage 04 · Year 2–7 · Exit (LP narrative writes itself; six reports per investee, generated overnight).

What it is

Impact measurement and management, defined.

Impact measurement and management (IMM) is the systematic practice of collecting evidence of social, environmental, or programmatic change, analyzing what it means, and using those findings to drive investment, allocation, or program decisions.

Two halves, both essential. Measurement asks the "what changed" question: indicators, baselines, stakeholder voice, year-over-year trends. Management asks the "what do we do about it" question: portfolio rebalancing, intervention design, capital allocation, learning loops. Measurement without management is documentation. Management without measurement is guesswork.

The field has settled on a small set of frameworks for organizing the work: IRIS+ for indicators, Five Dimensions for any single impact claim, Theory of Change for the logic that connects activities to outcomes. The frameworks are not the hard part. Connecting them across the investment lifecycle, on the same record, is.

Two halves of one practice

Half 01

Measurement

What changed?

  • Indicators and baselines
  • Stakeholder voice
  • Five Dimensions scoring
  • Year-over-year trends

Half 02

Management

What do we do about it?

  • Portfolio rebalancing
  • Intervention design
  • Capital allocation
  • Learning loops
Both halves, one connected investee record

The three frameworks IMM teams actually use

  • Five Dimensions of Impact: five questions every impact claim has to answer. What. Who. How much. Contribution. Risk.
  • IRIS+: the indicator standard for impact metrics. Maintained by the GIIN. Used in LP reports.
  • Theory of Change: the logic model connecting activities to outcomes. Lives or dies on whether it stays current.

The Intelligence Horizon

Most funds use a fraction of the context they already have.

Pitch decks read once at investment committee and filed. Founder interviews summarized into a memo and never reopened. Quarterly updates landing as disconnected spreadsheets. The frameworks meant to unify all of it, Five Dimensions of Impact, IRIS+, Theory of Change, live as static PDFs rather than as code that runs. The framework is not the problem. What sits underneath the framework is.

The Intelligence Horizon is the point where accumulated investee evidence shifts from backward-looking reports to forward-looking decisions. Funds that start IMM at LP reporting keep the Horizon at zero. Funds that start IMM at due diligence build it: each cycle adds to what came before, and Year 3 decisions inherit Years 1 and 2 evidence automatically.

Context utilization across the investment lifecycle

Figure: Two paths of context utilization across the investment lifecycle. Four stages run left to right: Due Diligence, Onboarding, Quarterly Loop, Year 2–7 · Exit. The lower line (traditional IMM) stays flat near none: it starts over each cycle. The upper line (Sopact Sense) climbs steadily from Due Diligence to full at Year 2–7 · Exit: each year inherits the last.

The usual retrofit matches records by email and stitches one spreadsheet to another. It breaks every quarter. The fix is not a smarter dashboard. The fix is upstream: give every investee one ID at the first DD document, and let every form and submission after that use the same ID.

Frameworks like IRIS+, Five Dimensions, and Theory of Change become operational the moment AI starts reading at due diligence. Until then, they are slides.

Sopact · Impact Portfolio Intelligence thesis

How each year inherits the last

Every stage inherits the full prior record.

One investee ID issued at the first DD document. Every form, survey, and submission after that uses the same ID. The record gets richer at each stage. By Year 3, the same row holds the full investment lifecycle for the same company, queryable in seconds.

One investee ID issued at the first DD document, used by every form and submission from onboarding through exit.

The five stages:

  • Stage 1 · Due Diligence (pre-investment)
  • Stage 2 · Onboarding (post-IC, pre-Q1)
  • Stage 3 · Quarterly (every cycle)
  • Stage 4 · Year 2–7 (multi-year)
  • Stage 5 · Exit (liquidity event)

What each track does at each stage:

Identity (investee ID, sector, thesis, cohort)
  • Stage 1: Captured. ID issued; sector and thesis tags stored.
  • Stages 2–5: Carried.

Five Dimensions (What, Who, How Much, Contribution, Risk)
  • Stage 1: Captured. Rubric scored against pitch language and DD documents; citations attached.
  • Stage 2: Carried.
  • Stage 3: Re-scored. Same rubric, new evidence; per-investee delta computed.
  • Stage 4: Trend.
  • Stage 5: Final score.

Theory of Change (living model, updated each cycle)
  • Stage 1: Extracted. Pulled from pitch language and investment memo.
  • Stage 2: Confirmed. Aligned with investee; data dictionary built.
  • Stage 3: Updated. Drift surfaced; assumptions tested.
  • Stage 4: Carried.
  • Stage 5: Validated.

Stakeholder voice (beneficiaries, customers, employees)
  • Stage 1: Not yet.
  • Stage 2: Baseline. Lean Data-style survey deployed; open-ended responses scored at submit.
  • Stage 3: Themed. Cross-portfolio patterns surface in minutes.
  • Stage 4: Year over year.
  • Stage 5: Exit interview.

Reports (IC briefs, LP reports, exit summaries)
  • Stage 1: IC brief, generated from the structured DD record.
  • Stage 2: Onboarding pack.
  • Stage 3: Six per investee: scorecard, gap memo, LP narrative, trends, risk, exit summary.
  • Stage 4: Annual narrative.
  • Stage 5: Exit summary.
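The inheritance pattern above is ultimately a data-shape claim: because every cycle writes to the same record, a year-over-year Five Dimensions delta is a lookup, not a spreadsheet-stitching exercise. A minimal sketch, with a hypothetical record shape and dimension names drawn from the table:

```python
# Illustrative record shape; not Sopact's actual data model.
record = {
    "investee_id": "INV-0042",
    "five_dimensions": {
        "due_diligence": {"who": 3, "contribution": 2, "risk": 4},
        "q4_year1":      {"who": 4, "contribution": 3, "risk": 3},
    },
}

# Per-investee delta: same rubric at DD and at the latest cycle.
baseline = record["five_dimensions"]["due_diligence"]
latest = record["five_dimensions"]["q4_year1"]
delta = {dim: latest[dim] - baseline[dim] for dim in baseline}
# Negative risk delta means the risk score improved since DD.
```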

What sits underneath

Four analysis layers. Two work at collection. Two work at reporting.

Every layer works because every record carries the same investee ID. Without it, year-over-year portfolio analysis is manual cleanup. With it, the analysis is a default output of collection itself.

01 · Cell

Intelligent Cell

Collection time · per document

Single-field analysis. Applied to one DD document, one founder narrative, or one quarterly submission, with a rubric defined by the fund. The score lands in a column inside the same record alongside the source citation.

In portfolios

A pitch deck gets read at upload time and scored against the fund’s Five Dimensions rubric. Each score links back to the specific passage that supports it. A reviewer can audit any judgment in seconds.
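A minimal stand-in for Cell-style scoring: match fund-defined rubric criteria against document text and keep the supporting passage as the citation. The real scoring would use an AI model against a richer rubric; keyword cues keep this sketch runnable, and every name here is hypothetical:

```python
# Hypothetical fund rubric: dimension -> cue phrases that count as evidence.
RUBRIC = {
    "who": ["underserved", "low-income"],
    "contribution": ["additionality", "would not otherwise"],
}

def score_cell(text: str) -> dict:
    """Score one document; keep the matched passage as a citation."""
    scores = {}
    for dimension, cues in RUBRIC.items():
        hit = next((c for c in cues if c in text.lower()), None)
        scores[dimension] = {
            "score": 1 if hit else 0,
            "citation": hit,   # the passage that supports the judgment
        }
    return scores

deck = "We serve low-income households banks would not otherwise reach."
result = score_cell(deck)
```

Because each score carries its citation, the audit step is a field lookup rather than a re-read of the source document.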

02 · Row

Intelligent Row

Collection time · per investee

Multi-field analysis per record. Combines several Cells, structured CRM data, and uploaded files into one coherent investee profile. The IC reviewer sees a consolidated brief, not five tabs.

In portfolios

Pitch deck + financial model + founder interview + ESG screening rolled into a one-page IC brief. Inconsistencies between deck claims and supporting data are flagged automatically.
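Row-level consolidation can be sketched the same way: several Cell outputs plus structured CRM fields merge into one profile, and a simple rule flags deck claims that disagree with the financial model. The field names and the 20% threshold are illustrative assumptions, not Sopact's logic:

```python
def build_row(cells: dict, crm: dict) -> dict:
    """Merge Cell scores and structured fields into one investee profile."""
    profile = {**crm, "scores": cells}
    flags = []
    # Hypothetical consistency check: deck claim vs. financial model.
    if crm["deck_revenue_claim"] > crm["model_revenue"] * 1.2:
        flags.append("deck revenue claim exceeds model by >20%")
    profile["flags"] = flags
    return profile

row = build_row(
    cells={"who": 4, "risk": 3},
    crm={"investee_id": "INV-0042",
         "deck_revenue_claim": 2_000_000,
         "model_revenue": 1_500_000},
)
```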

03 · Column

Intelligent Column

Reporting time · cross-portfolio

Cross-record patterns across all responses to one or more fields. Theme extraction across the full portfolio. Risk pattern detection. IRIS+ indicator computation across every active investee.

In portfolios

Theme extraction across every quarterly stakeholder voice survey. Surfaces which investees are flagging the same regulatory concern in their own words, before it becomes a portfolio-wide pattern.
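A Column-style query is one question asked across the whole portfolio. In this sketch the theme tags are pre-assigned for brevity; in practice they would come from scoring each open-ended response at submit. All identifiers are illustrative:

```python
from collections import Counter

# One field ("themes") across every investee's quarterly survey.
responses = [
    {"investee_id": "INV-0042", "themes": ["regulatory", "hiring"]},
    {"investee_id": "INV-0043", "themes": ["regulatory"]},
    {"investee_id": "INV-0044", "themes": ["supply-chain"]},
]

# Cross-portfolio pattern: which themes recur, and who raised them.
theme_counts = Counter(t for r in responses for t in r["themes"])
flagged = [r["investee_id"] for r in responses
           if "regulatory" in r["themes"]]
```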

04 · Grid

Intelligent Grid

Reporting time · full dataset

Full dataset analysis across every record and every field. LP portfolio narrative, cohort comparison, year-over-year trend reports. The two weeks before an LP call compress into hours.

In portfolios

Six LP-ready reports per investee per quarter, generated overnight: scorecard, gap memo, LP narrative, year-over-year trend, risk report, exit impact summary. Analyst time shifts from assembly to insight.
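Grid-level output is fan-out over the full record: the six report types named above, one set per investee. The rendering itself is where the real work lives; this sketch only shows the shape, and the function is hypothetical:

```python
# The six per-investee report types, as named in the text.
REPORT_TYPES = ["scorecard", "gap memo", "LP narrative",
                "year-over-year trend", "risk report",
                "exit impact summary"]

def generate_reports(record: dict) -> list:
    """Produce one report stub per type from the connected record."""
    return [f"{record['investee_id']} · {kind}" for kind in REPORT_TYPES]

reports = generate_reports({"investee_id": "INV-0042"})
```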

Where teams use it

One platform. Many portfolio shapes.

Impact Portfolio Intelligence is the foundation underneath every page in this section. Pick the page closest to your work and go deeper.

What’s different

Spreadsheets handle this quarter. Aggregators surface metrics. Neither carries context forward.

Most impact funds run IMM on spreadsheets and consultants, or on metric-aggregation tools that pull numbers out of investee submissions and chart them. Both produce outputs. Neither carries an investee ID across the lifecycle, neither scores qualitative evidence at submit, and neither runs a human-in-the-loop accuracy checkpoint before data lands in front of the team.

Capability

Spreadsheets + consultants

Excel, Airtable, custom analyst work

Metric aggregators

Upmetrics and similar tools

Sopact Sense

Impact Portfolio Intelligence

One investee ID across DD, quarterly, and exit

Manual match

Each tab issues its own identifier. Stitching IDs across cycles is the analyst’s job, every cycle.

Per submission

IDs scoped to a metric submission cycle. Aggregators were built to chart numbers, not to carry a record across DD, monitoring, and exit.

Native primitive

Issued at the first DD document. Used by every form and submission from onboarding through exit. Survives email changes, name spelling, role moves.

Five Dimensions scoring with evidence citations

Manual at IC

Scored once at investment committee. Dimensions 1, 4, and 5 require qualitative evidence and are usually left incomplete.

Numbers only

Aggregates the metrics investees submit. Does not read pitch decks, founder interviews, or narrative reports. Dimensions 1, 4, and 5 require evidence the tool was never built to ingest.

Cell at submit

Fund’s own rubric. Every score linked to the source passage that supports it. Re-scored each quarterly cycle against the same rubric.

Theory of Change as a living model

Frozen PDF

Captured at DD, filed, never updated. Drift never surfaces. Assumptions never get tested.

Out of scope

Theory of Change is not in the data model. Drift cannot surface because the model was never connected to the metrics in the first place.

Updated each cycle

Extracted from pitch language at DD. Confirmed with investee at onboarding. Re-evaluated each quarterly cycle. Drift flagged automatically.

Cohort end to shareable LP report

Team-month per cycle

Stitch exports together, manually code open-ends, draft narrative, format. Most of the cost is cleanup, not analysis.

Partial coverage

Charts the metrics that did get submitted. Narrative still gets written by hand. Most teams report the system covers a fraction of what an LP actually wants to read.

Hours per investee

Six LP-ready reports per investee generated overnight from the connected record. Program staff edit the narrative; they do not assemble data.

Cross-portfolio queries on qualitative evidence

Not feasible

Reading 25 quarterly narratives to find a pattern is a multi-week investigation. Most patterns are missed.

Quantitative only

Cross-record queries on submitted metrics. Open-ended narrative remains uncoded. Risk signals in qualitative data are invisible to the system.

Column + Grid

"Which investees flagged regulatory risk in their Q3 narrative?" runs as a single query. Cross-investee themes surface in minutes.

Human-in-the-loop accuracy checkpoint before data goes live

Manual ad-hoc

Whoever owns the spreadsheet eyeballs each submission. Catches some errors. Misses others. No structured release process.

No checkpoint

Submissions roll into dashboards as they arrive. Errors are visible to the whole team before the data lead has reviewed them. The data lead becomes the bottleneck for accuracy questions.

Reviewer release

Submissions land in a reviewer queue. AI flags inconsistencies and missing fields. The data lead releases each record before it propagates to the team. Accuracy becomes a workflow, not a back-channel question.
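A reviewer-release checkpoint of this kind is essentially a small state machine: a submission is triaged into a queue, checks flag missing fields, and only an explicit reviewer decision moves data past the queue. States, required fields, and function names here are illustrative, not Sopact's implementation:

```python
# Hypothetical required fields for a quarterly submission.
REQUIRED = {"investee_id", "period", "revenue"}

def triage(submission: dict) -> dict:
    """Queue a submission and flag inconsistencies before review."""
    missing = REQUIRED - submission.keys()
    return {"data": submission,
            "status": "flagged" if missing else "clean",
            "missing": sorted(missing)}

def release(item: dict, approved: bool) -> str:
    # Nothing propagates to the team without a reviewer decision
    # on a clean record.
    return "released" if approved and item["status"] == "clean" else "held"

item = triage({"investee_id": "INV-0042", "period": "Q3"})
# Missing revenue keeps the record in the queue even if approved.
outcome = release(item, approved=True)
```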


Who runs it

Real funds. Each year inherits the last.

Three customers. Three different portfolio shapes: public-company CDFI, Singapore fund manager, African gender-lens foundation. Same architecture: one investee ID across the lifecycle, qualitative evidence scored at submit, reports generated rather than assembled.

Crossroads Impact Corp

Impact Investor · Public Co. · CDFI Subsidiary

5+ years of impact learning. 4+ years of published reports.

260%

Increased Investment.

Year-over-year increase in impact investing toward environmental and social change.

Challenge

How to ensure investments stay on the path of sustainability and inclusion across every stage, from sourcing to exit.

Solution

Sopact mapped relevant frameworks (EDCI, GIIN-IRIS, PRI), integrated investee data across the lifecycle, and built a connected record beneath four consecutive published impact reports. Each year inherits the prior year’s baseline rather than starting over.

Double Delta

Singapore · Fund Manager · 2 Funds

Automating report sharing to save 30% team time.

30%

Less reporting time

Across two funds. Frees the team for building new funds and investments.

Challenge

Collecting data and reporting to investors consumed many hours on top of due diligence and other fund management activities across two sizable funds.

Solution

Sopact integrated investee data across both funds, built fund-level dashboards, and now powers a branded investor-facing dashboard that shares dynamic investee data in real time.

Kuramo Foundation

Africa · Foundation · Fund Manager · Accelerator

Advancing financial inclusion with a structured, data-driven, market-systems approach.

1M+ jobs

+ $3B+ capital unlocked

Across Moremi Accelerator · WEAVE · WIIF programs.

Challenge

Building a Gender Lens Investing vehicle to support an initiative that is data-driven from day one, across three connected platforms.

Solution

Sopact made the Foundation’s impact framework operational and mapped indicators across access to funding for female fund managers, gender equality, and entrepreneurial growth. New programs use AI scorecard agents to conduct due diligence and reporting.

FAQ

Questions LPs and IC chairs ask.

Eight questions impact fund managers, foundation program officers, and ESG leads ask in their first conversation. Visible Q&A so search engines and AI assistants can index every answer.

Q. 01
What is impact measurement and management (IMM)?

Impact measurement and management is the practice of systematically collecting evidence of change, analyzing what it means, and using those findings to drive investment, allocation, or program decisions. Measurement asks "what changed." Management asks "what do we do about it." Measurement without management is documentation. Management without measurement is guesswork. Modern IMM runs both as one connected system, anchored on one investee record that carries forward from due diligence onward.

Q. 02
What are the Five Dimensions of Impact?

The Five Dimensions, developed by the Impact Management Project (now Impact Frontiers), are the consensus framework for evaluating any impact claim. What: the outcome being pursued and its importance. Who: the stakeholders affected and how underserved they are. How Much: scale, depth, and duration. Contribution: the investor’s additionality versus what would happen anyway. Risk: the probability that impact differs from expectations. Dimensions 1, 4, and 5 require qualitative evidence as much as quantitative metrics, which is where most funds stall manually. See the full deep dive on Five Dimensions scoring.

Q. 03
What is impact due diligence and why does it matter for IMM?

Impact due diligence is the systematic pre-investment assessment of a target company’s theory of change, intended outcomes, ESG posture, stakeholder relationships, and the probability that claimed impact will materialize. It is the foundational decision that determines whether a fund’s IMM builds year over year or starts over. When DD runs as a standalone exercise, its findings die at the investment committee. When it runs through a platform that carries every finding forward as structured data, it becomes the starting point for portfolio intelligence that gets richer at each cycle. The deep dive on impact investing due diligence covers the three-stage workflow that connects DD, onboarding, and quarterly reporting.

Q. 04
How does AI change IMM in practice?

AI changes the operational economics of running frameworks like IRIS+, Five Dimensions, and Theory of Change continuously across a portfolio. The question stops being whether to apply the framework and starts being how the framework operates as automation. Every document gets read at submit. Every open-ended response gets scored against a fund-defined rubric with citations to the source passage. Cross-portfolio queries that used to be multi-week investigations become single queries. Frameworks become operational rather than aspirational.

Q. 05
What is a "living" Theory of Change?

A living Theory of Change is a logic model that updates continuously as investee reality evolves, rather than a static PDF filed after the investment committee. The TOC gets extracted from pitch language at DD, confirmed with the investee at onboarding, and re-evaluated each quarterly cycle against the latest evidence. Drift surfaces automatically. Assumptions get tested. The framework earns its place in portfolio decisions instead of becoming compliance documentation.

Q. 06
How does each year inherit the last across the investment lifecycle?

One investee ID is issued at the first DD document. Every form, survey, and submission after that uses the same ID: onboarding interview, baseline survey, quarterly submissions, manager observation, exit interview. Each new piece of evidence joins the same record. Cross-cycle analysis becomes a single query, not a multi-week cleanup project. The Q3 narrative is automatically checked against DD commitments. Risk signals connect to risk flags from the original DD interview. LP narratives are generated from the full year-over-year record, not assembled from scratch each quarter.

Q. 07
Do I need to replace my CRM (Affinity, DealCloud, HubSpot, Salesforce) to use Sopact?

No. The CRM stays the system of record for deal flow, contacts, and pipeline. Sopact reads from the CRM through standard connectors, layers in document intelligence and stakeholder voice, and returns scored assessments. No write permissions, no data migration, no IT project. Most impact funds keep their existing CRM and pull Sopact in alongside it for the IMM and LP-reporting workflow specifically.

Q. 08
How is Sopact different from metric-aggregation tools like Upmetrics?

Metric aggregators were built to chart the numbers investees submit. They do that part well. Where they stop is where most LP reports actually live: qualitative evidence, narrative cleanup, drift in Theory of Change, risk signals buried in open-ended responses, and the year-over-year story across DD, monitoring, and exit. Aggregators also typically lack a human-in-the-loop accuracy checkpoint, so submissions land in dashboards before the data lead has reviewed them. Sopact is the intelligence layer, not a charting tool. It owns the connected investee record, scores qualitative evidence at submit, and releases data through a reviewer queue rather than directly to the team.

Start the Intelligence Horizon

Bring a portfolio. Leave with the architecture.

A 60-minute working session. Bring your fund’s thesis and a sample DD package. We map the Five Dimensions to your IC criteria, the one investee ID that stays connected across your CRM, and the LP report you need at quarter end. By the end, you have a working setup and a path to running it next cycle.

Format 60-minute working session
Bring A sample DD package
Leave with A working IMM architecture