Use case

Impact Investing Due Diligence | From Documents to Decisions

Most impact fund due diligence drowns in documents. Sopact Sense scores, synthesizes, and carries context from DD through quarterly reporting. See it live.


Author: Unmesh Sheth

Last Updated: March 29, 2026

Founder & CEO of Sopact with 35 years of experience in data systems and AI

Impact Investing Due Diligence: Framework, Auditing, and Best Practices

Your investment committee approved the deal three months ago. The DD memo scored the theory of change at 2.4 out of 3. Organizational commitment rated strong. Five Dimensions assessment documented. The file is closed. Now your LP wants to know whether the impact thesis is actually playing out. You open the DD memo. It was written by the analyst who has since left. The scoring rubric they used is in a spreadsheet tab you can't find. The quarterly report from the investee uses different indicators than the ones agreed during diligence. The verdict that justified the investment cannot connect to the evidence that should confirm it. That is The Verdict Fallacy.

The Verdict Fallacy is the structural error of treating impact investing due diligence as a one-time judgment — a score that answers "should we invest?" — rather than the first data point in a continuous evidence series that answers "were we right, and how do we know?" A static DD verdict has no mechanism for being proved correct or corrected. It cannot compound into portfolio intelligence. It cannot satisfy the LP, auditor, or regulator who asks for evidence that impact claims are grounded. And it cannot satisfy you, twelve months later, when you need to know whether the thesis is holding.

Sopact Sense and Impact Intelligence were built to close the Verdict Fallacy: DD findings are not filed away after the investment decision — they become the living baseline that every quarterly submission is compared against, automatically, until exit.

New Framework
The Verdict Fallacy
The Verdict Fallacy is treating impact due diligence as a one-time judgment — a score that answers "should we invest?" — rather than the first data point in a continuous evidence series that answers "were we right, and how do we know?" A static verdict cannot be audited for accuracy, cannot compound into portfolio intelligence, and cannot satisfy the LP or regulator who asks for evidence that impact claims are grounded. Sopact Sense closes the Verdict Fallacy: every DD commitment carries forward automatically into every quarterly cycle through persistent entity IDs — so the evidence lifecycle never resets.
D1
What
Outcome Pursued
Specific change sought for stakeholders
D2
Who
Stakeholders Affected
How underserved, how many, context
D3
How Much
Scale · Depth · Duration
Breadth and persistence of change
D4
Contribution
Additionality
What changes because of this investment
D5
Risk
Impact Probability
Probability that impact differs from expectations
4–12
Months to build a quantitative DD rubric manually — per PCV's Impact Due Diligence Guide
$1.5T
Global impact AUM — growing 21% CAGR — with most funds still assembling DD in spreadsheets
0
Context resets between DD, onboarding, and quarterly reporting when persistent entity IDs connect all stages
Early-Stage Impact Fund
Growth-Stage Fund / ESG Mandate
Foundation & Grantmaker

Step 1: Identify Your Impact Due Diligence Situation

Impact investing due diligence means different things to an early-stage impact fund scoring 80 deal applications per year, a growth-stage fund with an ESG mandate auditing impact claims across 40 portfolio companies, and a foundation conducting due diligence on prospective grantees before multi-year program commitments. The Five Dimensions framework applies to all three. The depth of evidence required, the scoring methodology, and the post-investment continuity architecture differ fundamentally.

Define Your Impact Due Diligence Situation

Three contexts — each with different scoring depth, evidence requirements, and post-investment continuity needs

① Describe your situation
② What to bring
③ What Sopact produces
Early-Stage Impact Fund
DD produces a strong thesis but no architecture for validating it — every quarterly cycle starts from scratch
Investment analysts · Fund managers · Impact directors · IC members
"I'm the impact director at an early-stage fund. We do rigorous Five Dimensions scoring during DD — our rubric is good and our IC trusts it. But after the investment decision, the DD memo gets filed. When portfolio companies submit quarterly updates, nobody compares them to what they committed at DD. By Year 2, we're measuring what companies want to report, not what they agreed to deliver. Our LP is starting to ask for evidence that the impact thesis is holding, and I don't have an auditable chain to show them."
Platform signal: Sopact Sense is the right fit when your DD scoring is solid but the evidence lifecycle ends at the investment decision. The core need is persistent entity IDs connecting your DD rubric to post-investment monitoring — so the Verdict Fallacy closes automatically. For funds with fewer than 5 portfolio companies tracked informally, structured spreadsheets may suffice until the portfolio and LP accountability pressure grows.
Growth-Stage Fund — ESG + Impact
40+ portfolio companies reporting impact data inconsistently — auditing impact claims across the portfolio is impossible with current tooling
Portfolio analytics teams · ESG directors · Risk officers · LP relations
"I run portfolio analytics for a growth-stage impact fund with 44 portfolio companies. Our LPs are asking for audited impact claims — they want evidence that what companies said at DD is what they're delivering in Year 3. But our quarterly reports are inconsistent: some companies use our template, some send narrative PDFs, some send email updates. Nobody has linked those submissions back to the original DD commitments. When we try to audit a specific claim, we have to manually reconstruct the evidence chain from the DD memo, the onboarding documents, and three years of quarterly files. It takes days per company."
Platform signal: This is the core use case for Sopact Sense + Impact Intelligence. The persistent entity architecture connects DD records to quarterly submissions automatically — auditing impact claims across 44 companies becomes a comparison of structured data, not a reconstruction of document chains. Risk-based escalation surfaces the highest-divergence companies without requiring manual review of every submission.
Foundation & Grantmaker
Grant application review uses a narrative process — no consistent Five Dimensions scoring, no connection between selection rubric and grantee reporting
Program officers · Grant reviewers · Learning officers · Executive directors
"I'm a program officer at a foundation. We receive 200 grant applications per cycle. Review is narrative — program officers read applications and score them subjectively. We have no consistent rubric for theory of change quality, organizational commitment, or stakeholder evidence. After selection, grantees report annually in formats that don't align with how we scored them at application. Two years in, I can't tell whether the organizations we selected based on strong impact narratives are actually delivering stronger outcomes than the ones we didn't select."
Platform signal: Sopact Sense designs the grant application as the DD instrument — Five Dimensions criteria structured at collection, AI pre-scoring before any reviewer opens the queue, and the application rubric score connected to the grantee's reporting record through a persistent entity ID. This is the architecture that closes the Verdict Fallacy for grantmaking as much as for investing.
📋
Impact Thesis and DD Documents
Pitch decks, impact theses, theory of change documents, interview transcripts, and financial models. Sopact AI extracts Five Dimensions evidence from these documents automatically — no manual field population.
📊
Your Scoring Rubric or Framework
Your existing Five Dimensions rubric, IRIS+ indicator set, or custom scoring criteria. Sopact maps to your framework — bringing the rubric you already have into a consistent, analyst-independent application engine.
👥
Stakeholder Evidence Sources
Beneficiary surveys, community interviews, worker voice data, customer feedback. For Dimensions 1, 4, and 5 — qualitative evidence at scale. Sopact codes these against your rubric consistently across every entity.
📅
Post-Investment Reporting Cadence
Quarterly monitoring timeline, annual LP report deadlines. Bring the indicator set agreed with each portfolio company — these become the comparison baseline for each entity's quarterly monitoring form.
📄
Prior Quarterly Submissions
Existing quarterly updates in whatever format they exist — PDFs, spreadsheets, narrative emails. Sopact ingests all formats and maps them to the DD baseline retroactively for companies already in portfolio.
🔍
Organizational Commitment Indicators
Does the investee have dedicated impact staff? Board-level ESG/impact reporting? Beneficiary feedback loops? These signals — assessed during DD — predict long-term impact delivery and feed the risk dimension continuously.
For funds with existing portfolio companies: Sopact Sense can import historical DD records and map them to the entity architecture retroactively. The Verdict Fallacy starts closing from the first quarterly cycle after setup — even for companies where DD was completed before Sopact was implemented.
From Sopact Sense — Impact DD Outputs
  • AI Five Dimensions scoring: every DD document reviewed and scored across all five dimensions with evidence citations from source passages — consistent rubric applied regardless of which analyst reviews
  • Persistent entity record: DD commitment record assigned a unique entity ID from first contact — every subsequent quarterly submission, interview, and survey connected to the same record automatically
  • Living theory of change: AI synthesizes DD documents and onboarding interview into a structured, updateable theory of change — both parties aligned on indicators before data collection begins
  • Impact claims audit trail: each quarterly submission automatically compared to DD baseline commitments — divergences flagged as they appear, not discovered during LP review two years later
  • Risk-based escalation flags: highest-divergence companies surfaced for priority review — risk-based auditing without manual reconstruction of evidence chains per company
  • Six LP-ready reports per investee: investee scorecard, gap and risk memo, IC preparation brief, LP portfolio narrative, longitudinal trend, exit impact summary — generated overnight at quarter close through Impact Intelligence
Next prompt — Early-Stage Fund
"We have 8 new portfolio companies from this cycle. Extract the Five Dimensions assessment from each company's DD package and build their monitoring baseline. For each company, show me where the evidence for Dimension 4 (Contribution) is weakest — those are the claims most at risk of being unverifiable at Year 2."
Next prompt — Growth Fund Audit
"Run the impact claims audit across 44 portfolio companies. Flag any company where their Q3 qualitative narrative contradicts a specific commitment from their DD memo. Rank by divergence severity — I need the top 10 for the IC review next week."
Next prompt — Foundation
"200 grant applications are in. Score each against our Five Dimensions rubric and flag the top 40 by composite score. For each application, show me the evidence strength per dimension — I want to know which high-scoring applications have weak evidence on Dimension 3 (How Much) before the review committee opens the queue."

The Verdict Fallacy: Why Impact DD Evidence Expires

The Impact Management Project's Five Dimensions framework — What, Who, How Much, Contribution, and Risk — provides the right structure for evaluating impact investing opportunities. Pacific Community Ventures' Impact Due Diligence Guide confirms the frameworks are mature. The GIIN estimates impact AUM exceeding $1.5 trillion. The problem is not knowledge. It is what happens to impact evidence after the verdict is delivered.

Static DD assessments fail for three reasons. First, they are scored by one analyst using criteria that another analyst would apply differently — the PCV guide acknowledges this as a persistent challenge, noting that even well-designed quantitative tools require 30–45 minutes per deal and produce scores that vary with the reviewer. Second, they are stored in formats — PDFs in shared drives, spreadsheets in analyst folders — that disconnect from post-investment monitoring, so the theory of change tested during diligence is never the same theory of change measured quarterly. Third, they produce a verdict on what a company claims it will do, which is the least reliable data point in the entire evidence lifecycle.

The IFC's AIMM system is the clearest proof that connected architecture works — structurally linking front-end diagnostics to results measurement and ex-post evaluation. But replicating AIMM has historically required development-finance-institution scale. The Verdict Fallacy persists across every fund that does not have that scale, because the tooling was never built to connect the stages.

Closing the Verdict Fallacy requires one architectural decision: the DD record must carry forward automatically into every post-investment stage. Sopact Sense assigns a persistent entity ID at the point of first DD contact — application, intake DDQ, or document upload — and connects every subsequent data point to that same record. The DD verdict is not the end of the evidence lifecycle. It is the opening entry.
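The persistent-ID architecture can be sketched as a minimal data model in which DD commitments and every later submission hang off one ID assigned at first contact. The class, field names, and structure below are illustrative assumptions, not Sopact's actual schema:

```python
from dataclasses import dataclass, field
from uuid import uuid4

@dataclass
class EntityRecord:
    """One investee, keyed by a persistent ID assigned at first DD contact."""
    entity_id: str = field(default_factory=lambda: uuid4().hex)
    dd_commitments: dict[str, float] = field(default_factory=dict)  # indicator -> committed target
    quarterly_reports: list[dict] = field(default_factory=list)     # one dict per submission

    def record_commitment(self, indicator: str, target: float) -> None:
        # Captured once, during diligence -- this becomes the audit baseline
        self.dd_commitments[indicator] = target

    def record_quarter(self, quarter: str, indicator: str, value: float) -> None:
        # Every submission attaches to the same entity_id -- no context reset
        self.quarterly_reports.append(
            {"quarter": quarter, "indicator": indicator, "value": value}
        )

# DD stage: the commitment is recorded at first contact
company = EntityRecord()
company.record_commitment("jobs_created", 120.0)

# Two years later, a Q3 submission lands on the same record automatically
company.record_quarter("2026-Q3", "jobs_created", 87.0)
```

Because the quarterly value and the DD target live on the same record, "were we right?" becomes a field comparison rather than a document hunt.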

Step 2: How Sopact Sense Runs Impact Investing Due Diligence

Sopact Sense is a data collection platform — not a document viewer or a reporting layer. Every entity assessed receives a unique persistent ID at first contact. For impact investing DD, that means the entity ID is assigned at the initial DDQ or document package submission, before scoring begins. Every subsequent instrument — Five Dimensions rubric assessment, theory of change extraction, financial document analysis, onboarding survey, quarterly update — connects to that same ID automatically.

For screening and deal flow, Sopact Sense designs the initial DDQ and document intake inside the platform. AI extracts theory of change from pitch decks and impact theses — mapping the company's stated outcomes, target population, and contribution claims to your Five Dimensions rubric before any analyst opens the file. This is not AI generating a summary. It is AI populating structured fields against a consistent rubric so that your analyst reviews a scored assessment, not a 200-page document stack. Every score is cited to the specific source passage. No black boxes.

For portfolio-level auditing of impact claims, this is the architecture that makes risk-based auditing tractable at scale. The ESG due diligence page covers the ESG screening layer in detail. The Five Dimensions layer is specific to impact thesis validation: Dimension 4 (Contribution) and Dimension 5 (Risk) require qualitative evidence at scale — stakeholder attribution analysis, narrative risk signals — that no manual review process can reliably deliver across 40+ portfolio companies per quarter. Sopact Sense's AI codes open-ended responses and narrative submissions against the same rubric consistently, making cross-portfolio comparison meaningful rather than analyst-dependent.

For foundations and grantmakers running impact DD on prospective grantees, Sopact Sense designs the grant application form as the DD instrument — with theory of change fields, Five Dimensions indicators, and organizational commitment criteria structured at collection, not retrofitted from a narrative review. AI pre-scores every application before reviewers open the queue. The application review architecture is the same architecture as impact DD — the difference is the rubric, not the platform.

Step 3: Five Dimensions Scoring and Impact Claims Auditing

The Five Dimensions of Impact are the IMP consensus framework for organizing impact evidence across What (outcome pursued), Who (stakeholders affected and how underserved), How Much (scale, depth, duration), Contribution (additionality), and Risk (probability impact differs from expectations). Making them operational at portfolio scale — not just for one deal, but consistently across 20, 40, or 80 investments — is where most impact DD systems break down.
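To make the scoring mechanics concrete, here is one way a weighted Five Dimensions composite could be computed on the 0–3 scale used in the opening example. The weights are purely illustrative assumptions (reflecting the "weight Contribution and Risk most heavily" pattern discussed later), not Sopact defaults:

```python
# Illustrative weights over the Five Dimensions; they sum to 1.0.
WEIGHTS = {
    "what": 0.15,
    "who": 0.15,
    "how_much": 0.20,
    "contribution": 0.25,  # weighted up: historically the weakest evidence area
    "risk": 0.25,
}

def composite_score(dimension_scores: dict[str, float]) -> float:
    """Weighted mean of per-dimension scores (each on a 0-3 scale)."""
    return round(sum(WEIGHTS[d] * s for d, s in dimension_scores.items()), 2)

deal = {"what": 2.5, "who": 3.0, "how_much": 2.0, "contribution": 1.5, "risk": 2.0}
print(composite_score(deal))  # weighted composite on the 0-3 scale
```

The point of fixing the weights in configuration rather than in an analyst's head is that the same deal scores the same regardless of reviewer.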

The phase-by-phase workflow below covers: DD scoring and application review, onboarding theory of change validation, and quarterly impact claims auditing — the three stages that must connect to close the Verdict Fallacy.

Impact Investing Due Diligence — Three-Phase Workflow

Select your context to see Phase 1 (DD Scoring), Phase 2 (Onboarding), and Phase 3 (Impact Claims Auditing) in Sopact Sense

Early-Stage Impact Fund
Growth Fund — Portfolio Audit
Foundation & Grantmaker
Phase 1 — DD Scoring
Score Every Deal Against the Five Dimensions — Before the IC Meeting
Impact Director — DD Scoring Prompt
"We have 12 deals in active due diligence. Each has a pitch deck, impact thesis, theory of change document, and at least one founder interview transcript. Score each deal against our Five Dimensions rubric: What (outcome specificity), Who (population underservedness), How Much (scale and depth evidence), Contribution (additionality strength), Risk (theory of change credibility). Weight Contribution and Risk most heavily — those are our weakest areas historically. Show me the ranked scorecard with evidence citations."
Sopact Sense produces
  • Five Dimensions scorecard for all 12 deals: composite score + individual dimension scores, each score cited to the specific document passage that supports it — no black-box estimates
  • Contribution strength analysis: 4 deals with strong additionality evidence (market gap documentation, regulatory barrier evidence, or comparison group data); 3 deals with weak additionality (management assertion only) — flagged for IC discussion
  • Risk dimension: theory of change credibility rated per deal — 2 deals have causal logic gaps between activities and claimed outcomes; AI identifies the specific gap in each TOC and recommends the evidence that would close it
  • Persistent entity IDs assigned to all 12 deals — DD scoring record will carry forward automatically into onboarding and quarterly monitoring for funded deals
  • IC preparation briefs for top 5 by composite score: key strengths, flagged concerns per dimension, and recommended due diligence questions for the next founder conversation
Phase 2 — Onboarding
Convert DD Findings Into a Living Theory of Change Before Q1 Data Arrives
Impact Director — Onboarding Prompt
"We funded 4 of the 12 deals. For each funded company, synthesize the DD findings with the onboarding interview transcript into a Living Theory of Change. Extract their specific outcome indicators from the DD record and build their quarterly monitoring form — pre-populated with their DD commitments as the targets. I want their Q1 monitoring form to ask them to report against what they promised, not a generic template."
Sopact Sense produces
  • Living Theory of Change for each of the 4 funded companies: structured from DD documents + onboarding interview, showing activities → outputs → outcomes chain with the specific assumptions and evidence gaps identified during DD highlighted
  • Shared data dictionary per company: outcome indicators agreed during onboarding, mapped to IRIS+ metrics where applicable — both investor and investee aligned before Q1 data collection begins
  • Four company-specific quarterly monitoring forms: each pre-populated with that company's specific DD commitments as comparison targets — investees report progress against their own promises, not a one-size-fits-all template
  • Organizational commitment assessment per company: dedicated impact staff, board-level reporting, beneficiary feedback mechanisms — scored from onboarding interview and carried forward as a risk flag if any are absent
Phase 3 — Impact Claims Auditing
Audit DD Commitments Against Quarterly Evidence — Before the LP Call
Impact Director — Year 2 Audit Prompt
"It's Year 2. All 4 portfolio companies have submitted Q3 updates. Run the impact claims audit: compare each company's Q3 reported outcomes to their original DD commitments. Flag any company where the theory of change appears to have shifted since onboarding — where what they're now measuring doesn't match what they promised to deliver at DD. Show me the evidence chain for each flag."
Sopact Sense produces
  • Impact claims audit across all 4 companies: DD commitment vs. Q3 reported outcome per indicator — all comparisons pulled automatically from persistent entity records, no manual DD memo review required
  • 1 company flagged with theory of change drift: originally committed to measuring "direct job creation" (DD commitment), now reporting "indirect economic beneficiaries" in Q3 narrative — shift detected by AI comparing DD record to current submission language
  • Evidence chain for the flagged company: DD passage where job creation commitment was made, Q2 narrative where language first shifted to indirect beneficiaries, Q3 narrative confirming the shift — three-quarter audit trail generated automatically
  • LP-ready impact claims summary: what each company committed at DD, what they've reported through Year 2, divergence summary, and recommended IC discussion questions — formatted for LP quarterly review without additional analyst work
Phase 1 — Portfolio-Scale DD Scoring
Consistent Five Dimensions Scoring Across 40+ Deals — Same Rubric, Every Analyst
Portfolio Analytics Lead — DD Scoring Prompt
"We're processing 60 deals this cycle across three investment analysts. I need consistent Five Dimensions scoring regardless of which analyst reviews each deal — our IC has flagged that scores vary 0.5–1.0 points depending on the reviewer. Configure the rubric so AI pre-scores each deal before any analyst opens the file, and show the analyst the pre-score as a calibrated starting point with the evidence citations. Reviewers can override but must note why."
Sopact Sense produces
  • AI pre-scores all 60 deals against the configured Five Dimensions rubric — same criteria applied to every deal regardless of analyst assignment; inter-rater variance eliminated at the pre-score level
  • Reviewer interface: AI pre-score per dimension with supporting evidence citations displayed alongside blank override field — calibrated starting point for each analyst; reviewers see where AI scored and why before making their own assessment
  • Override tracking: when reviewers change an AI score, the original AI score and the reviewer's revised score are both recorded — allowing IC to review where human judgment diverged from AI and why, building rubric calibration over time
  • Calibration analysis after first cycle: dimensions where analyst overrides cluster above or below AI scores — identifies rubric ambiguities to resolve before the next deal cycle rather than discovering them at IC
Phase 2 — Risk-Based Portfolio Audit Setup
Configure the Impact Claims Audit Architecture Across All 44 Portfolio Companies
Portfolio Analytics Lead — Audit Configuration Prompt
"We have 44 portfolio companies — 12 funded this year with DD records in Sopact Sense, and 32 funded in prior years with DD records in shared drive folders. I need to: (a) map the prior-year DD records into the persistent entity architecture retroactively; (b) configure risk-based audit thresholds — flag any company where reported outcomes diverge more than 20% from DD commitments; and (c) set up qualitative monitoring for theory of change drift in narrative submissions."
Sopact Sense produces
  • Retroactive entity architecture for 32 prior-year companies: AI extracts DD commitments from uploaded historical memos and maps them to entity records — creating the commitment baseline that quarterly monitoring will compare against going forward
  • Risk-based audit thresholds configured: 20% divergence from DD commitment on primary outcome indicator triggers automatic escalation flag; secondary threshold at 35% divergence triggers IC brief generation
  • Qualitative TOC drift monitoring active: AI compares each quarterly narrative to the specific language of the company's DD-era theory of change — flags when outcome framing shifts from what was committed (e.g., direct to indirect beneficiaries)
  • Portfolio audit dashboard: all 44 companies visible with audit status, last DD commitment comparison date, and flag count — sortable by divergence severity for prioritized review
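The two-tier divergence rule reduces to a simple comparison. This sketch assumes the 20% escalation and 35% IC-brief thresholds described above; the function name and return labels are hypothetical:

```python
def divergence_flag(committed: float, reported: float,
                    escalate_at: float = 0.20, ic_brief_at: float = 0.35) -> str:
    """Tier a company by the relative gap between its DD commitment
    and its reported outcome. Thresholds and labels are illustrative."""
    if committed == 0:
        return "review"  # no meaningful baseline to divide by
    gap = abs(reported - committed) / abs(committed)
    if gap >= ic_brief_at:
        return "ic_brief"
    if gap >= escalate_at:
        return "escalate"
    return "on_track"

print(divergence_flag(120.0, 110.0))  # ~8% gap
print(divergence_flag(120.0, 90.0))   # 25% gap
print(divergence_flag(120.0, 70.0))   # ~42% gap
```

Running the same rule over every indicator on every entity record is what makes a 44-company divergence scorecard a query rather than a project.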
Phase 3 — Quarterly Impact Claims Audit
Risk-Based Auditing Across 44 Companies — LP-Ready Evidence Before Monday
Portfolio Analytics Lead — Q3 Audit Prompt
"Q3 submissions are in for 41 of 44 companies. Run the impact claims audit. Show me: (a) the full divergence scorecard ranked by gap between DD commitments and Q3 reported outcomes; (b) any companies where TOC drift was detected in the qualitative narrative; (c) the top 8 highest-risk companies for a focused IC review. I need this in 24 hours."
Sopact Sense produces
  • Full divergence scorecard: 41 companies ranked by gap between DD commitments and Q3 outcomes — 6 companies above 20% divergence threshold, 2 above 35% threshold; all comparisons automatic from persistent entity records
  • TOC drift flags: 3 companies where Q3 narrative language diverges from DD theory of change — each flagged with the specific DD passage and Q3 passage side by side, plus drift characterization (scope narrowing, indicator substitution, beneficiary population shift)
  • Top 8 IC review companies: 5 from divergence flags + 3 from TOC drift — ranked by combined risk score, with IC brief per company summarizing DD commitment, current performance, evidence chain, and recommended discussion questions
  • 3 pending submissions noted; 2 have consistent late-submission patterns; 1 is a new late submission flagged for outreach before the IC review
Phase 1 — Grant Application as DD Instrument
Design the Application Form as a Five Dimensions Assessment — Score Before Any Reviewer Opens the Queue
Program Officer — Setup Prompt
"We receive 200 grant applications per cycle. Our review process is currently narrative — program officers read and score subjectively, which takes 3 weeks and produces inconsistent results. Build an application intake form that captures Five Dimensions evidence: theory of change quality, stakeholder population specificity, evidence of past outcomes, organizational capacity, and equity alignment. AI should pre-score each application so reviewers start from a calibrated baseline, not a blank page."
Sopact Sense produces
  • Grant application form with Five Dimensions-aligned fields: structured and open-ended questions per dimension, with field mapping to rubric criteria so AI scoring applies consistently to every submission
  • AI pre-scores all 200 applications across five dimensions before any reviewer opens the queue — review cycle compresses from 3 weeks to 1 week because reviewers start from a calibrated scored baseline, not a reading assignment
  • Persistent applicant IDs assigned at submission — connecting this application cycle to any future application, grantee reporting cycle, or renewal review through the same entity record
  • Equity alignment flag: applications where beneficiary population specificity is vague or where equity evidence is absent surfaced before reviewer assignment — saving reviewers from opening applications that will fail on non-negotiable criteria
Phase 2 — Selection and Onboarding
Connect Selection Rubric Scores to Grantee Reporting — Before the First Check Clears
Program Officer — Selection Prompt
"We've selected 24 grantees from 200 applications. For each selected grantee, carry their application scores and theory of change evidence forward into their grant agreement baseline. Build their annual reporting form so they report against the specific outcomes they described in their application — not a generic form. I want to be able to compare what they claimed at application to what they reported at Year 1."
Sopact Sense produces
  • 24 grantee onboarding records: application scores, theory of change evidence, and Five Dimensions assessment carried forward from the application record — no re-entry, no context reset at grant agreement stage
  • Grantee-specific annual reporting forms: each pre-configured with the outcomes the grantee described in their application as the comparison baseline — grantees report against their own commitments, not a standard form
  • Grant agreement baseline documentation per grantee: application rubric scores, key outcome commitments, and evidence standards agreed — forms the Verdict Fallacy-closing record that annual reporting will compare against
  • Declined applicant feedback generation: 176 declined applications receive feedback letters noting their rubric scores by dimension and dimension-specific guidance — no reviewer attribution included
Phase 3 — Annual Grantee Reporting Audit
Compare Application Commitments to Year 1 Outcomes — Across All 24 Grantees
Program Officer — Year 1 Audit Prompt
"Year 1 grantee reports are in for 22 of 24 grantees. Compare each grantee's reported Year 1 outcomes to their application commitments. Show me: which grantees delivered stronger outcomes than they committed to, which fell short, and which appear to have shifted their focus from what they described at application. I need this for the renewal decision meeting next month."
Sopact Sense produces
  • Application-to-Year-1 comparison for all 22 reporting grantees: outcome delivered vs. outcome committed at application — all comparisons automatic from persistent entity records, no manual application review required
  • Overperformers (5 grantees): delivered stronger outcomes than application commitments on primary indicators — evidence cited from Year 1 reports and cross-referenced to application claims
  • Underperformers (6 grantees): fell short of primary indicator targets — each with gap size, narrative explanation from Year 1 report, and whether the shortfall was acknowledged by the grantee or appears unreported
  • Focus shift flags (3 grantees): Year 1 report describes work that diverges from application commitments — AI identifies the specific shift (e.g., population served narrowed, geography changed, primary activity substituted) with supporting passages
  • Renewal recommendation brief per grantee: performance vs. commitment summary, risk assessment for Year 2, and recommended grant condition modifications for underperformers — formatted for the renewal decision meeting

Best Practices for Auditing Impact Claims in Impact Investing

Best practices for auditing impact claims have converged on a risk-based approach: concentrate rigorous review on the claims with the highest materiality and the weakest evidence, rather than attempting uniform verification across every indicator. The PCV guide's seven areas of best practice — assessing impact using the Five Dimensions, bridging ESG and impact, aligning with SDGs, elevating stakeholder perspectives, evaluating organizational commitment, portfolio-wide approach, and accessibility — are the right framework. The audit question is: which of these areas presents the highest risk of divergence between claimed and actual impact for each investee?

In practice, risk-based auditing of impact claims requires three inputs: the original DD commitment record (what the company claimed it would deliver), the subsequent quarterly reporting record (what it actually reported), and qualitative stakeholder evidence (what beneficiaries and communities experienced). Without persistent entity IDs connecting all three, auditing impact claims means manually reconstructing the evidence chain from scratch for each portfolio company — which is why most funds don't do it systematically and instead rely on management assertions.

Sopact Sense enables risk-based auditing through the same architecture that closes the Verdict Fallacy. The DD commitment record is the baseline. Every quarterly submission is automatically compared to it. Qualitative submissions are AI-coded for theme consistency — does the company's Q4 narrative still reflect the same theory of change it described at DD? AI flags divergences the moment they appear, enabling risk-based escalation before the LP call rather than during it.
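The two comparisons described above — quantitative divergence against the DD baseline and qualitative theme consistency — can be illustrated with a deliberately crude sketch. These functions are a stand-in for the AI comparison layer, not Sopact Sense's implementation; the tolerance value and theme-overlap heuristic are assumptions for illustration:

```python
def flag_metric_divergence(dd_target: float, reported: float,
                           tolerance: float = 0.10):
    """Return the fractional shortfall against the DD commitment if it
    exceeds the tolerance (10% assumed here); otherwise None."""
    if dd_target == 0:
        return None
    gap = (dd_target - reported) / dd_target
    return gap if gap > tolerance else None

def narrative_drift(dd_themes: set, quarterly_themes: set) -> float:
    """Naive theory-of-change drift score: the share of DD-coded themes
    that no longer appear in the quarterly narrative's coded themes."""
    if not dd_themes:
        return 0.0
    return len(dd_themes - quarterly_themes) / len(dd_themes)

# Reporting 80 against a DD target of 100 is a 20% shortfall — flagged.
print(flag_metric_divergence(100.0, 80.0))   # 0.2
# Dropping "women" from three DD themes gives a drift score of 1/3.
print(narrative_drift({"rural", "women", "income"},
                      {"rural", "income", "urban"}))
```

The real system codes themes with AI against the configured rubric rather than matching keywords, but the escalation logic is the same: a non-zero drift score or an over-tolerance gap puts the company in the priority review queue.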

For grant reporting contexts, the same auditing architecture applies: grantee claims are tracked against grant objectives through the same persistent entity architecture, from application through final report. The framework is the Five Dimensions; the architecture is persistent entity IDs; the output is an auditable evidence chain.


Portfolio Due Diligence: Scaling Across 20–80 Investments

Portfolio due diligence — applying consistent DD standards across a large and growing investee pool — is where the Verdict Fallacy compounds most severely. At 10 investments, an experienced analyst can hold the DD context in memory. At 30, it's impossible. At 80, you are, by definition, operating on verdicts that have no connection to current evidence.

The scaling problem is not analyst capacity. It is the absence of a shared entity architecture. When every investee's DD record lives in a separate folder, every quarterly comparison requires manually locating the right version of the right document and applying a rubric that was designed by a different analyst in a different quarter. Sopact Sense makes portfolio due diligence tractable because every entity's record shares the same architecture, the same rubric structure, and the same persistent ID — so portfolio-wide analysis is comparison of equivalent data points, not reconstruction of disconnected files.
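Because every record shares the same schema and persistent ID, a portfolio-wide review reduces to a single pass over equivalent data points. A minimal sketch, with hypothetical IDs and a 10% shortfall threshold assumed for illustration (not Sopact's API):

```python
portfolio = {
    # entity_id: (DD commitment, latest reported value) — same fields for all
    "inv-001": (1000.0, 1150.0),
    "inv-002": (500.0, 310.0),
    "inv-003": (2000.0, 1880.0),
}

def divergence(target, reported):
    """Fractional shortfall against the DD commitment (negative = overdelivery)."""
    return (target - reported) / target

# Rank the whole portfolio by shortfall and surface the worst for IC review.
ranked = sorted(portfolio, key=lambda e: divergence(*portfolio[e]), reverse=True)
worst = [e for e in ranked if divergence(*portfolio[e]) > 0.10]
print(worst)  # ['inv-002'] — 38% short of its DD commitment
```

The point of the sketch is what is absent: no per-company folder lookup, no rubric re-interpretation, no document reconstruction. Growing from 10 to 80 investees adds rows, not analyst workflows.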

1. The Verdict Fallacy in Every Manual Workflow
Spreadsheet rubrics, survey questionnaires, and narrative DD memos all produce verdicts that expire at the investment decision. None connects the DD score to post-investment monitoring. The Verdict Fallacy is architectural — it cannot be fixed by a better spreadsheet.

2. Analyst-Dependent Scoring Breaks Portfolio Comparability
Without AI pre-scoring, Five Dimensions assessments vary 0.5–1.0 points depending on the reviewer — PCV's guide acknowledges this explicitly. When the scoring rubric lives in an analyst's head, portfolio-wide comparisons are comparing different methodologies, not different investees.

3. Theory of Change Drift Goes Undetected
Without AI monitoring qualitative narrative submissions for language divergence from DD commitments, theory of change drift is invisible until an LP asks the question. By Year 3, a company may be measuring entirely different outcomes than those assessed during diligence — and nobody noticed.

4. Auditing Impact Claims Requires Reconstructing Evidence From Scratch
When DD records live in shared drives and quarterly data lives in a reporting platform, auditing any specific impact claim requires manually locating the DD memo version, finding the relevant passage, and comparing it to quarterly submissions across multiple file systems. At 40+ portfolio companies, this is not a workflow — it is a research project.
How the three approaches compare — Manual DD Workflow (spreadsheet + survey + gen AI), Reporting Platform (e.g., Upmetrics, ImpactMapper), and Sopact Sense + Impact Intelligence:

Five Dimensions Scoring
  • Manual DD workflow: analyst-dependent — the same rubric is applied differently by each reviewer; scores vary 0.5–1.0 points across analysts; portfolio comparability is unreliable
  • Reporting platform: structured templates for IRIS+ and Five Dimensions give consistent field collection, but scoring still requires manual analyst input; no AI pre-scoring before review
  • Sopact Sense: AI pre-scores every deal before any analyst opens the file — the same rubric applied consistently; reviewers start from a calibrated baseline with evidence citations; override tracking builds rubric calibration

DD-to-Monitoring Link
  • Manual DD workflow: none — the DD memo is filed after the investment decision; monitoring starts from scratch; quarterly forms use a generic template unconnected to individual DD commitments
  • Reporting platform: manual baseline entry at onboarding — a program officer can enter DD commitments into the platform, but this requires analyst time per company; no automatic extraction from DD documents
  • Sopact Sense: automatic — AI extracts commitments from DD documents at onboarding and pre-configures each company's monitoring form with its specific DD targets; the Verdict Fallacy closes before Q1 arrives

Impact Claims Auditing
  • Manual DD workflow: manual reconstruction — auditing any claim requires locating the DD memo, finding the relevant passage, and comparing it to quarterly submissions across multiple file systems; impractical at 40+ companies
  • Reporting platform: comparison within the platform — year-over-year comparison of submitted metrics is available, but limited to quantitative data entered into the platform; no detection of qualitative theory of change drift
  • Sopact Sense: automatic divergence detection — every quarterly submission is compared to the DD baseline; qualitative narrative is monitored for theory of change drift; risk-based escalation surfaces the highest-divergence companies without manual review

Qualitative Analysis
  • Manual DD workflow: manual review or non-reproducible ChatGPT analysis — open-ended responses and narrative updates either go unread or produce inconsistent outputs across sessions; Dimensions 1, 4, and 5 stay theoretical
  • Reporting platform: narrative fields are collected and displayed, primarily for human-written impact narratives; limited systematic AI analysis of qualitative submissions across the portfolio
  • Sopact Sense: AI codes every open-ended response against the configured rubric — consistent across all companies and all cycles; TOC drift detected in narrative language; Dimensions 1, 4, and 5 operationalized

Portfolio Scaling
  • Manual DD workflow: scales with analyst hours — each new deal or portfolio company requires new manual review setup; at 40+ portfolio companies, DD-to-monitoring continuity is practically impossible
  • Reporting platform: scales reasonably for reporting — new grantees or investees are added with manual baseline entry; audit capability is limited to what was entered versus what exists in source documents
  • Sopact Sense: rubric configured once — AI scoring applies to every new deal automatically; the portfolio grows from 10 to 80+ without a proportional increase in analyst time; the audit architecture scales with the portfolio

Best Fit
  • Manual DD workflow: seed-stage funds and grantmakers with fewer than 10 deals, where analyst context-holding is feasible and LP accountability pressure is low
  • Reporting platform: foundations and funds that need structured grantee or investee reporting portals with IRIS+ metric tracking and funder-customizable dashboards — where consistent data submission is the primary need
  • Sopact Sense: impact funds, DFIs, and grantmakers that need consistent Five Dimensions scoring, DD-to-monitoring continuity, auditable impact claims chains, and portfolio intelligence that compounds across cycles
What Sopact Sense + Impact Intelligence produces — impact DD deliverables
  • AI Five Dimensions scorecard: every deal scored with evidence citations — consistent rubric regardless of analyst, IC-ready before review begins
  • Living theory of change: extracted from DD documents + onboarding interview — both parties aligned on indicators before Q1 data collection begins
  • Impact claims audit trail: every quarterly submission automatically compared to DD commitments — divergences flagged as they appear
  • TOC drift detection: qualitative narrative monitoring flags theory of change language shifts from DD commitments — invisible to manual review at portfolio scale
  • Risk-based escalation: highest-divergence portfolio companies surfaced for priority IC review — without reconstructing evidence chains per company
  • LP-ready audit package: six report types per investee — scorecard, gap memo, IC brief, portfolio narrative, longitudinal trend, exit summary — generated overnight
Close the Verdict Fallacy — DD scoring that carries forward into monitoring, impact claims auditable from investment through exit.
See Impact Intelligence →

Step 4: From Impact DD to Continuous Portfolio Intelligence

Closing the Verdict Fallacy requires more than connecting DD to onboarding — it requires connecting every subsequent quarterly cycle back to the original investment thesis, automatically. This is what transforms impact due diligence from a screening exercise into what the IFC AIMM system calls "results measurement": evidence that the impact you predicted is the impact occurring.

For LP reporting, this means the six automated reports that Impact Intelligence generates per investee per quarter are not summaries of what companies submitted. They are comparisons of what companies committed to at DD against what they have delivered through every quarterly cycle — with qualitative evidence coded against the same rubric that was used during initial diligence. The LP who asks "is the impact thesis holding?" gets an evidence-linked answer, not a management assertion.

For Impact Funds
Your DD verdict doesn't have to expire at the investment decision.
Sopact reads every DD document, carries every Five Dimensions commitment forward, and generates six LP-ready reports per investee overnight — so the evidence lifecycle continues long after the IC meeting.
See Impact Intelligence →

For regulatory and LP audit contexts, this architecture produces what static DD processes cannot: a timestamped chain from investment thesis to quarterly outcome evidence, with qualitative stakeholder data coded consistently across cycles, and divergences between claimed and reported impact flagged automatically. The ESG portfolio management page covers the CSDDD effectiveness documentation specifically. The impact claims audit is the same structural requirement — evidence that what was claimed at investment is what is being delivered — with the Five Dimensions providing the organizing framework.

Watch
Impact Fund Intelligence — From DD Document to LP Report Without Context Loss
See the three-phase architecture that closes the Verdict Fallacy: Phase 1 reads every DD document and builds the Five Dimensions scoring baseline; Phase 2 carries every commitment forward into a Living Theory of Change before Q1 arrives; Phase 3 generates six LP-ready reports per investee overnight — so the evidence lifecycle never expires after the investment decision.
See the full fund lifecycle →

Frequently Asked Questions

What is impact investing due diligence?

Impact investing due diligence is the systematic assessment of an investment's potential for measurable social or environmental impact alongside financial returns — evaluated across the Five Dimensions of Impact (What, Who, How Much, Contribution, Risk). Effective impact investing due diligence connects screening findings to post-investment monitoring through persistent entity IDs, so DD evidence does not expire at the investment decision.

What is the Verdict Fallacy in impact due diligence?

The Verdict Fallacy is the structural error of treating impact DD as a one-time judgment — a score that answers "should we invest?" — rather than the first data point in a continuous evidence series. A static verdict cannot be proved correct, audited for accuracy, or connected to post-investment data. Sopact Sense closes the Verdict Fallacy by carrying every DD commitment forward automatically into quarterly monitoring through persistent entity architecture.

What are the Five Dimensions of Impact?

The Five Dimensions of Impact are the IMP/Impact Frontiers consensus framework: What outcome is pursued, Who the stakeholders are and how underserved they are, How Much change occurs (scale, depth, duration), what the Contribution (additionality) is, and what Risk exists that impact differs from expectations. Sopact Sense automatically scores every DD document against all five dimensions, with evidence citations from source documents — replacing analyst-dependent, inconsistent manual scoring.

What are best practices for auditing impact claims in impact investing?

Best practices for auditing impact claims center on a risk-based approach: concentrate verification on claims with the highest materiality and weakest evidence. This requires three connected inputs — the original DD commitment record, the subsequent quarterly reporting record, and qualitative stakeholder evidence — all linked through persistent entity IDs. Without that architecture, auditing impact claims means manually reconstructing the evidence chain from scratch for each portfolio company each cycle.

What is risk-based auditing in impact investing portfolio companies?

Risk-based auditing in impact investing portfolio companies means concentrating rigorous impact claim verification on the highest-risk, highest-materiality commitments rather than attempting uniform verification across every indicator. Sopact Sense enables risk-based auditing by automatically comparing each company's quarterly submissions to their DD baseline commitments, flagging divergences the moment they appear — enabling prioritized escalation before LP review rather than during it.

What is due diligence in impact investing?

Due diligence in impact investing is the pre-investment process of assessing a company's impact thesis, theory of change, stakeholder evidence, and organizational capacity for impact delivery — scored against a structured framework like the Five Dimensions of Impact. Effective impact due diligence connects the pre-investment assessment to post-investment monitoring so the evidence lifecycle continues after the investment decision rather than resetting.

What is impact due diligence?

Impact due diligence is the systematic assessment of potential investees or grantees on their capacity for social or environmental impact — evaluating theory of change, outcome evidence, stakeholder experience, and organizational commitment. It differs from financial due diligence in that it assesses what is inherently qualitative and longitudinal: whether a company's stated impact thesis is credible, measurable, and connected to a monitoring system that can validate it over time.

What is portfolio due diligence for impact funds?

Portfolio due diligence for impact funds is the application of consistent DD standards across a growing pool of investees — ensuring that every portfolio company is scored against the same rubric, that DD findings are connected to post-investment monitoring, and that portfolio-wide auditing of impact claims is tractable. At 30+ portfolio companies, portfolio due diligence is only feasible when all entity records share a common architecture rather than existing in separate analyst folders.

How long does impact investing due diligence take?

Traditional impact investing due diligence takes 2–6 weeks per deal — manual document review, spreadsheet-based scoring, and email-thread deliberation. Building the scoring rubric alone requires 4–12 months according to PCV's implementation research. Sopact Sense generates AI-scored assessments from uploaded documents with evidence citations in minutes, making consistent Five Dimensions scoring across a full deal pipeline tractable without specialist staff.

How does automated impact investing due diligence work?

Automated impact investing due diligence uses AI to score uploaded DD documents against a configured rubric — Five Dimensions, ESG criteria, organizational commitment indicators — with evidence citations from source materials. Sopact Sense automates the scoring layer while keeping the investment decision with human reviewers. The automation closes the gap between what documents contain and what gets extracted into a usable, comparable assessment — and carries that assessment forward automatically into post-investment monitoring.

Impact Funds · Foundations · DFIs
Close the Verdict Fallacy before your LP asks the question.
Every fund operating with static DD verdicts is one LP question away from discovering the gap between what companies claimed at investment and what they've delivered since. Sopact reads every DD document, scores every deal against the Five Dimensions, and carries every commitment forward — so impact claims are auditable from the investment decision through exit, not just at the IC meeting.
See How Impact Intelligence Works →
Book a 20-minute session with your own DD documents
Five Dimensions scored before IC opens the file
DD commitments carried forward automatically
TOC drift detected in quarterly narratives
Impact claims auditable from DD through exit
