
ESG Due Diligence: Checklist, Framework & AI Platform

ESG due diligence checklist: 24-point framework across E, S, and G pillars. AI scoring and persistent entity tracking for impact funds and supply chains.

Pioneering the best AI-native application & portfolio intelligence platform
Updated April 21, 2026
Use Case

ESG Due Diligence in 2026: Checklist, Framework, and AI-Native Platform for Impact Funds and Sustainable Investors

Your investment committee meets in two hours. Three deals are on the table. Someone asks, "What's the ESG risk profile on these?" You have provider scores. You don't have understanding. The scores disagree with each other by 30 points, they were generated eight months ago from a static questionnaire, and none of them reflect what portfolio company stakeholders actually reported last quarter. That is The Scoring Trap — what happens when ESG due diligence optimizes for producing an auditable number instead of building actual understanding that holds up across a 3–5 year investment horizon.


The Scoring Trap is not a rating-agency problem. It is a structural gap in how ESG data is collected, linked, and analyzed across the full investment lifecycle — from pre-commitment due diligence through quarterly portfolio monitoring to LP reporting and CSDDD effectiveness documentation. This guide shows how Sopact Sense runs ESG due diligence as continuous intelligence rather than a frozen snapshot: structured DDQ collection at intake, AI reading every uploaded sustainability document with citation evidence, persistent portfolio company IDs connecting commitment to outcome, and longitudinal evidence chains that satisfy both LP reporting and regulatory effectiveness standards.

Use Case · ESG Due Diligence
ESG due diligence as continuous intelligence — not a score that disagrees with itself across providers.

Sopact Sense runs ESG due diligence from intake through portfolio monitoring with persistent entity IDs, anchored DDQ scoring, AI reading every uploaded document, and citation-level evidence on every rating. Built for impact funds, development finance institutions, and sustainability-led investors escaping The Scoring Trap.

The Scoring Trap — Same Company, Three Providers vs. Continuous Evidence
[Diagram] Three rating providers, one company, one moment: Provider A (MSCI methodology) scores 47, Provider B (Sustainalytics) 61, Provider C (ISS ESG) 74, a 27-point spread. Sopact Sense, by contrast: continuous evidence on one entity ID across DD, Q1–Q4, and the LP report, with citation evidence attached at every cycle.
The Scoring Trap
ESG due diligence that optimizes for producing an auditable score instead of building real understanding. The same company scores 47 with one provider and 74 with another, scores are re-pulled annually while reality moves on daily, and a point-in-time number cannot satisfy CSDDD's requirement to prove effectiveness over time. Sopact Sense enables the progression: SROI-style scoring as the entry point, contextual intelligence linked by persistent entity IDs as the destination.
  • 50% score variance: same entity, 3 providers
  • 80% of assessment time spent cleaning, not analyzing
  • 1 entity ID from intake through exit
  • CSDDD-ready evidence chains by default

ESG Due Diligence Best Practices · 2026
Six principles that separate AI-native ESG DD from provider-score aggregation

What changes when every sustainability document gets read, every DDQ field is evidence-anchored, and The Scoring Trap stops setting the terms of investment committee conversations.

See Impact Intelligence →
01
Rubric Anchoring
Anchor every ESG rubric level with observable evidence — not adjectives

"Strong governance" scored privately by each analyst produces 12 private interpretations. Anchored criteria specify what documented evidence qualifies for each rating — board independence percentages, whistleblower reporting data, ESG oversight charter — so AI and human reviewers reach the same conclusion.

Provider scores vary 30–50 points for the same entity because each uses private interpretations.
02
Document Reading
Read every uploaded sustainability document with citation evidence

Sustainability reports, policy PDFs, certifications, and incident logs contain the real ESG signal — and the documents most likely to be skimmed at volume. AI processes every page with the same rubric applied to DDQ fields, producing scores tied to specific source content.

Without citation evidence, a score is just an opinion with a number attached.
03
Continuous State
Treat ESG DD as a continuous state — not a one-time event

The day investment closes, ESG DD should not end. It should continue as monitoring against the commitments captured at intake. Persistent entity IDs link every quarterly update, financial document, and worker survey back to the original DD thesis.

Event-based DD produces a filing cabinet. State-based DD produces intelligence.
04
Disaggregation
Structure disaggregation at collection — gender, region, sector

Development finance and gender-lens funds need gender-disaggregated data. Supply chain DD needs region-specific labor data. Structure these at intake — disaggregation retrofitted from a spreadsheet export is incomplete, inconsistent, and unauditable.

Retrofitting disaggregation from exports produces "not collected" cells that cannot be recovered.
05
CSDDD Evidence
Generate CSDDD-ready effectiveness chains by default

Regulators increasingly require evidence that ESG DD is effective at preventing harm — not just that DD was performed. Only longitudinal evidence linking commitments to verified outcomes across cycles satisfies this. Point-in-time scores cannot.

A provider-score subscription does not produce CSDDD evidence. An evidence chain does.
06
Integrated IMM
Connect ESG DD to impact measurement in the same data layer

For impact funds, ESG DD and impact measurement must share the same entity records. When they live in separate systems, neither produces defensible intelligence — LPs see one set of numbers, regulators another, and boards a third.

Running ESG DD separately from IMM means rebuilding the same entity records twice.

What is ESG due diligence?

ESG due diligence is the structured evaluation of a company's environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship — and the continuous verification of that performance after the relationship begins. It covers three pillars — environmental impact and climate risk, social and labor practices, and governance quality — assessed through a combination of structured DDQs, document review, stakeholder feedback, and longitudinal monitoring. AI-native ESG due diligence reads every uploaded document end-to-end against anchored rubric criteria, producing citation-level evidence rather than provider-dependent scores that vary up to 50% for the same entity.

For impact funds, ESG due diligence sits alongside impact thesis validation — a social or environmental outcome theory must be evidenced as clearly as financial projections. For ESG-integrated private equity and sustainable investors, ESG DD is a pre-commitment risk screen that must hold up against CSDDD, SFDR, and LP reporting requirements throughout the holding period. For supply chain teams, ESG due diligence applies continuously across supplier networks with gender-disaggregated, region-specific data structures.

What is an ESG due diligence checklist?

An ESG due diligence checklist is a structured data collection instrument covering the environmental, social, and governance dimensions that must be verified before a commitment and re-verified throughout the investment or vendor relationship. A functional ESG DD checklist is not a static 24-point PDF — it is a living rubric with observable evidence anchors at every scoring level, integrated into the DDQ and re-run against actual performance at each monitoring cycle.

Standard ESG DD checklist pillars include environmental (climate risk exposure, emissions disclosure, resource use, biodiversity, environmental management systems), social (workforce practices, health and safety, human rights, community engagement, supply chain labor standards), and governance (board independence, audit quality, whistleblower mechanisms, anti-bribery policies, ESG oversight accountability). Each pillar needs anchored scoring criteria specifying what observable evidence qualifies for each rating, not adjectives like "adequate" or "strong." Real-estate-focused ESG due diligence adds environmental liability, contamination history, and building energy performance. Supply chain ESG DD adds gender-disaggregated labor data, forced labor verification, and supplier tier mapping.

What is an ESG DDQ and how is it different from a score?

An ESG DDQ (Due Diligence Questionnaire) is the structured instrument an investor, funder, or procurement team sends to a prospective investee or supplier to collect standardized ESG data. It typically covers 40–150 questions across the three pillars, combining closed-ended compliance checks (certifications, policies, incident counts) with open-ended narrative responses (theory of change, corrective action plans, stakeholder engagement detail). A well-designed DDQ is evidence-anchored — every question maps to a rubric dimension, every open-ended response has AI coding instructions, and every financial field has validation rules at input.

An ESG score is what a rating provider (MSCI, Sustainalytics, ISS) produces from publicly available data. An ESG DDQ is what you collect directly from the entity being assessed. Scores vary up to 50% across providers for the same company. DDQs — when structured for AI analysis from the moment of submission — produce citation-linked evidence that holds up across regulatory review and LP diligence.

What is The Scoring Trap?

The Scoring Trap is the failure mode where ESG due diligence optimizes for producing an auditable number instead of building real understanding. Scores are provider-dependent, point-in-time, and non-reproducible — the same company scores 47 with one provider and 74 with another. For early-stage screening, a score is a reasonable anchor. But as your relationship with an investee, supplier, or grantee deepens, scores become actively misleading because they flatten the context that actually drives risk and opportunity.

Three structural failures define The Scoring Trap. First, scores are frozen. They capture one moment and are re-pulled annually or at exit, missing 11 months of context change per cycle. Second, scores reflect self-reported compliance, not stakeholder reality. A governance score can remain flat for three quarters while a critical labor risk emerges in worker-voice data that nobody reads. Third, scores cannot prove effectiveness. Under CSDDD, regulators require evidence that your ESG due diligence is actually "effective at preventing harm" — a single point-in-time score cannot demonstrate that. Only longitudinal evidence connected to the same entities across time can.

The progression Sopact Sense enables: SROI-style scoring as the entry point, contextual intelligence — theory of change data, qualitative stakeholder feedback, financial document analysis, persistent entity IDs — as the destination. Escaping The Scoring Trap means designing ESG data collection inside a single system from first contact, so qualitative and quantitative data link to the same entity record from day one.

Step 1: Identify your ESG due diligence situation

ESG due diligence means different things to an impact fund manager screening 40 portfolio companies, a development finance institution evaluating gender-smart investments across Sub-Saharan Africa, and a procurement team verifying 200 supplier compliance records for CSDDD. The data you need, the timeline, and the depth of analysis differ fundamentally. Before building any framework, identify which situation you are in — then design your data collection accordingly. The three archetypes below cover the patterns that appear in 90% of ESG DD engagements.

Three ESG DD Archetypes · Same Scoring Trap
Whichever kind of ESG due diligence you run — the Trap shows up at the same point

Impact fund portfolio assessment, development finance deal screening, supply chain supplier compliance — each hits The Scoring Trap at a different volume and with different data structures, but the structural failure is identical: scores substitute for understanding, and nothing stays connected from intake to outcome.

An impact fund portfolio is expanding from 19 to 40+ companies. Every new onboarding consumes 2–3 hours of analyst time pulling theory of change data from investment memos. The CRM tracks deals but holds zero impact or ESG context. Quarterly data arrives in completely different formats from each company — 80% of the team's time goes to cleaning, not analyzing. LPs want ESG trend data at the portfolio level, and it takes three weeks to assemble.

Moment 01 · Portfolio intake: DDQ · investment memo · logic model extraction
Moment 02 · Quarterly monitoring: ESG pillar scores · financial updates · trend vs DD baseline
Moment 03 · LP report + CSDDD: portfolio trends · effectiveness evidence chain
Traditional Stack
Scores from three providers, disconnected from portfolio reality
  • MSCI / Sustainalytics / ISS scores vary 30–50 points for the same company
  • Logic models assembled manually from memos — 2–3 hours per company
  • CRM tracks deals with zero impact or ESG context
  • No persistent company ID connecting intake to quarterly updates
  • LP reports three weeks to assemble, outdated when delivered
With Sopact Sense
One entity ID from intake through exit — continuous intelligence
  • AI extracts theory of change from investment memos automatically
  • Persistent company ID connects DDQ, quarterly updates, and financial docs
  • Every ESG score traced to specific source content with citations
  • Quarterly comparisons automatic — not rebuilt from scratch
  • LP reports generated the night the quarter closes

Step 2: Design your ESG DDQ and scoring rubric

The highest-leverage action in ESG due diligence happens before any questionnaire is sent — designing a rubric with observable evidence anchors at every scoring level. An unanchored rubric criterion like "strong governance" means different things to every reviewer. Without calibration, DDQ completion rates look high while rubric interpretation fragments privately across your analyst team.

Anchored rubric criteria specify what evidence qualifies for each scoring level. A governance 5 might require a board with ≥40% independent directors, a documented whistleblower mechanism with reporting data, and an ESG oversight committee with defined charter. A governance 3 might describe the board qualitatively without documenting independence percentages. These anchors are what allow AI to score consistently across 40 portfolio companies or 200 suppliers and what allow human reviewers to calibrate against the same standard. Sopact Sense translates your existing DDQ — in any format, PDF, spreadsheet, or document — into AI-ready anchors while preserving your specific rubric dimensions and weights.
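The anchoring idea above can be sketched as a data structure. This is a minimal illustration in Python; the anchor names and evidence fields are hypothetical, not Sopact Sense's actual schema — the point is that each rubric level lists observable evidence requirements rather than adjectives, so any reviewer (or model) applying it reaches the same score.

```python
# Evidence-anchored governance rubric (illustrative field names only).
# Level 5 demands documented independence and oversight evidence;
# level 3 accepts a qualitative board description without independence data.
GOVERNANCE_ANCHORS = {
    5: {"min_independent_pct": 40,
        "required_docs": ["whistleblower_mechanism_with_data", "esg_oversight_charter"]},
    3: {"min_independent_pct": 0,
        "required_docs": ["board_composition_disclosure"]},
}

def score_governance(evidence: dict) -> int:
    """Return the highest anchor level whose evidence requirements are met."""
    for level in sorted(GOVERNANCE_ANCHORS, reverse=True):
        anchor = GOVERNANCE_ANCHORS[level]
        docs = evidence.get("documents", [])
        if (evidence.get("board_independence_pct", 0) >= anchor["min_independent_pct"]
                and all(d in docs for d in anchor["required_docs"])):
            return level
    return 1  # no anchor met: insufficient documented evidence

company = {
    "board_independence_pct": 45,
    "documents": ["whistleblower_mechanism_with_data", "esg_oversight_charter"],
}
print(score_governance(company))  # -> 5
```

Because the criteria are explicit, two analysts (or an analyst and an AI scorer) given the same evidence cannot diverge the way two rating providers with private interpretations do.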

DDQ design for impact funds should integrate theory of change extraction alongside ESG scoring. Development finance DD needs gender-disaggregated fields structured at collection, not retrofitted from exports. Supply chain DD needs worker-voice survey instruments that code open-ended feedback for themes like forced overtime, retaliation, and wage issues. In every case, the DDQ is not a questionnaire — it is a data collection architecture that determines what kind of intelligence you can produce for the next five years.

Step 3: Run AI scoring with citation evidence

AI-native ESG due diligence reads every submitted document — DDQ responses, sustainability reports, policy PDFs, incident logs, certifications, financial statements — against rubric criteria in parallel. The output is not a score. It is a scored dataset showing composite ratings, per-pillar breakdowns, and the specific content in each document that generated each rating, with citations back to source paragraphs.

For impact fund DD on 40 portfolio companies, AI processes every investment memo, onboarding transcript, and supplementary document — extracting logic model structure, populating financial metrics, and flagging ESG risk exposures — in the time it takes to make coffee. For development finance DD on 200 SME applications, AI scores every submission against the gender-smart rubric before a human reviewer opens a file; the review panel gets 40 ranked finalists with citation evidence rather than 200 raw applications. For supply chain DD across 150 suppliers, AI thematic analysis of open-ended worker feedback surfaces emerging risks by region within hours of survey close.

Citation evidence is non-negotiable. Every score must trace to the specific content that generated it. This is what satisfies CSDDD effectiveness requirements, what holds up under LP diligence, and what makes rubric iteration possible — because when you adjust a criterion, you can see exactly which entities' scores shift and why. See how this works across impact measurement and management, application review, and impact reporting.
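The "score traced to source content" requirement implies a record shape like the following. This is a hedged sketch (assumed structure, not Sopact Sense's API): each pillar rating carries pointers to the exact document and paragraph that generated it, and an unevidenced score can be flagged before it reaches a committee.

```python
# Citation-anchored score record (illustrative dataclasses, not a real API).
from dataclasses import dataclass, field

@dataclass
class Citation:
    document: str   # e.g. "sustainability_report_2025.pdf"
    paragraph: int  # paragraph index within the source document
    excerpt: str    # the quoted content supporting the rating

@dataclass
class PillarScore:
    pillar: str                  # "E", "S", or "G"
    rating: int                  # anchored rubric level, 1-5
    citations: list = field(default_factory=list)

score = PillarScore(
    pillar="G",
    rating=4,
    citations=[Citation("governance_charter.pdf", 12,
                        "The board comprises 5 of 11 independent directors...")],
)

def is_evidenced(s: PillarScore) -> bool:
    # A score without citations is an opinion with a number attached.
    return len(s.citations) > 0

print(is_evidenced(score))  # -> True
```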

Provider Score Aggregation vs. AI-Native ESG DD
What changes when ESG data stays connected from intake through CSDDD effectiveness proof

Side-by-side across six dimensions — data architecture, document reading, scoring consistency, disaggregation, continuous monitoring, regulatory evidence — the structural differences that decide whether your ESG DD holds up under LP diligence and CSDDD review.

Risk 01
Provider score variance up to 50%
Same company scores 47 with one provider, 74 with another. A score is an opinion with a number attached — not evidence.
Scores are a starting anchor, not a destination.
Risk 02
Data fragments across six tools
DDQ in one system, audit findings as PDFs, CAPs in spreadsheets, worker surveys separately. No shared entity ID.
80% of assessment time spent cleaning, not analyzing.
Risk 03
Narrative responses go unread
Open-ended DDQ fields contain the real signal — and are the most likely to be skimmed at volume across 40+ portfolio companies.
The documented ESG story never reaches the investment committee.
Risk 04
CSDDD effectiveness unprovable
Regulators require proof that DD prevents harm. A score cannot demonstrate effectiveness — only longitudinal evidence can.
Platforms designed around point-in-time scores cannot produce effectiveness chains.
Feature Comparison
Provider score aggregation vs. AI-native ESG due diligence
Capability · Provider scores / Traditional · Sopact Sense (AI-native)
01 · Data Architecture
Entity identity
Across assessment cycles
Per-cycle records — no persistent ID
Intake data and monitoring data live separately with no automatic linkage.
One persistent entity ID from intake through exit
Every instrument — DDQ, quarterly update, financial doc, audit — links to the same record.
Collection source
Where ESG data originates
Imported from 4–6 upstream tools
Qualtrics surveys, email PDFs, spreadsheet uploads, separate audit platforms.
Designed inside Sopact Sense from first contact
Clean data by architecture — not retrofitted analysis on messy imports.
02 · Document Reading
Sustainability documents read
Policies, reports, certifications
Provider scores from public data only
Entity-submitted PDFs, policies, and onboarding transcripts never processed at scale.
100% — every uploaded document end-to-end
Citation evidence linking each score to the specific source paragraph.
Open-ended DDQ responses
Narrative sections
Skimmed, not analyzed thematically
The most valuable signal goes unread across 40+ portfolio companies or 200+ suppliers.
AI thematic coding across the full pool
Pattern analysis surfaces emerging risks before they appear in quantitative data.
03 · Scoring Consistency
Rubric consistency
Across entities and reviewers
30–50 point variance between providers
Each provider uses private interpretations of the same ESG dimensions.
One anchored rubric applied to every entity
Observable evidence anchors at every scoring level — AI and human reviewers reach the same conclusion.
Rubric changes mid-cycle
Iteration capability
Locked — re-scoring requires re-engagement
New LP requirement mid-year cannot be applied retroactively.
Standard — AI re-scores the full pool automatically
Pillar weights, anchor descriptions, new criteria apply across all entity records without invalidating history.
04 · Disaggregation
Gender-disaggregated data
Funder-required dimensions
Retrofitted from export
"Not collected" cells surface after the fact, cannot be recovered.
Structured at collection point
Women in leadership, gender of beneficiaries, pay equity — built into the data model from day one.
05 · Continuous Monitoring
Post-close ESG tracking
Intake to exit
Annual provider re-score
11 months of reality change between snapshots. Governance risks emerge invisibly.
Quarterly updates linked to the same entity record
Trend vs DD baseline automatic — not reconstructed from exports.
Corrective action tracking
Effectiveness over time
Separate spreadsheet, no re-survey linkage
"Did the CAP work?" is answered by memory, not evidence.
Linked to entity record with re-survey comparison
Every CAP tracked to re-survey evidence showing whether the risk resolved.
06 · Regulatory Evidence
CSDDD effectiveness chain
Proof DD prevents harm
Cannot be assembled from point-in-time scores
The architecture was never designed to preserve the cross-cycle evidence CSDDD requires.
Assembled by default — intake to outcome
Commitment → mid-cycle evidence → CAP → re-verification → outcome, all on the same entity.
ESG due diligence in 2026 cannot stop at a score. LPs ask for evidence. Regulators ask for effectiveness proof. Boards ask whether commitments held up. The platforms that answer those questions started with data architecture — not score aggregation.
Build with Sopact →

Step 4: Connect DD to continuous portfolio monitoring

The moment an investment closes or a supplier is onboarded, ESG due diligence should not end. It should continue as monitoring against the commitments captured at intake. Most ESG DD platforms fail at this transition — the DDQ data lives in one tool, quarterly reports arrive as PDFs in email, audit findings live in a third system, and worker-voice surveys live in a fourth. Without persistent entity IDs linking every instrument to the same company, monitoring becomes archaeological: compiled manually from disconnected sources, months after the decision window closes.

Sopact Sense assigns a persistent ID to every portfolio company, investee, or supplier at the point of first contact. Every subsequent data instrument — quarterly updates, financial document submissions, worker surveys, corrective action tracking, exit reports — connects to that same ID automatically. When your investment committee asks, "How has the governance score on Company X moved since intake?", the answer is one query, not three weeks of manual reconciliation. When CSDDD regulators ask you to prove your ESG DD is effective, the evidence chain is already assembled — intake commitment to mid-cycle verification to outcome measurement, all on the same entity record.
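The "one query, not three weeks" claim rests on every instrument sharing the same entity key. A minimal in-memory sketch (hypothetical model, illustrative names) shows why the linkage makes longitudinal questions trivial:

```python
# Persistent-entity linkage sketch: every instrument attaches to one
# entity ID, so a longitudinal question is a single lookup rather than
# a cross-tool reconciliation exercise.
from collections import defaultdict

# entity_id -> list of (cycle, instrument, governance_score), in submission order
records = defaultdict(list)

def attach(entity_id, cycle, instrument, governance_score):
    records[entity_id].append((cycle, instrument, governance_score))

attach("co-0042", "DD", "ddq", 3)
attach("co-0042", "Q1", "quarterly_update", 3)
attach("co-0042", "Q2", "quarterly_update", 4)

def governance_trend(entity_id):
    """'How has governance moved since intake?' answered in one call."""
    rows = records[entity_id]
    return rows[-1][2] - rows[0][2]  # latest score minus intake score

print(governance_trend("co-0042"))  # -> 1
```

Without the shared `entity_id` key, each of those three rows would live in a different tool, and the trend would have to be reconstructed by hand each quarter.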

This is the difference between ESG due diligence as an event and ESG due diligence as a state. Event-based DD produces a filing cabinet. State-based DD produces intelligence. Learn more about continuous portfolio intelligence and how it unifies DD, monitoring, and LP reporting into a single evidence layer.

Step 5: Generate CSDDD-ready evidence chains

Under the EU Corporate Sustainability Due Diligence Directive (CSDDD), regulators require evidence that your ESG due diligence is "effective at preventing harm" — not just that it was performed. This is a fundamental shift from compliance-as-filing to compliance-as-proof. A point-in-time score cannot satisfy this. Only longitudinal evidence linking commitments at intake to verified outcomes across multiple reporting cycles can.

Sopact Sense generates CSDDD-ready evidence chains by default. Every entity has a persistent ID. Every DDQ submission is scored with citation evidence. Every corrective action is linked to the entity record. Every follow-up assessment is automatically compared to the prior cycle on the same entity. When regulators, auditors, or LPs ask for proof that your ESG DD is working, the chain is already assembled — intake commitment → mid-cycle evidence → corrective action → re-verification → outcome. This is what LPs increasingly expect from 2026 onward, and what competitors using provider-score aggregation cannot produce.
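The chain described above can be expressed as a completeness check over stages on one entity record. The stage names below are illustrative (CSDDD does not mandate this exact schema); the point is that effectiveness is demonstrable only when every link is present on the same entity.

```python
# CSDDD-style effectiveness chain sketch (illustrative stage names).
STAGES = ["intake_commitment", "mid_cycle_evidence", "corrective_action",
          "re_verification", "outcome"]

def chain_complete(entity_record: dict) -> bool:
    """True when the entity carries evidence for every stage of the chain."""
    return all(entity_record.get(stage) is not None for stage in STAGES)

entity = {
    "intake_commitment": "reduce Scope 1 emissions 15% by 2026",
    "mid_cycle_evidence": "Q2 emissions report, -8% vs baseline",
    "corrective_action": "fleet electrification CAP opened Q2",
    "re_verification": "Q4 re-survey confirms CAP implemented",
    "outcome": "-16% vs baseline at year end",
}
print(chain_complete(entity))  # -> True
```

A point-in-time score is a single stage of this chain at best; the check fails the moment any later stage is collected in a disconnected system.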

Step 6: Common mistakes in ESG due diligence

Mistake 1: Treating provider scores as the deliverable. Scores are a starting anchor, not a destination. Provider variance of 30–50 points for the same entity makes them unreliable as standalone decision tools. Move to contextual intelligence as data relationships deepen.

Mistake 2: Using the same DDQ for every deal type. A gender-lens facility in Sub-Saharan Africa needs different anchor criteria than a mid-market PE ESG screen in North America. Configure rubric dimensions per fund or per track.

Mistake 3: Importing from six tools instead of designing clean collection. Retrofitting analysis on top of messy data from Qualtrics, SurveyMonkey, and spreadsheets consumes 80% of assessment time. Start clean with data collection designed for AI analysis at the source.

Mistake 4: Static DDQ, no follow-up instruments. A DDQ filled at intake and never re-verified becomes an archaeological artifact by Year 2. Build follow-up instrument timing into the setup so persistent ID linkage is configured from day one.

Mistake 5: No thematic analysis of open-ended responses. Narrative DDQ fields contain the most valuable signal — and the least likely to be read at volume. AI thematic coding surfaces patterns across the full applicant or supplier pool that no manual reader could assemble.

Mistake 6: Treating ESG DD as separate from impact measurement. For impact funds especially, ESG DD and impact measurement and management must share the same entity records and the same data infrastructure. When they live in separate systems, neither produces defensible intelligence.

Masterclass
Impact Measurement & Management in the Age of AI — the architecture behind continuous ESG DD
See Impact Intelligence →

Frequently Asked Questions

What is ESG due diligence?

ESG due diligence is the structured evaluation of a company's environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship — and the continuous verification of that performance afterward. It covers three pillars (environmental, social, governance) assessed through structured DDQs, document review, stakeholder feedback, and longitudinal monitoring. AI-native ESG due diligence reads every uploaded document end-to-end against anchored rubric criteria, producing citation-level evidence rather than provider-dependent scores that vary up to 50% for the same entity.

What is an ESG due diligence checklist?

An ESG due diligence checklist is a structured data collection instrument covering environmental, social, and governance dimensions that must be verified before a commitment and re-verified throughout the investment or vendor relationship. A functional ESG DD checklist is not a static PDF — it is a living rubric with observable evidence anchors at every scoring level, integrated into the DDQ and re-run at each monitoring cycle. Standard pillars include climate risk and environmental management, workforce and human rights, and governance quality.

What is an ESG DDQ?

An ESG DDQ (Due Diligence Questionnaire) is the structured instrument an investor, funder, or procurement team sends to a prospective investee or supplier to collect standardized ESG data. It typically covers 40–150 questions combining closed-ended compliance checks with open-ended narrative responses. A well-designed ESG DDQ is evidence-anchored — every question maps to a rubric dimension, every open-ended response has AI coding instructions, and every field has validation rules at input.

What is The Scoring Trap?

The Scoring Trap is the failure mode where ESG due diligence optimizes for producing an auditable number instead of building real understanding. Scores are provider-dependent (varying up to 50% for the same entity), point-in-time (captured once and re-pulled annually), and non-reproducible (reflecting self-reported compliance rather than stakeholder reality). Scores cannot satisfy CSDDD's requirement to prove effectiveness over time — only longitudinal evidence connected to the same entities across cycles can. Escaping The Scoring Trap means designing ESG data collection inside a single system from first contact.

What is an ESG due diligence framework?

An ESG due diligence framework is the methodology guiding how an organization collects, scores, and monitors ESG data across the investment or supplier lifecycle. Common frameworks include SFDR (Sustainable Finance Disclosure Regulation) for EU investors, CSDDD (Corporate Sustainability Due Diligence Directive) for large companies operating in the EU, IFC Performance Standards for development finance, and the ILPA ESG DDQ for institutional LPs evaluating fund managers. A good framework defines rubric dimensions, scoring weights, reporting cadence, and effectiveness evidence requirements — all configured in the data collection instrument at design time.

What is an ESG due diligence platform?

An ESG due diligence platform is the software system that hosts the DDQ, runs AI scoring against the rubric, manages entity records with persistent IDs, and generates reports for investment committees, LPs, or regulators. AI-native platforms read every uploaded document with citation evidence and connect DD data to continuous portfolio monitoring. Sopact Sense is an AI-native ESG DD platform purpose-built for impact funds, development finance institutions, and sustainability-led investors — combining DDQ design, rubric scoring, longitudinal monitoring, and CSDDD evidence chains in a single platform.

What is environmental due diligence?

Environmental due diligence is the subset of ESG due diligence focused on environmental risk exposure — climate risk, emissions, resource use, biodiversity, contamination history, and environmental management systems. For real estate transactions, environmental due diligence additionally covers site contamination, remediation liability, building energy performance, and regulatory compliance at the asset level. For investment DD, environmental due diligence assesses the company's climate alignment, transition plan, and physical-risk exposure. The data structures differ between asset-level and entity-level environmental DD — both require anchored scoring criteria and ongoing monitoring, not one-time assessment.

What is gender-smart ESG due diligence?

Gender-smart ESG due diligence integrates gender-disaggregated data collection and gender-lens scoring criteria into the DDQ at the point of submission. Standard dimensions include women in leadership percentages (typical target 40%), gender of primary beneficiaries, gender pay equity indicators, and theory of change for gender outcomes. Development finance institutions and gender-lens funds require gender-smart DD because funder requirements increasingly mandate gender-disaggregated evidence. The key architectural decision is structuring gender fields at collection, not retrofitting from a spreadsheet export after submission.
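"Structured at collection, not retrofitted" can be made concrete with a submission-time validator. This is a minimal sketch with hypothetical field names: required disaggregation fields are checked when the DDQ is submitted, so a "not collected" gap can never surface later in an export.

```python
# Disaggregation enforced at the collection point (illustrative field names).
REQUIRED_GENDER_FIELDS = ["women_in_leadership_pct", "beneficiary_gender_split",
                          "pay_equity_ratio"]

def validate_submission(ddq: dict) -> list:
    """Return required disaggregation fields missing or empty at intake."""
    return [f for f in REQUIRED_GENDER_FIELDS if ddq.get(f) is None]

submission = {"women_in_leadership_pct": 42, "beneficiary_gender_split": None}
print(validate_submission(submission))
# -> ['beneficiary_gender_split', 'pay_equity_ratio']
```

The same pattern applies to region-specific labor fields in supply chain DD: validation at input is what makes the data model auditable from day one.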

What is CSDDD and how does it affect ESG due diligence?

CSDDD is the EU Corporate Sustainability Due Diligence Directive, which requires large companies operating in the EU to identify, prevent, and remediate adverse human rights and environmental impacts across their value chains. The directive requires evidence that due diligence is "effective at preventing harm" — not just that DD was performed. This is a fundamental shift: a point-in-time score cannot satisfy this standard. Only longitudinal evidence linking commitments at intake to verified outcomes across multiple reporting cycles can. ESG DD platforms must generate CSDDD-ready evidence chains by default, with persistent entity IDs connecting intake to effectiveness proof.
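The difference between a point-in-time score and an effectiveness chain can be shown in a few lines. The sketch below is a simplified illustration, assuming a hypothetical `EvidenceEvent` record and stage names; it is not the directive's required format or any platform's actual API.

```python
from dataclasses import dataclass

# Hypothetical evidence record: every event is appended under the same
# persistent entity ID, across reporting cycles.
@dataclass(frozen=True)
class EvidenceEvent:
    entity_id: str
    cycle: str   # e.g. "2026-Q1"
    stage: str   # "commitment" | "evidence" | "corrective_action"
                 # | "re_verification" | "outcome"
    detail: str

def effectiveness_chain(events, entity_id):
    """Checks that one entity's record spans commitment through verified
    outcome. A point-in-time score only ever produces the first stage.
    (Stage ordering is omitted here for brevity.)"""
    stages = {e.stage for e in events if e.entity_id == entity_id}
    required = {"commitment", "evidence", "corrective_action",
                "re_verification", "outcome"}
    return required <= stages
```

The point of the sketch: "effective at preventing harm" is a property of the whole chain on one entity ID, which is why isolated annual snapshots cannot satisfy the standard.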

Which ESG due diligence platform is best for impact funds?

The best ESG DD platform for impact funds depends on portfolio size, LP reporting requirements, and whether ESG DD is integrated with impact measurement or run separately. For impact funds with 10+ portfolio companies that need AI-extracted logic models, persistent entity IDs, and integrated impact measurement alongside ESG scoring, Sopact Sense is purpose-built for the continuous intelligence model. Provider score aggregators (MSCI, Sustainalytics) serve a different need — early-stage screening based on public data. The two are complementary: Sopact Sense handles the DD engagement and longitudinal monitoring, while provider scores can feed in as one input signal.


How much does ESG due diligence software cost?

ESG due diligence software pricing varies widely by scope. Provider score subscriptions (MSCI, ISS, Sustainalytics) typically run $15,000–$75,000 per year depending on coverage. Traditional DDQ platforms (for private markets DD) run $20,000–$100,000+ per year. AI-native platforms like Sopact Sense scale with portfolio size and use-case complexity. Request a walkthrough for pricing specific to your portfolio size, LP reporting requirements, and integration needs.

How do you run ESG due diligence for supply chain and supplier networks?

Supply chain ESG due diligence applies the ESG DD rubric across supplier networks with two architectural additions: persistent supplier IDs connecting DDQ submissions to worker-voice surveys to corrective action tracking, and AI thematic analysis of open-ended worker feedback at scale. For CSDDD compliance, the supplier DDQ must generate evidence that due diligence is effective at preventing harm — which requires longitudinal data on the same supplier entities, not isolated snapshots. Tools that simplify ESG due diligence for supplier networks must handle both the structured rubric scoring and the unstructured worker-voice analysis within a single connected system.
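The value of keeping both data types on one supplier ID is that they can be cross-checked. Here is a minimal sketch, assuming hypothetical score and theme inputs (the function name, the 1–5 score scale, and the `"unsafe_conditions"` theme label are all illustrative, not a real API).

```python
from collections import defaultdict

# Hypothetical join: structured rubric scores and unstructured worker-voice
# themes are keyed by the same persistent supplier ID.
def flag_score_voice_gaps(ddq_scores, worker_themes, passing_score=3):
    """ddq_scores: {supplier_id: rubric score (1-5)}
    worker_themes: [(supplier_id, theme)] from AI thematic analysis.
    Returns suppliers whose DDQ score passes but whose workers report harm."""
    themes_by_supplier = defaultdict(list)
    for supplier_id, theme in worker_themes:
        themes_by_supplier[supplier_id].append(theme)
    return {sid: themes_by_supplier[sid]
            for sid, score in ddq_scores.items()
            if score >= passing_score
            and "unsafe_conditions" in themes_by_supplier[sid]}
```

Without a shared supplier ID, this reconciliation becomes a manual spreadsheet exercise; with it, the gap between self-reported DDQ answers and worker voice surfaces automatically each cycle.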

Escape The Scoring Trap
Continuous ESG intelligence that replaces frozen provider scores — from intake DDQ to CSDDD effectiveness proof.

Sopact Sense runs ESG due diligence as a connected evidence layer — AI reads every uploaded sustainability document, persistent entity IDs link commitment to outcome, and one anchored rubric applies consistently across 40 portfolio companies or 200 suppliers. Built for impact funds, development finance institutions, and sustainability-led investors.

  • AI reads every uploaded sustainability document — policies, certifications, incident logs, financial statements — with citation evidence per rubric pillar
  • Persistent entity IDs from intake DDQ through quarterly monitoring to CSDDD effectiveness proof
  • One anchored rubric applied consistently with observable evidence anchors — no provider variance, no private interpretations
Stage 01 · Intake DDQ
Evidence-anchored DDQ at first contact
Gender-disaggregated fields structured at collection. AI pre-scores every submission against the rubric with citation evidence.
Stage 02 · Continuous Monitoring
Quarterly updates on the same entity record
Trends against the DD baseline are computed automatically — not reconstructed from exports. Every quarter is compared against the original commitment.
Stage 03 · CSDDD Evidence Chain
Effectiveness proof assembled by default
Commitment → mid-cycle evidence → corrective action → re-verification → outcome, all on the same persistent entity ID.