
ESG Due Diligence: Checklist, Framework & AI Platform

ESG due diligence checklist: 24-point framework across E, S, and G pillars. AI scoring and persistent entity tracking for impact funds and supply chains.

Pioneering the best AI-native application & portfolio intelligence platform
Updated April 28, 2026
Use Case
Below: why one ESG score is never enough.

Pull a rating on the same company from three providers. The numbers will not agree. Sometimes by 25 or 30 points. Each provider weighs different factors, draws on different disclosures, and updates on a different cycle. The score is fine as a first-pass anchor for screening. As soon as you are trying to monitor quarterly, prove diligence to LPs, or stand up CSDDD-grade evidence, the score does less work than it looks like it does.

The fix is not a fourth provider. The fix is upstream: one ID per investee or supplier, evidence chains that build year over year, and AI reading every uploaded sustainability document with citations to the source passage.

Three providers · one company · one moment

  • Provider A · MSCI methodology: 47
  • Provider B · Sustainalytics: 61
  • Provider C · ISS ESG: 74

A 27-point gap on the same company.

What it is

ESG due diligence, defined.

ESG due diligence is the structured assessment of a company's environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship, plus the continuous verification of that performance after the relationship begins.

It runs across three pillars. Environmental: climate risk, emissions, resource use, biodiversity. Social: workforce practices, human rights, community engagement, supply chain labor. Governance: board independence, audit quality, anti-corruption, ESG oversight accountability. The work happens in two modes: a structured DDQ at intake that captures policies, certifications, and self-assessment, and continuous evidence after intake that verifies whether the policies actually held.

The standards in regular use are well known: SFDR, CSDDD, IFC Performance Standards, ILPA ESG DDQ, GRI, SASB, IRIS+. The standards are not the hard part. Connecting them to the same company across DD, monitoring, and effectiveness proof is.

Two modes of one practice

Mode 01

Intake DDQ

What does the company say it does?

  • Policies and certifications
  • 40 to 150 self-assessment questions
  • Theory of change for impact
  • Closed-ended compliance checks

Mode 02

Continuous evidence

What is actually happening on the ground?

  • Sustainability documents read end-to-end
  • Worker-voice surveys
  • Audit findings and corrective actions
  • Re-survey vs. original DD baseline
Both modes, one entity record across the lifecycle

The standards ESG DD teams actually map to

  • CSDDD: EU directive requiring proof that ESG DD is effective at preventing harm, not just performed.
  • Five Dimensions of Impact: for impact funds running ESG alongside an impact thesis. What. Who. How much. Contribution. Risk.
  • IFC PS & ILPA ESG DDQ: IFC Performance Standards for development finance; the ILPA DDQ for institutional LPs evaluating fund managers.

The Scoring Trap

A score is fine for screening. Three things tend to break first.

Pull the same company from MSCI, Sustainalytics, and ISS ESG and the numbers will not agree, because each provider weighs different factors, draws on different disclosures, and updates on a different cycle. That variance is tolerable at screening. It is not tolerable once the score is doing monitoring, LP, or regulatory work.

Break 01

The score is already old.

Most ratings refresh annually, or whenever the provider gets to it. Between pulls, months of news, layoffs, lawsuits, supplier changes, and worker-voice signal go uncaptured. The number on your dashboard reflects a company that may not exist anymore.

Break 02

It captures what the company says, not what is happening.

Ratings lean heavily on the company's own disclosures and policy documents. A governance score can sit flat for three quarters while a labor issue is showing up loudly in worker interviews, exit surveys, and press coverage that no rating provider has read.

Break 03

It cannot prove the diligence worked.

CSDDD asks for evidence that your due diligence is effective at preventing harm, over time, with a documented chain of action. A point-in-time number cannot demonstrate that. Only a record of what you saw, when you saw it, what you did, and what changed afterward can.

The fix is upstream. Keep the rubric as a starting anchor, but tie everything to one ID per investee, supplier, or grantee, and let the record build over time. Documents, DDQ responses, stakeholder interviews, financial filings, and news monitoring all attach to the same record from intake forward. Sopact Sense is built around this pattern. The score becomes one signal among many, and the underlying evidence is what regulators, LPs, and your IC actually need.

ESG due diligence in 2026 cannot stop at a score. LPs ask for evidence. Regulators ask for effectiveness proof. Boards ask whether commitments held up. The platforms that answer those questions started with data architecture, not score aggregation.

Sopact · ESG Partner Intelligence thesis

How the evidence chain builds

Every stage inherits the prior record.

One ID per investee or supplier, issued at the first DDQ. Every form, audit, and worker survey after that uses the same ID. The record gets richer at each stage. By the CSDDD review, the same row holds the full chain from commitment to verified outcome.

One entity ID issued at the first DDQ, used by every form, audit, and survey from intake through CSDDD effectiveness proof.
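As a sketch of the pattern, here is what "one ID per entity, issued at the first DDQ" looks like in code. The names (`EntityRecord`, `attach`) and fields are illustrative, not Sopact's API; the point is that every later artifact reuses the ID issued at intake.

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class EntityRecord:
    """One investee or supplier. The ID is issued once, at the first DDQ."""
    name: str
    sector: str
    region: str
    entity_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    evidence: list = field(default_factory=list)

    def attach(self, stage: str, kind: str, payload: dict) -> None:
        # Every later artifact (audit, survey, CAP) reuses the same entity_id,
        # so the chain from commitment to verified outcome lives on one record.
        self.evidence.append({"entity_id": self.entity_id, "stage": stage,
                              "kind": kind, **payload})

supplier = EntityRecord("Acme Farms", "agriculture", "Kenya")
supplier.attach("intake", "ddq", {"governance_score": 3})
supplier.attach("monitoring", "worker_survey", {"theme": "overtime"})
supplier.attach("csddd", "cap_verification", {"status": "verified"})

# Every piece of evidence carries the same ID, regardless of stage.
assert {e["entity_id"] for e in supplier.evidence} == {supplier.entity_id}
```

Because the ID is generated once and never re-derived from a name or email, it survives renames, email changes, and supplier tier moves.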

Stage 1 · Intake DDQ · pre-commitment
Stage 2 · Continuous monitoring · quarterly cycles
Stage 3 · CSDDD evidence · audit / regulator review

Identity · entity ID, sector, region, gender disaggregation
  • Stage 1, Captured: ID issued. Sector, region, and gender-disaggregated fields stored at intake.
  • Stage 2, Carried.
  • Stage 3, Carried.

DDQ scoring · E, S, G pillars with anchored evidence
  • Stage 1, Scored: each score linked to the source passage that supports it.
  • Stage 2, Re-scored: same rubric, new evidence; per-entity delta computed.
  • Stage 3, Trend.

Documents read · sustainability reports, policies, audits
  • Stage 1, Every page: sustainability report, policy PDFs, certifications scored against the rubric.
  • Stage 2, New uploads: quarterly disclosures, audit findings, incident logs read at submit.
  • Stage 3, Full chain.

Stakeholder voice · worker surveys, community feedback
  • Stage 1, Not yet.
  • Stage 2, Themed: worker-voice survey responses scored at submit; cross-supplier patterns surface in minutes.
  • Stage 3, Re-survey.

Corrective actions · CAPs, follow-ups, re-verification
  • Stage 1, Not yet.
  • Stage 2, Tracked: CAPs linked to the same entity record as the original finding.
  • Stage 3, Verified: re-survey vs. prior cycle proves whether the CAP worked.
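The "per-entity delta" in Stage 2 is a join on the entity ID, not on a name. A minimal sketch, assuming pillar scores keyed by ID in each cycle (the IDs, scores, and quarter are made up for illustration):

```python
# Per-entity delta: the same rubric applied in two cycles, keyed by the same ID.
baseline = {"ent-001": {"E": 3, "S": 2, "G": 4},   # intake DDQ scores
            "ent-002": {"E": 2, "S": 3, "G": 3}}
cycle_q2 = {"ent-001": {"E": 3, "S": 4, "G": 4},   # re-scored on new evidence
            "ent-002": {"E": 2, "S": 2, "G": 4}}

def deltas(before, after):
    # Join on entity ID, never on name: the ID is what survives renames.
    return {eid: {p: after[eid][p] - before[eid][p] for p in before[eid]}
            for eid in before if eid in after}

print(deltas(baseline, cycle_q2))
# ent-001's S pillar moved +2; ent-002's S slipped while its G improved.
```

Without the shared ID, this two-line computation becomes the manual cross-cycle matching exercise described later in the comparison of systems.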

What sits underneath

Four analysis layers. Two work at collection. Two work at reporting.

Every layer works because every record carries the same entity ID. Without it, ESG due diligence over time is a manual cleanup project. With it, the analysis is a default output of collection itself.

01 · Cell

Intelligent Cell

Collection time · per document

Single-field analysis. Applied to one DDQ response, one sustainability report, or one worker-voice answer, with a rubric defined by the fund or the procurement team. Each score links to the source passage that supports it.

In ESG DD

A 60-page sustainability report uploaded at intake gets read end-to-end and scored against the fund's E, S, and G pillars. Reviewers can audit any score by clicking through to the source paragraph.
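The unit of output here is not a bare score but a score plus its citation. As a hedged sketch (field names, rubric levels, and the cited passage are illustrative, not Sopact's schema):

```python
# A "scored with citation" result: the score alone is not the artifact;
# the link back to the passage that justified it is.
rubric_level = {
    1: "no policy mentioned",
    2: "policy stated, no evidence",
    3: "policy plus third-party verification",
}

finding = {
    "entity_id": "ent-001",
    "pillar": "S",
    "score": 3,
    "anchor": rubric_level[3],
    "citation": {
        "document": "sustainability_report_2025.pdf",
        "page": 41,
        "passage": "Our tier-1 sites completed third-party labor audits in Q3...",
    },
}

def audit(finding):
    # A reviewer auditing the score jumps straight to the cited location.
    c = finding["citation"]
    return f'{finding["pillar"]}={finding["score"]} <- {c["document"]} p.{c["page"]}'

print(audit(finding))   # → S=3 <- sustainability_report_2025.pdf p.41
```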

02 · Row

Intelligent Row

Collection time · per entity

Multi-field analysis per record. Combines several Cells, structured DDQ responses, certifications, and incident logs into one consolidated entity profile. The IC reviewer or procurement lead sees one brief, not five tabs.

In ESG DD

DDQ + sustainability report + worker survey + audit findings rolled into a one-page entity brief. Inconsistencies between policy claims and worker feedback flagged automatically.

03 · Column

Intelligent Column

Reporting time · cross-portfolio

Cross-record patterns across all responses to one or more fields. Theme analysis across worker-voice surveys. Risk pattern detection across suppliers. IRIS+ indicator computation across every active investee.

In ESG DD

Theme analysis across 200 supplier worker-voice surveys surfaces which sites are flagging the same labor risk in their own words, before it becomes a portfolio-wide pattern.

04 · Grid

Intelligent Grid

Reporting time · full dataset

Full dataset analysis across every record and every field. LP ESG narrative, board ESG report, CSDDD effectiveness chains, supplier portfolio dashboards. The two weeks before an audit compress into hours.

In ESG DD

CSDDD-ready evidence chain assembled by default: commitment at intake, mid-cycle evidence, corrective action, re-verification, outcome, all on the same entity record.
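"Assembled by default" reduces, mechanically, to filtering all evidence down to one entity ID and ordering it by stage. A minimal sketch of that assembly, with made-up stage names and records:

```python
# Assembling a CSDDD-style chain: filter all evidence to one entity ID,
# then order it by stage. Stage names and fields are illustrative.
STAGE_ORDER = ["intake", "monitoring", "corrective_action", "re_verification"]

evidence = [
    {"entity_id": "ent-001", "stage": "monitoring", "note": "audit finding: overtime"},
    {"entity_id": "ent-001", "stage": "intake", "note": "commitment: ILO-aligned hours"},
    {"entity_id": "ent-002", "stage": "intake", "note": "commitment: board ESG charter"},
    {"entity_id": "ent-001", "stage": "re_verification", "note": "re-survey: resolved"},
    {"entity_id": "ent-001", "stage": "corrective_action", "note": "CAP issued, 90 days"},
]

def chain(records, entity_id):
    mine = [r for r in records if r["entity_id"] == entity_id]
    return [r["note"] for r in sorted(mine, key=lambda r: STAGE_ORDER.index(r["stage"]))]

print(chain(evidence, "ent-001"))
# commitment → finding → CAP → re-verification, in order, from one record.
```

When every record already carries the ID, the chain is a query; when it does not, the chain is the months of reconstruction described in the next section.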

Where teams use it

One platform. Many ESG contexts.

ESG Partner Intelligence is the foundation underneath every page in this section. Pick the page closest to your work and go deeper.

What’s different

Provider scores set the anchor. Spreadsheets carry the workload. Neither closes the evidence chain.

Most teams running ESG due diligence live in two systems at once: a provider score subscription that returns a number, and a folder of spreadsheets and consultant memos where the actual work happens. Both produce outputs. Neither carries an entity ID across the lifecycle, neither scores qualitative evidence at submit, and neither runs a human-in-the-loop accuracy checkpoint before data lands in front of the team.

Three systems compared, capability by capability:

  • Provider scores: MSCI, Sustainalytics, ISS ESG
  • Spreadsheets + consultants: Excel, Airtable, custom analyst work
  • Sopact Sense: ESG Partner Intelligence

One entity ID across DDQ, monitoring, and CSDDD
  • Provider scores · Not the model: Each provider holds its own per-cycle record of the company. None of them connect back to your DDQ, your audit findings, or your CAP tracker.
  • Spreadsheets · Manual match: Each tab issues its own identifier. Stitching IDs across cycles is the analyst's job, every cycle.
  • Sopact Sense · Native primitive: Issued at the first DDQ. Used by every form, audit, and survey from intake through exit. Survives email changes, name spellings, and supplier tier moves.

Sustainability documents read with citation evidence
  • Provider scores · Public data only: Providers score from publicly available disclosures. Entity-submitted PDFs, policies, and onboarding documents are never processed.
  • Spreadsheets · Manual review: Sustainability reports get skimmed, not analyzed. The most valuable signal goes unread across 40+ portfolio companies or 200+ suppliers.
  • Sopact Sense · 100% with citations: Every uploaded document is read end-to-end. Each score links back to the source paragraph, so a reviewer can audit any judgment in seconds.

Scoring consistency across entities and reviewers
  • Provider scores · 30 to 50 point variance: The same company scores 47 with one provider and 74 with another. Each provider uses private interpretations of the same ESG dimensions.
  • Spreadsheets · Reviewer drift: "Strong governance" scored privately by a dozen analysts produces a dozen private interpretations, with no anchored evidence criteria.
  • Sopact Sense · One anchored rubric: Observable evidence anchors at every scoring level. AI and human reviewers reach the same conclusion, and re-scoring the full pool is one action.

CSDDD effectiveness evidence chain
  • Provider scores · Cannot produce: A point-in-time score cannot demonstrate that DD prevented harm over time. The architecture was never designed for it.
  • Spreadsheets · Reconstructed for audit: The effectiveness chain is assembled by hand from disconnected sources at audit time, often months of analyst work per cycle.
  • Sopact Sense · Assembled by default: Commitment at intake, mid-cycle evidence, corrective action, re-verification, all on the same entity record. Audit-ready by design.

Cross-portfolio queries on qualitative evidence
  • Provider scores · Quantitative only: Score-level comparisons across the portfolio. Worker-voice patterns, narrative themes, and corrective-action stories are invisible.
  • Spreadsheets · Not feasible: Reading 200 supplier worker surveys to find a pattern is a multi-week investigation. Most patterns are missed.
  • Sopact Sense · Column + Grid: "Which suppliers flagged retaliation risk in their Q3 worker survey?" runs as a single query. Cross-supplier themes surface in minutes.

Human-in-the-loop accuracy checkpoint before data goes live
  • Provider scores · No checkpoint: Scores update on the provider's schedule. Your team has no review step before scores reach the IC dashboard.
  • Spreadsheets · Manual ad hoc: Whoever owns the master sheet eyeballs each submission, catching some errors and missing others. There is no structured release process.
  • Sopact Sense · Reviewer release: Submissions land in a reviewer queue, AI flags inconsistencies and missing fields, and the data lead releases each record before it propagates to the team. Accuracy becomes a workflow, not a back-channel question.
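The retaliation-risk query above is, structurally, a filter over theme-tagged survey responses. A hedged sketch of that shape (in practice an AI pass assigns the theme tags; here they are given, and all IDs and data are invented):

```python
# Cross-portfolio query over qualitative evidence: which suppliers flagged
# a given theme in a given quarter's worker survey?
responses = [
    {"entity_id": "sup-014", "quarter": "Q3", "themes": ["retaliation", "overtime"]},
    {"entity_id": "sup-022", "quarter": "Q3", "themes": ["wages"]},
    {"entity_id": "sup-031", "quarter": "Q3", "themes": ["retaliation"]},
    {"entity_id": "sup-014", "quarter": "Q2", "themes": ["retaliation"]},
]

def flagged(responses, theme, quarter):
    # Dedupe on entity ID so a supplier flagging twice appears once.
    return sorted({r["entity_id"] for r in responses
                   if r["quarter"] == quarter and theme in r["themes"]})

print(flagged(responses, "retaliation", "Q3"))   # → ['sup-014', 'sup-031']
```

The expensive part is not this filter; it is getting every open-ended response tagged consistently at submit, against one rubric, under one ID.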

Who runs it

Real partners. Real evidence chains.

Two customers, two ESG shapes: an impact-fund ESG report built over a 7-year partnership, and a supply-chain partner intelligence build for a food and nutrition org reporting to its funders.

Crossroads Impact Corp

Public co. · CDFI subsidiary · ESG portfolio

Seven years of ESG portfolio aggregation. The report you can read end-to-end.

7+ years

Continuous ESG partnership through Capital Plus Financial and Crossroads. Multiple consecutive published ESG impact reports drawing on the same connected record.

Crossroads Impact Corp asked the question every ESG-led investor eventually has to answer: how do we know our investments stay on the path of sustainability and inclusion across every stage, from sourcing to exit? The partnership started when the company was operating as Capital Plus Financial. Sopact mapped the relevant frameworks (EDCI, GIIN-IRIS, PRI), aggregated investee data across the lifecycle, and built the data layer beneath consecutive published ESG impact reports. Each year inherits the prior year’s baseline rather than starting over. The published report is a working example of what the connected record produces.

Our unwavering commitment to impact drives us and our partners, leading to a $223 million year-over-year increase in environmental and social loans. Collaborating with Sopact empowers us to create real, measurable change for those truly making a difference in our communities.

Eric Donnelly · CEO and Director, Crossroads Impact Corp

Read the ESG impact report

Food4Education

Kenya · school nutrition · supply chain

Multi-stage DDQ across the food supply chain. One auditable picture for funders.

Multi-partner

Legal, financial, and impact data captured across the supply chain. Centrally aggregated, scored against the program rubric, AI-analyzed for reporting.

Food4Education needed to systematically collect and aggregate data from its food supply chain partners to give funders a clear, auditable picture of partner impact. The proposal defined a multi-stage data collection workflow starting at due diligence, capturing legal, financial, and impact information from each partner, with Sopact aggregating the responses centrally, applying a rubric anchored on Food4Education’s internal goals, and automating the reporting layer. The result was supply-chain partner intelligence that funders can read end-to-end without reconstructing it from scratch every cycle.

The goal: bring every supply chain partner into one auditable view, with the same rubric across every partner, so what we communicate to funders matches what we actually know.

Food4Education · supply chain partner intelligence scope

FAQ

Questions LPs and procurement leads ask.

Eight questions impact fund managers, procurement leads, and ESG analysts ask in their first conversation. Visible Q&A so search engines and AI assistants can index every answer.

Q. 01
What is ESG due diligence?

ESG due diligence is the structured assessment of a company’s environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship, plus the continuous verification of that performance afterward. It runs across three pillars: environmental risk and climate exposure, social and labor practices, and governance quality. It happens in two modes: a structured DDQ at intake and continuous evidence after intake. The standards are well known. The hard part is connecting them to the same company across DD, monitoring, and effectiveness proof.

Q. 02
What is an ESG DDQ and how is it different from a score?

An ESG DDQ is the structured questionnaire an investor, funder, or procurement team sends to a prospective investee or supplier to collect standardized ESG data. It typically covers 40 to 150 questions across the three pillars, combining closed-ended compliance checks with open-ended narrative responses. An ESG score is what a rating provider produces from publicly available data. A DDQ is what you collect directly from the entity. Scores vary by 30 or more points across providers for the same company. DDQs, when scored at submit against an anchored rubric, produce citation-linked evidence that holds up under regulatory and LP review.

Q. 03
What is The Scoring Trap?

The Scoring Trap is the failure mode where ESG due diligence optimizes for producing an auditable number instead of building actual understanding. Scores are provider-dependent, point-in-time, and reflect what the company says, not what is happening on the ground. They cannot satisfy CSDDD’s requirement to prove effectiveness over time. Only a record of what you saw, when you saw it, what you did, and what changed afterward can. Escaping The Scoring Trap means designing ESG data collection inside a single system from first contact, with one ID per entity that carries forward across every cycle.

Q. 04
What is CSDDD and how does it change ESG due diligence?

CSDDD is the EU Corporate Sustainability Due Diligence Directive, which requires large companies operating in the EU to identify, prevent, and remediate adverse human rights and environmental impacts across their value chains. The directive requires evidence that due diligence is effective at preventing harm, not just that DD was performed. This is a fundamental shift: a point-in-time score cannot satisfy the standard. Only a documented chain linking commitments at intake to verified outcomes across multiple cycles can. ESG DD platforms must generate CSDDD-ready evidence chains by default, with one entity ID connecting intake to effectiveness proof.

Q. 05
How does AI change ESG due diligence in practice?

AI changes the operational economics of running ESG DD continuously across a portfolio. Every uploaded sustainability document gets read end-to-end against the fund’s rubric, with citations to the source paragraph. Every open-ended DDQ response gets scored at submit. Cross-portfolio queries on qualitative evidence that used to be multi-week investigations become single queries. Citation evidence is non-negotiable. Every score must trace to the specific content that generated it. This is what satisfies CSDDD effectiveness requirements and what holds up under LP review.

Q. 06
Does ESG due diligence need to be separate from impact measurement?

For impact funds especially, no. ESG DD and impact measurement should share the same entity records and the same data layer. When they live in separate systems, neither produces defensible intelligence. LPs see one set of numbers, regulators see another, and boards see a third. The Impact Measurement and Management page covers how the same architecture serves DD, monitoring, and LP reporting from one connected record.

Q. 07
How does ESG due diligence work for supply chain and supplier networks?

Supply chain ESG DD applies the rubric across supplier networks with two architectural additions. First, one supplier ID connecting DDQ submissions to worker-voice surveys to corrective action tracking and re-survey. Second, AI thematic analysis of open-ended worker feedback at scale across all sites simultaneously. For CSDDD compliance, the supplier DDQ must generate evidence that due diligence is effective at preventing harm, which requires year-over-year data on the same supplier entities, not isolated snapshots. Food4Education’s supply-chain partner intelligence build is one example of this pattern: legal, financial, and impact data captured at intake, scored against the program’s rubric, and aggregated into one auditable picture for funders.

Q. 08
How is Sopact different from MSCI, Sustainalytics, and ISS ESG?

Provider rating subscriptions and Sopact serve different needs. Providers score from publicly available data and serve early-stage screening across thousands of entities. Sopact is the system that runs the DD engagement and the year-over-year monitoring on the entities you actually invest in or contract with. Each holds one role. Provider scores can feed into Sopact as one input signal. The two are complementary, not substitutes. What Sopact replaces is the spreadsheet-and-consultant workflow that lives between the score subscription and the LP report.

Escape the scoring trap

Bring a portfolio. Leave with the evidence chain.

A 60-minute working session. Bring a sample DDQ from a real investee or supplier, plus your fund’s ESG rubric. We map the DDQ to anchored evidence criteria, the one entity ID across your CRM, and the CSDDD effectiveness chain you need at audit. By the end, you have a working setup and a path to running it next cycle.

  • Format: 60-minute working session
  • Bring: a sample DDQ + your rubric
  • Leave with: a working evidence chain