
ESG Due Diligence: Checklist, Framework & AI Platform

ESG due diligence checklist: 24-point framework across E, S, and G pillars. AI scoring and persistent entity tracking for impact funds and supply chains.

Pioneering the best AI-native application & portfolio intelligence platform
Updated
May 6, 2026
Use Case
ESG due diligence · workflow

From anchored rubric to CSDDD effectiveness chain

One entity ID issued at the first DDQ. AI reads every uploaded document, DDQ response, and worker-voice survey against the same rubric, with citations to the source paragraph. The audit pack is assembled by default, not reconstructed in the two weeks before review.

Step 01 · Define the rubric

Every ESG cycle starts with the same artifact: an anchored rubric across the E, S, and G pillars, mapped to the standards the fund or supply chain reports against. Defined before the first DDQ goes out, so the AI and human reviewers reach the same conclusion against the same evidence anchors.

Step 02 · Generate the model

Every entity becomes a row against the same E, S, and G criteria. Subtopic scores aggregate to an overall score, cited source passages attach to each criterion, and the record threads into quarterly monitoring and CSDDD effectiveness proof.

Step 03 · Score every entity

DDQ responses, sustainability reports, audit findings, and worker-voice surveys all arrive as PDFs and forms. Sopact reads each at submit against the rubric and writes the evidence to the same entity row, so reviewers open one brief per entity instead of five tabs.

Step 04 · Read the report

The LP-ready report rolls all sources against the rubric, and every score traces back to a pillar criterion and a cited paragraph. The toggle flips between intake DDQ scores and continuous evidence views.

Step 05 · Catch what's missing

Same data, different lens. Sopact scans for provider score variance, evidence chain gaps, worker-voice red flags, and CSDDD effectiveness misses before the audit window opens.

Prompt

Draft the ESG rubric for Sustainability Fund VII. Twenty-four anchored criteria across E, S, and G pillars, mapped to IFC PS, ILPA DDQ, IRIS+, and Five Dimensions of Impact. One entity ID issued at the first DDQ, used by every form, document, and survey thereafter.

Working folder

/ esg-rubric-2025
rubric_v4_anchored.md
ddq_question_bank.json
csddd_chain_spec.md
standards_mapping.csv
ESG Rubric · Sustainability Fund VII
2025 cycle · 47 portfolio entities · 24-point framework · CSDDD-ready evidence chain

Fund context

Portfolio of 47 entities: 31 investees and 16 supply-chain partners across agriculture, infrastructure, education, and clean water. Rubric anchors map to IFC Performance Standards, ILPA ESG DDQ, IRIS+, and Five Dimensions of Impact. The binding constraint at audit is CSDDD effectiveness proof, not the score itself. A point-in-time number cannot demonstrate that due diligence prevented harm over time. Only a record of what was seen, when, what was done, and what changed afterward can.

Pillar structure

Twenty-four criteria across three pillars, each scored 1 to 5 with observable evidence anchors per tier:

  • Environmental. Climate risk, emissions, resource use, biodiversity, waste, water, energy intensity, supply-chain footprint
  • Social. Workforce practices, human rights, community engagement, supply-chain labor, worker voice, health and safety, gender, training, grievance, retaliation risk
  • Governance. Board independence, audit quality, anti-corruption, ESG oversight, stakeholder engagement, disclosure quality
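
The criteria-count weighting implied by the pillar structure can be sketched as follows. This is a minimal illustration, not Sopact's scoring engine; the pillar scores are example values on the document's 1-to-5 scale, and the 8/10/6 criteria counts come from the pillar list above.

```python
# Sketch: criteria-count-weighted overall ESG score (illustrative values).
PILLAR_CRITERIA = {"E": 8, "S": 10, "G": 6}  # 24 criteria total

def weighted_overall(pillar_scores: dict[str, float]) -> float:
    """Weight each pillar score by its share of the 24 criteria."""
    total = sum(PILLAR_CRITERIA.values())
    return sum(score * PILLAR_CRITERIA[p] for p, score in pillar_scores.items()) / total

# Example pillar scores on the 1-5 scale:
overall = weighted_overall({"E": 4.2, "S": 3.8, "G": 4.0})
print(round(overall, 1))  # 4.0
```

With these example inputs, three pillar scores that straddle 4 land at a weighted 4.0 of 5, since the Social pillar's 10 criteria pull the average down and the other two pull it back up.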

Evidence chain configuration

One entity ID issued at the first DDQ, carried by every form, audit, document, and worker survey from intake through CSDDD review. AI reads every uploaded sustainability document end-to-end with citations to the source paragraph. Provider scores from MSCI, Sustainalytics, and ISS ESG flow in as one input signal among several, not as the answer.
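
One way to picture the single-ID record: every evidence item, whatever its source, attaches to the same entity ID and carries its own citation. The field names below are illustrative, not Sopact's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str    # e.g. "ddq", "document", "worker_voice", "audit", "provider_score"
    pillar: str    # "E", "S", or "G"
    score: float   # 1-5 against the anchored rubric
    citation: str  # pointer to the source paragraph supporting the score

@dataclass
class EntityRecord:
    entity_id: str  # issued at the first DDQ, never reissued
    evidence: list[Evidence] = field(default_factory=list)

    def attach(self, item: Evidence) -> None:
        """Every later form, audit, or survey appends to the same record."""
        self.evidence.append(item)

record = EntityRecord("INV-042")
record.attach(Evidence("document", "E", 4.2, "sustainability_report_2024.pdf ¶14"))
record.attach(Evidence("worker_voice", "S", 3.8, "q3 survey, response #87"))
```

The design choice the page keeps returning to is visible here: provider scores, documents, and surveys are all just rows of evidence under one `entity_id`, so nothing has to be re-matched at audit time.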

Prompt

Score entity INV-042 against the rubric. Layer-level score per evidence type with cited source passages, aggregate to an overall E, S, and G score, then thread the record forward into quarterly monitoring and the CSDDD effectiveness chain.

Source

ESG Rubric · 24 anchored criteria · DDQ submission + 1 sustainability report + 142 worker-voice responses + 2 third-party audits · paragraph-level citation extractor active.

Rubric scoring model · Entity INV-042
Generated at submit
DDQ
  • 86 questions across E, S, and G pillars
  • Self-assessment scored at submit, anchored
  • 12 policy documents linked inline as evidence
  • Re-DDQ at Q4 against the same questions
Documents
  • 1 sustainability report + 4 policy PDFs uploaded
  • Read end-to-end, scored against pillar anchors
  • 8 source paragraphs cited per pillar on average
  • New uploads scored at submit, delta tracked
Stakeholders
  • 142 worker-voice survey responses captured
  • Themed against S pillar anchors automatically
  • 6 verbatim quotes cited, identifying info masked
  • Re-survey next cycle, cohort-matched delta
Audits
  • 2 third-party audits + 1 incident log on file
  • Findings mapped to the CAP register row
  • 4 remediation actions cited with timestamps
  • Re-verification at Q1, pass and fail logged
Outcomes
  • 7 of 9 prior-cycle CAPs verified closed
  • Worker re-survey: retaliation risk index down
  • Year-over-year delta computed per pillar
  • CSDDD effectiveness chain assembled and audit-ready
Overall ESG score: E 4.2 · S 3.8 · G 4.0 · weighted 4.0 of 5. 18 source passages cited across the entity record. Q3 2025 cycle. CSDDD effectiveness chain assembled by default, no audit-time reconstruction required.
esg_portfolio_q3_2025.numbers
Sheets · Portfolio · DDQ responses · Documents scored · Worker-voice · CAP register · Data dictionary
DDQ responses · Q3 2025
Sustainability Fund VII · 47 of 47 entities · cited paragraphs linked per row · linked by entity_id
Portfolio entities by overall ESG score
Entity · Score / 5
INV-015 · renewable energy · 22 cited · 4.2
INV-042 · sustainable agriculture · 18 cited · 4.0
INV-028 · workforce training · 19 cited · 3.9
SUP-031 · food supply chain · 16 cited · 3.8
INV-056 · clean water · 21 cited · 3.7
SUP-019 · agricultural inputs · 14 cited · 3.5
INV-007 · education infrastructure · 17 cited · 3.4
SUP-024 · packaging · 12 cited · 3.2
Portfolio mean by pillar
Pillar · Mean / 5
Environmental (8 criteria) · 3.9
Social (10 criteria) · 3.7
Governance (6 criteria) · 4.1
Coverage at submit
Source · Q3 2025
DDQ responses processed · 47 of 47
Documents read end-to-end · 312
Worker-voice responses themed · 6,470
CAPs open / verified closed · 14 / 23

Prompt

Build the LP-ready ESG report from the DDQ responses, sustainability documents, worker-voice surveys, and audit findings. Show portfolio coverage with cited paragraphs in line, and a toggle between intake DDQ and continuous evidence views. Every score traces back to entity_id.

Attachments

ddq_q3_2025.json
47 entities
documents.csv
312 PDFs
worker_voice.csv
6,470 rows
cap_register.json
37 actions
json · csv · linked by entity_id
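
A minimal sketch of the join the prompt assumes: JSON and CSV attachments keyed on `entity_id`, merged into one row per entity before reporting. The file contents below are illustrative stand-ins, using only the standard library.

```python
import csv, io, json
from collections import defaultdict

# Illustrative stand-ins for the DDQ JSON and worker-voice CSV attachments.
ddq_json = '[{"entity_id": "INV-042", "ddq_score": 4.0}, {"entity_id": "SUP-031", "ddq_score": 3.8}]'
worker_voice_csv = "entity_id,responses\nINV-042,142\nSUP-031,142\n"

merged: dict[str, dict] = defaultdict(dict)

for row in json.loads(ddq_json):                           # JSON attachment
    merged[row["entity_id"]].update(row)
for row in csv.DictReader(io.StringIO(worker_voice_csv)):  # CSV attachment
    merged[row["entity_id"]].update(row)

print(merged["INV-042"])
# {'entity_id': 'INV-042', 'ddq_score': 4.0, 'responses': '142'}
```

Because every attachment carries the same key, the merge is a dictionary update rather than a fuzzy-matching exercise; that is the practical payoff of issuing one entity ID at the first DDQ.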
Q3 2025 · Portfolio ESG report
47 entities · 24-point rubric · CSDDD-ready · LP-readable live link
View toggle · Intake DDQ / Continuous
Citation coverage · 100% · every score traceable
Source passages cited · 868 · across portfolio Q3
Time to audit pack · 0 days · down from 6 weeks of reconstruction
Cited passages per entity by cycle · Q4'24, Q1'25, Q2'25, Q3'25 (chart)
Evidence sources Q3 · DDQ 35% · Documents 28% · Worker voice 22% · Audits 15%

Prompt

Scan the Q3 portfolio against its own evidence chain and across prior cycles. Surface provider score divergence, evidence chain gaps, worker-voice red flags, and policy-claim mismatches before the LP review window opens.

Working folder

/ esg-portfolio-q3-2025
esg_portfolio_q3_2025.numbers
provider_scores_pull.json
csddd_chain_status.csv
anomaly_log.md
Anomaly & Gap Report
Q3 2025 · Sustainability Fund VII · 5 flags · scanned before LP review

Outliers detected

Provider score divergence · INV-042
Three rating providers scored the same entity 47, 61, and 74 on a 0 to 100 scale, a 27-point spread. Each provider weighs different factors and updates on a different cycle. The internal anchored rubric scored INV-042 a 4.0 of 5 with 18 cited passages. Provider scores noted as one input signal among several, with the gap surfaced in the LP narrative.
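
The divergence check above reduces to a simple spread test over provider ratings. A minimal sketch, assuming a 0-to-100 scale; the 25-point threshold is an illustrative cutoff, not a standard.

```python
def provider_spread(scores: list[float]) -> float:
    """Spread between the highest and lowest provider rating (0-100 scale)."""
    return max(scores) - min(scores)

def flag_divergence(scores: list[float], threshold: float = 25.0) -> bool:
    """Flag entities whose provider ratings disagree by more than the cutoff."""
    return provider_spread(scores) > threshold

# INV-042: three providers, three numbers.
print(provider_spread([47, 61, 74]))   # 27
print(flag_divergence([47, 61, 74]))   # True
```

The point of the flag is not to pick a winner among providers but to route the entity to the internal anchored rubric, where the score carries citations.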
Worker-voice signal · SUP-031
6 of 142 worker-voice respondents at SUP-031 flagged retaliation risk in open-ended responses, a pattern not visible in any provider score or in the supplier's self-assessment DDQ. Cross-supplier theme analysis confirmed the signal is contained to this site. CAP triggered, mid-cycle re-survey scheduled.
Policy-claim mismatch · INV-015
Sustainability report claims zero water incidents in 2024, but the third-party audit log on the same record shows 2 minor breaches with corrective actions logged. Inconsistency surfaced at intake by row-level analysis. Worth a follow-up call before the LP narrative locks for Q3.

Missing data

Q3 disclosures · 8 entities pending
8 of 47 portfolio entities have not submitted Q3 sustainability disclosures. Window closes in 4 days. Personalized resends triggered on the original entity record, not as a bulk email. CSDDD chain marked incomplete on the affected rows until receipt.
Worker-voice gap · SUP-009
No worker-voice survey on file for SUP-009 since the 2023 cycle, required for the CSDDD effectiveness chain at this supplier. Survey scheduled for Q4 cycle. Gap noted in the audit prep packet, with a follow-up flagged for procurement to verify worker_count before resurvey.

What it is

ESG due diligence, defined.

ESG due diligence is the structured assessment of a company's environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship, plus the continuous verification of that performance after the relationship begins.

It runs across three pillars. Environmental: climate risk, emissions, resource use, biodiversity. Social: workforce practices, human rights, community engagement, supply chain labor. Governance: board independence, audit quality, anti-corruption, ESG oversight accountability. The work happens in two modes: a structured DDQ at intake that captures policies, certifications, and self-assessment, and continuous evidence after intake that verifies whether the policies actually held.

The standards in regular use are well known: SFDR, CSDDD, IFC Performance Standards, ILPA ESG DDQ, GRI, SASB, IRIS+. The standards are not the hard part. Connecting them to the same company across DD, monitoring, and effectiveness proof is.

Two modes of one practice

Mode 01

Intake DDQ

What does the company say it does?

  • Policies and certifications
  • 40 to 150 self-assessment questions
  • Theory of change for impact
  • Closed-ended compliance checks

Mode 02

Continuous evidence

What is actually happening on the ground?

  • Sustainability documents read end-to-end
  • Worker-voice surveys
  • Audit findings and corrective actions
  • Re-survey vs. original DD baseline
Both modes, one entity record across the lifecycle

The standards ESG DD teams actually map to

CSDDD · EU directive requiring proof that ESG DD is effective at preventing harm, not just performed.
Five Dimensions of Impact · For impact funds running ESG alongside an impact thesis. What. Who. How much. Contribution. Risk.
IFC PS & ILPA ESG DDQ · IFC Performance Standards for development finance. ILPA DDQ for institutional LPs evaluating fund managers.

The Scoring Trap

A score is fine for screening. Three things tend to break first.

Pull a rating on the same company from MSCI, Sustainalytics, and ISS ESG, and the numbers will not agree. Each provider weighs different factors, draws on different disclosures, and updates on a different cycle. As a first-pass anchor for screening, that is fine. As soon as you are trying to monitor a company quarterly, prove diligence to LPs, or stand up CSDDD-grade evidence, that score does less work than it looks like it does.

Break 01

The score is already old.

Most ratings refresh annually, or whenever the provider gets to it. Between pulls, months of news, layoffs, lawsuits, supplier changes, and worker-voice signal go uncaptured. The number on your dashboard reflects a company that may not exist anymore.

Break 02

It captures what the company says, not what is happening.

Ratings lean heavily on the company's own disclosures and policy documents. A governance score can sit flat for three quarters while a labor issue is showing up loudly in worker interviews, exit surveys, and press coverage that no rating provider has read.

Break 03

It cannot prove the diligence worked.

CSDDD asks for evidence that your due diligence is effective at preventing harm, over time, with a documented chain of action. A point-in-time number cannot demonstrate that. Only a record of what you saw, when you saw it, what you did, and what changed afterward can.

The fix is upstream. Keep the rubric as a starting anchor, but tie everything to one ID per investee, supplier, or grantee, and let the record build over time. Documents, DDQ responses, stakeholder interviews, financial filings, and news monitoring all attach to the same record from intake forward. Sopact Sense is built around this pattern. The score becomes one signal among many, and the underlying evidence is what regulators, LPs, and your IC actually need.

ESG due diligence in 2026 cannot stop at a score. LPs ask for evidence. Regulators ask for effectiveness proof. Boards ask whether commitments held up. The platforms that answer those questions started with data architecture, not score aggregation.

Sopact · ESG Partner Intelligence thesis

How the evidence chain builds

Every stage inherits the prior record.

One ID per investee or supplier, issued at the first DDQ. Every form, audit, and worker survey after that uses the same ID. The record gets richer at each stage. By the CSDDD review, the same row holds the full chain from commitment to verified outcome.

One entity ID issued at the first DDQ, used by every form, audit, and survey from intake through CSDDD effectiveness proof.

Stage 1 · Intake DDQ · Pre-commitment
Stage 2 · Continuous monitoring · Quarterly cycles
Stage 3 · CSDDD evidence · Audit / regulator review

Identity · Entity ID, sector, region, gender disaggregation
  • Stage 1: Captured. ID issued; sector, region, and gender-disaggregated fields stored at intake.
  • Stage 2: Carried.
  • Stage 3: Carried.

DDQ scoring · E, S, G pillars with anchored evidence
  • Stage 1: Scored. Each score linked to the source passage that supports it.
  • Stage 2: Re-scored. Same rubric, new evidence, per-entity delta computed.
  • Stage 3: Trend.

Documents read · Sustainability reports, policies, audits
  • Stage 1: Every page. Sustainability report, policy PDFs, and certifications scored against the rubric.
  • Stage 2: New uploads. Quarterly disclosures, audit findings, and incident logs read at submit.
  • Stage 3: Full chain.

Stakeholder voice · Worker surveys, community feedback
  • Stage 1: Not yet.
  • Stage 2: Themed. Worker-voice survey responses scored at submit; cross-supplier patterns surface in minutes.
  • Stage 3: Re-survey.

Corrective actions · CAPs, follow-ups, re-verification
  • Stage 1: Not yet.
  • Stage 2: Tracked. CAPs linked to the same entity record as the original finding.
  • Stage 3: Verified. Re-survey vs. the prior cycle proves whether the CAP worked.

What sits underneath

Four analysis layers. Two work at collection. Two work at reporting.

Every layer works because every record carries the same entity ID. Without it, ESG due diligence over time is a manual cleanup project. With it, the analysis is a default output of collection itself.

01 · Cell

Intelligent Cell

Collection time · per document

Single-field analysis. Applied to one DDQ response, one sustainability report, or one worker-voice answer, with a rubric defined by the fund or the procurement team. Each score links to the source passage that supports it.

In ESG DD

A 60-page sustainability report uploaded at intake gets read end-to-end and scored against the fund's E, S, and G pillars. Reviewer can audit any score by clicking to the source paragraph.

02 · Row

Intelligent Row

Collection time · per entity

Multi-field analysis per record. Combines several Cells, structured DDQ responses, certifications, and incident logs into one consolidated entity profile. The IC reviewer or procurement lead sees one brief, not five tabs.

In ESG DD

DDQ + sustainability report + worker survey + audit findings rolled into a one-page entity brief. Inconsistencies between policy claims and worker feedback flagged automatically.

03 · Column

Intelligent Column

Reporting time · cross-portfolio

Cross-record patterns across all responses to one or more fields. Theme analysis across worker-voice surveys. Risk pattern detection across suppliers. IRIS+ indicator computation across every active investee.

In ESG DD

Theme analysis across 200 supplier worker-voice surveys surfaces which sites are flagging the same labor risk in their own words, before it becomes a portfolio-wide pattern.
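
The cross-record query the Column layer runs can be pictured as a filter over themed survey responses. The theme labels and entity IDs below are illustrative.

```python
# Illustrative themed worker-voice rows, one per (entity, response).
themed = [
    {"entity_id": "SUP-031", "themes": ["retaliation risk", "overtime"]},
    {"entity_id": "SUP-031", "themes": ["retaliation risk"]},
    {"entity_id": "SUP-019", "themes": ["training gaps"]},
    {"entity_id": "INV-042", "themes": ["grievance process"]},
]

def entities_flagging(theme: str, rows: list[dict]) -> set[str]:
    """Which entities raised this theme in their own words?"""
    return {r["entity_id"] for r in rows if theme in r["themes"]}

print(entities_flagging("retaliation risk", themed))  # {'SUP-031'}
```

Once responses are themed at submit, "which suppliers flagged X this quarter" is a set comprehension, not a multi-week read-through.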

04 · Grid

Intelligent Grid

Reporting time · full dataset

Full dataset analysis across every record and every field. LP ESG narrative, board ESG report, CSDDD effectiveness chains, supplier portfolio dashboards. The two weeks before an audit compress into hours.

In ESG DD

CSDDD-ready evidence chain assembled by default: commitment at intake, mid-cycle evidence, corrective action, re-verification, outcome, all on the same entity record.
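
Assembling the chain "by default" amounts to checking that every stage is present on the entity record before the audit window opens. A sketch under that assumption; the stage labels paraphrase the sequence named above.

```python
# CSDDD chain stages, in the order named in the text (labels are illustrative).
CHAIN_STAGES = ["commitment", "mid_cycle_evidence", "corrective_action",
                "re_verification", "outcome"]

def chain_gaps(record: dict[str, bool]) -> list[str]:
    """Return the stages still missing from an entity's evidence chain."""
    return [s for s in CHAIN_STAGES if not record.get(s)]

inv_042 = {s: True for s in CHAIN_STAGES}                    # full chain on file
sup_009 = {"commitment": True, "corrective_action": True}    # partial chain

print(chain_gaps(inv_042))  # []
print(chain_gaps(sup_009))  # ['mid_cycle_evidence', 're_verification', 'outcome']
```

Running this check continuously, rather than at audit time, is what turns a six-week reconstruction into a zero-day audit pack.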

Where teams use it

One platform. Many ESG contexts.

ESG Partner Intelligence is the foundation underneath every page in this section. Pick the page closest to your work and go deeper.

What’s different

Provider scores set the anchor. Spreadsheets carry the workload. Neither closes the evidence chain.

Most teams running ESG due diligence live in two systems at once: a provider score subscription that returns a number, and a folder of spreadsheets and consultant memos where the actual work happens. Both produce outputs. Neither carries an entity ID across the lifecycle, neither scores qualitative evidence at submit, and neither runs a human-in-the-loop accuracy checkpoint before data lands in front of the team.

Capability comparison · Provider scores (MSCI, Sustainalytics, ISS ESG) vs. spreadsheets + consultants (Excel, Airtable, custom analyst work) vs. Sopact Sense (ESG Partner Intelligence)

One entity ID across DDQ, monitoring, and CSDDD
  • Provider scores: Not the model. Each provider holds its own per-cycle record of the company. None of them connect back to your DDQ, your audit findings, or your CAP tracker.
  • Spreadsheets + consultants: Manual match. Each tab issues its own identifier. Stitching IDs across cycles is the analyst’s job, every cycle.
  • Sopact Sense: Native primitive. Issued at the first DDQ. Used by every form, audit, and survey from intake through exit. Survives email changes, name spellings, supplier tier moves.

Sustainability documents read with citation evidence
  • Provider scores: Public data only. Providers score from publicly available disclosures. Entity-submitted PDFs, policies, and onboarding documents are never processed.
  • Spreadsheets + consultants: Manual review. Sustainability reports get skimmed, not analyzed. The most valuable signal goes unread across 40+ portfolio companies or 200+ suppliers.
  • Sopact Sense: 100% with citations. Every uploaded document read end-to-end. Each score links back to the source paragraph. Reviewers can audit any judgment in seconds.

Scoring consistency across entities and reviewers
  • Provider scores: 30 to 50 point variance. The same company scores 47 with one provider, 74 with another. Each provider uses private interpretations of the same ESG dimensions.
  • Spreadsheets + consultants: Reviewer drift. "Strong governance" scored privately by each of 12 analysts produces 12 private interpretations. No anchored evidence criteria.
  • Sopact Sense: One anchored rubric. Observable evidence anchors at every scoring level. AI and human reviewers reach the same conclusion. Re-scoring the full pool is one action.

CSDDD effectiveness evidence chain
  • Provider scores: Cannot produce. A point-in-time score cannot demonstrate that DD prevented harm over time. The architecture was never designed for it.
  • Spreadsheets + consultants: Reconstructed for audit. Effectiveness chain assembled by hand from disconnected sources at audit time. Often months of analyst work per cycle.
  • Sopact Sense: Assembled by default. Commitment at intake to mid-cycle evidence to corrective action to re-verification, all on the same entity record. Audit-ready by design.

Cross-portfolio queries on qualitative evidence
  • Provider scores: Quantitative only. Score-level comparisons across the portfolio. Worker-voice patterns, narrative themes, and corrective-action stories are invisible.
  • Spreadsheets + consultants: Not feasible. Reading 200 supplier worker surveys to find a pattern is a multi-week investigation. Most patterns are missed.
  • Sopact Sense: Column + Grid. "Which suppliers flagged retaliation risk in their Q3 worker survey?" runs as a single query. Cross-supplier themes surface in minutes.

Human-in-the-loop accuracy checkpoint before data goes live
  • Provider scores: No checkpoint. Scores update on the provider’s schedule. Your team has no review step before scores reach the IC dashboard.
  • Spreadsheets + consultants: Manual, ad hoc. Whoever owns the master sheet eyeballs each submission. Catches some errors, misses others. No structured release process.
  • Sopact Sense: Reviewer release. Submissions land in a reviewer queue. AI flags inconsistencies and missing fields. The data lead releases each record before it propagates to the team. Accuracy becomes a workflow, not a back-channel question.

Who runs it

Real partners. Real evidence chains.

Two customers. Two different ESG shapes: an impact-fund ESG report shaped over a 7-year partnership, and a supply-chain partner intelligence build for a food and nutrition org reporting to its funders.

Crossroads Impact Corp

Public co. · CDFI subsidiary · ESG portfolio

Seven years of ESG portfolio aggregation. The report you can read end-to-end.

7+ years

Continuous ESG partnership through Capital Plus Financial and Crossroads. Multiple consecutive published ESG impact reports drawing on the same connected record.

Crossroads Impact Corp asked the question every ESG-led investor eventually has to answer: how do we know our investments stay on the path of sustainability and inclusion across every stage, from sourcing to exit? The partnership started when the company was operating as Capital Plus Financial. Sopact mapped the relevant frameworks (EDCI, GIIN-IRIS, PRI), aggregated investee data across the lifecycle, and built the data layer beneath consecutive published ESG impact reports. Each year inherits the prior year’s baseline rather than starting over. The published report is a working example of what the connected record produces.

Our unwavering commitment to impact drives us and our partners, leading to a $223 million year-over-year increase in environmental and social loans. Collaborating with Sopact empowers us to create real, measurable change for those truly making a difference in our communities.

Eric Donnelly · CEO and Director, Crossroads Impact Corp

Read the ESG impact report

Food4Education

Kenya · school nutrition · supply chain

Multi-stage DDQ across the food supply chain. One auditable picture for funders.

Multi-partner

Legal, financial, and impact data captured across the supply chain. Centrally aggregated, scored against the program rubric, AI-analyzed for reporting.

Food4Education needed to systematically collect and aggregate data from its food supply chain partners to give funders a clear, auditable picture of partner impact. The proposal defined a multi-stage data collection workflow starting at due diligence, capturing legal, financial, and impact information from each partner, with Sopact aggregating the responses centrally, applying a rubric anchored on Food4Education’s internal goals, and automating the reporting layer. The result was supply-chain partner intelligence that funders can read end-to-end without reconstructing it from scratch every cycle.

The goal: bring every supply chain partner into one auditable view, with the same rubric across every partner, so what we communicate to funders matches what we actually know.

Food4Education · supply chain partner intelligence scope

FAQ

Questions LPs and procurement leads ask.

Eight questions impact fund managers, procurement leads, and ESG analysts ask in their first conversation. Visible Q&A so search engines and AI assistants can index every answer.

Q. 01
What is ESG due diligence?

ESG due diligence is the structured assessment of a company’s environmental, social, and governance performance before an investment, acquisition, grant, or supplier relationship, plus the continuous verification of that performance afterward. It runs across three pillars: environmental risk and climate exposure, social and labor practices, and governance quality. It happens in two modes: a structured DDQ at intake and continuous evidence after intake. The standards are well known. The hard part is connecting them to the same company across DD, monitoring, and effectiveness proof.

Q. 02
What is an ESG DDQ and how is it different from a score?

An ESG DDQ is the structured questionnaire an investor, funder, or procurement team sends to a prospective investee or supplier to collect standardized ESG data. It typically covers 40 to 150 questions across the three pillars, combining closed-ended compliance checks with open-ended narrative responses. An ESG score is what a rating provider produces from publicly available data. A DDQ is what you collect directly from the entity. Scores vary by 30 or more points across providers for the same company. DDQs, when scored at submit against an anchored rubric, produce citation-linked evidence that holds up under regulatory and LP review.

Q. 03
What is The Scoring Trap?

The Scoring Trap is the failure mode where ESG due diligence optimizes for producing an auditable number instead of building actual understanding. Scores are provider-dependent, point-in-time, and reflect what the company says, not what is happening on the ground. They cannot satisfy CSDDD’s requirement to prove effectiveness over time. Only a record of what you saw, when you saw it, what you did, and what changed afterward can. Escaping The Scoring Trap means designing ESG data collection inside a single system from first contact, with one ID per entity that carries forward across every cycle.

Q. 04
What is CSDDD and how does it change ESG due diligence?

CSDDD is the EU Corporate Sustainability Due Diligence Directive, which requires large companies operating in the EU to identify, prevent, and remediate adverse human rights and environmental impacts across their value chains. The directive requires evidence that due diligence is effective at preventing harm, not just that DD was performed. This is a fundamental shift: a point-in-time score cannot satisfy the standard. Only a documented chain linking commitments at intake to verified outcomes across multiple cycles can. ESG DD platforms must generate CSDDD-ready evidence chains by default, with one entity ID connecting intake to effectiveness proof.

Q. 05
How does AI change ESG due diligence in practice?

AI changes the operational economics of running ESG DD continuously across a portfolio. Every uploaded sustainability document gets read end-to-end against the fund’s rubric, with citations to the source paragraph. Every open-ended DDQ response gets scored at submit. Cross-portfolio queries on qualitative evidence that used to be multi-week investigations become single queries. Citation evidence is non-negotiable. Every score must trace to the specific content that generated it. This is what satisfies CSDDD effectiveness requirements and what holds up under LP review.

Q. 06
Does ESG due diligence need to be separate from impact measurement?

For impact funds especially, no. ESG DD and impact measurement should share the same entity records and the same data layer. When they live in separate systems, neither produces defensible intelligence. LPs see one set of numbers, regulators see another, and boards see a third. The Impact Measurement and Management page covers how the same architecture serves DD, monitoring, and LP reporting from one connected record.

Q. 07
How does ESG due diligence work for supply chain and supplier networks?

Supply chain ESG DD applies the rubric across supplier networks with two architectural additions. First, one supplier ID connecting DDQ submissions to worker-voice surveys to corrective action tracking and re-survey. Second, AI thematic analysis of open-ended worker feedback at scale across all sites simultaneously. For CSDDD compliance, the supplier DDQ must generate evidence that due diligence is effective at preventing harm, which requires year-over-year data on the same supplier entities, not isolated snapshots. Food4Education’s supply-chain partner intelligence build is one example of this pattern: legal, financial, and impact data captured at intake, scored against the program’s rubric, and aggregated into one auditable picture for funders.

Q. 08
How is Sopact different from MSCI, Sustainalytics, and ISS ESG?

Provider rating subscriptions and Sopact serve different needs. Providers score from publicly available data and serve early-stage screening across thousands of entities. Sopact is the system that runs the DD engagement and the year-over-year monitoring on the entities you actually invest in or contract with. Each holds one role. Provider scores can feed into Sopact as one input signal. The two are complementary, not substitutes. What Sopact replaces is the spreadsheet-and-consultant workflow that lives between the score subscription and the LP report.

Escape the scoring trap

Bring a portfolio. Leave with the evidence chain.

A 60-minute working session. Bring a sample DDQ from a real investee or supplier, plus your fund’s ESG rubric. We map the DDQ to anchored evidence criteria, the one entity ID across your CRM, and the CSDDD effectiveness chain you need at audit. By the end, you have a working setup and a path to running it next cycle.

Format 60-minute working session
Bring A sample DDQ + your rubric
Leave with A working evidence chain