
AI for social impact: meaning, methods, and measurement

AI for social impact in plain terms. What it does, why data architecture decides what AI can prove, and how to recognize a working setup.

Updated
May 2, 2026
Use Case

AI for social impact

AI for social impact uses AI to measure whether programs change lives.

It works when each stakeholder has one record across every touchpoint, when demographic fields live on the intake form rather than the report template, and when the qualitative answers stay linked to the quantitative ones.

This guide explains AI for social impact in plain terms. What it actually does. Why the data setup under it controls what it can prove. How to recognize whether your current architecture supports continuous learning or only annual reporting. Worked examples come from workforce training programs, scholarship reviews, and impact funds. No prior background needed.

What this guide covers

01 · From collection to claim, the working pipeline
02 · Definitions of measurement, management, platform
03 · Six design principles for AI impact work
04 · Method choices that compound across cycles
05 · A workforce training worked example
06 · Applications across three program shapes

The pipeline

From collection to claim: how AI for social impact actually works.

The AI is step four in a chain of six. The first three steps decide whether the AI has anything defensible to work with. The last two steps decide whether the result can be trusted by a funder, a board, or the program staff making the next decision. Each step depends on something being true that the step before guaranteed.

Pipeline from intake to evidence-linked report

01 · Intake. Application or first form. ID assigned. Has to be true: the ID is unique to one person.

02 · Touchpoints. Pre, mid, exit, follow-up. Same ID. Has to be true: form fields hold across cycles.

03 · Linkage. Open text stays attached to scores. Has to be true: the same record holds both.

04 · AI reads. Themes, rubric scores, summaries. Has to be true: coding is reproducible.

05 · Pattern. Cohort and site comparisons surface. Has to be true: cohorts are comparable.

06 · Claim. Report points back to the responses. Has to be true: the aggregate links to source.

Break any one assumption, and every step downstream is doing arithmetic on something that does not hold.

Read the pipeline as a chain. Each step names what it does and the assumption it depends on. The AI lives at step four. The first three steps decide what step four can read. The last two steps decide whether anyone outside the team can verify the result.
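To make the chain concrete, here is a minimal sketch, in Python, of the record shape the first three steps produce: one persistent ID, demographics captured at intake, and each touchpoint carrying its scores and open text together. The field names are illustrative, not Sopact Sense's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Touchpoint:
    stage: str                  # "pre", "mid", "exit", "followup"
    scores: dict                # Likert items, e.g. {"confidence": 3}
    open_text: dict             # open-ended answers, e.g. {"barriers": "..."}
    ai_themes: list = field(default_factory=list)   # attached at step 4

@dataclass
class Stakeholder:
    stakeholder_id: str         # assigned once at intake (step 1)
    demographics: dict          # captured on the intake form, not the report
    touchpoints: list = field(default_factory=list)  # steps 2-3: same ID, text linked to scores

# One participant, one record, every later form attaches here.
p = Stakeholder(
    stakeholder_id="stk-0001",
    demographics={"gender": "F", "site": "Site A", "zip": "94601"},
)
p.touchpoints.append(Touchpoint(
    stage="pre",
    scores={"confidence": 3},
    open_text={"barriers": "No laptop at home."},
))
```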

Definitions

Plain-language definitions, before the methods.

Four terms travel together in this space, and they are often used interchangeably. They are not the same thing. Each describes a different layer of what AI does for social programs, and the right one to use depends on what you are trying to do this quarter.

What is AI for social impact?

AI for social impact is the use of artificial intelligence to measure, manage, and improve the outcomes of social programs. It is not the same as AI for social good, which is the broader idea of using AI on humanitarian problems. The narrower version is operational: each stakeholder has one record across every touchpoint, demographic fields are captured at the form rather than added to the report, and the qualitative responses are read by AI at the moment of collection so they stay linked to the numbers.

The AI itself is a small part of the system. The setup that feeds it is the part that decides what it can prove. A team using AI for social impact well has a system where a funder asking an equity-disaggregated outcome question gets an answer the same afternoon, not three weeks later.

What is AI impact measurement?

AI impact measurement is the use of AI to count, score, and compare the changes a program produces in the people or organizations it serves. The AI reads open-ended responses to find themes, scores essays or applications against a rubric, summarizes documents, and surfaces patterns across cohorts.

It only works on data that was structured for it. If the same person enters as a fresh record each cycle, no AI can produce a valid pre-post comparison. If demographic fields were not captured at intake, no AI can produce equity-disaggregated outcomes. AI is the analysis layer. The collection layer determines what it can analyze. Most of the work that actually improves AI impact measurement is work on the collection layer.
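A small sketch of the point about fresh records, with invented data: when pre and post rows share a persistent ID, the comparison is a dictionary lookup; when they do not, participants silently fall out of the match.

```python
# Hypothetical survey rows; in a working setup every row carries the same
# persistent ID the participant was assigned at intake.
pre  = {"stk-0001": 3, "stk-0002": 2, "stk-0003": 4}   # confidence at intake
post = {"stk-0001": 4, "stk-0002": 4, "stk-0003": 4}   # confidence at exit

# A pre-post comparison is only valid for IDs present at both touchpoints.
matched = pre.keys() & post.keys()
change = {sid: post[sid] - pre[sid] for sid in matched}
avg_change = sum(change.values()) / len(change)

print(f"{len(matched)} matched participants, mean change {avg_change:+.2f}")
# Without a persistent ID, "Sarah Johnson" and "S. Johnson" are two keys,
# the intersection shrinks, and the comparison quietly loses people.
```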

What is AI impact management?

AI impact management is the ongoing practice of using AI-analyzed program data to make program-adaptation decisions. It is operational, not reportorial. A program team running AI impact management collects data continuously, sees themes and patterns within days of collection, adjusts the program before the next cohort begins, and uses the year-end report as a summary of what was already learned and acted on.

The shift from impact measurement to impact management is the shift from annual cycles to thirty-day cycles, and it is what AI makes possible when the data architecture supports it. Teams that have made this shift describe the change as moving from making decisions about a current cohort using last year's data, to making decisions during a current cohort using last month's.

What is community impact AI?

Community impact AI applies AI to programs serving a defined community: a neighborhood workforce program, a regional health initiative, a city-wide youth program, a community foundation portfolio. The community context adds two requirements that generic AI tools miss.

The first is multilingual qualitative analysis, because community programs collect in the languages people speak rather than only in English. The second is identity continuity across services, because a community member often touches multiple programs over years. AI that handles both, on data that was structured at intake, is what community impact AI means in practice.

Adjacent terms

AI for social impact vs. three nearby terms.

This page

AI for social impact

Operational discipline. AI applied to measure and improve program outcomes for the people the program serves.

Adjacent

AI for social good

The broader philosophy. Applying AI to humanitarian, environmental, or social problems. Describes intent. Does not require measurement of outcomes.

Different topic

AI's societal impact

A different question entirely: how AI affects employment, democracy, inequality, and human behavior at the population level. Studied by ethicists and policy researchers.

Synonym

AI for impact

Shorter form of AI for social impact. Some teams use it more broadly to cover impact investing or environmental impact alongside social.

Design principles

Six rules that decide whether AI helps or only looks like it does.

Programs that get value from AI for social impact follow these six rules without exception. Programs that struggle skip one or two and find that everything downstream gets faster but no truer.

01 · CLEAN AT SOURCE

The AI is only as good as the data it lands on.

Fix collection before you fix analysis.

Most teams add AI to a setup that was built for paper. The forms collect the wrong fields, in the wrong shape, on different platforms. The AI runs on the export. Speed goes up. Reliability does not. The fix that compounds is rebuilding the form, not buying the analysis.


Why it matters: teams that skip this principle spend 80% of their analysis time cleaning data that should never have been dirty.

02 · ONE PERSON, ONE ID

Every touchpoint links to the same record, automatically.

No matching by hand. Ever.

A persistent stakeholder ID is the smallest decision with the largest downstream effect. Without it, the same person enters as different records each cycle and pre-post comparison stops being possible. With it, every form submission attaches to the right record at the right moment.


Why it matters: every multi-year cohort comparison breaks here, in either direction.
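As a rough illustration of what assignment at intake looks like, the sketch below keys on a normalized email address and issues an ID on first contact. The lookup key and ID format are assumptions for illustration, not a prescribed scheme.

```python
import uuid

registry = {}  # lookup key -> persistent stakeholder ID

def get_or_create_id(email: str) -> str:
    """Return the existing ID for this person, or assign one at first contact."""
    key = email.strip().lower()
    if key not in registry:
        registry[key] = f"stk-{uuid.uuid4().hex[:8]}"
    return registry[key]

# Intake and exit forms resolve to the same record, no hand-matching later.
intake_id = get_or_create_id("Sarah.Johnson@example.org")
exit_id   = get_or_create_id("sarah.johnson@example.org ")
assert intake_id == exit_id
```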

03 · DISAGGREGATION AT INTAKE

Demographic fields belong on the form, not in the report template.

If it is not collected, it cannot be reported.

Equity reports require gender, geography, cohort, and program-type breakdowns. Adding those fields to a Google Sheet six months later means contacting two hundred participants again. Building them into the intake form once means the report writes itself when the funder asks.


Why it matters: the most common funder-report failure is missing fields, not wrong analysis.
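A sketch of why the field has to exist before the question is asked: when the demographic columns arrive with the intake row, a disaggregated cut is a one-line grouping. Field names and values below are invented.

```python
from collections import defaultdict

# Each row already carries the intake-form fields alongside the outcome.
rows = [
    {"id": "stk-0001", "site": "Site A", "gender": "F", "confidence_gain": 1},
    {"id": "stk-0002", "site": "Site B", "gender": "M", "confidence_gain": 2},
    {"id": "stk-0003", "site": "Site A", "gender": "F", "confidence_gain": 0},
]

def disaggregate(rows, by, metric):
    """Average a metric by any field that was collected at intake."""
    groups = defaultdict(list)
    for r in rows:
        groups[r[by]].append(r[metric])
    return {k: sum(v) / len(v) for k, v in groups.items()}

print(disaggregate(rows, by="site", metric="confidence_gain"))
# Works for "gender", "zip", or "cohort" too -- but only if the form asked.
```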

04 · QUALITATIVE WITH QUANTITATIVE

Narratives stay linked to numbers, in the same record.

A score without context is a number on a slide.

A confidence score of 4.2 means little. A 4.2 plus the open-ended response that says "I finally felt like I belonged in a technical environment" means a great deal. AI for social impact keeps these together by storing them on the same record and analyzing them together.


Why it matters: funders increasingly ask for the why behind every metric. The link has to exist before they ask.

05 · CONTINUOUS, NOT ANNUAL

Insights arrive in days so the next cohort benefits.

Annual cycles improve next year. Thirty-day cycles improve next month.

A barrier theme that surfaces in week two of a cohort can be addressed in week three. The same theme surfacing in a year-end report informs a future cohort but not the one that raised it. AI processes data continuously when the architecture is built for it. Annual cycles are an artifact of older tooling.


Why it matters: the gap between annual learning and continuous learning is roughly twelve cohorts of compounded improvement.

06 · AUDITABLE CLAIMS

Every aggregate metric points back to the underlying voices.

A claim that cannot be verified is a claim that will not be trusted.

Reports that show a 28% confidence rise should let a reader click through to the specific responses, cohort breakdown, and demographic cuts that produced the number. AI that generates a number without keeping the link to the source is producing prose, not evidence.


Why it matters: funder due-diligence cycles are getting more rigorous. Aggregate-only reports are losing.
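One way to picture the structural link, sketched with hypothetical field names: the finding carries the identifiers of the responses that produced it, so a reviewer can re-derive the number from the source.

```python
# Stored responses, keyed by stakeholder ID and stage.
responses = {
    "stk-0001/pre":  {"confidence": 3, "text": "Not sure I belong here."},
    "stk-0001/post": {"confidence": 4, "text": "I finally felt like I belonged."},
}

# An aggregate finding that keeps its provenance instead of discarding it.
finding = {
    "claim": "Confidence rose 28% at Site A between pre and post.",
    "metric": {"pre_mean": 3.1, "post_mean": 3.97, "pct_change": 0.28},
    "cut": {"site": "Site A", "cohort": "2025-C2"},
    "source_response_ids": ["stk-0001/pre", "stk-0001/post"],
}

def audit(finding):
    """Walk from the aggregate back to the underlying responses."""
    return [responses[rid] for rid in finding["source_response_ids"]]

print(audit(finding))
```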

Method choices

Six choices that decide what AI for social impact can prove.

Each row is a decision program teams face when setting up an AI impact platform. The broken column describes the workflow most teams fall into when the choice goes wrong. The working column describes the setup that holds across cycles. The fourth column names what each decision actually controls.

The choice · Broken way · Working way · What this decides

Where the AI runs

Inside the form, or after the export.

Broken

AI is added on top of a CSV export. It analyzes whatever the form happened to capture, with no chance to ask for missing fields.

Working

AI runs at the moment of collection. The form is built for what the report will eventually need to claim, and the AI reads as the data arrives.

What gets analyzed. A retrofit AI is limited by collection it had no part in designing.

Stakeholder identity

Reconciled by hand, or assigned at intake.

Broken

Each form submission creates a fresh row. Sarah Johnson at intake becomes S. Johnson at exit. Matching across two hundred records is manual and never finishes.

Working

A persistent ID is assigned at first submission. Every later form submits to the same record. Matching is a data-model decision, not a labor one.

Whether pre-post is possible. No ID, no longitudinal comparison.

Disaggregation

Added to the report, or built into the form.

Broken

A funder asks for outcomes by ZIP code. The form did not collect ZIP code. Two hundred participants are re-contacted to fill in what should have been collected once.

Working

Demographic and geographic fields live on the intake form, every cycle. Reports cut by these fields are byproducts of normal collection.

What can be claimed. Equity reports require fields that have to exist before the question is asked.

Qualitative analysis

Coded by hand later, or read at collection.

Broken

Open-ended responses sit in a spreadsheet column. A consultant codes them six months later, if at all. The themes appear after the cohort has cycled out.

Working

AI reads each open-ended response as it arrives, attaches themes and sentiment to the same stakeholder record, and updates the cohort summary continuously.

When you can act on the why. Late coding informs next year. Live coding informs this month.

Reporting cadence

Annual document, or rolling dashboard.

Broken

Reports are produced once a year, in a three-week sprint that consumes the impact team. The numbers are old by the time the document arrives.

Working

Dashboards update as data arrives. Annual reports are summaries of what was already known and acted on. Funders get the same view the team uses.

Who learns first. Annual reporting hands the learning to next year. Continuous reporting keeps it for this cohort.

Evidence linkage

Aggregate-only, or metric-to-voice traceable.

Broken

Reports show aggregate numbers. A funder asking which responses produced the 28% confidence rise gets either silence or a side document assembled by hand.

Working

Every aggregate metric in the report links back to the specific responses, cohort breakdown, and demographic cuts that produced it. The link is structural, not assembled.

Whether the claim survives scrutiny. Linked evidence holds up. Aggregate-only does not.

Why row one matters most

The first decision controls the next five. If the AI runs after the export, the form was not built for the report, the IDs are not persistent, the disaggregation fields are not there, the qualitative coding is late, and the evidence linkage is gone. Fix the first row first. Everything else follows.

Worked example

A workforce training program, before and after.

One program. The same staff. The same participants. Two different setups for analyzing what they collected. The shape of the result, and the time to act on it, came out very differently.

We run a sixteen-week training program with three cohorts a year, about sixty participants a cohort. We had pre and post surveys with open-ended questions about barriers. For two years we ran them, exported to a spreadsheet, and looked at totals at the end of each year. Last spring we moved to a setup that read the open text at collection. Within four days we saw that tool access was the top barrier theme at one of our two sites, in roughly two-thirds of responses. We bought tool kits before the next cohort started. Confidence scores at that site rose by about a quarter over the next cycle. The other site stayed flat.

Workforce training program lead, mid-cohort cycle.

Quantitative axis

Confidence scores per cohort, per site.

Pre and post Likert scores on the same items. Site, cohort, and demographic cuts available because the fields existed at intake.

Qualitative axis

Barrier themes from open-ended responses.

Tool access, scheduling, transportation, family obligations. AI tagged each response at the moment of submission, attached to the same stakeholder record as the score.

Sopact Sense produces

  • Persistent IDs across pre, mid, post, follow-up

    The same participant submits four forms over the program, each linking to one record. Pre-post comparison is automatic.

  • Open text analyzed at collection (Intelligent Cell)

    Each open-ended response gets themes and sentiment attached to the stakeholder record as it arrives. No after-the-fact coding pass.

  • Cohort and site patterns surfaced (Intelligent Column)

    Tool-access theme spike at site one was visible in the dashboard four days into collection, not at year-end.

  • Evidence-linked outcome reports (Intelligent Grid)

    The 28% confidence claim points back to the specific responses, cohort, and demographic cuts that produced it. Funder-auditable by design.

Why traditional tools fall short

  • Each cohort enters as fresh records

    No persistent ID across forms. Pre-post matching has to be done manually, by spreadsheet lookup, every cycle.

  • Open text sits in a spreadsheet column

    Coding happens at year-end, if at all. By then the responses that mattered have stopped mattering for the cohort that gave them.

  • Patterns appear at year-end review

    The site-level barrier that could have been addressed in week three is reported in month twelve, after the cohort moved on.

  • Reports are aggregates without traceable evidence

    A 28% number on a slide. No path back to which responses, cohort cut, or demographic groups produced it. Funder due-diligence stalls.

The confidence rise was not a measurement question. It was an architecture decision. The data structure existed before the funder asked for it, and the open text was readable while the cohort was still in the room. Sopact Sense is not another report writer. It is the collection layer that makes the next report writable, the next theme actionable, and the next cohort the one that benefits.

Applications

Three program shapes that AI for social impact serves differently.

The architecture is the same. The collection points, the AI work, and the reports that come out look quite different depending on whether the program is cohort-based, application-driven, or portfolio-based. Three sketches.

01 · COHORT-BASED

Workforce training programs

Sixteen-week cohorts, 30 to 200 participants, two to four sites.

The typical shape. A workforce training program collects pre and post surveys with both Likert items and open-ended barrier questions. Mid-program check-ins capture how the program is going. Six-month follow-ups track employment. Each cohort runs three to four times a year.

What breaks. Forms differ between cohorts because someone updated the wording. The same person enters as Sarah Johnson at intake and S. Johnson at exit. Open-ended responses go into a spreadsheet column that nobody reads until year-end. By the time a barrier theme is identified, the cohort that raised it has moved on.

What works. Persistent IDs link every form for the same participant. AI reads the open text as it comes in. Site-level patterns surface within days. The program staff act before the next cohort starts. Continuous learning replaces annual reporting, and the second cohort each year benefits from what the first cohort showed. For deeper detail, the nonprofit impact measurement and training effectiveness guides cover the cohort architecture in full.

A specific shape

Sixty participants per cohort, two sites, three cohorts a year. Pre and post on the same five-item confidence scale. Open-ended barrier question after each. AI tags barrier themes per response, attached to the participant record. Site comparison and cohort comparison automatic. The program team sees a tool-access spike at one site in week two of cohort six and orders kits before cohort seven begins.
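A sketch of the kind of check that surfaces that spike in week two rather than at year-end, assuming themes were already attached per response at collection. The data and the alert threshold are invented.

```python
from collections import Counter

# Responses tagged at collection: (site, barrier theme attached by the AI).
tagged = [
    ("Site A", "tool access"), ("Site A", "tool access"), ("Site A", "scheduling"),
    ("Site A", "tool access"), ("Site B", "transportation"), ("Site B", "scheduling"),
]

def theme_share_by_site(tagged):
    """Share of each barrier theme within each site's responses so far."""
    per_site = {}
    for site, theme in tagged:
        per_site.setdefault(site, Counter())[theme] += 1
    return {site: {t: n / sum(c.values()) for t, n in c.items()}
            for site, c in per_site.items()}

for site, themes in theme_share_by_site(tagged).items():
    for theme, share in themes.items():
        if share >= 0.6:   # arbitrary alert threshold for illustration
            print(f"{site}: '{theme}' in {share:.0%} of responses -- act before the next cohort")
```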

02 · APPLICATION-DRIVEN

Scholarship and grant programs

Application windows, 200 to 5,000 submissions, rubric review.

The typical shape. A scholarship or small-grant program runs an annual or rolling application window. Each application includes essays, recommendation letters, and supporting documents. A committee scores against a rubric. Awardees enter a program; many also report back at the end of the year. Some programs run multiple times a year.

What breaks. The committee fatigues. Application four hundred is read more strictly than application forty. Reviewers code for different things despite the rubric. Recommendation letters get skimmed. The applicant's voice from the essay never reaches the awardee's follow-up reporting a year later, because the systems do not link.

What works. AI scores essays, recommendations, and documents against the same rubric for every applicant. Reviewer time concentrates on the borderline cases where judgment matters. The same persistent ID carries through to awardee reporting, so the original application essays travel with the participant. Equity reporting is a byproduct, because demographic fields lived on the application form. Programs that handle these applications with structured rubrics and AI screening run shortlist cycles that are faster and more consistent than committee-only review.

A specific shape

Two thousand applications, six-criterion rubric, three reviewers per application. AI scores all applications against the rubric and surfaces a top-quartile shortlist. Reviewers focus on the shortlist plus a sample of the rest as a check. Time-to-shortlist drops from six weeks to nine days. Awardees who report back the following year arrive with their original essay and rubric scores attached to their record.
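A rough sketch of the shortlist mechanics under assumed rubric weights and invented scores: every application is scored the same way, the top quartile goes to reviewers, and a sample of the rest is kept as a consistency check.

```python
import random

rubric_weights = {"need": 0.3, "fit": 0.4, "readiness": 0.3}   # hypothetical criteria

# Per-application criterion scores, e.g. produced by AI-assisted rubric reading.
applications = {f"app-{i:04d}": {c: random.uniform(1, 5) for c in rubric_weights}
                for i in range(2000)}

def weighted_score(scores):
    return sum(rubric_weights[c] * v for c, v in scores.items())

ranked = sorted(applications, key=lambda a: weighted_score(applications[a]), reverse=True)
shortlist = ranked[: len(ranked) // 4]                       # top quartile to reviewers
check_sample = random.sample(ranked[len(ranked) // 4:], 50)  # spot-check the rest

print(len(shortlist), "shortlisted;", len(check_sample), "sampled as a consistency check")
```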

03 · PORTFOLIO-BASED

Impact funds and ESG portfolios

10 to 80 portfolio companies or grantees, ongoing, multi-quarter.

The typical shape. An impact fund or ESG team manages a portfolio of organizations. Each grantee or investee submits quarterly updates that combine financial KPIs, narrative reports, and supporting documents. Portfolio review meetings happen monthly. The fund reports up to its own LPs or board annually.

What breaks. Each grantee submits in a different format. Reconciling KPIs across thirty submissions takes a quarter of the team's analysis time. Narrative reports go into a folder; their themes never make it into the portfolio review. When a grantee's community engagement drops, the signal arrives two quarters late.

What works. Each grantee has a persistent organizational ID. Updates submit through standardized forms and link to the same record. AI extracts KPIs from the financial submission, themes from the narrative report, and flags from compliance documents. The portfolio dashboard updates as updates arrive. The next portfolio review uses what arrived this week, not what was reconciled six weeks ago.

A specific shape

Twenty-four grantees, quarterly reporting, monthly portfolio reviews. Each submission lands a structured KPI block plus a narrative section. AI extracts both and flags anomalies. The portfolio manager sees a community-engagement drop at one grantee within two weeks of the quarterly submission and the follow-up call happens before the next review.
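A sketch of the anomaly flag described above, with invented grantee data: each new quarterly KPI is compared against that grantee's own recent baseline and flagged when it drops past a tolerance.

```python
# Quarterly community-engagement KPI per grantee (the structured block of each submission).
history = {
    "grantee-01": [120, 135, 128],
    "grantee-02": [80, 82, 84],
}
new_quarter = {"grantee-01": 90, "grantee-02": 86}

def flag_drops(history, new_quarter, tolerance=0.15):
    """Flag grantees whose new value falls more than 15% below their own recent average."""
    flags = []
    for grantee, value in new_quarter.items():
        baseline = sum(history[grantee]) / len(history[grantee])
        if value < baseline * (1 - tolerance):
            flags.append((grantee, value, round(baseline, 1)))
    return flags

for grantee, value, baseline in flag_drops(history, new_quarter):
    print(f"{grantee}: engagement {value} vs baseline {baseline} -- schedule the follow-up call")
```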

A note on tools

Most teams already have collection tools. The architectural gap sits elsewhere.

Tools in this comparison: Sopact Sense, Google Forms, SurveyMonkey, Submittable, Qualtrics, Typeform, ChatGPT, Claude.

Form tools collect data well. General AI assistants summarize text well. Both are useful pieces of a working setup. The architectural gap is identity and linkage. Form tools do not assign a stakeholder ID that travels across separate forms, and AI assistants do not keep an aggregate metric tied to the responses that produced it. Pre-post, equity-disaggregated, multi-cohort reporting requires both, every cycle, with the link kept live by the platform rather than reassembled by hand.

Sopact Sense fills that gap by treating identity as a first-class field on every form, the AI analysis as a layer on the collection record (not on an export), and the report as a byproduct of how the data was collected. Most teams keep their existing form tool for one-off intake and use Sopact Sense for the program data that has to support multi-year claims.

FAQ

AI for social impact, questions answered.

The questions program teams ask most often when they start working with AI on social-impact data, with plain-language answers.

  • Q.01

    What is AI for social impact?

    AI for social impact is the use of artificial intelligence to measure, manage, and improve the outcomes of social programs. It is not the same as AI for social good, which is the broader idea of using AI on humanitarian problems. The narrower version is operational: each stakeholder has one record across every touchpoint, demographic fields are captured at the form rather than added to the report, and the qualitative responses are read by AI at the moment of collection so they stay linked to the numbers. The AI itself is a small part of the system. The setup that feeds it is the part that decides what it can prove.

  • Q.02

    What is the difference between AI for social impact and AI for social good?

    AI for social good is the broader philosophy of applying AI to humanitarian, environmental, and social problems. AI for social impact is the narrower operational discipline of using AI to measure and improve the outcomes of a program: who changed, by how much, why, and what should be different next cycle. Social good describes intent. Social impact describes accountability. Many AI-for-social-good projects produce no measurable social impact because the measurement setup was never built. The two terms travel together but answer different questions.

  • Q.03

    What is AI impact measurement?

    AI impact measurement is the use of AI to count, score, and compare the changes a program produces in the people or organizations it serves. The AI reads open-ended responses to find themes, scores essays or applications against a rubric, summarizes documents, and surfaces patterns across cohorts. It only works on data that was structured for it. If the same person enters as a fresh record each cycle, no AI can produce a valid pre-post comparison. If demographic fields were not captured at intake, no AI can produce equity-disaggregated outcomes. AI is the analysis layer. The collection layer determines what it can analyze.

  • Q.04

    What is AI impact management?

    AI impact management is the ongoing practice of using AI-analyzed program data to make program-adaptation decisions. It is operational, not reportorial. A program team running AI impact management collects data continuously, sees themes and patterns within days of collection, adjusts the program before the next cohort begins, and uses the year-end report as a summary of what was already learned and acted on. The shift from impact measurement to impact management is the shift from annual cycles to thirty-day cycles, and it is what AI makes possible when the data architecture supports it.

  • Q.05

    What is an AI impact platform?

    An AI impact platform is a software system that combines stakeholder data collection, AI analysis of that data, and reporting in one connected workflow. The defining test is whether the AI sits inside the collection layer or runs on exports from it. AI on top of exports analyzes whatever the form happened to capture. AI inside the collection layer can ensure the form captured what the eventual report needs to claim. Both call themselves AI impact platforms. Only the second works for multi-year, equity-disaggregated, qualitative-plus-quantitative reporting.

  • Q.06

    What is community impact AI?

    Community impact AI applies AI to programs serving a defined community: a neighborhood workforce program, a regional health initiative, a city-wide youth program, a community foundation portfolio. The community context adds two requirements that generic AI tools miss. The first is multilingual qualitative analysis, because community programs collect in the languages people speak. The second is identity continuity across services, because a community member often touches multiple programs over years. AI that handles both, on data that was structured at intake, is what community impact AI means in practice.

  • Q.07

    What is AI in the social sector?

    AI in the social sector covers four use patterns: drafting communications and grant text from notes, screening applications against a rubric, analyzing open-ended survey or interview text at scale, and producing reports that connect aggregate metrics to underlying responses. The first is general-purpose. The other three are specific to programs, and they need data structured at collection to produce defensible results. Many sector adopters start with the first and discover that the second through fourth need a different platform than the form tools they had been using.

  • Q.08

    What is AI impact analysis?

    AI impact analysis is the AI-assisted reading of program data to identify what the program changed, for whom, and why. It includes pre-post comparison on outcome scores, theme extraction from open-text responses, rubric scoring of essays or documents, and pattern detection across cohorts and sites. The output is a set of findings that connect numbers to narratives. The credibility of the findings depends on whether the same person can be tracked across pre, mid, post, and follow-up, and whether the qualitative responses were captured in a form the AI can read at scale.

  • Q.09

    What platforms can report on social impact?

    Form tools like Google Forms, SurveyMonkey, and Submittable can collect data and produce basic dashboards, but their reports stop at the aggregate level and cannot connect numbers to the responses that produced them. Workflow platforms add review and routing layers, but the analysis still happens after export. AI impact platforms build the reporting on top of structured collection and integrated AI analysis, so the report points back to the underlying responses by design. The right platform depends on whether the report needs to be auditable: if a funder might ask which responses produced a metric, the platform has to keep the link.

  • Q.10

    Can ChatGPT do AI impact measurement?

    ChatGPT and similar general AI tools can summarize a set of responses, draft narrative around metrics, and suggest themes from a sample of open text. They cannot reproduce the same output on the same input across days, cannot guarantee that two cohorts were analyzed under the same coding scheme, and cannot tie an aggregate finding back to the specific responses that produced it. For drafting, they save hours. For formal impact measurement that has to defend a claim to a funder or board, the lack of reproducibility is the problem. AI impact platforms run the same analysis the same way every time, on the same data structure.

  • Q.11

    How does Sopact handle AI for social impact?

    Sopact Sense assigns a persistent stakeholder ID at the first form submission. Every later touchpoint, including mid-program surveys, exit assessments, and follow-up forms, links to that same ID automatically. Demographic and disaggregation fields live on the intake form, not in a report template. The Intelligent Suite analyzes open-ended responses at the moment of collection, synthesizes a per-stakeholder summary, surfaces patterns across cohorts, and generates reports where every aggregate metric connects to the underlying responses. The AI is one layer of four. The collection setup is the layer that makes the rest possible.

  • Q.12

    What is the best social impact measurement software in 2026?

    The right tool depends on program complexity. For a single annual program with stable criteria and under two hundred participants, a well-set-up form tool plus a spreadsheet works. For multi-year outcome tracking, equity-disaggregated reporting across multiple funders, or qualitative analysis at scale, a platform with persistent identity, integrated qualitative and quantitative analysis, and evidence-linked reporting is the architecture that holds. The test question: can you answer an equity-disaggregated outcome question from eighteen months ago without assembling spreadsheets? If the answer is no, the bottleneck is the platform, not the analysis.

  • Q.13

    What is AI for impact?

    AI for impact is a shorter form of AI for social impact, used interchangeably. Some teams use it more broadly to cover impact investing or environmental impact alongside social impact. The operational definition is the same: AI applied to measure, manage, and improve the outcomes of programs that aim to produce a defined change in defined people or organizations. The same architectural rules apply: persistent identity, disaggregation at intake, qualitative responses linked to quantitative outcomes, and evidence that points back to the source.

  • Q.14

    Can Google Forms or SurveyMonkey support AI for social impact?

    They can support the collection layer for a single cycle, and basic AI tools can be applied to the export. The structural limit is identity. Neither tool assigns a persistent stakeholder ID across separate forms. Each cycle produces a fresh dataset, and matching the same person across application, mid-program, exit, and follow-up becomes a manual reconciliation job that grows with program scale. Form tools work for short, single-cycle programs. They reach a ceiling at multi-year, multi-touchpoint, multi-funder reporting, regardless of how good the AI applied to the export is.

Related guides

Where to go next

Each guide picks up a thread from this page and follows it deeper. Adjacent siblings clarify scope. The methodological pages explain how the architecture gets built. The sector pages show what the architecture produces.

Bring your program

Bring three cohorts of data. Leave with a working setup.

The page above describes a method. The method only matters once it meets a real program: your participants, your funder questions, your touchpoint schedule. The fastest way to test whether the architecture fits is a 60-minute working session with the person who designed it.

  • 1 Map your touchpoints onto the six-stage pipeline from this page.
  • 2 Identify which disaggregation cuts your funder will ask for.
  • 3 Locate the linkage gaps that block longitudinal AI analysis.
  • 4 Decide what changes for the next cohort, not the historical data.

Who runs the session. Unmesh Sheth, who has spent two decades building data systems for social-sector organizations and designed the architecture this page describes. The format is a working session, not a sales pitch. If your shape fits, the next step is implementation. If it does not, the conversation ends with the right next step regardless.