
Social Impact Assessment: AI-Ready Methodology & Tools

Step-by-step guide to social impact assessment methodology, process, and reporting. Includes examples, frameworks, and tools built for nonprofit programs.

Updated
May 6, 2026
Use Case
Social impact assessment
A weak SIA counts what was delivered. A strong SIA names what changed, and for whom. Most stop at the count.

This guide explains the five-stage process that anchors the methodology, the six design principles that decide whether an SIA holds up under funder review, the methods that span the work, and a worked example from a 2X-aligned gender-lens fund scoring 24 portfolio companies. Plain language, no prior background needed.

What this page covers
  • 01 The five-stage SIA process
  • 02 Definitions and SIA acronym
  • 03 Six design principles
  • 04 Method-choice matrix
  • 05 Worked example: 2X portfolio
  • 06 Frequently asked questions
What gets measured at three levels of rigor
Level 1 · A delivery count

Invested $200M across 24 portfolio companies.

A capital deployment number. Not yet a social outcome.

Level 2 · A point-in-time score

60% of portfolio companies are women-led.

A snapshot. Useful for screening. Silent on what changed.

Level 3 · A social impact assessment

Year-over-year change in women's leadership share, employment quality, and customer reach across the same 24 companies.

2X Criteria scored against verified data. Operator narratives themed against scores. Framework-aligned LP report.

The process
The five stages of social impact assessment

Every social impact assessment moves through the same five stages, whether the team writes them down or not. Run them step-by-step, in order, and the report at the end can answer the question the assessment was supposed to answer. Skip a stage, or run it out of order, and it cannot.

Sequence of work
01 · SCOPE
Scope

Name the change in plain language: who, what changes, by how much, by when. Decide what the assessment must answer.

Common failure: writing the survey first, choosing indicators after.

02 · BASELINE
Baseline

Capture starting conditions before the program acts. Assign a persistent ID at first contact so later records link back automatically.

Common failure: treating intake forms as baseline.

03 · METHOD
Method

Choose how the data will be collected. Pre-post survey, longitudinal cohort, qualitative interviews, comparison group. Most credible SIAs use more than one.

Common failure: defaulting to whichever tool is already paid for.

04 · MEASURE
Measure

Run the instruments at the right cadence. Pre, post, follow-up. Same wording each time. Open responses themed at collection, not at report.

Common failure: changing question wording between cycles.

05 · REPORT
Report

Produce a framework-aligned narrative. Name what the data does and does not show. Treat the report as a living record that updates as new data arrives.

Common failure: shipping a static PDF that goes stale on day one.

What threads through every stage
A named change, in plain language
A persistent ID at first contact
A method that fits the change
The same instrument, every cycle
Framework alignment in the data dictionary

Each stage depends on the one before it. Skip baseline and the endline number means nothing.

Sources: the five-stage sequence aligns with international SIA practice as described in IAIA (International Association for Impact Assessment) guidelines, IFC Performance Standard 1, and program-evaluation literature. Sectors and frameworks vary; the stages do not.

Definitions
Social impact assessment, in plain terms

Five questions cover most of what people mean by social impact assessment. The answers below match the visible FAQ at the bottom of the page; both are written for someone meeting the methodology for the first time.

What is social impact assessment?

Social impact assessment is a systematic process for measuring whether a program, policy, or investment changed life outcomes for the people and communities it touched. It pairs quantitative indicators (employment rates, income, health markers, scores on validated scales) with qualitative evidence (interviews, open-ended responses, narrative themes), and reports against a chosen framework such as IRIS+, the UN Sustainable Development Goals, B4SI, or 2X Global.

The point of an SIA is to answer a specific question. Did the activity produce the change it set out to produce, and for whom. A weak SIA reports activity volumes and self-reported satisfaction. A strong SIA names what changed, by how much, for which people, and pairs the number with the narrative that explains it.

The SIA acronym is used widely. When social impact assessment is paired with environmental impact assessment, the combined study is called ESIA, common for infrastructure, extractive, and large energy projects under World Bank, IFC, or national regulator requirements.

What is the social impact assessment process?

Five sequential stages. Scope: name the change you want to measure and the question the SIA has to answer. Baseline: capture starting conditions before the program acts on the participant or site, with a persistent identifier so later records link back. Method: choose how the data will be collected, including whether you need a comparison group. Measure: run the instruments at the right cadence, usually pre, post, and follow-up, with the same wording each time. Report: produce a framework-aligned narrative that names what the data does and does not show.

The order matters more than any individual stage. Scoping after the survey is written produces an SIA that cannot answer the question. Skipping baseline produces an endline with nothing to compare against. Retrofitting a framework at report time produces alignment that does not survive a careful funder reviewer. Most SIA failures trace back to the order, not the technique.

What is a social impact assessment framework?

A social impact assessment framework is a structured language for what gets measured. The widely used ones in SIA practice are IRIS+ (the de facto standard for impact-fund reporting, from the Global Impact Investing Network), the UN Sustainable Development Goals (broad, often paired with IRIS+ for double-coding), B4SI (corporate community investment, formerly LBG), and 2X Global (gender-lens). Government and multilateral SIAs often add IFC Performance Standards, World Bank ESS, or national regulatory frameworks.

Pick the framework whose indicators you can actually source from your program, not the prestige framework whose indicators you cannot. Picking the prestige framework first and then discovering you cannot collect its indicators is the most common scoping mistake in SIA.

What are the methods for social impact assessment?

Common methods include pre-post survey design (the same instrument run at baseline and at endline), longitudinal cohort tracking (the same participants followed over months or years), qualitative interviews and focus groups, document review for portfolio and policy SIAs, comparison-group studies when causal claims are needed, and mixed-method designs that pair numbers with narratives. Most credible SIAs use more than one method.

The choice of methods depends on the change being measured, the cadence the program supports, and the framework the funder asked for. The methodology has to fit the program, not the other way around. SIA methodology that copies a method used elsewhere without adjusting for the program context produces an SIA that everyone signs and nobody trusts.

What is the difference between SIA and ESIA?

SIA measures social outcomes for people and communities. ESIA combines SIA with EIA (environmental impact assessment) for projects whose social and environmental effects are linked, such as infrastructure, mining, or large energy installations. ESIA is often required by World Bank, IFC, or national regulators; standalone SIA is more common for development programs and impact funds.

The two share architecture: persistent IDs, mixed-method evidence, framework alignment, and continuous comparison. ESIA adds environmental measurements, regulator-defined milestones, and resettlement-related stakeholder consultations. Most SIA tools handle the social side cleanly; ESIA tools usually need to integrate environmental field data and regulatory disclosure formats on top.

Related-but-different terms
Distinctions worth knowing
SIA acronym
What does SIA stand for?

SIA stands for social impact assessment. The acronym is used across development finance, impact investing, foundation portfolios, and government policy work. In healthcare and IT contexts, SIA can also stand for system impact analysis or security impact assessment; the social impact meaning is the dominant one in the development and investment fields.

SIA vs. measurement
SIA vs. social impact measurement

Social impact measurement is the broader continuous practice of collecting evidence of change. Social impact assessment is the report against a framework that the measurement system produces. Measurement runs every day or every cohort. Assessment runs every reporting cycle. Programs that do measurement well produce assessments without much friction.

SIA vs. SROI
SIA vs. SROI

SROI (Social Return on Investment) is one method that can sit inside an SIA. It monetizes social outcomes into a benefit-cost ratio. SIA is the broader process. Most SIAs do not use SROI; they report against IRIS+ or sector-specific frameworks instead. SROI is most useful when funders explicitly require monetized impact evidence.

SIA vs. analysis
SIA vs. social impact analysis

Some authors use social impact analysis interchangeably with SIA. Others reserve "analysis" for the analysis-only step inside the broader assessment process. In practice, social impact analysis tools and SIA tools cover the same ground; the work that distinguishes them is the analysis layer of theming, disaggregation, and framework alignment. A social impact analysis example will typically show the analysis step in isolation: a coded interview transcript or a baseline-to-endline statistical comparison, without the full scope-to-report cycle around it.

Design principles
Six principles that decide whether an SIA holds

The same six principles separate an SIA that survives funder review from one that does not. They apply across program SIAs, portfolio SIAs, and project SIAs. Each one names a decision that gets made early and pays for itself across every later cycle.

01 · OUTCOME
Measure life change, not delivery counts

Trainings delivered is an activity. Employed at 90 days is an outcome.

Most SIAs report activity volumes because they are easier to count. The question funders are actually asking is whether the activity changed life outcomes for the people the program served. Outputs answer one question; outcomes answer the other.


Why it matters. A funder who funded outputs needs an output report. A funder who funded outcomes needs an SIA. Most funders mean the second.

02 · IDENTITY
Persistent ID at first contact

Assign the identifier the moment a participant or company enters the system, not at endline.

Every later touchpoint links to that ID: pre-survey, mid-program check-in, exit assessment, follow-up. Without a persistent ID, baseline-to-endline comparison becomes manual reconciliation by name and email, which fails as soon as someone changes either.


Why it matters. Identity is the difference between a query and a five-week reconciliation project.
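The "query, not reconciliation" claim can be sketched in a few lines. Everything below is hypothetical (record shapes, field names, IDs); the point is only that pre and post records keyed by the same persistent ID join with a lookup instead of a manual name-and-email match.

```python
# Hypothetical survey records: each response carries the participant's
# persistent ID, assigned once at intake and reused at every touchpoint.
pre = [
    {"pid": "P-001", "confidence": 2},
    {"pid": "P-002", "confidence": 3},
]
post = [
    {"pid": "P-002", "confidence": 5},
    {"pid": "P-001", "confidence": 4},
]

def link_by_id(pre_records, post_records):
    """Join baseline and endline rows on the persistent ID."""
    baseline = {r["pid"]: r for r in pre_records}
    linked = []
    for r in post_records:
        if r["pid"] in baseline:  # name or email changes are irrelevant here
            linked.append({
                "pid": r["pid"],
                "pre": baseline[r["pid"]]["confidence"],
                "post": r["confidence"],
            })
    return sorted(linked, key=lambda x: x["pid"])

print(link_by_id(pre, post))
# Each linked pair is a change claim the assessment can defend.
```

Without the shared key, the same join becomes fuzzy matching on names, and every dropped match silently shrinks the comparison sample.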

03 · BASELINE
Capture starting conditions before the program acts

No baseline, no comparison. The endline number alone names a state, not a change.

Baseline runs at intake. The same instrument, or its validated parallel form, runs at endline. Different question wording at the two points invalidates the comparison. This is the single most common SIA failure across sectors.


Why it matters. A program with no baseline can describe its endline. It cannot defend a change claim.

04 · EQUITY
Disaggregate by group from the start

Race, gender, age, geography, disability, language. Define the cut at scoping; do not retrofit it.

Aggregate outcomes hide the ones that matter most. A program whose average employment outcome is 70% may be 85% for one group and 45% for another. Funders increasingly ask for disaggregated evidence; programs that captured disaggregation at intake produce it as a query.


Why it matters. Equity is the dimension funders ask for and the dimension teams most often forget at scoping.
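When the equity cut is captured at intake and attached to the persistent record, the disaggregated number is a grouping pass. A minimal sketch, with hypothetical fields and values:

```python
from collections import defaultdict

# Hypothetical intake records: the demographic field was captured at
# intake, so disaggregation is a query, not a separate analyst study.
records = [
    {"pid": "P-001", "group": "urban", "employed_90d": True},
    {"pid": "P-002", "group": "rural", "employed_90d": False},
    {"pid": "P-003", "group": "urban", "employed_90d": True},
    {"pid": "P-004", "group": "rural", "employed_90d": True},
]

def employment_rate_by(rows, field):
    """Outcome rate disaggregated by a demographic field."""
    totals = defaultdict(lambda: [0, 0])  # group -> [employed, n]
    for r in rows:
        totals[r[field]][0] += int(r["employed_90d"])
        totals[r[field]][1] += 1
    return {g: employed / n for g, (employed, n) in totals.items()}

print(employment_rate_by(records, "group"))
# The aggregate rate here is 75%; the group rates are 100% and 50%.
```

The aggregate average is exactly the number that hides the gap the funder is asking about.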

05 · NARRATIVE
Theme open responses at collection

Pair every quantitative outcome with one qualitative question, themed and linked to the same record.

A score answers what changed. A narrative answers why. The two have to be linked at the participant level for the why to inform the next program cycle. Quotes pulled into the appendix at report time do not count as integration; they are decoration.


Why it matters. Every difficult finding is explained in the open responses. The number alone never explains the number.
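"Linked at the participant level" has a concrete shape: themes coded at collection carry the same ID as the score, so the why can be pulled up next to the what for any record. A sketch with hypothetical IDs and themes:

```python
# Hypothetical themed open responses, coded at collection and linked to
# the same participant record as the score -- not quotes pasted into an
# appendix at report time.
scores = {"P-001": 4, "P-002": 2}
themes = [
    {"pid": "P-001", "theme": "childcare access"},
    {"pid": "P-002", "theme": "transport barrier"},
    {"pid": "P-002", "theme": "scheduling conflict"},
]

def why_behind(pid, scores, themes):
    """Pair the what (the score) with the why (the themes) for one record."""
    return {
        "pid": pid,
        "score": scores[pid],
        "themes": [t["theme"] for t in themes if t["pid"] == pid],
    }

print(why_behind("P-002", scores, themes))
# A low score arrives with its explanation attached.
```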

06 · FRAMEWORK
Pick the framework at scoping

IRIS+, SDGs, B4SI, 2X Global. Indicators chosen at scoping fit cleanly. Indicators retrofitted at report time do not.

Each framework has its own vocabulary. The data dictionary should speak that vocabulary from collection forward. Picking the prestige framework before checking what your program can collect is the most common scoping mistake in SIA.


Why it matters. A framework crosswalk built at report time is the most fragile artifact in the SIA.
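"The data dictionary speaks the framework" can be sketched as a mapping chosen at scoping. The field names are hypothetical, and the IRIS+ code echoes this page's own example; check any code against the live IRIS+ catalog before using it.

```python
# A data dictionary that speaks the framework from collection forward.
# Field names are illustrative; indicator codes must be verified against
# the current framework catalog.
DATA_DICTIONARY = {
    "employed_90d": {"framework": "IRIS+", "indicator": "PI2387",
                     "label": "Employed at 90 days"},
    "confidence_5pt": {"framework": "internal", "indicator": None,
                       "label": "Confidence, 5-point scale"},
}

def framework_fields(dictionary, framework):
    """Fields already aligned to a framework: the aligned report is a
    filter over the dictionary, not a report-time crosswalk."""
    return [name for name, meta in dictionary.items()
            if meta["framework"] == framework]

print(framework_fields(DATA_DICTIONARY, "IRIS+"))
```

A crosswalk built this way survives a framework version update as an edit to one mapping, not a rebuild of the report.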

Method-choice matrix
Seven choices that shape a social impact assessment

Every SIA makes the same seven design choices. The matrix below names each choice, the failure mode it produces when handled badly, the working pattern when handled well, and what it actually decides. The first decision controls the others.

How you scope

The first decision in any SIA.

Broken

The survey is written first. Indicators are chosen later by browsing IRIS+ for the closest match. The change the program wants to produce never gets named in plain language.

Working

The change is named first, in one sentence. The indicators are chosen second, in framework language. The instrument is written third, against those indicators.

Decides whether the data answers the question or only exists.

How you identify participants

Who counts as the same person across cycles.

Broken

Names and emails captured per cycle, in separate spreadsheets, with no shared key. Reconciliation is manual. Eight participants with missing emails are silently dropped from comparison.

Working

A persistent ID is assigned at first contact and reused at every later touchpoint. Pre, mid, post, and follow-up all link to the same record automatically.

Decides whether comparison is a query or a five-week project.

How you collect baseline

What "before" looks like in the data.

Broken

Baseline is pulled from intake forms after the fact. Often nothing is comparable to the endline. Reading the endline as change requires assumptions the data cannot support.

Working

Baseline is the first instrument, run before activities begin, with the exact wording the endline will use. Validated parallel forms only when re-test sensitization is a real concern.

Decides whether change is measurable or inferred.

How you handle qualitative

Open responses, interviews, narratives.

Broken

Quotes pulled by hand into the report appendix at the end. Themes coded once and never reused. The why never sits next to the what at the participant level.

Working

Open responses themed at collection, linked to the same participant ID, queryable beside scores. AI-assisted coding compresses weeks to minutes when the data is clean.

Decides whether the SIA can explain its own numbers.

How you align to a framework

IRIS+, SDGs, B4SI, 2X Global.

Broken

Indicators retrofitted to a chosen framework at report time. Crosswalks built in spreadsheets. The crosswalk breaks when the framework version updates.

Working

Indicators chosen in framework language at scoping. Data dictionary speaks the framework from collection forward. Aligned report is automatic when data is in.

Decides whether framework alignment is structural or fragile.

How you handle equity

Disaggregation by group: race, gender, age, geography.

Broken

Demographic fields added to the endline survey. Group-level cuts assembled in Excel by hand. The disaggregation report takes a separate analyst week per cycle.

Working

Equity cuts defined at scoping, captured at intake, attached to the persistent ID. Disaggregated dashboard is one filter away when the funder asks.

Decides whether equity is built in or bolted on.

How you handle the report

The artifact funders, regulators, and boards see.

Broken

Static PDF written once, reviewed once, archived once. Stale on day two. Cannot answer follow-up questions without rebuilding the analysis.

Working

Living dashboard fed by the same record. New monitoring data updates the report. Funder asks a follow-up; the answer is one filter away.

Decides whether the SIA is current or frozen.

Compounding effect

The first decision controls all the others. Indicators retrofitted to outcomes never quite fit. Frameworks layered onto the wrong data never produce credible alignment. Reports built from disconnected evidence never update. Get the scope right and the rest is mechanical. Get the scope wrong and every later decision pays for it.

Worked example
A 2X-aligned gender-lens fund scoring 24 portfolio companies

A fund manager describes the SIA her LPs are asking for, what last year's setup produced, and what changes for the next assessment cycle. The example uses the 2X Global Criteria, the most widely adopted gender-lens framework in development finance, applied to a real-world portfolio shape.

"We are a $200M emerging-markets gender-lens fund. We made our 2X Challenge commitment three years ago. Across 24 portfolio companies in financial services, healthtech, and agribusiness, we have to score each company annually against the 2X Criteria: entrepreneurship, leadership, employment, consumption, and investments through financial intermediaries. Last year, the assessment took 11 weeks. Our analyst sent five different Excel templates per company, chased 17 of 24 for follow-up, normalized definitions by hand, and pulled the LP-facing report together at midnight before the AGM. The fund expects to grow to 40 portfolio companies by 2027. Eleven weeks does not scale to 40."

Gender-lens fund manager, between annual reporting cycles.

The two axes the SIA has to measure
Quantitative axis
2X Criteria scoring

Entrepreneurship (women-founded share), leadership (women in C-suite and on board), employment (women in workforce + quality job indicator), consumption (products serving women), investments through financial intermediaries (share to women-led businesses). Year-over-year by company ID.

Bound at collection by company ID
Qualitative axis
Operator narrative

Open-ended interviews with founders and HR leads. Themed for workforce gender dynamics, leadership pipeline shifts, customer-base changes, and policy adoption. Linked to the same company-level scoring records, queryable beside the criteria.

Sopact Sense produces
  • Year-over-year 2X scoring per company

    Same persistent company ID at investment. Each year's 2X Criteria attached to the same record. Score changes are a query, not a manual merge.

  • Portfolio rollup with criterion-level filters

    All 24 companies aggregate by sector, geography, ticket size, and 2X criterion. The dashboard shows where the portfolio is moving and which companies are pulling the average.

  • Themed operator narrative beside scores

    Founder interviews themed at collection. Themes link to the same company records. The why sits beside the score for every portfolio company.

  • 2X Challenge-aligned LP report

    Mapped to the 2X Reference Guide indicators at scoping. Populated automatically. The annual LP narrative assembles itself when the data is in.
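The "score changes are a query" claim above can be sketched. Company IDs, years, and values are hypothetical; the metric name follows the article's leadership criterion, and the official 2X thresholds are not modeled here.

```python
# Hypothetical year-over-year scores keyed by a persistent company ID
# assigned at investment. With the ID in place, year-over-year change
# is a query, not a manual merge of Excel templates.
scores = [
    {"company": "C-01", "year": 2024, "women_leadership_pct": 28.0},
    {"company": "C-01", "year": 2025, "women_leadership_pct": 34.0},
    {"company": "C-02", "year": 2024, "women_leadership_pct": 41.0},
    {"company": "C-02", "year": 2025, "women_leadership_pct": 39.0},
]

def yoy_change(rows, metric):
    """Change from earliest to latest year, per company."""
    by_company = {}
    for r in sorted(rows, key=lambda r: (r["company"], r["year"])):
        by_company.setdefault(r["company"], []).append(r[metric])
    return {c: round(vals[-1] - vals[0], 1) for c, vals in by_company.items()}

print(yoy_change(scores, "women_leadership_pct"))
# Which companies are pulling the portfolio average, and in which direction.
```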

Why the traditional setup fails at 24 companies
  • Five Excel templates per company per year

    Each company submits in its own format. The analyst rebuilds the comparison sheet by hand every cycle.

  • Definitions drift across companies

    What counts as "senior leadership" varies. What counts as "women-led" varies. Cross-portfolio comparison loses precision before the analysis starts.

  • Founder interviews captured nowhere

    Operator narratives sit in call notes the IR team takes during diligence reviews. Themes are not captured systematically. The why is not in the assessment.

  • LP report rebuilt every year from scratch

    The 2X Reference Guide crosswalk lives in the analyst's head and a single Excel file. The next analyst inherits neither.

Why this is structural, not procedural

The integration is structural in Sopact, not procedural. A persistent company ID is assigned at investment and used by every later instrument. The 2X Criteria scoring template lives in the data dictionary, mapped to 2X Reference Guide indicators once. Operator interviews are themed at collection by the Intelligent Suite and linked to the same company records. The LP-facing report assembles when the data is in. The 11-week analyst pull becomes a query, and the fund can scale to 40 companies without scaling the assessment team.

Applications
Three other shapes the SIA architecture supports

The 2X portfolio worked example is one shape. Three more cover most of what teams call social impact assessment in practice. Each block points to recognized published research that uses the shape, so the architecture is anchored in real SIA studies rather than vendor claims.

01 · PROGRAM
Program-level SIA

Single program, cohort-based, IRIS+ alignment. The most common SIA shape.

A workforce, education, or community program runs an SIA across one or several cohorts. The assessment compares baseline to endline within the same participant population, themes the qualitative evidence behind the change, and reports against a framework the funder named at scoping. Year Up's published longitudinal SIA on workforce program participants is a widely cited case study in this shape, as is Acumen's Lean Data approach developed for last-mile customer programs.

What breaks: the assessment instrument is written after the curriculum. Pre-program baseline lives in a different tool from the exit survey, with no shared identifier. The funder asks for a disaggregated cut at the AGM and the analyst pulls a 6-week ad-hoc study. By cycle three, every cohort comparison requires a manual merge of three datasets.

What works: persistent participant ID at intake, threading every later instrument back to the same record. Confidence-lift, employment-at-90-days, average-wage, and themed open responses all attach to the same person. IRIS+ indicators mapped once at scoping. The next cohort starts where the last one ended.

A specific shape

320 participants across three cohorts. Confidence pre-post on a 5-point scale, employment status at 90 days, themed open-ended barriers at exit. All linked to the same participant ID. Funder report runs on the live record, refreshed continuously.
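The cohort shape above reduces to a small summary once every measure hangs off the same participant record. The numbers below are hypothetical, not from any published cohort:

```python
# Hypothetical cohort records: confidence pre/post on a 5-point scale
# plus employment at 90 days, all attached to the same participant ID.
cohort = [
    {"pid": "P-001", "conf_pre": 2, "conf_post": 4, "employed_90d": True},
    {"pid": "P-002", "conf_pre": 3, "conf_post": 4, "employed_90d": True},
    {"pid": "P-003", "conf_pre": 2, "conf_post": 3, "employed_90d": False},
]

def cohort_summary(rows):
    """Funder-facing headline numbers, computed from the live record."""
    n = len(rows)
    lift = sum(r["conf_post"] - r["conf_pre"] for r in rows) / n
    employed = sum(r["employed_90d"] for r in rows) / n
    return {"n": n,
            "avg_confidence_lift": round(lift, 2),
            "employment_rate_90d": round(employed, 2)}

print(cohort_summary(cohort))
```

Because the summary runs on the connected records, a new cohort refreshes the report instead of triggering a merge of three datasets.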

02 · GRANTEE NETWORK
Foundation grantee SIA

Multi-grantee, multi-site, cross-portfolio comparison. Foundation-driven.

A foundation funds 12 to 60 grantees across several program areas. The SIA shape is a portfolio rollup that has to compare grantees fairly, identify which sites moved the most, and report against the foundation's chosen framework. Most use IRIS+ (impact funds), B4SI (corporate community investment), or the SDGs (multilateral alignment). The GIIN's Annual Impact Investor Survey documents how investors run cross-portfolio SIA at scale; 60 Decibels' lean SIA approach is widely used by grantee networks that want survey-based outcome data without the consultant overhead. Each is a published social impact assessment case study worth reading before designing a portfolio SIA from scratch.

What breaks: each grantee uses a different data tool. Each grantee reports a slightly different indicator with a slightly different definition. Cross-portfolio comparison requires weeks of cleanup before any analysis can begin. The annual portfolio report is six months out of date the day it ships.

What works: shared instrument library and shared ID structure across the portfolio. Each grantee uses its own population, but the indicator definitions, framework alignment, and core instrument items are common. The portfolio dashboard shows aggregate and site-level variance without a reconciliation project.

A specific shape

A foundation funds 24 youth programs across three cities. All 24 use the same intake ID structure and core instrument set. Cross-program comparison surfaces which sites produce the strongest qualitative evidence alongside the strongest outcome gains.

03 · PROJECT / POLICY
Project SIA and ESIA

Project-level or policy-level SIA, often combined with EIA as ESIA.

Infrastructure, resettlement, and large energy projects often require SIA combined with environmental impact assessment as a single ESIA, under IFC Performance Standards, World Bank Environmental and Social Standards, or national regulator requirements. The shape is project-based, milestone-driven, and stakeholder-heavy. Public consultation, livelihood baselines, and grievance mechanisms become explicit instruments inside the SIA.

What breaks: stakeholder consultation lives in meeting-minute documents that never integrate with technical assessment. Livelihood baselines run once and become stale by month six of a multi-year project. Grievance logs sit in the project email inbox. The compliance report at construction milestone has to reconcile all three by hand.

What works: persistent stakeholder and household IDs linking baseline, consultation, monitoring, and grievance records. Themes from public consultation surface alongside livelihood indicators. The compliance report draws from the same connected record across project milestones, rather than being assembled by a consultant against the original baseline study.

A specific shape

A solar installation displaces 80 households for a project right-of-way. Each household has a persistent ID from baseline forward. Livelihood indicators, resettlement-package status, and themes from quarterly community consultations all attach to the same record. The IFC Performance Standard 5 disclosure assembles from the live data.

A note on tools
Where the architectural gap shows up
SurveyMonkey · Qualtrics · 60 Decibels · SocialSuite · Submittable · Excel + consultants · Sopact Sense

Most SIA tools handle one layer well. Survey platforms run the instrument. Qualitative tools code the open responses. Consultant teams write the report. The architectural gap is between them: persistent identity that links collection to analysis, qualitative themes linked to quantitative scores at the participant level, and framework alignment built into the data dictionary rather than retrofitted at report time. The questions buyers usually open with, "what platforms can report on social impact end-to-end" or "top-rated tools for social impact assessment and reporting", surface the same architectural gap from a different angle.

Sopact Sense addresses the gap directly. Identity, mixed-method analysis, framework alignment, and continuous reporting share one pipeline. Across program SIAs, portfolio SIAs, grantee networks, and ESIA-style project work, the same architecture carries through. The tools above remain useful for what they were built for. The integrating layer is what changes.

FAQ
Social impact assessment questions, answered
  • Q.01
    What is social impact assessment?

    Social impact assessment is a systematic process for measuring whether a program, policy, or investment changed life outcomes for the people and communities it touched. It pairs quantitative indicators with qualitative evidence, reports against a chosen framework such as IRIS+, the UN Sustainable Development Goals, B4SI, or 2X Global, and runs across the program cycle rather than as a one-time consultant deliverable. The assessment answers a specific question: did the activity produce the change it set out to produce, and for whom.

  • Q.02
    What does SIA stand for?

    SIA stands for social impact assessment. The acronym is used across development finance, impact investing, foundation portfolios, and government policy work. When SIA is combined with environmental impact assessment, the combined study is called ESIA. Some regulators and multilateral lenders require ESIA for infrastructure, extractive, and large energy projects.

  • Q.03
    What is the social impact assessment process?

    Five stages, in order. Scope: name the change you want to measure and the question the assessment has to answer. Baseline: capture starting conditions before the program acts on the participant or site, with a stable identifier so later records link back. Method: choose how the data will be collected, including whether you need a comparison group. Measure: run the instruments at the right cadence, usually pre, post, and follow-up. Report: produce a framework-aligned narrative that names what the data does and does not show. The order matters more than any individual stage.

  • Q.04
    How do you conduct a social impact assessment?

    Begin by writing the change in plain language: who, what changes, by how much, by when. Choose the framework whose indicators match the change. Assign a persistent identifier to every participant or site at first contact. Run the baseline instrument before activities begin. Collect mid-program and endline data with the same instrument and the same identifier. Theme open-ended responses at collection, not at report time. Produce the framework-aligned report and treat it as a living record that updates as new data arrives.

  • Q.05
    What is a social impact assessment framework?

    A social impact assessment framework is a structured language for what gets measured. The widely used ones are IRIS+ (Global Impact Investing Network), the UN Sustainable Development Goals, B4SI for corporate community investment, and 2X Global for gender-lens assessment. Each defines indicators and scoring rules. Pick the framework whose indicators you can actually source from your program, not the prestige framework whose indicators you cannot.

  • Q.06
    What are the methods for social impact assessment?

    Common methods include pre-post survey design, longitudinal cohort tracking, qualitative interviews and focus groups, document review, comparison-group studies, and mixed-method designs that pair numbers with narratives. The choice depends on the change being measured, the cadence the program supports, and the framework the funder asked for. Most credible SIAs use more than one method; the methods are linked at the participant or site level.

  • Q.07
    What is the difference between SIA and ESIA?

    SIA measures social outcomes for people and communities. ESIA combines SIA with EIA (environmental impact assessment) for projects whose social and environmental effects are linked, such as infrastructure, mining, or large energy. ESIA is often required by World Bank, IFC, or national regulators; SIA on its own is more common for development programs and impact funds. The two share architecture: persistent IDs, mixed-method evidence, framework alignment, and continuous comparison.

  • Q.08
    What are examples of social impact assessment?

    Examples by sector. Workforce: a training program measures employment at 90 days and confidence pre-post on a 5-point scale, aligned to IRIS+ PI2387. Education: a literacy initiative measures reading-level gain by school site against IRIS+ PI4923. Gender-lens fund: a portfolio scores 24 companies annually against the 2X Global Criteria for entrepreneurship, leadership, employment, consumption, and investments. Community health: an outreach program measures A1C improvement at 6 months disaggregated by race and language. Each case pairs a structured indicator with qualitative evidence linked to the same record.

  • Q.09
    What is in a social impact assessment report?

    A useful social impact assessment report contains five things. The scope and the change theory it tested. The baseline-to-endline comparison, with sample size and disaggregation by gender, age, geography, or other relevant groups. Themed qualitative evidence linked to the quantitative outcomes. Framework alignment named explicitly by indicator. And a methods section honest about confidence intervals, missing data, and what the assessment cannot say. The report format usually includes an executive summary, methodology, findings, and an indicator-by-indicator appendix; some funders also ask for a separate social impact assessment plan or workplan describing the next cycle. Reports built as PDFs go stale on day one. Reports built as live records stay current.
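The baseline-to-endline comparison with disaggregation is an aggregation over linked records. A hedged sketch, using made-up reading-level data in the spirit of the education example above:

```python
# Hypothetical sketch: baseline-to-endline gain, disaggregated by
# group, with sample size reported alongside the mean. Column names
# and values are illustrative.
import pandas as pd

records = pd.DataFrame({
    "participant_id": ["P001", "P002", "P003", "P004"],
    "gender": ["F", "M", "F", "M"],
    "reading_level_baseline": [2.0, 2.5, 1.5, 2.0],
    "reading_level_endline": [3.0, 3.0, 2.5, 2.2],
})

records["gain"] = (records["reading_level_endline"]
                   - records["reading_level_baseline"])

# Report mean gain and sample size per group, never the mean alone.
by_gender = records.groupby("gender")["gain"].agg(["mean", "count"])
```

Reporting `count` next to `mean` is the small habit that keeps the methods section honest: a one-point gain over two participants is a different claim than the same gain over two hundred.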

  • Q.10
    What is a social impact assessment template?

    A social impact assessment template is a reusable instrument set or rubric, used for repeated assessments across cohorts, sites, or portfolio companies. Strong templates live in the data dictionary so the next cycle inherits the indicator definitions. Weak templates are Word or Excel files that drift in interpretation across cycles, with each program lead choosing slightly different wording and breaking comparison. The 2X Global rubric, the B Impact Assessment, and IRIS+ thematic indicator sets are widely used templates.

  • Q.11
    What are social impact assessment tools?

    Social impact assessment tools are the software platforms and analytical methods used across the assessment process: survey platforms for collection, qualitative coding for open responses, statistical packages for analysis, dashboards for reporting, and framework libraries for alignment. Older stacks treat each as a separate product. Newer platforms consolidate collection, identity, mixed-method analysis, and framework-aligned reporting into one pipeline. The choice depends on the cadence the program needs and the frameworks the funder requires.

  • Q.12
    How is SIA different from social impact measurement?

    Social impact measurement is the broader continuous practice of collecting evidence of change. Social impact assessment is the report against a framework that the measurement system produces, on a defined cadence. Measurement runs every day or every cohort. Assessment runs every reporting cycle. Both depend on the same connected evidence. Programs that do measurement well produce assessments without much friction; programs that skipped continuous measurement struggle to produce a credible assessment when a funder asks.

  • Q.13
    Can I use Google Forms or SurveyMonkey for social impact assessment?

    For a one-time data pull, yes. For an SIA that has to run cycle after cycle, the limit shows up by the second round. Forms cannot keep a persistent participant ID across cycles. Open responses sit as unanalyzed text. Framework alignment is manual at report time. The data is collected; it is not connected. The architectural choice is whether collection and analysis live in one record or in a stack of unconnected exports that an analyst rebuilds every cycle.
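What a persistent ID buys by the second cycle can be shown in a few lines. A sketch with invented IDs and scores; the point is the join, not the tool:

```python
# Hypothetical sketch: with a persistent participant_id, cycle-over-
# cycle change is a join; without one, the analyst is left matching
# names by hand every cycle. Data is illustrative.
import pandas as pd

cycle1 = pd.DataFrame({
    "participant_id": ["P001", "P002"],
    "score": [55, 60],
})
cycle2 = pd.DataFrame({
    "participant_id": ["P001", "P002", "P005"],
    "score": [70, 62, 58],
})

# An outer join surfaces returning participants ("both") and new
# entrants ("right_only") in one pass.
trend = cycle1.merge(cycle2, on="participant_id",
                     suffixes=("_c1", "_c2"),
                     how="outer", indicator=True)
```

A forms tool exports `cycle1` and `cycle2` as disconnected spreadsheets with no shared key, so this join, and therefore any change claim, has to be rebuilt manually every reporting round.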

  • Q.14
    How does Sopact Sense handle social impact assessment?

    Sopact Sense treats social impact assessment as a continuous pipeline rather than a one-off project. A persistent stakeholder ID is assigned at first contact and threads every later touchpoint to the same record. Quantitative scores and open-ended narrative are themed together by the Intelligent Suite. Framework alignment to IRIS+, SDGs, B4SI, or the 2X Global Criteria is set in the data dictionary, not retrofitted at report time. The next assessment cycle starts where the last one ended, so the second SIA does not start from zero.

Working session
Bring an SIA scope. Leave with a build.

The shape of an SIA is a methodology decision before it is a software decision. A 30-minute working session translates the scope you already have into a shape that runs cycle after cycle: persistent identity, mixed-method evidence, framework alignment, and a report that stays current. No demo. The conversation is about your assessment, not the tool.

Format
30-minute scoping call

Founder-to-founder conversation. Working session, not a sales presentation. Camera optional.

What to bring
The change you want to measure

A program description. A funder ask. A 2X portfolio brief. An ESIA scope. Any of these is enough to start.

What you leave with
A drafted SIA shape

Identity model, instrument set, framework alignment, and the cadence the assessment will actually run on.