
Impact Assessment: Methods, Frameworks and Tools

Impact assessment in plain terms. The four working domains, six design principles, methods, and a worked example from a 12-school education program.

Updated May 6, 2026
Use Case: Methodology guide
Impact assessment measures whether a program changed outcomes for the people it touched.

It spans four working domains: social, environmental, organizational, and sustainability. Each uses the same architecture. Define the change. Link records to the same stakeholder over time. Capture baseline before activities begin. Pair every score with the qualitative evidence that explains it.

This guide explains the four domains in plain terms, the six design principles that decide whether any single assessment holds up under funder review, the methods that cut across all four domains, and a worked example from a 12-school reading-acceleration program. No prior background needed.

What this page covers
  • 01 The four working domains
  • 02 Definitions and process
  • 03 Six design principles
  • 04 Method-choice matrix
  • 05 Worked example: 12 schools
  • 06 Frequently asked questions
What "assessment" means at three levels of rigor
Level 1 · An activity count

240 students completed the reading program.

A number. No comparison. Nothing to learn from.

Level 2 · An outcome score

Average reading level moved from 2.8 to 3.4.

A change is visible. But not who moved, why, or how it compares.

Level 3 · An impact assessment

Reading gain by participant ID. Themed teacher interviews beside scores. Aligned to IRIS+ PI4923.

Linked record. Mixed-method evidence. Framework-aligned. Comparable across cohorts.

Typology
The four working domains of impact assessment

Most types of impact assessment fall into one of four domains. The domains differ in what they measure and which framework they report against. They share the same architecture underneath. The architecture is what decides whether the assessment holds up.

Working domains
Shared architecture across all four domains
Persistent IDs

A stable identifier per stakeholder or site, used at every later touchpoint.

Mixed-method evidence

Numbers and narratives collected together, themed together at analysis.

Framework alignment

Indicator language chosen at scoping, not retrofitted at report time.

Continuous comparison

Baseline, midline, and endline link to the same record. The next cycle starts where the last one ended.

A social assessment and an environmental assessment look different on the surface. The architecture decides whether either holds up.
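
As a rough sketch, the shared architecture can be read as a data shape: one stable key per stakeholder, with every touchpoint attached to it. The Python below is illustrative only; the field names and the IRIS+ code come from this page's worked example, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class StakeholderRecord:
    """One stakeholder, one persistent ID, every touchpoint attached."""
    participant_id: str                      # assigned at first contact, never reissued
    touchpoints: list = field(default_factory=list)

    def add_touchpoint(self, stage, score, narrative, indicator):
        # Baseline, midline, and endline all land on the same record.
        self.touchpoints.append({
            "stage": stage,            # "baseline" | "midline" | "endline"
            "score": score,            # quantitative indicator value
            "narrative": narrative,    # paired qualitative evidence
            "indicator": indicator,    # framework code, e.g. IRIS+ "PI4923"
        })

# Baseline captured before activities begin; endline links back automatically.
rec = StakeholderRecord("STU-0042")
rec.add_touchpoint("baseline", 2.8, "Struggles with grade-level text", "PI4923")
rec.add_touchpoint("endline", 3.4, "Reads aloud with confidence", "PI4923")
```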

Sources for framework references: IRIS+ (Global Impact Investing Network), UN Sustainable Development Goals, GRI Standards, SASB Standards, B4SI (Business for Societal Impact), 2X Global, OCAT (McKinsey Organizational Capacity Assessment Tool), TCFD recommendations, CSRD (EU Corporate Sustainability Reporting Directive).

Definitions
Impact assessment, in plain terms

Five questions cover most of what people mean by impact assessment. The answers below match the visible FAQ at the bottom of the page; both are written for someone meeting the methodology for the first time.

What is impact assessment?

Impact assessment is a systematic process for measuring whether a program, policy, or investment changed outcomes for the people, communities, or environment it touched. It pairs quantitative indicators (numbers, scores, rates) with qualitative evidence (interviews, open-ended responses, narratives) and reports against a chosen framework such as IRIS+, the UN Sustainable Development Goals, GRI, or SASB.

The point of an assessment is to answer a specific question. Did the activity produce the change it set out to produce, and for whom. Said another way: was the program worth running, and is the next cycle worth running differently. An assessment that does not answer either question is data collection without a purpose.

What are the types of impact assessment?

Four working domains cover most assessments. Social impact assessment measures outcomes for people and communities. Environmental impact assessment studies effects on ecosystems and resources. Organizational assessment measures capacity and maturity inside an organization. Sustainability assessment tracks ESG performance over time. Adjacent forms include economic impact analysis, training and learning impact, gender-lens assessment, business impact analysis, and AI system impact assessment.

Most organizations need two or three of these, not all of them. A workforce nonprofit runs social and organizational. A renewable-energy fund runs environmental and sustainability. A multi-program foundation often runs all four across its portfolio. The common mistake is starting a new assessment from scratch each time. The four domains share the same architecture; the data itself does not need to be the same.

What is an impact assessment framework?

An impact assessment framework is a structured language for what gets measured. The widely used ones are IRIS+ (from the Global Impact Investing Network, the de facto standard for impact-fund reporting), the UN Sustainable Development Goals (broad, often paired with IRIS+ for double-coding), GRI and SASB (sustainability and ESG reporting), B4SI (corporate community investment), and 2X Global (gender-lens).

The framework is only as useful as the evidence underneath it. Indicators retrofitted to an old dataset rarely fit. Frameworks chosen at scoping fit cleanly because the data dictionary already speaks the framework's language. Picking the prestige framework first and discovering you cannot collect its indicators is the most common scoping mistake in impact assessment.

What is the impact assessment process?

Five stages, in order. Scope: name the change you want to measure and the question the assessment has to answer. Baseline: capture starting conditions before the program acts on the participant or site, with a stable identifier so later records link back. Method: choose how the data will be collected, including whether you need a comparison group. Measure: run the instruments at the right cadence, usually pre, post, and follow-up. Report: produce a framework-aligned narrative that names what the data does and does not show.

The order matters more than any individual stage. Scoping after the survey is written produces an assessment that cannot answer the question. Skipping baseline produces an endline with nothing to compare against. Retrofitting a framework at report time produces alignment that does not survive a careful funder reviewer. Most assessment failures trace back to the order, not the technique.

What is the difference between impact assessment and impact evaluation?

Impact assessment measures what changed and reports against a framework. Impact evaluation asks whether the program caused the change, using a comparison group or counterfactual. Assessment can run on any program; evaluation requires the design choices that allow a causal claim. Many funders use the words interchangeably, and that is fine for everyday conversation. The design difference still matters when the question is "did our program produce this outcome, or would it have happened anyway."

In practice, assessments produce the running record of outcomes across cycles. Evaluations produce the periodic causal study that depends on baseline data, comparison conditions, and a sample size that supports inference. Programs that have done assessment well for several cycles can usually run a credible evaluation when a funder asks. Programs that have skipped baseline cannot.

Distinctions
Related-but-different terms
Assessment vs. Analysis
Impact assessment vs. impact analysis

Impact analysis is a technical term used in software engineering and business continuity to mean change-impact analysis. Outside those fields, "impact analysis" is sometimes used loosely to mean impact assessment. The two are not the same. Tools and frameworks marketed as "impact analysis" for SaaS are change-management software, not measurement software.

Assessment vs. Measurement
Impact assessment vs. impact measurement

Impact measurement is the broader practice: collecting evidence of change continuously. Impact assessment is the report against a framework that the measurement system produces, on a defined cadence. Measurement runs every day. Assessment runs every cycle. Both depend on the same connected evidence.

Assessment vs. Evaluation
Impact assessment vs. impact evaluation

Assessment reports what changed. Evaluation tests whether the program caused the change. Evaluation requires a comparison condition. Many programs run assessment for years and an evaluation periodically when a funder commissions one.

Social vs. Environmental
Social impact assessment vs. environmental impact assessment

Social and environmental assessments differ in what they measure (people vs. ecosystems) and in regulatory weight (environmental is often legally required, social is often funder-driven). They share the same architecture: persistent IDs, mixed-method evidence, framework alignment, and continuous comparison.

Design principles
Six principles that decide whether an assessment holds

The same six principles separate an assessment that survives funder review from one that does not. They apply across all four working domains. The order matters: the first decision constrains every later one.

01 · SCOPE
Name the change before naming the indicator

A program does not have an outcome until you have written the change in plain language.

Most assessment work begins with an indicator list pulled from a framework. That order is backward. The indicator should follow the change. Without a named change, indicators measure activity, not outcome.


Why it matters. An assessment that cannot say what change it tested cannot survive a careful reviewer.

02 · IDENTITY
Persistent IDs at first contact

Assign a stable identifier the moment a stakeholder enters the system, not at endline.

Every later touchpoint links to that identifier: pre-survey, mid-program check-in, exit assessment, follow-up. Without a persistent ID, baseline-to-endline comparison becomes manual reconciliation by name and email, which fails as soon as someone changes either.


Why it matters. Identity is the difference between a query and a five-week reconciliation project.

03 · BASELINE
Capture starting conditions before the program acts

Without baseline, there is no comparison; without comparison, the endline number means nothing.

Baseline runs at intake. The exact same instrument or its validated parallel form runs at endline. Different question wording at the two points invalidates the comparison. This is the single most common assessment failure.


Why it matters. A program with no baseline can describe its endline. It cannot defend a change claim.
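
A minimal pandas sketch of why the persistent ID plus a true baseline turns the change claim into a query. It assumes two tables produced by the same instrument and keyed on `participant_id`; all names and values are illustrative.

```python
import pandas as pd

baseline = pd.DataFrame({
    "participant_id": ["STU-0041", "STU-0042", "STU-0043"],
    "reading_level": [3.1, 2.8, 2.5],
})
endline = pd.DataFrame({
    "participant_id": ["STU-0042", "STU-0041", "STU-0043"],
    "reading_level": [3.4, 3.6, 3.1],
})

# Same instrument, same ID: the comparison is a join, not a reconciliation.
gain = baseline.merge(endline, on="participant_id", suffixes=("_pre", "_post"))
gain["change"] = gain["reading_level_post"] - gain["reading_level_pre"]
print(gain[["participant_id", "change"]])
```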

04 · MIXED METHOD
Numbers and narratives, side by side

Pair every quantitative outcome with one qualitative question, themed and linked to the same record.

A score answers what changed. A narrative answers why. The two have to be linked at the participant level for the why to inform the next program cycle. Quotes pulled into the appendix at report time do not count as integration.


Why it matters. Most assessment failures are explained in the open responses. The number alone never explains the number.
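
A sketch of what participant-level integration means in practice: the themed narrative joins the score on the same key, so the why sits beside the what and can be queried together. Table, theme, and quote contents are illustrative.

```python
import pandas as pd

scores = pd.DataFrame({
    "participant_id": ["STU-0041", "STU-0042"],
    "change": [0.5, 0.6],
})
themes = pd.DataFrame({
    "participant_id": ["STU-0041", "STU-0042"],
    "theme": ["teacher relationship quality", "confidence shift"],
    "quote": ["She checks in every morning.", "I read to my brother now."],
})

# Linked at the participant level: each score carries its explanation.
evidence = scores.merge(themes, on="participant_id")

# Which themes travel with the largest gains?
print(evidence.groupby("theme")["change"].mean())
```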

05 · FRAMEWORK
Choose the language; the data must support it

Indicators chosen at scoping in framework language fit cleanly. Indicators retrofitted to old data do not.

IRIS+, GRI, SASB, B4SI, 2X Global, the Sustainable Development Goals: each framework has a vocabulary. The data dictionary should speak that vocabulary from collection forward, not at report time. Picking the prestige framework before checking what the data can support is the most common scoping mistake.


Why it matters. A framework crosswalk built at report time is the most fragile artifact in the assessment.
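
One way to make alignment structural rather than retrofitted is to carry the framework code in the data dictionary from collection forward. The mapping below is a sketch; the field names are invented, while the two IRIS+ codes are the ones cited elsewhere on this page.

```python
# Data dictionary set at scoping: every collected field already speaks
# the framework's language, so the aligned report needs no crosswalk.
DATA_DICTIONARY = {
    "reading_level": {
        "instrument": "pre/post reading assessment",
        "framework": "IRIS+",
        "indicator": "PI4923",       # reading proficiency change
    },
    "employment_90d": {
        "instrument": "90-day follow-up survey",
        "framework": "IRIS+",
        "indicator": "PI2387",       # employment outcome, per this guide
    },
}

def framework_label(fieldname: str) -> str:
    entry = DATA_DICTIONARY[fieldname]
    return f'{entry["framework"]} {entry["indicator"]}'

print(framework_label("reading_level"))   # IRIS+ PI4923
```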

06 · CONTINUITY
Assessment is a pipeline, not a project

The next cycle should start where the last one ended, not from a blank spreadsheet.

A consultant report archived as a PDF cannot inform the next cohort. A connected record can. Continuity is what turns assessment from an annual deliverable into a feedback loop that improves the program.


Why it matters. Continuity is what compounds. Programs that run assessment continuously gain a year of insight every year.
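
A toy sketch of continuity as data: the next cycle appends to the same keyed store rather than opening a blank spreadsheet. The CSV file here is a stand-in for whatever keyed store a program actually uses.

```python
import pandas as pd
from pathlib import Path

STORE = Path("assessment_records.csv")   # illustrative; any keyed store works

def append_cycle(new_rows: pd.DataFrame) -> pd.DataFrame:
    """Add this cycle's rows to everything the previous cycles built."""
    if STORE.exists():
        combined = pd.concat([pd.read_csv(STORE), new_rows], ignore_index=True)
    else:
        combined = new_rows
    combined.to_csv(STORE, index=False)
    return combined   # cycle N+1 starts with cycles 1..N already linked

cycle3 = pd.DataFrame({
    "participant_id": ["STU-0201"],
    "stage": ["baseline"],
    "reading_level": [2.7],
})
all_records = append_cycle(cycle3)
```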

Method-choice matrix
Seven choices that shape an impact assessment

Every assessment makes the same seven design choices, whether the team writes them down or not. The matrix below names each choice, the failure mode it produces when handled badly, the working pattern when handled well, and what it actually decides.

How you scope

The first decision in any assessment.

Broken

The survey is written first. Indicators are chosen later by browsing IRIS+ for the closest match. The change the program is trying to produce never gets named in plain language.

Working

The change is named first, in one sentence. The indicators are chosen second, in framework language. The instrument is written third, against those indicators.

Decides whether the data answers the question, or only exists.

How you identify stakeholders

Who counts as the same person across cycles.

Broken

Names and emails captured per cycle, in separate spreadsheets, with no shared key. Reconciliation is manual and partial. Sarah Johnson becomes S. Johnson, and her email has changed since intake.

Working

A persistent ID is assigned at first contact and reused at every later touchpoint. Pre, mid, post, and follow-up all link to the same record automatically.

Decides whether comparison is a query or a five-week project.

How you collect baseline

What "before" looks like in the data.

Broken

Baseline is "what we have" from intake forms. Often nothing comparable to the endline. Reading the endline as change requires assumptions the data cannot support.

Working

Baseline is the first instrument, run before activities begin, with the exact wording the endline will use. Validated parallel forms only when re-test sensitization is a real concern.

Decides whether change is measurable or inferred.

How you handle qualitative evidence

Open-ended responses, interviews, narratives.

Broken

Quotes pulled by hand into the report appendix at the end. Themes coded once and never reused. The why never sits next to the what at the participant level.

Working

Open responses are themed at collection, linked to the same participant ID, and queryable beside quantitative scores. AI-assisted coding speeds this from weeks to minutes when the data is clean.

Decides whether the assessment can explain its own numbers.

How you align to a framework

IRIS+, GRI, SASB, B4SI, SDGs, 2X Global.

Broken

Indicators retrofitted to a chosen framework at report time. Crosswalks built in spreadsheets. The crosswalk breaks the moment the framework version updates.

Working

Indicators chosen in framework language at scoping. Data dictionary speaks the framework from collection forward. Aligned report is automatic once the data is in.

Decides whether framework alignment is structural or fragile.

How you handle the report

The artifact funders, regulators, and boards see.

Broken

Static PDF written once, reviewed once, archived once. Stale on day two. Cannot answer follow-up questions without rebuilding the analysis.

Working

Living dashboard fed by the same record. New monitoring data updates the report. Funder asks a follow-up; the answer is one filter away.

Decides whether the assessment is current or frozen.

How you handle multiple types

Across social, environmental, organizational, sustainability.

Broken

A new consultant, a new tool, and a new dataset for each domain. Three reports that share none of their evidence. The third assessment starts from zero.

Working

One pipeline. Domains share the same IDs, instrument library, and framework dictionary. The next domain inherits everything the previous one built.

Decides whether the next cycle compounds or restarts.

Compounding effect

The first decision controls all the others. Indicators retrofitted to outcomes never quite fit. Frameworks layered onto the wrong data never produce credible alignment. Reports built from disconnected evidence never update. Get the scope right and the rest is mechanical. Get the scope wrong and every later decision pays for it.

Worked example
A 12-school reading-acceleration program

An education program lead describes the assessment her foundation funder is asking for, what last year's setup produced, and what changes for next term. The example combines social and organizational dimensions of impact assessment in one shared record.

"We run reading-acceleration programs in twelve schools across two districts. Across the cohort we have 240 students. Our funder is an education foundation that wants to see, for next year's grant, both the reading-level gains by school and the qualitative evidence behind why some sites moved more than others. Last year we ran the assessment as three separate things: pre-test in one tool, post-test in another, teacher interviews in a third. Pulling that into one coherent report took our analyst five weeks. We need a different setup before next term, or we will be back here in a year doing the same five-week pull."

Education program lead, between cohort cycles.

The two axes the assessment has to measure
Quantitative axis
Reading-level gain

Pre-test and post-test scores by student. Disaggregated by school site, teacher, prior reading level, and language at home. IRIS+ PI4923 (reading proficiency change).

Bound at collection by participant ID
Qualitative axis
Themes from teachers and students

Open-ended teacher interviews and student reflections. Themed for belonging, teacher relationship quality, classroom barriers, and confidence shifts. Linked to the same student records.

Sopact Sense produces
  • Pre-post comparison by student

    Same persistent ID at intake and at endline. Baseline-to-endline reading gain is a query, not a manual merge (see the sketch after this list).

  • Site-level disaggregation

    Reading gain by school, by teacher, by prior reading level. The portfolio dashboard shows aggregate and site variance without a wrangling project.

  • Themed qualitative evidence

    Teacher interviews and student reflections themed at collection, linked to the same student records. The why sits beside the score.

  • IRIS+ aligned report

    PI4923 mapped at scoping, populated automatically. Funder report assembles itself when the data is in.
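
A minimal pandas sketch of what the first two items reduce to once every record shares one key. Table and column names are illustrative, not Sopact Sense's actual schema.

```python
import pandas as pd

records = pd.DataFrame({
    "student_id": ["STU-0041", "STU-0042", "STU-0101", "STU-0102"],
    "school":     ["Lincoln",  "Lincoln",  "Roosevelt", "Roosevelt"],
    "pre":        [3.1, 2.8, 2.6, 2.9],
    "post":       [3.6, 3.4, 2.9, 3.5],
})
records["gain"] = records["post"] - records["pre"]

# Pre-post comparison by student: a query, not a five-week merge.
print(records[["student_id", "gain"]])

# Site-level disaggregation: aggregate and variance per school.
print(records.groupby("school")["gain"].agg(["mean", "std", "count"]))
```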

Why traditional tools fail
  • Three tools, three datasets

    Pre-test in one platform, post-test in another, interviews in a third. No shared key. Reconciliation is the project.

  • Manual matching by name

    Students matched by name and email each cycle. Errors compound. The eight students with missing emails are silently dropped from the comparison.

  • Quotes pulled by hand

    Teacher interview themes coded once at report time. Never linked to specific student outcomes. Quotes appear in the appendix, not in the analysis.

  • Crosswalk built in Excel

    IRIS+ alignment retrofitted at report time. The crosswalk breaks when the framework updates. The next cycle rebuilds it from scratch.

Why this is structural, not procedural

The integration is structural in Sopact. A persistent student ID is assigned at intake and used by every later instrument. Pre-test, post-test, teacher interview, and 90-day follow-up all link to the same record automatically. Intelligent Cell themes the open responses against student-level scores. Intelligent Grid generates the IRIS+ aligned report when the funder asks. The five-week analyst pull becomes a query. Next year's cohort starts where this one ended.

Applications
Three shapes that show the architecture working

Three different program contexts illustrate how the four-domain architecture plays out in practice. The shapes differ. The architecture does not.

01 · COHORT
Workforce training programs

Single cohort, employment outcomes, IRIS+ PI2387 alignment.

A typical workforce program enrolls a cohort of 200 to 400 participants, runs an eight-to-twelve-week curriculum, and reports against employment-at-90-days and average-wage outcomes for funders. The assessment shape is social impact assessment with an organizational dimension that tracks delivery quality.

What breaks: the assessment instrument is written after the curriculum. Pre-program baseline is captured in the same tool as the application form, with no shared key to the post-program survey. Employer follow-up calls live in a spreadsheet that nobody touches between cohorts. By cycle three, comparison across cohorts requires a manual merge of three datasets that have drifted in their column names.

What works: persistent participant ID at application, with the same ID threading through pre-program survey, mid-program check-in, exit assessment, and 90-day employment follow-up. Confidence-lift, employment-at-90-days, and average-wage all attach to the same record. IRIS+ PI2387 mapped once at scoping. The third cohort starts where the second one ended.

A specific shape

320 participants, three cohorts. Confidence lift, pre to post, on a 5-point scale. 90-day employment status. Open-text barriers themed at exit. All linked to the same participant ID. Funder report runs on the live record.

02 · PORTFOLIO
Foundation portfolios

Multi-grantee, multi-domain, cross-portfolio comparison.

A foundation funds 12 to 60 grantees across several program areas. The assessment shape is a portfolio-level rollup that has to compare grantees fairly, identify which sites moved the most, and report against the foundation's chosen framework: usually IRIS+ for impact funds, B4SI for corporate foundations, or the SDGs for multilateral alignment.

What breaks: each grantee uses a different data tool. Each grantee reports a slightly different indicator with a slightly different definition. Cross-portfolio comparison requires weeks of cleanup before any analysis can begin. The annual portfolio report is six months out of date the day it ships.

What works: shared instrument library and shared ID structure across the portfolio. Each grantee uses its own participant population, but the indicator definitions, framework alignment, and core instrument items are common. The portfolio dashboard shows aggregate and site-level variance without a reconciliation project.

A specific shape

A foundation funds 24 youth programs across three cities. All 24 use the same intake ID structure and core instrument set. Cross-program comparison shows which sites produce the strongest qualitative evidence alongside the strongest outcome gains.

03 · SUPPLIER NETWORK
Sustainability and ESG programs

Supplier or site-level scoring, GRI or SASB alignment, regulatory cadence.

A corporate sustainability team runs an annual ESG assessment across 80 to 200 suppliers and a smaller set of operating sites. The shape spans environmental and sustainability domains. The reporting framework is usually GRI, SASB, or the new CSRD requirements for European operations. The audience is the board, regulators, and investor relations.

What breaks: the supplier survey runs once a year as a long PDF. Open-text responses to questions about labor practices and grievance mechanisms are read by hand, sometimes not at all. Site-level environmental measurements live in a separate operations system. The materiality matrix and the supplier scorecard share none of their evidence.

What works: supplier and site identifiers persist across years. Open responses are themed at collection. Environmental measurements link to the same site IDs. The materiality assessment, supplier scorecard, and CSRD-aligned narrative share the same record. The next year's assessment inherits everything the current one built.

A specific shape

120 suppliers assessed annually on labor, carbon disclosure, and governance. Themes from open responses linked to supplier IDs and to scorecard scores. 12 high-risk suppliers surface automatically. The CSRD narrative draws from the same record.
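
A sketch of how a high-risk flag can surface automatically once themed responses and scorecard scores share supplier IDs. The risk rule, thresholds, and column names below are illustrative assumptions, not Sopact's scoring model.

```python
import pandas as pd

scorecard = pd.DataFrame({
    "supplier_id": ["SUP-014", "SUP-052", "SUP-090"],
    "labor": [78, 41, 63],
    "carbon_disclosure": [82, 55, 30],
    "governance": [74, 38, 66],
})
flags = pd.DataFrame({
    "supplier_id": ["SUP-052", "SUP-090"],
    "theme": ["grievance mechanism gaps", "no emissions audit"],
})

# Illustrative risk rule: any pillar below 50, or any flagged theme.
merged = scorecard.merge(flags, on="supplier_id", how="left")
merged["high_risk"] = (
    (merged[["labor", "carbon_disclosure", "governance"]] < 50).any(axis=1)
    | merged["theme"].notna()
)
print(merged[merged["high_risk"]][["supplier_id", "theme"]])
```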

A note on tools
Where the architectural gap shows up
Tools in view: SurveyMonkey, Qualtrics, Submittable, Typeform, Excel + consultants, Sopact Sense.

Most impact assessment tools handle collection well. Survey platforms run the instrument. Form builders capture the application. Consultant teams write the report. The architectural gap is between them: persistent identity that links collection to analysis, qualitative themes linked to quantitative scores, and framework alignment built into the data dictionary rather than retrofitted at report time.

Sopact Sense addresses the gap directly. Identity, mixed-method analysis, framework alignment, and continuous reporting share one pipeline. Across the four working domains, the same architecture carries through. The tools above remain useful for what they were built for. The integrating layer is what changes.

FAQ
Impact assessment questions, answered
  • Q.01
    What is impact assessment?

    Impact assessment is a systematic process for measuring whether a program, policy, or investment changed outcomes for the people, communities, or environment it touched. It pairs quantitative indicators with qualitative evidence and reports against a chosen framework such as IRIS+, the UN Sustainable Development Goals, GRI, or SASB. The assessment answers a specific question: did the activity produce the change it set out to produce, and for whom.

  • Q.02
    What are the types of impact assessment?

    Four working domains cover most assessments. Social impact assessment measures outcomes for people and communities. Environmental impact assessment studies effects on ecosystems and resources. Organizational assessment measures capacity and maturity within an organization. Sustainability assessment tracks ESG performance over time. Adjacent forms include economic impact analysis, training impact, gender-lens, and AI system impact assessment. Most organizations need two or three of these, not all of them.

  • Q.03
    What is an impact assessment framework?

    An impact assessment framework is a structured language for what gets measured. The widely used ones are IRIS+, the UN Sustainable Development Goals, GRI, SASB, B4SI for corporate community investment, and 2X Global for gender-lens. Each defines indicators and how to score them. The framework is only as useful as the evidence underneath. Indicators retrofitted to an old dataset rarely fit. Frameworks chosen at scoping fit cleanly.

  • Q.04
    What is the impact assessment process?

    Five stages. First, scope the change you want to measure. Second, capture a baseline before the program acts on the participant or site. Third, choose the method, which usually means deciding whether you need a comparison group. Fourth, measure at the right cadence: pre, post, and follow-up. Fifth, produce a framework-aligned report. The order matters. Skipping baseline or scoping after the survey is written produces an assessment that cannot answer the question.

  • Q.05
    What is the difference between impact assessment and impact evaluation?

    Impact assessment measures what changed and reports against a framework. Impact evaluation asks whether the program caused the change, using a comparison group or counterfactual. Assessment can run on any program. Evaluation requires the design choices that allow a causal claim. In practice, assessments produce the running record of outcomes; evaluations produce the periodic causal study. Many funders use the words interchangeably; the design difference still matters.

  • Q.06
    What are impact assessment tools?

    Impact assessment tools are the software platforms, techniques, and analytical methods used across the assessment process: survey platforms for collection, qualitative coding for open responses, statistical packages for analysis, dashboards for reporting, and framework libraries for alignment. Older stacks treat each as a separate product and benchmark each separately. The architectural shift in 2026 is consolidating collection, identity, mixed-method analysis, and framework-aligned reporting into one pipeline. The strongest impact assessment solutions in the market handle this consolidation directly, so the next assessment cycle does not start from zero.

  • Q.07
    What is the best impact assessment software?

    The strongest impact assessment software handles four functions as one pipeline: clean collection with persistent stakeholder IDs, mixed-method analysis combining numbers and narratives, automatic framework alignment such as IRIS+ or GRI, and continuous reporting that updates as new data arrives. Most legacy platforms handle only one or two of these. Sopact Sense is built around all four. The right choice depends on which domains you are assessing and how often the data has to refresh.

  • Q.08
    What is AI impact assessment?

    The phrase has two distinct meanings. In impact measurement, AI impact assessment refers to using AI as a tool to theme open-ended responses, analyze documents like grant reports, and pair qualitative themes with quantitative scores at scale. AI impact assessment tools and AI impact assessment software in this sense are the platforms that pair clean collection with AI analysis. In governance, AI impact assessment refers to a structured study of the social and ethical effects of an AI system itself, often required by emerging regulation; here, an AI impact assessment template is a fixed set of risk questions a deployer must answer. Make sure you know which one a funder or regulator means before scoping.

  • Q.09
    How do I choose an impact assessment framework?

    Start with who is reading the report. A foundation funder typically asks for IRIS+ or SDG alignment. A public-company sustainability team typically reports against GRI or SASB. A corporate community-investment program often uses B4SI. A gender-lens fund uses 2X Global. Pick the framework whose indicators you can actually source. Picking the prestige framework and then discovering you cannot collect its indicators is the most common scoping mistake.

  • Q.10
    What does an impact assessment report contain?

    A useful impact assessment report contains five things. The scope and the change theory it tested. The baseline and endline comparison, with sample size and disaggregation. Qualitative evidence linked to the quantitative outcomes. Framework alignment, named explicitly by indicator. And a methods section honest about confidence, missing data, and what the assessment cannot say. Reports built as PDFs go stale on day one. Reports built as live records stay current.

  • Q.11
    What are examples of impact assessment?

    Examples by domain. Social: a workforce program measures employment at 90 days and confidence pre and post on a 5-point scale, aligned to IRIS+ PI2387. Environmental: a renewable-energy project measures grid emissions avoided and community air-quality concerns at six monitoring sites, against GRI 305. Organizational: a nonprofit scores its own capacity across governance, finance, and program delivery using McKinsey OCAT. Sustainability: a corporate buyer scores 120 suppliers on labor practices, carbon disclosure, and governance against SASB.

  • Q.12
    How long does an impact assessment take?

A traditional consultant-led assessment takes three to nine months from scoping to report, depending on domain. Environmental assessments can run a year or more in regulated settings. The cadence is set by the program cycle, not by the report deadline. Continuous assessment changes the unit of work. Once collection runs through one platform with persistent IDs and framework alignment in the data dictionary, an updated report becomes a query rather than a project, taking hours rather than weeks.

  • Q.13
    Can I use Google Forms or SurveyMonkey for impact assessment?

    For a one-time data pull, yes. For an assessment that has to run cycle after cycle, the limit shows up by the second round. Forms cannot keep a persistent participant ID across cycles. Open responses sit as unanalyzed text. Framework alignment is manual at report time. The data is collected; it is not connected. The architectural choice is whether collection and analysis live in one record or in a stack of unconnected exports.

  • Q.14
    How does Sopact Sense handle impact assessment?

    Sopact Sense treats impact assessment as a continuous pipeline rather than a one-off project. A persistent stakeholder ID is assigned at intake and threads every later touchpoint to the same record. Quantitative scores and open-ended narrative are themed together by the Intelligent Suite. Framework alignment to IRIS+, SDGs, GRI, SASB, B4SI, or 2X Global is set in the data dictionary, not retrofitted. The four working domains share the same architecture, so the next assessment cycle does not start from zero.

Working session
Bring an assessment scope. See it built.

A 60-minute working session against your actual scope. No demo of generic dashboards. We translate your change theory into instruments, link the participant or site IDs, map the framework indicators, and show what the report looks like once the architecture is in place.

Format

60-minute video call. Working session, not a sales pitch. Camera optional.

What to bring

A scope statement or change-theory sketch. A current instrument if you have one. The framework your funder asked for.

What you leave with

A mapped indicator set, an ID structure for your stakeholders, and a draft architecture for next cycle.