
Sopact Sense — The AI Workflow for Impact Data

A survey is a snapshot. Sopact runs the workflow underneath. Read any data type on arrival, carry one record across every cycle. Book a 60-min walkthrough.

Pioneering the best AI-native application & portfolio intelligence platform
Updated April 29, 2026
Use Case
Impact Portfolio Intelligence
A survey is a snapshot. Every organization runs on a workflow.

You have one too. Application review, selection, mid-cycle check-in, outcome reporting, follow-up. The cycle that turns data into decisions. Today most of it lives in someone's head, on someone's laptop, in fragments nobody can repeat.

Sopact treats the workflow itself as the unit. Bring in any data type. Forms, documents, files, interviews. Read it the moment it arrives. Iterate on small samples. Build the cycle that fits how you already work.

From snapshot to workflow
Snapshot: one form, one moment, one set of answers. Workflow: five touchpoints over time on a single connected record. Application form + PDF. Onboarding form + file. Mid-cycle interview. Outcome form + report. Follow-up form + audio. One record, every touchpoint, AI reading every data type every time.
The architecture
Small parts. Many shapes. Yours to build.

Software usually gives you a fixed feature menu and asks you to fit. Sopact gives you a small set of parts. Ways to bring data in. Ways to read it. Ways to iterate until the rubric works. You compose the workflow that fits.

01 · COLLECT
Bring in any data type.

Snapshot data is capped at what fits in a form field. Decision data starts with everything you collect, structured or not.

  • Forms and surveys
  • PDFs, reports, applications
  • File uploads, any format
  • Interview audio and transcripts
  • Offline collection that syncs
  • Multiple languages, same record
02 · ANALYZE
Read it the moment it arrives.

AI now reads structured and unstructured data together. The cleanup step that used to take two days happens at upload.

  • Score open-ended answers against your rubric
  • Extract structured fields from documents
  • Connect every record to the same person or org
  • Surface patterns across the whole cohort
  • Flag exceptions for human review
03 · ITERATE & COMPOSE
Test small. Build the cycle that fits.

Don't configure for a year before launching. Start with ten records, adjust the rubric, re-run. Compose the workflow when you trust it.

  • Test with 10 records, not 10,000
  • Adjust the rubric, re-run instantly
  • Promote when it works
  • Add stages as your cycle grows
  • Application → outcome → next round, all one record
The flexibility is the platform. What you compose with these parts is up to you.
The lifecycle
What's known about each record grows at every stage.

A workflow has stages. Each stage adds new data: a document, a survey, a check-in, a file. The record carries all of it forward, scored and connected, ready for the next decision. By follow-up time, year three already knows what the application said.

Chart: context known per record grows across six stages (Apply, Review, Onboard, Mid-cycle, Outcome, Follow-up).
Stage 01 · Apply
Collects: Application form. Resume or pitch deck. Recommendation letters.
AI: Cell scores the essay against the rubric. Cell extracts experience signals from the resume.
Known after: Identity. Initial profile. First scored signals.

Stage 02 · Review
Collects: Reviewer notes. Interview transcripts. Scoring rubrics from multiple readers.
AI: Row consolidates all documents into one reviewer brief. Cell scores transcripts.
Known after: Decision packet. Reviewer-ready synthesis. Reasoning trail.

Stage 03 · Onboard
Collects: Baseline survey. Onboarding form. First file uploads.
AI: Row builds a participant profile combining application plus baseline.
Known after: Pre-program baseline. Goals. Starting indicators.

Stage 04 · Mid-cycle
Collects: Check-in survey. Open-ended reflection. Quarterly file.
AI: Cell scores reflections. Column flags emerging patterns across the cohort.
Known after: Trajectory. Risk flags. Comparison to baseline.

Stage 05 · Outcome
Collects: Post-program survey. Narrative report. Outcome documentation.
AI: Row synthesizes pre and post. Grid rolls up cohort outcomes.
Known after: Pre-post comparison. Outcome scores. Funder-ready story.

Stage 06 · Follow-up
Collects: 12-month survey. Audio interview. Long-term outcome data.
AI: Cell scores the audio transcript. Column tracks year-over-year patterns.
Known after: Multi-year story. Lasting outcomes. Input for the next round.

Year three already knows what the application said. Year five already knows what year three said. The record is the workflow.

The Sopact Sense thesis
The Intelligent Suite
Four AI layers. Two read on arrival. Two read across the whole.

AI does different work at different scales. The Suite names what's running where, so you can see what's reading your data and when. The first two layers run the moment data arrives. The other two run across every record at once.

Intelligent Cell
On arrival
One field. One prompt. One signal.

Single-field analysis on one open-text answer or one file upload, scored against a rubric the program owner defines.

  • Score an essay against a 5-point rubric, with reasoning for every score
  • Read a resume on upload and pull out experience signals
  • Read a recommendation letter and surface named claims about strengths
Replaces: the first two days of analysis spent coding documents by hand.
Intelligent Row
On arrival
All fields for one record. One coherent view.

Multi-field analysis per record, combining several Cell outputs plus structured answers into a single reviewer-ready synthesis.

  • Resume plus letters plus essay rolled up into one applicant brief
  • Pre-program survey plus onboarding plus first check-in into a baseline profile
  • Multiple grant application sections into a one-page reviewer summary
Replaces: the reviewer with five tabs open trying to mentally synthesize a candidate.
Intelligent Column
Across all records
One field across everyone. The pattern emerges.

Cross-record analysis on one or more fields. Theme extraction, sentiment trends, indicator computation across the full dataset.

  • Theme extraction across 1,200 answers to "what challenge are you facing?"
  • IRIS+ indicator computation across every active investee
  • Sentiment trend across quarterly check-ins for a multi-year cohort
Replaces: NVivo, ATLAS.ti, MAXQDA running as a separate analysis silo.
Intelligent Grid
Across the whole dataset
Every field, every record. Decision-ready output.

Full dataset analysis across every record and every field. Portfolio dashboards, funder reports, cohort comparison, all from the same data layer.

  • Portfolio dashboard for an impact fund: every indicator by every investee by every period
  • One-click funder report aggregating outcomes across a multi-program foundation
  • Cohort vs cohort comparison for a five-year accelerator program
Replaces: the two weeks before a board meeting spent reconciling spreadsheet exports.
What teams build on it
Two shapes of workflow. One platform underneath.

Impact Portfolio Intelligence supports two recurring workflow shapes. Some workflows run on individual people: applicants, students, trainees, alumni. Others run on organizations: investees, grantees, suppliers, cohort companies. The architecture is the same. The configuration is yours.

What's different
Where each tool stops. Where Sopact carries forward.

Survey tools collect. Application platforms intake. Bundled platforms wrap workflow modules around a CRM. None of them treat the workflow itself as the unit. Sopact does.

Data types collected
Survey tools (SurveyMonkey, Qualtrics, Typeform): Forms only. Open-ended fields, no real document or audio handling.
Application platforms (Submittable, WizeHive, Award Force): Forms plus intake documents. Limited to the application stage.
Bundled platforms (Bonterra, Blackbaud): Mixed, but each module owns its own data layer.

One record across cycles
Survey tools: No. Each survey is isolated. Embedded-data hacks required.
Application platforms: No. Each application cycle starts fresh.
Bundled platforms: Partial. Donor records yes, program stakeholders rarely.

AI analysis on arrival
Survey tools: Add-on text analysis as a separate dashboard. Analyst-defined, not program-owner-defined.
Application platforms: No. Manual review of every document.
Bundled platforms: Generic AI bolted on. Not rubric-driven.

Workflow customization
Survey tools: Skip logic only. No cross-stage flow.
Application platforms: Templated. Hard to extend past intake.
Bundled platforms: Module-bundled. You buy what they ship.

Time to first decision
Survey tools: Weeks of cleanup after each survey closes.
Application platforms: Days per applicant for manual review.
Bundled platforms: Months of implementation before any data flows.
Where teams already run on it
Three workflow shapes. Same architecture underneath.

Different organizations. Different cycles. Different reporting demands. Multi-cycle outcome tracking at scale · public-company CDFI reporting · multi-year individual stakeholder tracking. All running on the same data layer.

Crossroads Impact Corp

Public-company CDFI · Capital deployment · LP & regulator reporting

From quarterly spreadsheet hell to portfolio reporting that runs.
$223M+

YoY increase in environmental and social loans, with the impact reporting infrastructure to defend it.

Crossroads needed portfolio reporting that could stand up to public-company scrutiny. They moved from quarterly cleanup spreadsheets to a workflow where every investee carries its own record forward, every quarter, with AI reading the documents that come in.

"Sopact Sense gave us the architecture to actually report what we already do."

Crossroads Impact Corp · impact team

Food4Education

Multi-cycle program · Multilingual · Supply chain intelligence

Outcome tracking at scale across multiple languages and supply touchpoints.
3+ cycles

Connected outcome data across multiple program years and supplier touchpoints, in multiple languages.

Food4Education runs a multi-stage program with intake, mid-cycle, and outcome touchpoints, plus an emerging supplier-side workflow for kitchen waste collection. Sopact carries the same record across every touchpoint, in the language each respondent speaks.

"The intelligence layer means we get results without waiting for the analyst."

Food4Education · program team

Boys to Men Tucson

Multi-year youth program · Tight nonprofit budget · Funder advocacy

Multi-year individual outcome tracking that secured funder buy-in.
Multi-year

Individual youth tracked across the full program arc, with outcome reporting that funders fund.

Boys to Men Tucson had no budget for an enterprise platform but every reason to need one. The connected record let them follow each young person across years, surface real outcomes, and present them to funders in a form that secured continued investment in measurement infrastructure itself.

"The case to fund what we do became a lot easier when the data finally connected."

Boys to Men Tucson · program leadership

Questions answered
Common questions, answered straight.

Most of what people ask after a demo. If something here isn't covered, the demo itself usually answers it.

Q. What is Sopact Sense in plain language?
A. A platform that runs the workflow connecting your data to your decisions. You collect data of any type. AI reads it on arrival. The same record carries forward across every cycle. You compose the workflow that fits how you already work.

Q. How is this different from a survey tool?
A. A survey is one moment in time. A survey tool collects that moment and stops. Sopact treats the workflow as the unit. The survey is one stage. The application document was an earlier stage. The follow-up audio is a later one. They all land in the same record, scored, connected.

Q. Do I need to replace my existing system?
A. Often no. Sopact is the intelligence layer, not the CRM. It connects to Salesforce, HubSpot, Affinity, QuickBooks, Snowflake, and others. The data layer underneath your existing workflow stays where it is. We add the analysis, the connected record, and the workflow on top.

Q. How long does implementation take?
A. We start by running ten records through your rubric. That takes a day or two. From there, you iterate the rubric, run another small batch, and promote when it works. Most teams have their first decision-ready output within a week. Full workflow build-out usually lands inside a quarter.

Q. How does AI analysis work on documents and open-ended answers?
A. You write the rubric. The program owner, not an analyst. AI reads the document or the open-ended answer at the moment of upload, scores it against your rubric, and shows the reasoning. The output lands in a column inside the same record. You can review, adjust, re-run.

Q. Who owns the data and the AI prompts?
A. You do. The data is yours. The rubric is yours. The prompts are yours. We don't train on your data. Export at any time. The platform is the layer; the content stays with you.

Demo length: 60 minutes, focused on your workflow.
What you'll see: Your rubric. Your data type. Live.
What it costs: Zero. The 60 minutes is the demo.

See it on your data
Bring one document or one survey. We'll show you the workflow.

Send us one resume, one essay, one investee report, one open-ended survey. We'll run it through your rubric live, in the platform. You'll see the architecture working on data you actually have.