Application data is clean. Outcome data is clean. The middle isn't.
Every accelerator collects the data. None of it connects. Application scores live in one platform. Mentor sessions live in Slack. Milestone check-ins live in Airtable. Outcome surveys come back twelve months later in a SurveyMonkey export, with no shared key back to anything that came before.
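What recovery looks like in practice, as a minimal sketch: the file names, column names, and threshold below are all hypothetical, but with no shared key, fuzzy matching on founder names is roughly the only join left twelve months later.

```python
import csv
from difflib import SequenceMatcher

def load_rows(path, name_field):
    # Each tool exports its own rows under its own keys; only a free-text
    # name column is common to all of them.
    with open(path, newline="") as f:
        return [(row[name_field], row) for row in csv.DictReader(f)]

def match_by_name(name, candidates, threshold=0.85):
    # Fuzzy string similarity is the fallback join when no founder ID exists.
    best = max(candidates,
               key=lambda c: SequenceMatcher(None, name.lower(), c[0].lower()).ratio())
    score = SequenceMatcher(None, name.lower(), best[0].lower()).ratio()
    return best if score >= threshold else None

applications = load_rows("applications.csv", "applicant_name")
for respondent, outcome in load_rows("survey_export.csv", "respondent_name"):
    hit = match_by_name(respondent, applications)
    if hit is None:
        # A typo, a nickname, a name change: the link back to intake is gone.
        print(f"unmatched respondent: {respondent}")
    else:
        merged = {**hit[1], **outcome}  # the join, recovered by hand
```

The unmatched rows are where the data loss becomes visible.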
Adding a more rigorous outcome survey doesn't fix it. The survey instrument is fine. The architecture underneath is not.
The pattern repeats in every cohort, regardless of size or domain. Week one, structured intake data exists. Week six, the structured data stops accumulating and unstructured program reality takes over. Month twelve, the funder asks which interventions actually caused the outcomes.
Five tools. No shared founder ID. The reasoning chain was never built. Layer by layer:

Intake (application, week 0). Structured: pitch deck, rubric, founder profile. The application platform owns this layer.

Programming (cohort to demo day, weeks 6 to 52). Fragmented: mentor calls, Slack threads, milestone check-ins, advisor notes, spread across five tools with no shared ID. Recoverable only by hand.

Outcome (follow-up surveys, month 12 onward). Structured again: revenue, fundraising, team size, retention. The survey platform owns this layer, disconnected from intake.
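For contrast, the missing layer is not exotic. A minimal sketch, with every name illustrative rather than prescriptive: one founder ID minted at intake, and every downstream artifact, structured or not, written against it.

```python
import uuid
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ProgramEvent:
    founder_id: str
    kind: str          # e.g. "mentor_session", "milestone_check", "advisor_note"
    occurred_on: date
    source: str        # which tool it came from: Slack, Airtable, ...
    payload: str       # the unstructured content, captured rather than lost

@dataclass
class FounderRecord:
    # One persistent key, minted at intake and carried through every layer.
    founder_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    name: str = ""
    application_score: float | None = None                     # intake layer
    events: list[ProgramEvent] = field(default_factory=list)   # programming layer
    outcomes: dict[str, float] = field(default_factory=dict)   # outcome layer

# Intake mints the ID; everything afterward writes against it.
jane = FounderRecord(name="Jane Doe", application_score=4.2)
jane.events.append(ProgramEvent(jane.founder_id, "mentor_session",
                                date(2025, 3, 4), source="slack",
                                payload="intro to first enterprise buyer"))
jane.outcomes["revenue_12mo"] = 480_000.0
```

Nothing like this exists in the stack described above; that absence is the point.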
Five tools. No shared founder ID. No persistent record. No design for the causal question every sophisticated funder eventually asks. We call this gap the Cohort Cliff. It's the architecture layer that has to change, not the reporting tool.
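With a record like the sketch above in place, the funder's month-twelve question stops being archaeology. A hedged illustration, assuming the FounderRecord shape from the previous sketch: split founders by exposure to an intervention and compare outcomes. Correlation, not causation, but the chain finally exists to reason over.

```python
from statistics import mean

def compare_outcomes(cohort, event_kind, outcome_key="revenue_12mo"):
    # Split the cohort by whether each founder received a given intervention,
    # then compare mean outcomes between the two groups.
    exposed, unexposed = [], []
    for founder in cohort:
        value = founder.outcomes.get(outcome_key)
        if value is None:
            continue  # non-respondents show up as missing, not as silent drops
        group = exposed if any(e.kind == event_kind for e in founder.events) else unexposed
        group.append(value)
    return (mean(exposed) if exposed else None,
            mean(unexposed) if unexposed else None)

cohort = [jane]  # in practice, every founder in the program
mentored_avg, unmentored_avg = compare_outcomes(cohort, "mentor_session")
```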
The architectural argument