
Program Report: The Source Artifact (every funder's view)

A program report is the source. Grant reports, donor reports, board reports, annual reports are all filtered views of it. Examples and template inside.

Updated May 2, 2026

Perspectives · Reporting architecture

Stop writing five reports about one program. Write one. Filter five views.

Most nonprofits produce a grant report, a donor report, a board report, an impact report, and an annual report from the same program. They all describe the same activities, the same participants, the same outcomes. What if the program report was the source, and every audience read a filtered view?

SOURCE ARTIFACT: The program report
Grant report (filtered to one funder)
Donor report (filtered to gift area)
Board report (filtered to governance)
Annual report (aggregated, public)
Impact report (outcomes only, public)

One source. Five filtered views. No parallel authoring cycles.

The argument

Most reports are not separate documents. They are the same evidence dressed for different audiences.

A nonprofit runs a 47-person workforce cohort. The same 47 people, the same outcomes, the same participant stories. By December the organization has produced five different documents about that cohort: a renewal report to the foundation, a stewardship update to major donors, a governance summary for the board, a public anniversary spread for the annual report, and a public impact summary on the website.

Five reports. One program. Five parallel authoring cycles. Five different staff hands. Five chances for the numbers to disagree. If you ask the executive director why this is the workflow, the answer is some version of "every audience needs something different." If you ask whether the underlying evidence is the same, the answer is yes.

That is the architectural insight worth refusing to ignore: when the evidence is the same, the report should be the same too. Pick one canonical artifact, the program report, and let every other audience read a filtered view of it. The grant report is the program report scoped to one funder. The donor report is the program report scoped to one gift area. The board report is the program report with governance commentary on top. The annual report is the program report aggregated across all programs.
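
To make the scoping concrete, here is a minimal sketch in Python of the same idea, assuming a small hypothetical in-memory dataset; the field names (funder, gift_area) and the values are illustrative, not a Sopact schema.

    # A minimal sketch of the source-and-filtered idea: one canonical dataset,
    # every audience view expressed as a filter over it.
    program_report = [
        {"participant_id": "P-001", "funder": "Acme Foundation",
         "gift_area": "workforce", "outcome": "placed"},
        {"participant_id": "P-002", "funder": "City Grant",
         "gift_area": "workforce", "outcome": "in training"},
    ]

    def grant_view(records, funder):
        # The grant report: the same records, scoped to one funder.
        return [r for r in records if r["funder"] == funder]

    def donor_view(records, gift_area):
        # The donor report: the same records, scoped to one gift area.
        return [r for r in records if r["gift_area"] == gift_area]

    print(len(grant_view(program_report, "Acme Foundation")))  # 1
    print(len(donor_view(program_report, "workforce")))        # 2

The point of the sketch is that both views are queries over the same records; neither requires a second authoring pass.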

The taxonomy

Every report a nonprofit publishes is a filtered view of one or more program reports.

The diagram below shows the relationship between the canonical program report and the five reports most nonprofits also publish. Each downstream report answers a different audience's question with the same underlying evidence.

SOURCE: Program report. One program. One dataset. Five sections.
Grant report (read by the foundation)
Donor report (read by major donors)
Board report (read by governance)
Annual report (read by the public and the IRS)
Impact report (read by all stakeholders)

Read the trunk first, the branches second. The trunk is what the program team actually built. The branches are how that work gets explained to five different audiences.

 

01 · Grant report · Filtered to one funder's scope
Who reads it: Foundation program officer, federal grant administrator, audit committee
The question it answers: Did the grant produce what the grant promised? Scoped to the activities the grant funded, mapped to the funder's required template (foundation narrative or federal SF-PPR).

02 · Donor report · Filtered to one gift area
Who reads it: Major donors, donor-advised funds, mid-level recurring donors
The question it answers: What did my gift produce? Scoped to the program area the donor funded, with one or two participant stories and a forward-looking note. Sent inside the 90-day stewardship window.

03 · Board report · Governance commentary on top
Who reads it: Board of directors, executive committee, finance and program committees
The question it answers: Should we continue, expand, or sunset this program? Same outcomes as the program report plus financial reconciliation and a strategic recommendation. Read at the next quarterly board meeting.

04 · Annual report · Aggregated across all programs
Who reads it: Public, IRS, rating agencies, prospective supporters
The question it answers: What did the organization accomplish this year? Summarized roll-up across every program report, with a letter from leadership and the financial statements. The accountability artifact, not the renewal artifact.

05 · Impact report · Outcomes only, all programs
Who reads it: Funders, board, public, peer organizations
The question it answers: What change did we produce? Outcome-focused subset of the annual report: who we served, what moved, what participants said. Most nonprofits publish this instead of, or alongside, the annual report.

Program report · The source artifact
Who reads it: Program team first; everyone else through filters
The question it answers: What did this one program do, and what did we learn? The complete five-section record of one program: who showed up, what changed, what participants said, what the team learned, and what the next cycle changes.

The template

A program report has five sections. Every other report's sections are variations on these five.

The structure below is the template the article promised. Every audience the program serves is asking some version of the questions these five sections answer. Sector-specific metrics fit inside the template; the template does not change for sector.

01

The headline outcome

One number. One sentence. The change the program produced and the population it applies to. A board member or program officer should grasp the result in two minutes.

The headline rule: if you cannot pick one number from the program report to put on the cover, the program theory is unclear, not the data.

02

Who showed up

Demographic breakdown of participants captured at intake. Geography, sector, equity dimensions. The audience reads this section before the headline if equity is part of the program theory.

The intake rule: demographics belong on the first form, not in a four-week year-end reconciliation. Build the schema once.

03

What changed

Pre-post movement on the outcomes the program theory predicted would shift. Disaggregated by the demographics from section two. Real numbers where they exist; directions and ranges when exact figures are unknown.

The matching rule: baseline and follow-up pair by persistent participant ID, not by name. Names break; IDs do not.

04

What participants said

Themed open-ended responses with citations to the source response. Two or three exemplary quotes attributed to a role. The qualitative section is where renewal decisions get made.

The voice rule: code open-ended answers as they arrive, not at year-end. Coding left to the deadline is why this section gets cut when time runs short.

05

What we learned, what is next

Methodology in plain language: response rate, match logic, what surprised the team. Forward-looking note on what the next program cycle changes. The honesty in this section is what builds trust for the next funding ask.

The candor rule: reports that name what did not work get renewed more often than reports that paint everything green.

Live examples · No login

Four program reports, four audiences, one architecture.

Open any one. Each is a real Sopact program report rendered as a live URL. The first two were authored by the program team and filtered to a foundation funder. The third and fourth were authored by the funder and aggregate evidence across many programs. Same underlying architecture in every case.

The architecture

What the source-and-filtered model actually requires.

Treating the program report as the source artifact is appealing in theory and demanding in practice. Four architectural layers have to be in place before downstream filtering becomes a query rather than a rewrite. Each layer is decided upstream of any reporting tool.

01

Persistent participant IDs

Identity layer

Every participant gets a unique ID at intake. Every later response inherits it. Names change between waves; IDs do not. The audit trail starts here.
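
As a sketch of what the identity layer implies, assuming a hypothetical intake flow (uuid4 stands in for whatever ID scheme the form tool actually mints):

    import uuid

    participants = {}  # participant_id -> intake record

    def register_at_intake(name, email):
        # The ID is minted once, at the first form, and never changes.
        participant_id = str(uuid.uuid4())
        participants[participant_id] = {"name": name, "email": email}
        return participant_id

    def record_response(participant_id, wave, answers):
        # Every later response inherits the intake ID; names and emails may drift.
        return {"participant_id": participant_id, "wave": wave, "answers": answers}

    pid = register_at_intake("Sarah Johnson", "sarah@example.org")
    baseline = record_response(pid, "baseline", {"confidence": 2})
    followup = record_response(pid, "post", {"confidence": 4})
    assert baseline["participant_id"] == followup["participant_id"]
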
02

Structured demographics at intake

Disaggregation layer

Geography, sector, equity dimensions, federal categories tagged as fields at the first form. Cheap at intake, dramatically harder to retrofit at report time. Filtered views require this from day one.
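
A minimal sketch of what "structured fields at the first form" can look like, assuming hypothetical field names and categories; a real schema would mirror the funder's and the program's required cuts:

    from dataclasses import dataclass
    from typing import Literal

    @dataclass
    class IntakeRecord:
        participant_id: str
        zip_code: str                          # geography
        sector: Literal["healthcare", "manufacturing", "retail", "other"]
        gender: Literal["woman", "man", "nonbinary", "prefer_not_to_say"]
        race_ethnicity: str                    # mapped once, here, to federal categories

    record = IntakeRecord("P-001", "60616", "healthcare", "woman",
                          "Black or African American")

Because the categories are declared once at intake (here only as type hints; a form tool would enforce them), every downstream disaggregation is a group-by rather than a guess.
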
03

Qualitative coding on collection

Voice layer

Open-ended responses themed by AI as they arrive, with citations back to the source response. Participant voice in the report by default, not a workstream that gets cut at the deadline.
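
A minimal sketch of coding on collection, assuming a hypothetical theme() step; in practice the theming would be an AI call or a rubric, not a keyword match:

    themes_index = {}  # theme -> citations back to the source responses

    def theme(text):
        # Placeholder classifier; a real pipeline would call a model here.
        return "confidence" if "confiden" in text.lower() else "logistics"

    def on_response_received(response_id, participant_id, text):
        # Runs as each open-ended answer arrives, not at year-end.
        t = theme(text)
        themes_index.setdefault(t, []).append(
            {"response_id": response_id, "participant_id": participant_id, "quote": text}
        )

    on_response_received("R-104", "P-001", "My confidence in interviews doubled.")
    print(themes_index["confidence"][0]["response_id"])  # the citation survives into the report
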
04

Live URL as the canonical artifact

Delivery layer

The program report renders as a URL the audience revisits, not a PDF that goes stale. Filtered views are query parameters or saved views against the same URL, not separate authoring projects.
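
As a sketch of the delivery idea, assuming a hypothetical URL scheme; the path and parameter names are illustrative, not Sopact's actual interface:

    from urllib.parse import urlencode

    CANONICAL = "https://reports.example.org/programs/workforce-2025"

    def filtered_view(**filters):
        # A grant or donor view is a query string against the canonical URL,
        # not a separately authored document.
        return f"{CANONICAL}?{urlencode(filters)}" if filters else CANONICAL

    print(filtered_view())                                     # the program report itself
    print(filtered_view(funder="acme-foundation"))             # the grant view
    print(filtered_view(gift_area="workforce", view="donor"))  # the donor view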

The compounding effect

Each layer is cheap on its own and difficult to retrofit later. The team that decides these four things upstream of the first program form spends the rest of the year exporting reports. The team that decides them after the program closes spends the rest of the year rebuilding the dataset.

The anti-patterns

Five program report mistakes that cost teams the most time.

The mistakes below are the ones that drive the four-to-six week annual report cycle most teams accept as normal. Each is an upstream decision, made before the program even runs, that shows up downstream as reporting cost.

01

Treating each audience report as a separate document

The most expensive program-report mistake is producing a parallel report per audience. Same evidence, four different authoring cycles, four chances for the numbers to disagree. Year-end becomes an assembly factory rather than a learning moment.

Instead: One canonical program report. Filter the same dataset to each audience's required template. The narrative is written once; the numbers stay consistent.

02

Building demographics at report time

Beneficiary categories that look obvious at intake become guesswork at the reporting deadline. Numbers reported to the board and to the funder differ because they were inferred differently after the fact. Equity audits fail on this alone.

Instead: Demographics as structured fields at the first form. Federal categories, foundation equity dimensions, and program-level cuts all derive from the same intake schema.

03

Cutting the qualitative section under deadline pressure

The participant voice section is the first thing dropped when time runs short. Coding hundreds of open-ended answers takes weeks if it starts in November. The story that would have moved the funder never gets surfaced. The renewal call goes poorly.

Instead: Code open-ended responses as they arrive, at collection time. By the program close, themes are already ranked and citations already attached.

04

Reconciling pre and post by name and email

Sarah Johnson becomes S. Johnson at the second wave. Her email changes between the application and the post-program survey. The match falls out. An analyst rebuilds the join by hand and loses days. The report is delayed; the headline outcome stat becomes a guess.

Instead: Persistent participant ID assigned at intake and inherited by every later response. Pre-post matching is a query against the ID, not a manual reconciliation across exports.
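
A minimal sketch of that query, assuming two hypothetical CSV exports that share a participant_id column:

    import pandas as pd

    baseline = pd.DataFrame({"participant_id": ["P-001", "P-002"],
                             "confidence_pre": [2, 3]})
    followup = pd.DataFrame({"participant_id": ["P-001", "P-002"],
                             "confidence_post": [4, 3]})

    # The join is on the ID, so "Sarah Johnson" vs "S. Johnson" never matters.
    matched = baseline.merge(followup, on="participant_id", how="inner")
    matched["change"] = matched["confidence_post"] - matched["confidence_pre"]
    print(matched)
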
05

Shipping the report as a PDF that goes stale

The program officer reads the PDF once, files it, and asks for another at renewal. The board reads it once, files it, and asks for another at the next quarterly. The team produces the same artifact again the next cycle. The audience never gets to see the program continuing.

Instead: Publish a live URL. The funder revisits across the year. The board sees the program continuing. Mid-year updates take minutes rather than days.

Frequently asked

Program report questions, answered.

Plain answers to the questions readers ask after the article. The structured versions of these answers also appear in this page's schema, so the same content shows up in search-result rich snippets and AI Overview citations.

01

What is a program report?

A program report is the structured artifact a program team produces about a single program: who participated, what activities ran, what outcomes those activities produced, what participants said about the experience, and what the team learned that changes the next cycle. Every other report a nonprofit publishes (grant report, donor report, board report, annual report) is a filtered view of one or more program reports.

02

What is the difference between a program report and an impact report?

A program report is scoped to one program: one cohort, one site, one funded activity. An impact report is organization-wide, covering all programs across an annual cycle. The impact report is built by aggregating multiple program reports. If your impact report numbers do not match your program report numbers, the architecture underneath is broken.

03

What are the five sections of a program report?

Section one is the headline outcome: one number plus the population it applies to. Section two is who showed up: demographic breakdown captured at intake. Section three is what changed: pre-post movement on the outcomes the program theory predicted. Section four is what participants said: themed open-ended responses with citations. Section five is what was learned and what is next: methodology and the forward-looking note.

04

How long should a program report be?

Long enough that a sophisticated reader can verify the claims and short enough that a busy program officer reads to the end. In practice that is six to twelve pages in document form, or the equivalent sections of a live interactive report. Most teams over-produce: the report grows because each audience requested an addition. The fix is one report, multiple filtered views, not one report with every section every audience ever asked for.

05

How long does it take to produce a program report?

Days, not weeks, when the data architecture is right. The traditional path takes four to six weeks because every cycle reconstructs the dataset from scratch. With persistent participant IDs assigned at intake, qualitative coding running on collection, and demographics tagged as structured fields, the report is ready when the program closes. The first cycle takes a day or two of configuration; subsequent cycles take minutes.

06

What is a program report template?

A program report template is the reusable five-section structure described in this article. The template stays stable across program types because the questions every audience asks are stable: who, how many, what changed, what did they say, what did we learn. Sector-specific metrics fit inside the template; the template does not change to accommodate sector. A workforce program and an environmental program use the same five sections with different numbers.

07

Who reads a program report?

The program team reads it first, to learn what worked. The board reads a filtered view to decide whether to continue the program. The funding foundation reads a filtered view scoped to the activities they funded. Major donors read a filtered view scoped to their gift area. The public reads a summarized view in the annual report. Five audiences, one source, no parallel authoring cycles.

08

Can a program report be a live URL instead of a PDF?

Yes, and live URLs outperform PDFs on every dimension that matters: the funder revisits across the year rather than reading once and filing it; the data refreshes as the program continues; the qualitative section drills back to the source response; the audit trail is visible. PDFs still have a place for board books and printed donor packets, but the canonical artifact is the live URL the report is generated from.

09

What metrics belong on a program report?

The outcomes the program theory predicts will move, plus the inputs needed to interpret them: sample size, response rate, demographic disaggregation, and the methodology used to match baseline to follow-up. Vanity metrics like attendance counts belong in the appendix or not at all. The rule across every sector: a few outcome metrics with baselines and disaggregation, plus participant voice that explains the numbers.

10

How does a program report support the grant report or impact report?

It supplies the data. A grant report is one program report filtered to the activities the grant funded and rendered into the funder's required template. An impact report is multiple program reports aggregated across the organization. A donor report is one program report filtered to the gift area. The program report is the source; downstream views are queries against it. Build the program report well and the rest become exports rather than authoring projects.

Continue reading

The five filtered views, in one click each.

The cards below take the source-and-filtered model from this article and walk through each downstream view as its own use-case page. The first two are the most-trafficked downstream filters; the rest cover the architecture upstream and the broader practice.

Bring your program data

See your next program report run in Sopact Sense, with your data.

A 60-minute working session. Bring an outcome export, a foundation report due next quarter, or a board reporting deadline you are trying to compress. We will build a program report against your data live and walk through which downstream views become exports rather than authoring projects.

Format

A working call, not a sales call. Camera optional, screen-share required.

What to bring

A program outcome CSV, a sample report from a prior cycle, or a one-paragraph description of the program report you need next.

What you leave with

A program report shape sketched against your data and a clear plan for the next reporting cycle.