
Training Feedback Survey: Templates, Questions, and What It Measures

A training feedback survey captures reaction at session end. See what it measures, what it cannot, and how to design one that feeds a real evaluation. Examples from workforce training, professional development, and grantee programs.

Updated May 6, 2026
FEEDBACK · KIRKPATRICK LEVEL 1
A training feedback survey captures reaction. A training evaluation tracks change. Most teams confuse what each one tells you.

A feedback survey is the short questionnaire participants fill out at the end of a session. It tells you how the experience landed. It cannot tell you whether anyone learned, whether anyone applied what was taught, or whether the program produced its intended result. This guide covers the full feedback-survey craft for workforce training, professional development, course delivery, and grantee training series. It also names the line where feedback ends and evaluation begins, so you know which instruments still need to come after.

On this page
The reaction layer in context
Definitions for every program shape
Six design principles
Choices that decide if it works
A worked example from a workforce cohort
Three program contexts compared
THE REACTION LAYER IN CONTEXT

A feedback survey is one of four layers

The Kirkpatrick model names four layers of training evaluation. A feedback survey lives at the bottom: it captures reaction. Each layer above answers a different question with a different instrument and a different cadence. A feedback survey on its own answers the first question. The remaining three need separate instruments tied to the same participants.

04
Results
Did the program move the metric the funder or board cares about: jobs placed, retention, throughput, revenue, behavior at scale.
Instrument: outcome data join · 6 to 12 months out
03
Behavior
Did the participant apply what they learned on the job. The trickiest layer because it lives off-platform in the participant's daily work.
Instrument: follow-up survey · 30 to 90 days post-program
02
Learning
Did the participant gain knowledge or skill. Measured by paired pre and post checks with identical wording so the delta is real.
Instrument: pre/post assessment · session start and end
01
Reaction
Did the experience feel relevant, clear, and worth the time. The feedback survey lives here. Captured at session end while the impression is sharp.
Instrument: feedback survey · session end
FEEDBACK SURVEY COVERAGE
A well-designed feedback survey owns Layer 1 completely and points the program at the next layer. It cannot fill in the layers above.
EVALUATION COVERAGE
A training evaluation runs all four layers and connects them through a participant identifier so each layer's data references the same person.
Source: Kirkpatrick (1959, refined 1994). The model is widely used in workforce development, corporate training, and grantee capacity-building programs. The feedback survey is the only artifact most programs actually run; the layers above are where reporting falls apart.
DEFINITIONS

The shape of a feedback survey, by program type

A training feedback survey runs in workforce programs, in workshops, in multi-week courses, in internal professional development, and in funder-led grantee training series. The wording shifts; the structure does not. Definitions below cover the four shapes searchers most often type into Google. Each definition matches the layer-1 model from the previous section.

What is a training feedback survey?

A training feedback survey is a short questionnaire participants fill out at the end of a training session. Five to ten questions is typical and two minutes is the right completion target. It captures reaction: how relevant the content felt, how clear the delivery was, what the participant would change, and whether they intend to apply anything covered. It is the Kirkpatrick Level 1 instrument. It runs at session end, not the next day, because reaction is sharpest in the moment.

The same instrument carries different names in different programs: training survey, post-session survey, end-of-session survey, satisfaction survey, smile sheet. The structure is the same. What changes is what the survey is paired with after it: a learning check, a behavior follow-up, or nothing. A feedback survey paired with nothing measures reaction and stops.

What is a course feedback survey?

A course feedback survey runs at the end of a multi-week course rather than at the end of a single session. It covers the same dimensions as a session feedback survey, plus questions that only make sense at the end of a longer run: which module was most useful, which module fell flat, how the pacing held across weeks, whether the participant would recommend the course to a colleague.

The end-of-course instrument should reference the per-module checks already collected during the course. Asking participants at the end to summarize a six-week experience without reminders produces a smoothed-over impression rather than specific reaction. The course template lives in the same template family as the session template; adjust the time framing, drop the per-module repetition, and add intent-to-apply questions tied to the role context.

What is a workshop feedback survey?

A workshop feedback survey runs at the end of a half-day or one-day workshop. The instrument is the same as a session feedback survey; the timing is one shot at the end of the day rather than at the end of a recurring session. Workshop participants have completed a contained experience, so a question about the day's overall arc ("which part of today will be most useful tomorrow") works in a workshop survey but rarely in a session survey embedded inside a longer program.

For multi-day workshops, run a brief check at the end of each day and a longer instrument at the close. The brief check protects recall of the specific day; the closing instrument protects the arc-of-the-week reaction.

What is a professional development feedback survey?

A professional development feedback survey is a feedback survey for internal professional development sessions, often run by an L&D team for staff. The participant returns to a known job context, which lets the survey ask sharper intent-to-apply questions: which specific work moment will use the new content, what would prevent the application, who else on the team should hear the same content. The point is to feed the next development cycle, not to score the session.

PD feedback also benefits from a manager-handoff field. Ask the participant to name a colleague who would benefit from the same session, and the L&D team can use that name to seed the next cohort. The PD context is the one feedback context where the response itself drives a known operational decision, which is why the response rate stays high.

Related but different

FEEDBACK · LEVEL 1
Training feedback survey
Captures reaction at session end. Five to ten questions. Two-minute completion. The starting layer of evaluation, not the whole evaluation.
LEARNING · LEVEL 2
Pre and post training assessment
Paired knowledge or confidence check at session start and session end. Measures the delta per participant. Different instrument from a feedback survey.
BEHAVIOR · LEVEL 3
Post-training follow-up survey
Survey to participant and manager at thirty to ninety days asking whether the content showed up in the work. The hardest layer to capture.
FULL EVALUATION
Training evaluation
All four layers run as one connected instrument set. Feedback is one component. See the full evaluation guide.
DESIGN PRINCIPLES

Six rules that make a feedback survey worth reading

Most feedback surveys ship as a list of questions copied from a template or a previous program. The list is fine. The connections are missing. Six rules below cover the connections that turn a list of questions into a survey that improves the next session.

01 · TIMING

Capture in the room, not the inbox

Reaction is sharpest in the first hour after a session.

Surveys delivered in the room at session end commonly land between 80 and 95 percent response. Surveys emailed after the participant has left drop to 20 to 40 percent. Memory of specific content fades fast. The first cohort to give you usable data is the one that filled out the survey before standing up to leave.


WHY IT MATTERS: a survey with a 35 percent response rate carries the views of the participants who chose to respond, which is rarely the median view of the room.

02 · LENGTH

Five to ten questions, two-minute target

Each item earns its place by feeding a decision.

Drop questions that do not change a decision. If the answer to "how did the room feel" cannot lead to a different room, the question is decoration. Two minutes is the threshold most participants tolerate when they want to leave. Longer surveys do not produce better data; they produce drop-off and rushed clicking on the items at the end.


WHY IT MATTERS: a six-question survey that every participant completes beats a fourteen-question survey that half the room finishes.

03 · PAIRING

Every rating gets one open-ended counterpart

A 4-out-of-5 rating without context is uninterpretable.

A satisfaction rating of 4.2 across two cohorts can hide a fixable problem in one cohort and a fundamentally different content gap in the other. The rating tells you the magnitude of the reaction; the paired open-ended prompt tells you what produced it. Pair every closed-ended scale with a one-line "what produced this rating" prompt.


WHY IT MATTERS: the open-ended response is what tells the next facilitator what to change before the next session.

04 · IDENTITY

One identifier, every instrument

Feedback that connects to the same person at every wave.

The feedback survey is one of three or four instruments the same participant fills out across the program. If reaction in the feedback survey cannot be linked to the same participant's pre/post delta and thirty-day follow-up, the program produces three unconnected reports. A single participant identifier carried through every instrument is what makes the cross-instrument view possible.


WHY IT MATTERS: unconnected instruments produce three reports that disagree, not one report that explains.
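
To make the identifier principle concrete, here is a minimal sketch of the cross-instrument join, assuming three CSV exports that all carry a shared participant_id column; the file and column names are illustrative, not any particular tool's schema.

```python
import pandas as pd

# Hypothetical exports: each instrument carries the same participant_id column.
feedback = pd.read_csv("session_feedback.csv")       # reaction at session end
pre_post = pd.read_csv("pre_post_assessment.csv")    # learning delta per participant
follow_up = pd.read_csv("thirty_day_follow_up.csv")  # behavior signal at thirty days

# One record per participant: reaction, learning delta, and follow-up side by side.
evaluation = (
    feedback
    .merge(pre_post, on="participant_id", how="left")
    .merge(follow_up, on="participant_id", how="left")
)

# Columns below are illustrative; without the shared identifier, each frame above
# stays a separate, unconnectable report.
print(evaluation[["participant_id", "relevance_rating", "score_delta", "applied_on_job"]].head())
```

The merge is trivial when the identifier exists at collection; when the instruments were collected anonymously, no amount of downstream analysis recovers it.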

05 · INTENT

Ask about application, not satisfaction

Intent to apply predicts behavior change.

Satisfaction averages tell you participants did not hate the session. Intent-to-apply questions tell you which content has a chance of showing up in the work. Ask which work moment will use what was covered, what would prevent the application, and how confident the participant is in applying it. These questions also seed the thirty-day follow-up: you can ask whether the named work moment actually happened.


WHY IT MATTERS: the application question is the only feedback question that predicts Level 3 behavior.
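
As a rough illustration of how the intent answer seeds the follow-up, the sketch below builds a personalized thirty-day question from the work moment a participant named at session end; the field names and wording are assumptions, not a prescribed format.

```python
# Minimal sketch: turn each participant's named work moment into a personalized
# thirty-day follow-up question. Field names and wording are illustrative.

def follow_up_question(named_moment: str) -> str:
    """Build the thirty-day prompt from the intent answer captured at session end."""
    if not named_moment.strip():
        # No moment was named; fall back to a generic behavior question.
        return "In the last thirty days, did you apply anything from the session in your work?"
    return (
        f'At the end of the session you said you would use the content in: "{named_moment}". '
        "Did that moment happen, and did you apply what was covered?"
    )

responses = [
    {"participant_id": "P-014", "named_moment": "our quarterly budget review next week"},
    {"participant_id": "P-015", "named_moment": ""},
]

for r in responses:
    print(r["participant_id"], "->", follow_up_question(r["named_moment"]))
```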

06 · USE

Loop the result back into the next session

A survey nobody reads is a survey nobody fills out.

Participants notice when their feedback shows up in the next session. A short note at the start of the second session naming what changed based on the first cohort's feedback raises the response rate of the second feedback survey. Programs that collect feedback and never reference it train their participants to skip the survey. Use the feedback or stop running it.


WHY IT MATTERS: a survey that clearly affects the next session raises the response rate by 15 to 25 points.

DESIGN CHOICES

Six choices that decide if the feedback survey works

Most feedback surveys fail at one of six choices made before any participant fills out the first field. Each row below names the choice, the failure mode that survey teams fall into, the working version, and what the choice controls downstream. The choices compound: a wrong answer at row one makes the answer at row six harder to recover.

The choice
Broken way
Working way
What this decides
Where the survey is delivered
In the room or in the inbox.
BROKEN
Email link sent two hours after the session ends. Participants are home, on a phone, scrolling past it. Response lands at 30 percent within forty-eight hours and never recovers.
WORKING
QR code on the slide; in-room link on the projector; mobile-optimized form. Participants finish before standing up. Response lands at 85 to 95 percent.
The response rate for every other measurement that follows. The room is the only window.
How participants are identified
A real ID or "you can leave name blank."
BROKEN
Anonymous responses. The pre-survey, the feedback survey, and the thirty-day follow-up cannot be linked to the same participant. Three datasets, no joins, three reports that disagree.
WORKING
Personal link that pre-fills the participant's identifier. Reaction at session end sits next to the same person's pre/post delta and follow-up response in one record.
Whether the program can answer the cross-instrument question ever. Anonymous answers stop here.
Question count
Five to ten or fifteen plus.
BROKEN
Fifteen to twenty questions to "be thorough." Participants answer the first eight with attention, then click straight fives on the rest. Average completion time hides a partial-attention problem.
WORKING
Six to ten questions, each tied to a decision. Two minutes to finish. The data quality on the last item matches the data quality on the first.
Whether the answers at the end are real or noise. Drop-off is invisible until you check.
Open-ended pairing
Numbers alone or numbers with the why.
BROKEN
Eight Likert ratings, no open-ended prompt. The 4.2 average looks fine; nobody knows what produced it. Two cohorts with the same average had different problems and the report cannot tell.
WORKING
Every rating paired with one short open-ended prompt: "what produced this rating." The why arrives next to the score. Decisions become possible.
Whether the report can recommend a specific change for the next session. Numbers alone cannot.
Application question
Satisfaction or intent.
BROKEN
The whole survey lives at "did you like it." Average satisfaction tells you the room did not hate the session. The funder's question is whether anyone applied anything.
WORKING
"Which moment in your work this week will use what you learned" plus "what would prevent you from applying it." Both seed the thirty-day follow-up question set.
Whether reaction can predict behavior. Satisfaction does not; named application sometimes does.
What happens to the result
Filed or fed back.
BROKEN
Results live in a folder nobody opens. The next cohort gets the same session. Participants notice that their feedback never produced a change and stop responding.
WORKING
First slide of the next session names two changes made because of the previous cohort's feedback. Response rate climbs in the next survey.
Whether the program improves over cohorts. Filed feedback does not improve anything.
COMPOUNDING EFFECT

The first row controls every row that follows. A survey emailed two hours after the session lands at 30 percent response, which makes the identity question moot, which kills the cross-instrument view, which leaves the program with three disconnected reports. The choice that looks operational at row one decides whether the evaluation is possible at row six.
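
One way to satisfy the first two rows at once is to hand each participant a link that already knows who they are. The sketch below generates per-participant URLs that could sit behind an in-room QR code; the base URL and query parameters are placeholders, not any specific product's endpoint.

```python
from urllib.parse import urlencode

# Illustrative only: the base URL and parameter names are placeholders, not a real
# endpoint. The point is that each link carries the participant's identifier, so the
# in-room QR code or short link delivers an already-identified response.

BASE_URL = "https://surveys.example.org/session-feedback"

def personal_link(participant_id: str, session_id: str) -> str:
    return f"{BASE_URL}?{urlencode({'pid': participant_id, 'session': session_id})}"

roster = ["P-001", "P-002", "P-003"]
for pid in roster:
    print(personal_link(pid, "cohort7-session4"))
```

Any form tool that accepts pre-filled URL parameters can use the same pattern; what matters is that the identifier arrives with the response instead of being reconstructed afterward.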

A WORKED EXAMPLE

A workforce training cohort that connected feedback to follow-up

A 240-participant workforce training program runs an eight-week cohort. Four sessions per week, ending with a credential. Funder reporting needs end-of-program reaction, learning delta, and ninety-day employment outcomes. The feedback survey is the smallest instrument in the design and the one that decides whether anything else can be reported.

We were running feedback after every session and we had stacks of it. Average satisfaction stayed at 4.3. We could not tell anyone what to change. The funder kept asking whether anyone was applying the content on the job, and our feedback survey did not have a single question that connected to that. We finally rewrote the instrument to cap at seven questions, pair every rating with one short why, and ask which work moment the participant would use the content in. When we sent the thirty-day follow-up against that redesigned survey, the response rate held at 71 percent because each participant had named the moment themselves.

Workforce program lead, mid-cohort cycle
What the redesigned feedback survey holds
CLOSED-ENDED
Five Likert ratings
Relevance to role, clarity of delivery, pace, usefulness of the most-covered topic, confidence in applying. Each on a 1 to 5 scale.
paired at collection
OPEN-ENDED
Two text prompts
"Which moment in your work this week will use what you learned today" and "what would prevent you from applying it." Two lines each.
SOPACT SENSE PRODUCES
A connected reaction-to-application record
Reaction tied to identity
Each participant's session-end ratings join the same record holding their pre-survey baseline and demographic context.
Application moment captured at end
The named work moment becomes the seed for the thirty-day follow-up, which asks whether the moment happened.
Open-ended responses coded once
The "what would prevent you from applying it" responses cluster into themes that feed the next session's design without manual coding.
Cohort-to-cohort comparison runs as a join
Reaction patterns across cohorts share a common ID schema. Drift in pacing or relevance shows up as a real signal, not a hand-aligned chart.
WHY TRADITIONAL TOOLS FAIL
A reaction silo nobody can connect
Anonymous responses by default
Generic form tools collect reaction without a participant identifier. Pre/post and follow-up sit in different files with no clean join.
Open-ended buried in a CSV
Two hundred forty open-ended responses sit in a column nobody reads. Manual coding takes a week per cohort and rarely happens.
Email blast to "all participants"
The follow-up survey is a generic email to everyone with the same link. Response rate drops below 25 percent because the link does not know who the recipient is.
Reporting is a manual rebuild
Six weeks of analyst time per cohort to align reaction data with pre/post and ninety-day outcomes. Reports are stale before they ship.

The feedback survey did not change in shape: seven questions, two minutes, in the room at session end. What changed was the layer underneath. Sopact Sense issued the participant identifier at enrollment and inherited it into every subsequent instrument, so reaction at the end of session four sat in the same record as the pre-survey baseline and the ninety-day employment check. The cohort report shipped in two days instead of six weeks because the joins were already done.
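
For a sense of why the reporting cycle collapses once the joins exist at collection, here is a minimal sketch of the cohort rollup, assuming a single joined export with illustrative column names: per-session ratings, pre/post scores, and a ninety-day employment flag per participant.

```python
import pandas as pd

# Hypothetical joined export: one row per participant per session, carrying the shared
# identifier plus pre/post scores and a ninety-day employment flag. Column names are
# illustrative.
records = pd.read_csv("cohort_joined_records.csv")
per_participant = records.drop_duplicates("participant_id")

report = {
    # Reaction trend across the program: mean relevance rating per session.
    "reaction_by_session": records.groupby("session_number")["relevance_rating"].mean(),
    # Learning: average post-minus-pre delta across the cohort.
    "mean_learning_delta": (per_participant["post_score"] - per_participant["pre_score"]).mean(),
    # Results: share of participants employed at the ninety-day check.
    "employed_at_90_days": per_participant["employed_90d"].mean(),
}

for name, value in report.items():
    print(name)
    print(value)
```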

PROGRAM CONTEXTS

Three feedback-survey shapes from three program contexts

A workforce training cohort, a foundation-funded grantee training series, and an internal nonprofit professional development program all run feedback surveys. The wording is similar; the structure underneath the survey differs because the unit of analysis differs. Each block below names the shape, what tends to break, and what a working version looks like.

01

Workforce training cohort

Multi-week cohort. Per-session feedback rolling up to a credential.

Workforce training cohorts run a feedback survey at the end of every session. The same participant fills out the survey four to twelve times over the course of the program. The feedback signal is rich because the same person rates the same content across weeks; the noise is high because participants click straight through after the third repetition unless the survey is short.

What breaks: a fifteen-question survey repeated six times produces ninety items per participant, which produces drop-off, which produces missing data on the items that mattered. The pattern looks like an engagement problem; it is a survey-design problem.

What works: a five-question survey at session end with one rotating open-ended prompt. Total participant load stays under ten minutes across the program. Reaction trend across sessions becomes legible because each session's data is real, not rushed. Pair the per-session feedback with a single longer end-of-program instrument and a thirty-day employment follow-up.

A SPECIFIC SHAPE
An eight-week cohort of 240 participants. Five-question feedback survey at the end of each session, ninety-second completion target. End-of-program instrument at twelve questions. Ninety-day employment join through the same participant identifier. Per-cohort reporting cycle moves from six weeks to two days.
02

Foundation grantee training series

Cross-grantee series. Comparable feedback across organizations.

A foundation runs a training series for grantees: financial management, evaluation literacy, board governance. Twelve to twenty-five grantee organizations send participants. Each workshop runs once. Feedback has to be comparable across workshops and across grantee organizations because the foundation wants to know which workshops earned their place in next year's calendar.

What breaks: each workshop facilitator runs their own form. The wording drifts. The rating scales drift. Cross-workshop comparison becomes a manual recoding exercise that the program officer abandons after three rounds. Grantee voice never makes it up to the foundation's portfolio review.

What works: a foundation-owned feedback template that every facilitator uses, with two facilitator-specific items left open for customization. Identical core wording across workshops makes the cross-workshop comparison automatic. Grantee responses tag both the workshop and the grantee organization, so the foundation can see which workshops are landing differently for which kinds of grantees.

A SPECIFIC SHAPE
A learning series of eight workshops over a year. Eighteen grantee organizations participate. A six-question core instrument with two facilitator-customizable items. Cross-workshop reaction comparison ships in the foundation's annual learning memo without an analyst rebuild.
03

Professional development inside a nonprofit

Internal L&D. Manager handoff and intent to apply.

Internal professional development at a mid-size nonprofit runs sessions for staff: writing for funders, conflict resolution, data literacy, manager skills. The participant returns to a known job context with a known manager. The feedback survey can ask sharper questions than an external program survey because the answers feed a real operational decision: who else attends, what gets repeated, what gets retired.

What breaks: PD feedback runs through a generic form tool with no connection to the staff record system. Reaction sits in one tool, performance reviews in another, the L&D budget in a third. The question of whether PD is producing change goes unanswered for years.

What works: PD feedback that pairs reaction with intent to apply and a manager handoff field. The participant names a specific work moment that will use the content; the manager gets a copy. The intent record becomes the seed for a thirty-day check that asks both participant and manager whether the moment happened.

A SPECIFIC SHAPE
A 110-person nonprofit running quarterly PD sessions. A seven-question feedback survey including a manager-handoff field and a named work moment. Thirty-day check sent to both participant and manager. PD budget allocation in the next year reflects what worked, not what was scheduled.
A NOTE ON FEEDBACK SURVEY TOOLS
Google Forms · SurveyMonkey · Microsoft Forms · Typeform · Sopact Sense

Generic form tools collect a feedback survey well. The form renders, the QR code scans, the responses arrive in a spreadsheet. For programs that run only the feedback survey and stop, the gap is small. The architectural gap shows up when the same participant has to answer a pre-survey, a feedback survey, a learning check, and a thirty-day follow-up. None of those tools issue a participant identifier by default; the four instruments live in four files with no clean way to connect them to the same person.

Sopact Sense issues the identifier at first contact and inherits it across every instrument. The feedback survey at session end sits next to the same participant's pre-survey baseline and ninety-day follow-up in one record. The reporting that takes weeks of analyst time elsewhere runs in two days because the joins were already done at collection.

FAQ

Training feedback survey questions, answered

Fourteen questions covering what the feedback survey captures, what it cannot, and where it sits inside a real evaluation. Each answer is mirrored verbatim in the page schema so search engines and AI Overviews have the same wording readers see.

Q.01

What is a training feedback survey?

A training feedback survey is a short questionnaire given to participants right after a training session to capture reaction. Five to ten questions is typical. It covers how useful the content felt, how clear the delivery was, and what the participant would change. It measures Kirkpatrick Level 1 only. It cannot prove learning, behavior change, or program results. Pair it with a learning check, a behavior follow-up, or a results review when the question is whether the program worked.

Q.02

What should a training feedback survey include?

Six items cover most needs: relevance to the participant's role, clarity of delivery, pace, the most useful piece of content, what to drop or shorten, and intent to apply. Keep each item to one idea. Mix a five-point rating with one open-ended counterpart that asks why the rating landed where it did. Add one identity question that ties the response to the same person across pre-survey, end-of-session feedback, and follow-up so reaction sits next to the rest of the data.

Q.03

What is the difference between a training feedback survey and a training evaluation?

Feedback measures reaction. Evaluation measures change. A feedback survey runs at session end and captures how participants felt about the experience. A training evaluation tracks the same participants from pre-survey through post-survey and follow-up to measure learning, behavior change, and results. A feedback survey is one component of an evaluation. Running feedback alone produces reaction data but no evidence of learning or transfer.

Q.04

What questions should I ask after a training session?

Ask what the participant will use first, what they would drop, what felt unclear, and how confident they feel applying the content. The strongest feedback survey questions for training tie each item to a decision: what to change next time, what to keep, who to follow up with. Avoid generic satisfaction. A four-point or five-point rating works for clarity and usefulness; an open-ended prompt works for the rest. Cap the survey at six to ten questions to protect the response rate.

Q.05

How long should a training feedback survey be?

Five to ten questions, completable in two minutes. Longer surveys produce drop-off without producing better data. Two-minute completion is the threshold most participants tolerate at the end of a session when they want to leave. If you need more depth, split the instrument: feedback at session end, learning check the next day, follow-up at thirty days. Three short instruments outperform one long one because each one collects in the moment its question is alive.

Q.06

What is a good response rate for a training feedback survey?

Surveys delivered in the room at session end commonly land between 80 and 95 percent. Surveys emailed after the participant has left drop to 20 to 40 percent within forty-eight hours. Rates climb back when the link is personalized to the participant's record rather than a generic blast. The biggest single lever is timing: capture before the participant leaves the room or closes the tab.

Q.07

What should a course feedback survey include?

A course feedback survey runs at the end of a multi-week course. It covers overall course quality, the most and least useful modules, instructor delivery, pacing across the course, and intent to apply. Pair the end-of-course survey with the per-module checks already collected during the course so the end-of-course view can reference what was already said. Keep the end-of-course instrument at eight to twelve questions even though the course is longer than a session.

Q.08

What is the difference between a workshop feedback survey and a session feedback survey?

Structurally they are the same instrument. The difference is timing. A session feedback survey runs at the end of a single one-hour or two-hour session inside a larger program. A workshop feedback survey runs at the end of a standalone half-day or one-day event. Workshop surveys can ask a slightly broader question about the day's arc because the participant has finished a contained experience. Session surveys focus on the single topic covered.

Q.09

How do I write a professional development feedback survey?

Professional development feedback can ask a sharper intent-to-apply question because the participant is returning to a known job context. Ask which specific work moment the new content applies to, what would prevent the application, and who else on the team should hear the same content. The point of a PD feedback survey is to feed the next development cycle, not to score the session. Pair the rating with one open-ended prompt and one named-person prompt for the manager handoff.

Q.10

Can I use Google Forms or SurveyMonkey to run a training feedback survey?

Both can collect responses. Both store the response without a persistent participant identifier by default. Reaction at session end then sits in one tool, learning check in another, and follow-up in a third, with no clean way to connect the three to the same person. The tool is fine for a single instrument; the gap shows up at evaluation time when the cross-instrument view has to be assembled by hand. A feedback survey alone is a fit; a feedback survey plus learning plus follow-up needs a shared identity layer.

Q.11

How does Sopact handle training feedback surveys?

Sopact Sense issues a participant identifier at first contact and inherits it across every subsequent instrument. Feedback at session end, learning check the next day, follow-up at thirty days, and any open-ended response inside any of those instruments are linked to the same record. The end-of-session reaction sits next to the post-survey learning delta and the thirty-day behavior signal in one view. The tool absorbs the same feedback questions a Google Form would carry; the difference is what happens to the answer after collection.

Q.12

What is a training feedback survey template?

A starter set of six to ten questions covering relevance, clarity, pace, most useful content, what to change, and intent to apply. A template is the starting point; the actual survey adds a participant identifier, ties each item to the decision it feeds, and pairs ratings with one open-ended counterpart. A template that ships as a list of questions without those three connections is a list, not a survey. Treat the template as scaffolding and customize before sending.

Q.13

When should I run a training feedback survey?

At session end, before the participant leaves the room. The reaction window closes inside the first hour after a session. Memory of specific content, of what worked and what did not, is sharpest in the moment. Surveys delivered later capture an averaged impression rather than a specific reaction. For a multi-day workshop, run a brief check at the end of each day and a longer instrument at the close of the workshop.

Q.14

Is a training feedback survey enough to prove a training program works?

No. A feedback survey captures reaction at session end. It cannot tell you whether the participant learned the content, whether they applied it on the job, or whether the program produced the result the funder or the board cared about. To answer those questions you need a learning check, a behavior follow-up at thirty to ninety days, and a connection to the same participant across all three instruments. The feedback survey is one piece of that picture, not the whole picture.

A WORKING SESSION

Bring your feedback survey. See it connect.

Most teams already have a feedback survey running. The question is what happens to the data after collection. Bring the form you currently send, the question bank you copy from, or the funder report template you have to fill in. The session walks through how the same participant's reaction, learning delta, and follow-up sit in one record. No procurement decision required.

FORMAT
Forty-five minutes, screen-share, two of your forms in front of us.
WHAT TO BRING
Your current feedback survey and any pre-survey or follow-up form.
WHAT YOU LEAVE WITH
A connected version of your instrument set, with the joins drawn out.