
Evaluation of Teaching

“Because teaching without feedback is a bit like driving blindfolded. You might arrive somewhere — just not necessarily where you intended.”

📚 For Trainees, Trainers & TPDs ⚡ High-impact learning in minutes 💡 Knowledge not found elsewhere

Last updated: April 2026

📥 Downloads

Handouts, evaluation forms, and teaching extras — ready when you are. Perfect for HDR planning, trainer development, and real-world feedback sessions.


🎯 Why Evaluate?

Evaluation is not just a box to tick. It is the engine of improvement. Without it, a teaching session might feel great but leave zero lasting impact — and nobody knows why.

✅ With evaluation you can...

  • Identify what works and keep doing it
  • Spot what isn't working and fix it
  • Understand your learners' needs better
  • Track growth and improvement over time
  • Show evidence of quality teaching
  • Build confidence in your teaching skills
  • Close the feedback loop with your learners

❌ Without evaluation...

  • Good habits stay invisible — and die out
  • Poor habits repeat, unchallenged
  • Learner confusion goes unnoticed
  • No data to improve the programme
  • Teaching stays stagnant
  • You're guessing at impact

It is difficult — almost impossible — to evaluate teaching effectively unless you have first set clear learning outcomes. — Bradford VTS

💡 Practical insight

Before the session even starts, ask yourself: "What do I want participants to be able to do or know at the end of this session that they couldn't before?" Your evaluation should measure exactly that.

What are we actually evaluating?

Evaluation in medical education can target several different things. Being clear about what you're evaluating helps you pick the right tool.

The session
Teaching Session Quality
Was it well organised, engaging, pitched at the right level? Did it run to time? Were the materials useful?
The teacher
Facilitator / Presenter Skills
Was the teacher clear, engaging, knowledgeable? Did they hold interest? Did they encourage participation?
The learning
Learner Outcomes
What did participants actually learn? Can they apply it? Has their thinking shifted?
The programme
Overall Training Programme
How well does the whole HDR / VTS programme meet trainees' needs over the course of the year?
📊 Kirkpatrick's Four Levels of Evaluation

Professor Donald Kirkpatrick developed his famous model of training evaluation in the late 1950s. It remains the most widely used framework for evaluating educational programmes in healthcare and beyond. It has since been updated into the New World Kirkpatrick Model by his family, but the core four levels remain unchanged.

The model asks: how well is your training programme actually delivering on what it set out to do? Kirkpatrick's genius was recognising that all four levels matter — and that levels 3 and 4 are the ones trainers most often ignore.

[Pyramid diagram, increasing impact from bottom to top: Level 1 Reaction → Level 2 Learning (knowledge / skills acquired?) → Level 3 Behaviour (did practice actually change?) → Level 4 Results (what changed for patients / the scheme?)]

Higher levels = harder to measure but more meaningful. Most GP evaluations only reach Levels 1–2.

Level 1 · 😊 Reaction
How did participants feel about the session? Did they find it engaging, well-organised, and relevant?
🩺 In GP training: Post-session feedback forms, verbal "round-the-room" feedback, emoji polls.
Level 2 · 🧠 Learning
What did they actually learn? Can they now do something they couldn't before?
🩺 In GP training: Pre/post quizzes, observed skills practice, knowledge check questions.
Level 3 · 🔄 Behaviour
Did they change how they work? Is the learning actually being used in day-to-day practice?
🩺 In GP training: Reviewing COTs/CbDs after a session; trainee self-report; MSF changes over time.
Level 4 · 📈 Results
What is the real-world outcome? Did training improve patient care, pass rates, or clinical standards?
🩺 In GP training: SCA pass rates, ARCP outcomes, patient feedback scores, patient safety incidents.

How far do most programmes evaluate?

Research consistently shows that most training programmes stop at Level 1 or 2. Yet the most important levels — Behaviour and Results — are the hardest and most expensive to measure. Here's a practical summary:

Level · What it measures · Typical method · How hard to do · How often done in GP training
1 — Reaction · Satisfaction / experience · Feedback form, verbal feedback · Easy · Very common ✅
2 — Learning · Knowledge / skills acquired · Pre/post quiz, observed practice · Moderate · Sometimes ✅
3 — Behaviour · Practice change after training · WPBA review, trainer observation · Hard · Rarely ⚠️
4 — Results · Impact on outcomes · Pass rates, safety data, patient outcomes · Very hard · Very rarely ⚠️
💡 Insight — The Level 3 Gap
The biggest missed opportunity in GP training evaluation is Level 3 — Behaviour. A trainee might love a session on shared decision-making (Level 1 = great), understand it perfectly (Level 2 = good), but never actually change how they consult (Level 3 = where real learning dies). Asking trainees to log one thing they'll do differently — and following up a month later — is a simple and powerful way to close this gap.

The New World Kirkpatrick Model

James and Wendy Kirkpatrick updated the original model around 2009–2010 to make it more applicable to modern organisational learning. The core four levels remain the same, but the emphasis changes:

  • The new model recommends working backwards from Level 4 — start with the results you want, then design evaluation (and training) to achieve them.
  • Level 3 is now given extra prominence — creating the right conditions for behaviour transfer is seen as the most critical success factor.
  • The model distinguishes between "leading indicators" (early signs behaviour is changing) and "lagging indicators" (final results).
  • Manager and supervisor support is highlighted as essential for Level 3 success — in GP terms, this means trainers and TPDs actively reinforcing the learning after the HDR session.

For most GP trainers, the practical message is: don't just ask whether trainees enjoyed the session — ask whether it changed anything.

Limitations of Kirkpatrick's Model

Kirkpatrick's model was originally designed for industrial training — not the complex, multi-layered world of medical education. Honest educators should know its limits:

  • It assumes levels are linked: In reality, a brilliant reaction (Level 1) does not guarantee learning (Level 2), and learning doesn't automatically produce behaviour change (Level 3).
  • It ignores context: The workplace environment matters enormously for Level 3. A trainee may know what to do but be working in a culture that doesn't support it.
  • It overlooks the teacher: Kirkpatrick's model focuses on the learner, but says nothing about evaluating the teacher's development.
  • Soft outcomes are hard to measure: How do you quantify a trainee becoming more compassionate? The model struggles with complex humanistic competencies.
  • Level 4 is often impractical: Measuring whether a session on clinical reasoning ultimately improved patient safety is methodologically very difficult.

This doesn't mean Kirkpatrick is useless — it remains the most practical framework available. The key is to use it thoughtfully and be realistic about what you can measure.

⚡ Quick Summary — If You Only Read One Thing
📋 The 60-Second Cheat Sheet
Why bother? · You can't improve what you can't measure. Evaluation is how good teaching gets better.
Kirkpatrick Level 1 · Did they enjoy it? (Reaction)
Kirkpatrick Level 2 · Did they learn anything? (Learning)
Kirkpatrick Level 3 · Did they change their practice? (Behaviour)
Kirkpatrick Level 4 · Did it improve patient care or pass rates? (Results)
The golden rule · Without clear learning outcomes first, evaluation is almost impossible.
Don't just use forms · Try verbal feedback, voting systems, digital polls, sticky notes, exit cards, and more.
What to ask · What worked? What didn't? What would you change? Was it useful?
Trainer insight · Evaluation is a gift — even negative feedback is data that helps you grow.
🛠 Ways to Evaluate — Beyond the Form

Most people think "evaluation" means handing out a form at the end of a session. But there are many more creative, engaging, and often more useful ways to evaluate teaching. Don't limit yourself.

The best evaluation method is the one your learners actually engage with — not the one sitting unanswered in the recycling bin. — Practical wisdom for GP trainers
Classic
📋 The Feedback Form
A short written questionnaire completed at the end of the session. Keeps a record, allows comparison over time. Works best when it's short (under 10 questions), specific, and anonymous.
Quick
🗣 Verbal "Round the Room"
Ask each person for one word or one sentence about the session. Fast, democratic, and creates a culture of honest feedback. Works well for small HDR groups.
Fun
🧍 Human Likert Scale
Ask a question (e.g. "How useful was this?") and have participants physically move to a position — one end of the room = 0 (not at all), the other = 5 (extremely). Instantly visual and gets people moving.
Quick
📝 Exit Cards (One-Minute Paper)
Give participants a card or slip of paper. Ask two questions: (1) What was the most useful thing you learned today? (2) What question do you still have? Brilliant for identifying knowledge gaps.
Visual
🚦 Traffic Light Cards
Used during (not just after) the session. Each participant has red/amber/green cards. Green = "I'm following." Amber = "I'm not sure." Red = "I'm lost." The teacher can see in real time where the group is.
Digital
📱 Digital Polls (Mentimeter / Slido)
Participants vote via their phones in real time. Results appear as word clouds, bar charts, or rankings. Works brilliantly for HDR groups. Especially useful when honest feedback feels awkward face-to-face.
Creative
🗒 Post-It Wall (Plus/Delta)
Put two columns on a flipchart: "+" (what worked well) and "△" (what to change). Participants write anonymously on Post-Its. Simple, visual, and immediate. Encourages honest critique without confrontation.
Reflection
🧠 Pre/Post Knowledge Check
Ask the same 3–5 questions before and after the session. Demonstrates actual learning gain at Kirkpatrick Level 2. Great for clinical knowledge sessions. Doesn't need to be formal — can be a quick show of hands. (A small scoring sketch follows this list.)
Digital
🖥 Online Survey (MS Forms / Google Forms)
Send a short link after the session. Responses collected anonymously over a few days. Allows Likert scales, open text, and ratings. Easy to analyse and keep records. Ideal for larger VTS programmes.
Structured
🎯 Learner-Designed Feedback
The most powerful evaluation? Ask trainees to design the evaluation form themselves at the start of a new programme cycle. They'll ask what genuinely matters to them — which is often more useful than what trainers assume matters.
Longitudinal
🔄 Behaviour Follow-Up (Level 3)
One month after a session, ask: "Have you used anything from that session in your clinical work?" Can be a brief email, a WhatsApp poll, or raised as a check-in at the next HDR. Captures Kirkpatrick Level 3.
Peer
👥 Peer Observation of Teaching
Invite a colleague to observe a session and give structured feedback on your teaching. A formal tool for trainer development. Different from learner feedback — assesses teaching process, not just learner experience.
💡 A practical tip from GP educators
If you always use the same form, people stop engaging with it. Varying your evaluation method keeps learners thinking — and keeps the feedback honest. Even changing one question per session makes a difference.
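
If you want to turn a pre/post knowledge check into a number you can track, one widely used summary from education research (not specific to Bradford VTS) is the normalised gain: the share of the available headroom a learner actually gained, g = (post − pre) / (max − pre). A minimal Python sketch with invented quiz scores (nothing here is real session data):

def normalised_gain(pre, post, max_score):
    # Hake's normalised gain: share of the available headroom actually gained
    if max_score == pre:
        return 0.0  # already at ceiling before the session
    return (post - pre) / (max_score - pre)

# Illustrative pre/post scores for five trainees on a 5-mark quiz (hypothetical)
pre_scores = [2, 3, 1, 4, 2]
post_scores = [4, 4, 3, 5, 4]

gains = [normalised_gain(p, q, 5) for p, q in zip(pre_scores, post_scores)]
print(f"Mean normalised gain: {sum(gains) / len(gains):.2f}")  # 0.67 for these scores

A mean gain tracked across sessions turns a quick show of hands into a Level 2 trend line rather than a one-off impression.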
πŸ“ Designing a Good Evaluation Form

Most evaluation forms are too long, too generic, and forgotten the moment they're collected. Here's how to design one that actually gets useful responses β€” and that you'll actually use.

Two categories of questions to include

Think of your evaluation form in two distinct parts:

📊 Part 1 — Performance Feedback

Get feedback on the actual quality of the session and the teaching.

  • Clarity — Was the session clear and easy to follow?
  • Organisation — Did it flow logically? Was it well-structured?
  • Engagement — Did it hold your interest throughout?
  • Relevance / Usefulness — Was it useful to your learning and practice?
  • Materials quality — Were slides, handouts, and resources appealing and helpful?
🔧 Part 2 — Improvement Feedback

Get actionable suggestions for making the session better.

  • Three things that were most useful — so these can be kept and repeated
  • Anything that was not useful or confusing — so it can be removed or improved
  • One thing you'll do differently in practice — captures intended behaviour change (Level 3 signal)
  • Any other suggestions — open-ended, catches what you didn't think to ask

What makes a good evaluation question?

  • ❌ "Was the session good?" · Why it fails: too vague — a yes/no gives you nothing useful · ✅ Better: "What was the most valuable thing you got from today's session?"
  • ❌ "Did you enjoy it?" · Why it fails: enjoyment ≠ learning; it measures Level 1 incompletely · ✅ Better: "How will you use what you learned in your next clinic?"
  • ❌ "Rate the session 1–10" · Why it fails: no context — a 7 could mean many different things · ✅ Better: "Rate the session 1–5 for relevance to your current stage of training, and explain your rating."
  • ❌ "Any comments?" · Why it fails: too open — most people write nothing · ✅ Better: "What one thing would you change about this session?"
  • ❌ A 20-question form · Why it fails: participants give up halfway through · ✅ Better: keep it to 5–7 focused questions maximum
ℹ️ The 5-Question Form That Actually Works

Research on feedback forms consistently shows diminishing returns after about 5–7 questions. If you can only ask five things, ask these:

  1. What worked well in this session? (open)
  2. What one thing would you change? (open)
  3. How relevant was this to your learning needs? (1–5 scale)
  4. What is one thing you'll apply in practice after today? (open)
  5. Any other comments? (optional open)

The Right Order — Plan Your Evaluation Before Your Session

Many teachers design their session first and bolt on evaluation at the end. This is the wrong order. Here's the right approach:

1. Define your learning outcomes · What should participants know or be able to do by the end? Be specific.
2. Choose your Kirkpatrick level · Are you measuring reaction, learning, behaviour, or results? Be realistic about what you can achieve.
3. Design your evaluation method · Pick the right tool for that level — form, quiz, verbal, digital, observation, etc.
4. Run the session · Teach — with your evaluation method already planned and ready to deploy.
5. Collect and analyse feedback · Look for patterns, not individual comments. What themes emerge? What surprised you? (A small analysis sketch follows these steps.)
6. Close the loop — tell learners what you changed · At the next session, briefly say "Based on your feedback, we've changed X." This shows evaluation is meaningful, not just performative.
⚠️ Common Pitfalls — What Teachers Get Wrong

Even experienced teachers make predictable mistakes with evaluation. Recognising them is the first step to avoiding them.

1. No learning outcomes set first

This is the single most common mistake. If you haven't defined what success looks like before the session, you can't meaningfully measure it afterwards. "Did they enjoy it?" is not the same as "Did they achieve the learning objectives?" Always set your outcomes first — even if they're simple and informal.

2. The same generic form, year after year

Generic forms become invisible. Trainees stop engaging, start ticking boxes, and add nothing useful. If your form looks the same in year three as it did in year one, it's time to redesign it. Involve trainees in the redesign — they'll tell you what they actually want to be asked.

3. Collecting feedback but never acting on it

This erodes trust. If trainees feel that their feedback disappears into a void, they stop giving honest responses. The most important step after collecting evaluation data is to close the feedback loop — briefly acknowledge what you heard, and say what (if anything) you've changed as a result. Even "We heard your feedback about pacing, and we've adjusted the programme" is enough.

4. Confusing popularity with effectiveness

A popular session is not always an effective one. Trainees can love a session that is entertaining but teaches nothing durable. Similarly, a challenging session — one that pushes thinking and generates discomfort — might get lower satisfaction scores but produce more genuine learning. Kirkpatrick Level 1 is important, but it should never be your only measure.

5. Squeezing evaluation into the final 30 seconds

This is the classic "I know this is quick but..." moment — and the resulting data is usually shallow and rushed. If you want useful feedback, build it into the session design. Allow 5–7 minutes at the end. Or do it digitally so people can respond in their own time, with more thought.

6. Dismissing the outlier

It's tempting to dismiss a very negative response as "one person who just had a bad day." Sometimes that's true. But sometimes a single piece of strongly negative feedback is pointing to something real that the majority haven't articulated. Read outliers carefully. Consider whether they might be identifying something genuine.

💡 Insider Pearls — Real-World Wisdom
🎯 What experienced educators have learned

The insights below come from patterns seen across many GP training schemes, trainer development discussions, and educational supervisor conversations.

🔍 The Most Revealing Question
"What one thing will you do differently in your next consultation because of today?" — This single question captures Level 3 intent better than any other. It's future-focused, behavioural, and specific.
📊 Numbers vs Words
Likert-scale ratings tell you what. Open text tells you why. You need both. A score of 3/5 with no comment is almost useless. A score of 3/5 with "the material moved too fast for the amount of new information" is gold.
⏱ Timing Is Everything
Feedback collected immediately after a session reflects emotional state. Feedback collected a week later reflects durability of learning. Both are valuable. Both tell you different things.
👀 Anonymity Changes Honesty
Trainees who know the TPD will read their form give safer, kinder feedback. Anonymous forms give you the truth. Design accordingly — especially if you're evaluating sessions where power dynamics exist.
🎓 Evaluation Is a Teaching Tool
Asking trainees to reflect on what they've learned — even through a feedback form — is itself a learning activity. It forces recall, consolidation, and self-assessment. Good evaluation serves the learner, not just the teacher.
🔄 The Cycle Never Ends
Evaluation → Analysis → Change → Re-evaluate. This cycle is the engine of educational quality improvement. One-off evaluations are useful. Sustained evaluation cycles transform programmes.
🗣 From the GP Training Community — Real Voices, Real Tips

The insights below come from recurring patterns shared by GP trainees, trainers, and educators across UK training communities — online forums, deanery discussions, and peer learning groups. Every point here aligns with official RCGP guidance on good educational practice. Nothing here is gossip; it is collected wisdom, translated into clear teaching points.

💬
From Trainees Across the UK

"The best HDR sessions I've attended didn't just teach me something. They made me want to go back and check something, or try something differently in clinic. The ones that stayed with me were the ones where the teacher actually asked us what we were going to do next."

— A recurring theme from trainee forums

🎓
From GP Trainers

"When I first started using evaluation forms, I expected mostly ticks. What I got was gold. One trainee wrote that she felt the session moved too fast in the second half and she lost the thread. I hadn't noticed. I completely restructured how I pace the second hour after that."

— Shared experience from a UK GP trainer

📋
From TPDs

“We changed our programme by asking trainees to help design it. The sessions they were most passionate about were ones where they'd identified their own learning gap. Ownership changed everything — including engagement with evaluation.”

— Recurring insight from UK TPD discussions

What do GP trainees actually want from session evaluation?

Across UK training communities, trainees consistently say the same things when asked what matters most to them about evaluation. Here is what they want — in their own words, translated into a clear picture.

What trainees want from evaluation:

  • Their feedback to be acted on (30%) — Trainees want to know their input made a difference, not to be ignored.
  • Anonymity when needed (25%) — Power dynamics are real. Honest feedback needs a safe channel.
  • Quick and simple (20%) — Nobody wants a 20-question form after an already long session.
  • Space for open comment (25%) — Numbers alone can't capture what worked and what didn't.

What makes an HDR session memorable? Trainees say...

Across UK GP training communities, the sessions that get the best evaluation scores — and that trainees still talk about months later — share these features. These are not just what trainees say at the time. These are what they remember when reflecting later.

🎯
It felt relevant to real GP work

Sessions linked to real clinical scenarios — not abstract theory — get the best feedback. Trainees remember "that session on heartsink patients" far longer than generic communication skills.

🗣
There was genuine discussion

Interactive sessions score much higher than lecture-only ones. Trainees want to think, not just listen. Even 10 minutes of small group discussion transforms how a session lands.

👂
The teacher listened to them

When teachers ask "what do you already know?" or "what would you actually do in clinic?" — and then adapt the session accordingly — trainees feel valued. That trust shows up in the evaluation.

📝
There was a clear take-home point

Trainees consistently say the sessions they remember best ended with one clear message: "If you forget everything else, remember this." A closing summary is not optional — it is the most important part.

🕐
It was paced well

The most common complaint in trainee evaluations? "It felt rushed in the second half." Pacing matters. Build in time to breathe. Don't sacrifice depth for coverage.

🔄
Something changed after it

The sessions trainees rate highest are ones where the teacher came back at the next session and said "Based on your feedback, I've changed X." This simple act closes the loop and builds trust.

🎓 From UK GP Educators — What Gets Missed

A recurring theme among GP trainers and TPDs who reflect openly on their own practice: we often evaluate whether trainees were happy, but rarely whether they changed anything. The sessions that get 5/5 satisfaction scores are not always the ones that produce measurable behaviour change. The two are related — but they are not the same thing. The most growth-focused educators evaluate both, separately, and at different time points.

The Honest Truth About Evaluation Forms in GP Training

Here is something that experienced GP educators rarely say out loud, but almost all privately acknowledge. Treating it as a known reality — rather than a flaw to hide — helps you design better evaluation systems.

[Honesty spectrum diagram: lowest honesty with a named form completed in person → moderate with an anonymous paper form → highest honesty with an anonymous digital form completed after the event]
😬 What trainees admit privately
  • If the form has their name on it, they will soften any criticism
  • If the TPD is watching them fill it in, they are even less honest
  • Forms handed out at the end of a tiring all-day session get the least thoughtful responses
  • Generic forms get generic answers — specific questions get specific, useful answers
  • If they've seen feedback go nowhere before, they stop engaging entirely
✅ What actually works
  • Anonymous digital forms (Microsoft Forms, Google Forms) — completed after the session, at home
  • Genuinely anonymous forms with no identifiers — trainees need to believe this
  • Short, specific forms with at least one open question
  • Following up at the next session: "Here is what we heard and what we're changing"
  • Involving trainees in designing the evaluation — they feel ownership and take it seriously

What UK Research on GP Training Tells Us

Studies evaluating GP training programmes in the UK have revealed some consistent and important findings. These align with and reinforce what trainers and trainees say in their communities.

📚 Trainees want clinical relevance above all
Research consistently shows that GP trainees value sessions that connect directly to real primary care dilemmas — especially the ones that are clinically complex or emotionally challenging. Abstract theory without clinical grounding gets the lowest evaluations.
🤝 Collaborative programme design works
Training schemes that involve trainees in designing the HDR programme — choosing topics, formats, and teachers — report significantly better engagement and more useful evaluation feedback. Trainees who helped build something evaluate it more thoughtfully.
🔄 Near-peer teaching is valued — but under-used
Research on GP trainees as teachers shows trainees gain enormously from teaching peers. Yet most GPSTs do far less teaching in their final year than in their hospital rotations. Evaluating near-peer sessions — and sharing the results with the trainer — helps this grow.
📉 Evaluation fatigue is real
When trainees fill in the same form every week without seeing any change, response quality drops sharply by mid-year. Schemes that rotate their evaluation methods — and explicitly respond to feedback — maintain higher quality data throughout the year.
🌍 IMGs need specific support here
International Medical Graduates often come from cultures where evaluating your teacher feels uncomfortable — even disrespectful. Actively normalising evaluation as a collaborative and professional activity (not a complaint mechanism) significantly improves their engagement.
🎥 Video review is powerful for trainers
GP trainers who review a recording of their own teaching session — using a structured grid — and then discuss it with a peer report it as one of the most powerful forms of professional development available. Deaneries increasingly ask for evidence of this for trainer accreditation.

📹 Video-Based Teaching Insights — Applied to GP Training

The core educational principles below are drawn from well-established teaching and learning science, applied here directly to the context of UK GP training sessions and HDR evaluation. These are the insights from skilled educators that translate most clearly to GP practice.

End on the take-home message (the recency effect)

Experienced medical educators consistently highlight that learners remember the last thing they hear most vividly. This is known as the recency effect. Yet most HDR sessions end with logistics — "remember to sign the register," "don't forget your portfolio." The last minute should be reserved for one thing: a clear, memorable take-home message.

Ask yourself: If a trainee could only remember one thing from today — what do I want it to be? Then say that last, clearly, and out loud. This is the single highest-leverage change you can make to any teaching session.

Evaluation link: The most impactful evaluation question you can ask immediately after a session is: "What was the single most useful thing from today?" If trainees struggle to answer this, the closing minute did not do its job.

Reflection needs structure

A common frustration shared by UK GP educators: asking trainees to "reflect on today's session in your portfolio" rarely produces meaningful reflection. Reflection has to be structured and prompted. Open-ended "reflect on this" produces generic output — because people don't know what to reflect on.

Better approach: give trainees a specific prompt. For example: "Describe one moment from today's session where your thinking shifted. What shifted? What would you do differently in clinic as a result?"

This prompt produces real Kirkpatrick Level 2–3 evidence. It can be built into the evaluation form as Question 3 or 4, and the answers will be far richer than a 1–5 satisfaction score.

Respect cognitive load

Research on how the brain processes new information shows that every person has a limited mental bandwidth — cognitive load — for absorbing new material. When a teaching session overloads this capacity, learning stops, regardless of how good the content is.

Signs that a session has overloaded cognitive load:

  • Trainees begin to look glazed or stop participating after a certain point
  • Evaluation forms describe feeling "overwhelmed" or "confused in the second half"
  • Lots of information was delivered but trainees can only recall one or two points

In practice: Less is more. Cover fewer concepts, cover them deeper. Build in pauses. Change activity every 15–20 minutes. Use your evaluation form to ask specifically about pacing — if you never ask, you'll never know.

Ask forward-looking questions

Most evaluation forms ask backwards-looking questions: "Was the session well-organised?" "Did you enjoy it?" These give you information about the past. But the most valuable evaluation question is forward-looking:

"What will you do differently in your next consultation because of today?"

This question reaches for Kirkpatrick Level 3 intent. It doesn't measure behaviour change (that happens later), but it measures commitment to change — which is a reliable predictor of whether change will actually happen.

Skilled educators from across healthcare agree: this is the single most powerful question you can add to any post-session evaluation. If you're going to add just one new question to your form, make it this one.

Group mood is contagious

This is rarely discussed — but widely observed by UK GP trainers. When one or two trainees are very vocal about enjoying a session, others tend to match their scores upwards. When a small group is disengaged, others often drift downwards. Evaluation reflects not just the session but the group mood on that day.

This is why triangulation matters — looking at evaluation data across multiple sessions and multiple methods, rather than reading too much into one session's feedback. A single bad evaluation can be weather. A pattern across six sessions is climate.

Also: verbal round-the-room feedback is particularly susceptible to this effect. Digital anonymous forms — completed alone — produce more independent responses.
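
To see the weather-versus-climate distinction in your own numbers, compare each session's score with a short rolling average. A minimal Python sketch with invented per-session satisfaction means (none of these figures are real data):

# Mean satisfaction score per HDR session across a term (hypothetical)
session_scores = [4.2, 4.0, 4.3, 2.9, 4.1, 4.2, 4.0]

def rolling_mean(scores, window=3):
    # Average each score with up to (window - 1) preceding scores to smooth one-off dips
    return [
        sum(scores[max(0, i - window + 1): i + 1]) / (i - max(0, i - window + 1) + 1)
        for i in range(len(scores))
    ]

for week, (raw, trend) in enumerate(zip(session_scores, rolling_mean(session_scores)), start=1):
    flag = "  <- a one-off dip: weather, not yet climate" if raw < trend - 0.5 else ""
    print(f"Week {week}: {raw:.1f} (trend {trend:.1f}){flag}")

Here week 4 dips against a stable trend, so it reads as weather; only if the trend line itself fell over several sessions would you treat it as climate.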

The GP Training Evaluation Cycle — Making It Sustainable

Good evaluation is not a single event. It is a cycle. Here is how it works in practice β€” and what the training community says about keeping it alive throughout the year.

[Cycle diagram: 1. PLAN (set outcomes) → 2. TEACH (deliver session) → 3. EVAL (collect feedback) → 4. ANALYSE (find patterns) → 5. CHANGE (act on it) → ★ TELL TRAINEES! → better teaching, and round again]
💡 The step most often skipped — Step 5

The training community agrees: the cycle breaks down most often between Analyse and Change — and when it does, trainees notice within two to three sessions. The remedy is simple: at the start of the next session, spend two minutes saying "Here's what you told us last time, and here's what we've done about it." Two minutes. That's all it takes to keep the cycle alive.

🎓 For Trainers & TPDs — Teaching This Topic
🎓 Trainer & TPD Guidance

Evaluation is a topic that is often taught but rarely modelled well. The most powerful thing a trainer or TPD can do is demonstrate good evaluation practice consistently.

Common trainee difficulties with this topic

  • Trainees often confuse evaluation (measuring a teaching session) with assessment (measuring a learner's performance). Clarify this early.
  • Many trainees struggle to understand why Levels 3 and 4 of Kirkpatrick matter — they need a concrete GP-relevant example (e.g. a great session on clinical examination that produces no change in how anyone examines patients).
  • IMGs may be used to very formal, high-stakes evaluation cultures — help them understand that informal, low-stakes evaluation is just as valid and arguably more useful day-to-day.

Tutorial ideas & discussion prompts

Give trainees a fictional teaching session title (e.g. "Managing Hypertension in Primary Care" or "Breaking Bad News"). Ask them to:

  1. Define 2–3 learning outcomes for the session
  2. Decide which Kirkpatrick level(s) they want to evaluate
  3. Design a 5-question form that evaluates against those outcomes
  4. Share and compare forms — discuss: what did different trainees prioritise?

This exercise makes the connection between learning outcomes and evaluation design concrete and practical.

Prompt for group discussion: "You run a session. The feedback says participants found it confusing and not well organised. How do you respond?" This surfaces how trainees handle critical feedback β€” a skill as important in clinical supervision as in teaching evaluation.

Key teaching points: negative feedback is data, not a verdict. Your response to critical feedback is more important than the feedback itself.

Ask trainees to recall a recent HDR or educational session and apply the Kirkpatrick framework retrospectively:

  • Level 1: How did you feel about it at the time?
  • Level 2: What did you actually learn?
  • Level 3: Have you changed anything in practice because of it?
  • Level 4: Can you identify any patient-level impact?

This is often an eye-opening exercise — trainees realise that memorable sessions are not always the ones that changed their behaviour, and vice versa.

🎓 TPD Insight — Modelling Evaluation Culture

The most powerful thing a TPD can do is evaluate their own HDR sessions openly and transparently — and share the results with trainees, including what they're going to change. This models evaluation as a normal, non-threatening part of professional practice, not something that only happens to juniors.

❓ Quick Questions — FAQs

How long should an evaluation form be?
As short as possible while still being useful. Research suggests diminishing returns after 5–7 questions. If you have 20 questions, you'll get 20 rushed, low-quality answers. Five good questions beat twenty mediocre ones every time.

Can I use the same form for every session?
Technically yes — and it does allow you to compare sessions over time. But if used rigidly for every session without variation, engagement drops. Consider a core set of 3–4 consistent questions, plus 1–2 session-specific questions that change each time.

Is verbal feedback as valid as a written form?
Yes — and in some ways more so. Verbal feedback is immediate, can be explored in depth, and allows clarifying questions. Its weakness is that it's harder to record and analyse. In a small HDR group, verbal feedback at the end of a session is often more valuable than a form. The key is to make notes afterwards while it's fresh.

What if I want feedback on a part of the session I found difficult?
This is where evaluation becomes genuinely valuable for your own development. Ask specifically about the areas you found difficult: "Was the pacing right?" / "Was the explanation of X clear?" Being honest about wanting feedback on specific areas often produces more targeted, useful responses.

Why do some IMG trainees find evaluation uncomfortable?
Many IMGs come from educational cultures where evaluation is highly formal, high-stakes, and rare. The UK GP training approach — where evaluation is frequent, informal, low-stakes, and genuinely intended to improve teaching rather than penalise teachers — can feel surprisingly unfamiliar. Reassuring IMGs that evaluation is a collaborative, developmental process (not an inspection or judgement) often unlocks more honest responses.

🧠 Memory Aid — The Kirkpatrick Staircase

Need a simple way to remember Kirkpatrick's four levels? Think of them as a staircase — each step takes you deeper into the impact of teaching, and each step is harder to climb.

The RLBR Staircase
Letter · Stands for · One-line prompt
R · Reaction · "Did they smile?"
L · Learning · "Did they grow?"
B · Behaviour · "Did they change?"
R · Results · "Did it matter?"
💡 Mnemonic
Really Learning Brings Results — each level builds on the one before it.
✅ Final Take-Home Points

Before you close this page, make sure these ideas are fixed in your mind. They'll serve you every time you teach.

1. Set outcomes first · You cannot meaningfully evaluate a session unless you know what it was supposed to achieve.
2. Go beyond Level 1 · Happiness is not learning. Push for at least Level 2 evaluation — and aspire to Level 3.
3. Don't just use forms · Verbal feedback, digital polls, exit cards, and peer observation are often more valuable.
4. Keep forms short · Five great questions beat twenty mediocre ones. Respect people's time and you'll get better data.
5. Close the loop · Always tell learners what you've changed based on their feedback. This transforms evaluation from performance to practice.
6. Kirkpatrick Level 3 is the gap · Most programmes measure enjoyment. Almost none measure whether practice actually changed. This is the most impactful question to ask.
7. Negative feedback is a gift · A trainee who tells you what didn't work is doing you an enormous favour. Treat critical feedback as data, not judgement.
8. Vary your methods · Repeating the same evaluation tool year after year produces diminishing returns. Refresh, involve learners, and keep it alive.

The goal of evaluation is not to prove your teaching is good. The goal is to make your teaching better. Those are very different things — and only one of them is worth your time. — Bradford VTS

Videos

Although some of these videos talk about teaching at school, the key principles are transferable to teaching in General Practice.

Bill Gates: Teachers Need Real Feedback

Beware of Cognitive Overload

