
Assessment, Competence & Capability

Because knowing stuff and actually doing it well are two very different things — and your assessors are watching for both.

Last updated: April 2026 · Always verify current RCGP requirements at rcgp.org.uk

🌐 Web Resources

A hand-picked mix of official guidance and real-world GP training resources — because sometimes the best pearls aren't in the official documents.

🏛 RCGP — Official
RCGP WPBA Capabilities Framework
How the 13 professional capabilities are assessed through WPBA. Essential reading.
🏛 RCGP — Official
RCGP Curriculum — How It's Structured
The five areas of capability and how they connect to training and assessment.
🏛 RCGP — Official
RCGP WPBA Overview
The official landing page for Workplace Based Assessment. Start here for authoritative guidance.
📚 Bradford VTS
Bradford VTS — Professional Capabilities
Dr Ram's detailed guide to all 13 RCGP professional capabilities with practical notes.
📚 Bradford VTS
Bradford VTS — WPBA Overview
Everything about WPBA in one place — tools, requirements, and portfolio guidance.
📚 Bradford VTS
Bradford VTS — Aims, Objectives & ILOs
The companion page — understanding Intended Learning Outcomes and how they link to assessment.
🔬 Academic
Competence & Capability — A New Look
Academic paper exploring the distinction between competence and capability in professional training.
🔬 Academic
Revisiting Miller's Pyramid (PMC)
Peer-reviewed paper examining Miller's framework and its limitations in assessing diagnostic reasoning.
🔬 Academic
Work-Based Assessment — BMJ Classic
Norcini's seminal BMJ article on work-based assessment and Miller's pyramid. A must-read for educators.
🎓 GP Training
Greenwich VTS — Portfolio & WPBA Guide
Practical, trainee-friendly breakdown of FourteenFish portfolio requirements and WPBA tools.
📖 Wikipedia
Constructive Alignment — Wikipedia
Clear overview of Biggs' constructive alignment principle — accessible and well-referenced.
📖 Wikipedia
Four Stages of Competence — Wikipedia
The conscious competence learning model explained clearly. Useful background for trainers.
Quick Summary & Core Concepts

⚡ One-Minute Recall — The Big Picture

  • Competence = performing well in a specific, familiar setting
  • Capability = performing well across varied and unfamiliar settings — with adaptability
  • GP training is explicitly capability-based, not just competence-based
  • Assessment must be aligned to what you actually want people to be able to do
  • Miller's Pyramid: Knows → Knows How → Shows How → Does (always aim for "Does")
  • Conscious Competence Model: all learners move through 4 stages as they develop any skill
  • Constructive Alignment (Biggs): outcomes → learning → assessment must all point the same way
  • Reliability = consistency of a measure. Validity = it's measuring the right thing
  • RCGP uses 13 Professional Capabilities — assessed through WPBA, AKT & SCA
  • FourteenFish ePortfolio is where your capability evidence lives throughout training
Competence vs Capability — In Depth

What Is Competence? What Is Capability?

Two words that sound similar but carry very different meanings in medical education. Knowing the difference matters more than you might think.

COMPETENCE (from Latin competentia — agreement)
Performing satisfactorily in a specific setting.
🏠 One environment · 📌 Fixed standard reached · 🔒 "I've achieved this"

CAPABILITY (from Latin capabilis — able to take in)
Performing satisfactorily in varied settings.
🌍 Multiple environments · 📈 Always room to grow · 🔓 "I can adapt to new situations"

💡 The Key Difference in One Sentence

A competency says you're good at something in this setting. A capability says you're good at adapting to whatever setting you find yourself in. GPs need capability — their patients are endlessly varied, and no two consultations are the same.

A Concrete Example

Scenario by scenario, here is how the two differ:

  • Communication skills — Competence: communicating effectively with the adult patients seen in one setting. Capability: communicating effectively with adults, children, elderly patients, those with hearing loss, those with limited English — whoever walks through the door.
  • Clinical decision-making — Competence: making safe decisions for common conditions in a familiar GP setting. Capability: making safe, justifiable decisions in unfamiliar presentations, with uncertainty, in out-of-hours or different practice settings.
  • Prescribing — Competence: prescribing correctly for a familiar, uncomplicated patient. Capability: prescribing safely across complex, multi-morbid patients, adjusting for renal function, interactions, and patient preferences.

Who gets hired? The capable doctor — every time. Because real GP patients are not textbook patients.

📌 One More Useful Distinction — Limits of Competence

When we say someone is competent, we imply they've reached a defined state — a threshold has been crossed. It sounds final. When we say someone is capable, we acknowledge that a minimum exists, but there is always space for further growth. A capable doctor at year 3 of training is not the same as a capable doctor 10 years into practice — and both are correct uses of the word.

📊 Why Do We Need Frameworks?

Competency and capability frameworks exist for very good reasons — and understanding those reasons helps you engage with them more meaningfully.

A good competency/capability framework provides four things:

  • Clarity — clear performance expectations for all
  • Flexibility — balances detail with adaptability
  • Alignment — links individual and organisational goals
  • Fairness — consistent, transparent standards for everyone
  • They make the standard visible — trainees know what they are working towards, and trainers know what to assess against.
  • They create a shared language between trainee and trainer — both speaking the same educational vocabulary.
  • They prevent both over-assessment (ticking boxes that don't matter) and under-assessment (ignoring things that do).
  • They allow progression to be tracked — from novice to qualified GP, in a structured, evidence-based way.
  • They keep training inclusive — a good framework avoids being overly prescriptive so that doctors with different backgrounds can still demonstrate capability in their own way.

⚠️ The Danger of a Poor Framework

A poorly designed framework can actually harm training. If it is too rigid or prescriptive, it encourages tick-box behaviour — trainees learn to perform for the assessment rather than genuinely develop. The RCGP has deliberately moved away from purely competency-based language for exactly this reason, preferring capability-based assessment that values growth, adaptability, and professional judgement.

Weaver's 6 Cs of Capability

🌟 Weaver's Six Cs of Capability

Described by Toby Weaver in the context of John Stephenson's capability framework. Six qualities that together define a genuinely capable professional.

  • Culture — understanding and respecting the diverse cultural backgrounds of patients, colleagues, and organisations — and adapting accordingly.
  • Comprehension — deep understanding, not just surface knowledge. Knowing the "why", not just the "what". Able to explain and adapt, not just recall.
  • Competence — the base-level ability to perform effectively in familiar settings. A necessary ingredient — but only one of six. Competence alone is not enough.
  • Communion — the ability to connect with patients, colleagues, and teams. Effective relationships are at the heart of GP practice and cannot be reduced to a checklist.
  • Creativity — bringing fresh thinking, problem-solving, and adaptability when the standard approach doesn't fit the patient in front of you. Essential in primary care.
  • Coping — managing uncertainty, ambiguity, and emotional demand without falling apart. GP medicine involves daily uncertainty, and coping is a genuine professional skill.

💡 Why All Six Cs Matter in GP Training

Notice that only one of the six is "competence" — the ability to perform in familiar settings. A trainee who has competence but lacks coping, communion, or creativity will struggle in the complex, uncertain, highly relational world of general practice. This is why GP training uses capability frameworks, not just competency checklists. All six qualities are assessed, in one form or another, across the MRCGP assessment system.

Key Models in Assessment

🧠 Key Models You Need to Know

These theoretical models underpin how GP training is designed, how you are assessed, and how you should think about your own learning.

Model 1

Miller's Pyramid of Clinical Competence (1990)

Proposed by George Miller in 1990. The foundation of modern clinical assessment worldwide.

  • KNOWS — basic knowledge: "I can recall it" → tested by MCQ / AKT
  • KNOWS HOW — applying knowledge: "I can explain how" → tested by essay / problem-solving
  • SHOWS HOW — performance in a simulated setting → tested by OSCE / SCA
  • DOES — real performance with real patients → tested by WPBA / CbD / COT

💡 Why Miller's Pyramid Matters for GP Training

The AKT tests the bottom two levels — Knows and Knows How. The SCA tests Shows How. But WPBA (CbDs, COTs, Learning Logs) targets the apex — Does — because that is where real professional performance lives. A trainee can pass the AKT and SCA and still struggle with WPBA if they cannot translate their knowledge into consistent, authentic clinical practice.

🎓 For Trainers — Teaching Pearl

Ask your trainee: "You clearly know the guideline — but how would you apply it to Mrs Khan, who has three other conditions and is already on seven medications?" That's the move from Knows to Does. It's the most important question you can ask in any tutorial.

Model 2

The Conscious Competence Model (Burch, 1970s)

Also called the Four Stages of Competence. The model is widely attributed to Noel Burch at Gordon Training International in the 1970s, though earlier versions exist. Now used extensively in medical education.

  • Stage 1 — Unconscious Incompetence: "I don't know what I don't know" — a stimulus to learn is needed
  • Stage 2 — Conscious Incompetence: "I know I need to get better at this" — the most productive stage
  • Stage 3 — Conscious Competence: "I can do it, but I have to think hard" — most trainees are here
  • Stage 4 — Unconscious Competence: "I do it naturally without thinking" — the experienced GP

Learning and development move from Stage 1 towards Stage 4.

💡 Why This Is Useful for Trainees — And Trainers

  • Stage 1 is the dangerous one. A trainee who doesn't know what they don't know cannot recognise their own learning needs. One of the jobs of assessments is to move trainees from Stage 1 into Stage 2, where real learning begins.
  • Stage 2 is uncomfortable — but important. Feeling like you're struggling is not a sign of failure. It's the most productive learning state.
  • Stage 4 can be a risk. Unconscious competence is the goal — but experts who do things automatically can sometimes find it hard to teach those things. The best teachers are often at Stage 3, where they still consciously engage with the process.

🔗 Constructive Alignment

Developed by Professor John Biggs. The single most important principle for understanding why GP training is designed the way it is.

  • Intended Learning Outcomes (ILOs) — "What should they be able to DO?"
  • Teaching & Learning Activities (TLAs) — "What activities will get them there?"
  • Assessment Methods — "Does it measure what we actually care about?"

All three must align — this is the principle of Constructive Alignment.

Constructive alignment is built on one powerful idea: if you want someone to be able to do X, you need to teach X and then test X. It sounds obvious — but it's surprisingly common for training programmes to teach one thing and assess another.

In GP training, the ILOs are the 13 Professional Capabilities. The teaching activities are tutorials, clinic sessions, and half-day release. The assessments are the WPBA tools — CbDs, COTs, MiniCEX, MSF, PSQ, and so on. They are designed to align. When they do, trainees learn what matters, not just what's easiest to test.

⚠️ When Alignment Fails

If a knowledge test (like an MCQ) is used to assess whether someone can communicate empathetically — that's misalignment. The test doesn't measure the thing it's meant to. This is why the MRCGP uses three different assessment types: AKT for knowledge, SCA for simulated performance, and WPBA for real-world capability. Each assesses a different level of Miller's Pyramid, and all three are needed for a complete picture.

🎓 Biggs' Insight About Backwash

Biggs observed that students reliably study what they think will be assessed — not everything they are taught. He called this "backwash." Constructive alignment turns this into an advantage: if the assessment genuinely tests the ILOs, students will engage in exactly the learning activities that produce a good doctor. The RCGP's WPBA system is designed with this principle in mind.

Reliability, Validity & Key Educational Terms

📖 Key Educational Terms — Explained Clearly

These terms appear throughout GP training literature, ARCP feedback, and educational supervisor conversations. Know them well.

Knowledge — Foundation
A body of information that can be applied to performing a function. Necessary — but not sufficient. Knowing the guideline does not mean you can use it well.

Skill — Observable
A learned, observable ability to perform a task — physical or cognitive. Taking a blood pressure, suturing a wound, or structuring a safety net are all skills.

Attitude — Values-Based
The personal values and behaviours that shape how you apply knowledge and skills. A doctor can know everything and still behave poorly. Attitudes matter deeply in GP training.

Competence — Setting-Specific
Satisfactory performance of a function in a specific context. A threshold has been reached. No implied further growth. Useful but limited as a concept for GP training.

Capability — Adaptable
Satisfactory performance across a variety of contexts, including unfamiliar ones. Implies ongoing growth. This is the language GP training now uses for good reason.

Reliability — Consistency
The consistency of an assessment tool. A reliable assessment gives similar results for individuals at the same level of ability, regardless of who is doing the assessing or when.

Validity — Accuracy
Whether the assessment is measuring what it is meant to measure. A test that asks about rare tropical diseases is a valid test of tropical disease knowledge, not of GP capability.

Formative Assessment — For Learning
Assessment designed to support learning and provide feedback. No pass or fail — the purpose is development. Most WPBA tools are primarily formative.

Summative Assessment — Of Learning
Assessment that makes a final judgement — pass or fail, satisfactory or not. The ARCP is summative. The CCT award is summative. These determine whether you progress or qualify.

Constructive Alignment — Design Principle
Biggs' principle: outcomes, teaching activities, and assessment methods must all point in the same direction. If they don't, the training system is broken.

ILO — Intended Learning Outcome
A precise statement of what a learner should be able to do by the end of a learning experience. ILOs drive curriculum design and assessment in modern GP training.

Professional Judgement — The Gold Standard
The ability to make holistic, balanced, and justifiable decisions in situations of clinical complexity and uncertainty. This is what CbDs are specifically designed to assess.

Reliability vs Validity — The Classic Analogy

  • ❌ Not reliable, not valid — inconsistent and off-target
  • ⚠️ Reliable but not valid — consistent but off-target
  • ✅ Reliable AND valid — consistent and on-target. We want this one.
Reliability — Consistency

  • Same result for same level of ability
  • Doesn't matter who does the assessment
  • Doesn't matter when it's done
  • Tests must give similar scores to trainees at the same stage
  • Low reliability → unfair to trainees
Validity — Accuracy

  • Measuring what it's supposed to measure
  • Testing rare diseases doesn't assess GP capability
  • A driving test that only tests parking isn't valid
  • Valid assessments align with what doctors actually do
  • Low validity → useless data, even if scores are consistent
Why This Matters in GP Training

🩺 Why Does This Matter in GP Training?

Understanding the theory isn't just an academic exercise — it shapes how you think about your own development as a doctor.

💡 The Big Idea

GP training is not like a driving test, where you prove you can park on a quiet road and then get let loose on the motorway. It's more like becoming a seasoned navigator — you need to perform well on roads you've never seen before. That's capability. And that's why GP training is designed the way it is.

🔵 Competence-Based Thinking

What it looks like

  • Tick-box approach to learning
  • Performs tasks well in known settings
  • Fixed endpoint — "I've done that"
  • Mirrors how some hospital training works
  • Good for procedural or technical skills
  • Less effective for complex GP scenarios
🟢 Capability-Based Thinking

What it looks like

  • Growth mindset — always room to improve
  • Performs well across diverse, novel situations
  • Open endpoint — ongoing development
  • Mirrors how GP training is designed
  • Essential for managing complexity and uncertainty
  • What patients actually need from their GP

🎯 The Practical Point for Trainees

When your trainer asks "why did you do that?" — they're not questioning your decision. They're checking whether you have capability: can you explain your reasoning, adapt it to a different patient, and apply it when the guideline doesn't quite fit? That is what your assessors are looking for.

Assessment in GP Training — Practical Application

⚠️ Common Pitfalls & Trainee Traps

Things that regularly catch people out — in assessments, in portfolio reviews, and in understanding their own development.

Pitfall 1 — Not knowing what the 13 capabilities actually are
This is classic Stage 1: Unconscious Incompetence. Many trainees go through their first 6–12 months without fully engaging with the 13 capabilities. They complete WPBAs but don't connect them to the broader framework. Take an hour early in training to read the RCGP capability descriptors — it makes every subsequent assessment more meaningful.

Pitfall 2 — Treating WPBA as stamp collecting
The most common error. Completing the minimum number of CbDs and COTs without reflection or engagement defeats the purpose of formative assessment entirely. WPBA is not about collecting stamps — it's a structured record of professional growth. Assessors and ARCP panels can tell the difference very quickly.

Pitfall 3 — Writing about competence when assessors want capability
Trainees often write: "I can do X." What assessors want to see is: "I can do X across a range of patient types and clinical contexts, including unfamiliar ones." One is competence, the other is capability. When writing your learning logs or ESR self-ratings, always frame your evidence in terms of breadth and adaptability, not just individual performance.

Pitfall 4 — Only presenting the cases that went well
It's natural to select cases where you performed well. But training requires evidence across all 13 capabilities — not just the ones you're comfortable with. ARCP panels specifically look for gaps. Deliberately seek out cases that stretch you in weaker areas. Discomfort is data — it tells you where to grow.

Pitfall 5 — Over-investing in the AKT and under-investing in WPBA
The AKT tests the bottom of Miller's Pyramid. WPBA tests the top. Many trainees invest heavily in exam preparation and underinvest in portfolio quality. A strong AKT score does not compensate for weak WPBA evidence. The MRCGP requires all three components. Allocate your learning time accordingly.

Pitfall 6 — Underestimating the adjustment for internationally trained doctors
UK GP training places significant emphasis on patient autonomy, shared decision-making, and the psychosocial dimensions of illness — more so than many other countries' systems. These are assessed as capabilities. Coming from a more paternalistic or biomedical-focused system does not make you a lesser doctor, but it does mean there may be specific capabilities where adjustment is needed. Engage with this early, not in your final year.

Trainer & Trainee Guidance

🎓 Guidance for Trainers & Trainees

Different questions, different needs — but both benefit from understanding this topic deeply.

🟣 For Trainers

Teaching This Topic Well

  • Ask trainees to plot themselves on the Conscious Competence ladder for different skills — a revelatory exercise for many.
  • Use Miller's Pyramid to frame tutorial questions: "You know it. Now — how would you do it with a complex patient?"
  • When designing teaching sessions, always start with ILOs and work backwards to the activity and assessment. Demonstrate constructive alignment in action.
  • Point out when trainees shift from "competent in this setting" to "capable across settings" — name it explicitly so they understand the growth they are experiencing.
  • Remember: a trainee stuck at Conscious Incompetence is not failing — they're in the most productive learning state. Support, don't pathologise.
  • Discuss validity and reliability when debriefing WPBAs — especially when a trainee questions a grade. Help them understand that multiple data points improve reliability.
🔵 For Trainees

Making the Most of Assessment

  • Feeling uncertain or confused is not a sign of failure — it's Stage 2 of the Conscious Competence model. It means you've recognised a learning need. That's progress.
  • When completing a WPBA, think consciously: "Which capability is this demonstrating? Which level of Miller's Pyramid am I operating at?"
  • Don't just collect WPBAs — use them. Each one is a formative tool designed to help you grow, not to judge you.
  • Keep your FourteenFish portfolio up to date throughout the year. Doing it all in the last week before ARCP is neither enjoyable nor educational.
  • When you get challenging feedback, resist the urge to dismiss it. Ask: "Is this moving me from unconscious incompetence into conscious incompetence?" If yes — it's a gift.
  • IMGs in particular: the 13 capabilities include cultural and communication dimensions that may differ from your home country's approach. Engage with these early and honestly.
💎 Insider Pearls — What Trainees & Educators Actually Say


Hard-won wisdom from GP educators, deanery assessors, and trainees — the kind of practical insight that rarely makes it into official guidance. All cross-checked against RCGP and GMC standards.

📖 Theme 1 — Your Portfolio Is a Story, Not a Form

💡 The Single Most Missed Point

Most trainees treat the FourteenFish portfolio as a form-filling exercise. They focus on the numbers: how many CbDs, how many COTs, how many logs. But the ARCP panel is not counting entries — they are reading a story. They want to see a narrative of growth: a doctor who started with gaps and uncertainty, recognised those gaps, worked on them, and emerged by ST3 with broad, confident capability. The numbers are the minimum. The story is what passes you.

What ARCP panels are actually looking for — in this order:

  1. Evidence of competence — are you getting there by ST3?
  2. Evidence of learning — released regularly, not all at once
  3. Evidence of reflection — honest, insightful, not just descriptive
  4. A story of growth — trajectory matters more than perfection

🎯 What One Deanery Assessor Actually Asks

A senior ARCP assessor described their approach like this: at every review they ask four questions about each trainee's portfolio. Are they safe — do they recognise risk and seek help when they should? Is there a visible trajectory — are early weaknesses being addressed? Is there self-awareness — do they write honestly about struggles, or does everything look suspiciously perfect? And finally — are they covering the breadth of capability across all 13 areas, not just the comfortable ones?

⏱ Theme 2 — Start Early, Go Often, Write Honestly
📅 Start Day One
ARCP panels can see the date every log entry was added. A flurry of entries in the final two weeks before a review tells its own story — and it is not a flattering one. Trainees have been asked to repeat a post purely because of insufficient portfolio evidence. Don't be one of them. Aim for three Clinical Case Reviews per month as a rhythm.

✍️ Little and Often
Set aside 20 minutes at the end of each clinic session to write up one entry while the case is fresh. It takes far less effort than reconstructing a consultation from memory three weeks later. Trainees who build a weekly portfolio habit consistently produce better-quality, more authentic reflections than those who batch-write at the end of a post.

😬 Write About the Difficult Stuff
A portfolio where every entry shows "competent" or "excellent" from day one of ST1 raises a red flag. Assessors know this is not realistic. Writing honestly about a consultation that went badly, a decision you are unsure about, or a patient you found challenging is not weakness — it is exactly what deep, credible reflection looks like. It is also the most educationally valuable entry you can write.

🔍 Read Your Supervisor's Comments
Supervisors take time to comment on shared entries. Reading and acting on those comments — updating your PDP, seeking a tutorial, changing your practice — and then documenting that you did so is one of the clearest demonstrations of the learning cycle in action. Ignoring supervisor comments is one of the most common signs of a trainee who is struggling to engage meaningfully.
🪞 Theme 3 — What Good Reflection Actually Looks Like

This is where most trainees struggle. Not because reflection is hard — but because they default to description when reflection is asked for. Here is the difference, shown as three simple before-and-after examples.

Example 1
❌ Descriptive (not reflection): "A 68-year-old patient came in with chest pain. I took a history, examined them, and referred to cardiology."
✅ Reflective (what's actually needed): "I found this consultation difficult because the patient's chest pain had features of both cardiac and musculoskeletal origin. I initially leant towards the cardiac explanation but paused to reconsider. This made me think about how quickly I default to the most serious diagnosis. I will work on broadening my differential before settling on a plan."

Example 2
❌ Descriptive: "I saw a patient with depression. I prescribed an antidepressant and arranged follow-up."
✅ Reflective: "I struggled to balance this patient's clear wish for medication with my uncertainty about whether the threshold for treatment had been reached. I realised I had not fully explored their ideas about treatment, which felt like a missed opportunity. I spoke to my trainer afterwards and will try to spend more time on the patient's perspective before moving to management."

Example 3
❌ Descriptive: "I attended a safeguarding training session today."
✅ Reflective: "The safeguarding training highlighted a case pattern I had not previously recognised — a child with repeated minor injuries across different family members presenting at different appointments. I had not thought about cross-referencing presentations across a family before. I will review my knowledge of the RCGP safeguarding guidance and discuss this with my trainer."

💡 The Reflection Formula That Works

Strong reflections answer four questions. What happened — and what made it interesting or difficult? How did it make you feel, and why? What did you learn — including anything that surprised you? And what will you do differently? You do not need to answer them in order, and you do not need to write a dissertation. A focused, honest 200-word entry that answers all four questions is far more powerful than a 600-word description of the consultation.

🔗 Use the Gibbs Reflective Cycle if You're Stuck

Gibbs' Reflective Cycle (description → feelings → evaluation → analysis → conclusion → action plan) is one of the most useful structures for learning log entries. It is not the only model, but it maps naturally onto the Clinical Case Review form in FourteenFish. Many trainees find that simply keeping the six headings visible while they write stops them drifting into pure description.

🎯 Theme 4 — Cover All 13 Capabilities — Including the Uncomfortable Ones

Most trainees naturally gravitate towards the same two or three capabilities in their log entries — usually communication skills and data gathering, because those come up in almost every clinical consultation. The capabilities below are the ones most commonly under-evidenced at ARCP. Deliberately seek them out.

🏢 OML — Organisation, Management & Leadership
Often under-evidenced

Many trainees think this only applies to big leadership roles. It doesn't. Managing your admin, organising a follow-up system, leading a complex patient's care coordination, attending a practice meeting — all of these count. Don't overlook the small examples.

⚖️ EA — Ethical Approach
Often under-evidenced

Ethics comes up in almost every GP consultation — consent, autonomy, confidentiality, patient refusal. Trainees often act ethically without writing about it. When you face an ethical dilemma or tension in a consultation, that is exactly the entry to write. Name the ethical principle; relate your actions to it.

🌍 CHES — Community Health
Hardest in hospital posts

Community orientation is genuinely harder to demonstrate in a hospital post — but it is not impossible. A patient's social circumstances, access to services, health inequalities you observed, a department campaign — all can link to CHES with the right framing. Plan for this capability deliberately in your first GP post.

🛡️ FtP — Fitness to Practise
Misunderstood by many

This is not just about whether you are clinically good enough. It is about your awareness of anything — in yourself or others — that could put patients at risk. This includes your own health, the consequences of burnout, a colleague whose performance concerned you, or a near-miss that made you recognise a system risk.

📈 PLT — Performance, Learning & Teaching
Easy to miss

Did you teach a medical student today? Did you reflect on feedback you received? Did you discuss a case with a colleague and change your approach as a result? All of this is PLT — and it is more accessible than trainees realise. The key is contributing to others' learning as well as your own.

🤝 TW — Team Working
Easy to overlook

Trainees who work well in teams often do so automatically and never write about it. Did you refer effectively to another professional today? Did you delegate safely? Did you respond constructively to a difficult team dynamic? If yes — write about it.

⚠️ The Under-Coverage Trap

Reaching the end of ST3 with little or no evidence for two or three capabilities is one of the most common reasons for an ARCP developmental outcome. It is almost entirely preventable. From the very start of ST1, look at your portfolio's capability coverage screen every six weeks. If a circle is still grey, ask yourself: when did I last actively seek a case or entry that would evidence this?

✅ Theme 5 — "Needs Further Development" Is Not Failure
❌ What Many Trainees Do

Pressure their assessor to upgrade an NFD

  • Ask consultants to change grades after the fact
  • Feel embarrassed by developmental grades
  • Try to present only perfect cases
  • Result: a suspiciously polished portfolio that ARCP panels distrust
  • This behaviour actually makes your portfolio weaker
✅ What ARCP Panels Want to See

NFD followed by growth

  • An honest NFD in ST1 followed by a PDP entry
  • Followed by a tutorial or targeted learning
  • Followed by a later entry showing improvement
  • This is the learning cycle made visible in your portfolio
  • An NFD that generates learning is more valuable than a "competent" that generates nothing

💡 The "Needs Further Development" Reframe

A grading of "needs further development" is not a negative judgement. It is the assessor saying: "You are not there yet — and that is completely normal at this stage of training." What matters is that you receive the feedback, recognise the learning need, act on it, and document that you did. That sequence — feedback → recognition → action → evidence — is precisely what the WPBA system was designed to produce.

🗣️ Theme 6 — Assessments Are Conversations, Not Tests

💡 The Most Valuable 10 Minutes of Any CbD or COT

The grade from a CbD or COT matters less than most trainees think. What matters most is the 10-minute feedback conversation that follows. This is where the real learning happens — where you hear what your assessor actually noticed, what they would have done differently, what they thought was genuinely strong. Trainees who approach every assessment as an opportunity for a genuine learning conversation — rather than a hurdle to clear — consistently grow faster and build richer portfolios.

🎯 Before Every CbD — Do This

Map your case to three capabilities before the session — not during it. Read the RCGP word pictures for those three capabilities so you know what "competent" and "excellent" look like. Then, at the end of the CbD, ask your assessor specifically: "Was there anything in how I handled the X capability that I could do differently?" That single question generates better feedback than most trainees receive in an entire post.

The Learning Cycle in Your Portfolio:

  1. Assessment — CbD, COT, MiniCEX etc.
  2. Feedback — honest, specific, acted upon
  3. Reflection — a log entry with real insight
  4. PDP entry — a SMART plan to address the learning
  5. Evidence of improvement — a later log showing change

This full cycle — visible in your portfolio — is what ARCP panels call "evidence of the learning cycle".
🏥 Theme 7 — Making Hospital Posts Work for Your GP Portfolio

📌 The Hospital Post Challenge

Hospital posts in ST1 and ST2 can feel disconnected from GP training. Some capabilities — particularly Community Health (CHES) and Organisation/Leadership (OML) — are genuinely harder to demonstrate in a hospital setting. But hospital posts offer rich opportunities for clinical skills, medical complexity, team working, and ethical challenges — if you approach them with a GP lens. The key shift is this: don't just do the work — ask yourself, constantly, "How would this look from a GP's perspective?" and write that up.

  • At the start of every hospital post, do a placement planning meeting with your clinical supervisor and agree which RCGP capabilities this post is best placed to develop. Document this in your portfolio.
  • In hospital, your assessors need a free FourteenFish account to complete your WPBAs. Send them the link proactively and brief them before the assessment — do not assume they know what is expected.
  • When in a non-primary care post, for most WPBA tools you are benchmarked against other GP trainees at the same stage — not against the independent GP standard. Note: for COT assessments undertaken during a primary care placement in ST1/ST2, the standard is that of an independent GP, even at that early stage. Check the RCGP assessor guidance for the specific tool being used.
  • Hospital posts are prime territory for CEPS (Clinical Examination and Procedural Skills). The five mandatory intimate examinations must be completed by the end of training. Plan these early — it is very stressful to try to arrange them in ST3.
  • Any Quality Improvement Activity (QIA) can be completed in a hospital setting. Find a department improvement project, contribute meaningfully, document your role, and link it to OML.
  • When you see a complex patient with multiple specialists involved and nobody coordinating, that is a classic Medical Complexity (MC) entry waiting to happen. Step in, coordinate, and write about it.
📋 Theme 8 — ARCP: What the Panel Actually Looks At

💡 From the Assessors Themselves

A senior RCGP ARCP assessor described the panel approach simply: "We are objective reviewers of evidence in the portfolio — only. We look for sufficient evidence to show competence, evidence of learning released regularly, evidence of reflection entered regularly, and completion of all mandatory requirements. We check whether assessments were spread across the training year or bunched at the end. We look through reports and ESRs and all assessments, and we agree an outcome. The process is meant to be fair and unbiased."

ARCP outcomes — what they mean and what happens next:

  • Outcome 1 — Achieving progress at the expected rate. Next: continue training — all is well.
  • Outcome 2 — Specific development needed; no extra time required. Next: targeted action plan; the next review will check progress.
  • Outcome 3 — Inadequate progress; additional training time required. Next: extended training with increased supervision and support.
  • Outcome 4 — Released from programme; unsatisfactory progress. Rare; a formal process is required and a right of appeal exists. Seek advice from your deanery and consider medical defence organisation support.
  • Outcome 5 — Incomplete evidence; the panel cannot make an assessment of progression. Avoidable. Typically caused by missing mandatory evidence. Usually leads to a request for further information or an interim review; may result in additional training time.
  • Outcome 6 — Gained all required capabilities for completion of training. CCT application can proceed — you are done! 🎉

⚠️ Outcome 5 Is Entirely Preventable

An Outcome 5 (incomplete evidence — cannot assess) is not a clinical failure — it is an administrative one. It means your mandatory evidence was not in place before the panel date. The RCGP's Mandatory Evidence Summary Sheet lists everything that is required. Download it, fill it in, and upload it to your portfolio before every ARCP. Do not rely on memory or assumptions about what you have completed.

🌍 Theme 9 — Specific Tips for International Medical Graduates (IMGs)

If you trained outside the UK, some aspects of GP training's assessment culture may feel unfamiliar. These are the patterns that come up most often.

Reflective, personal writing may feel unfamiliar
Many doctors from outside the UK trained in systems where clinical notes are factual and impersonal. UK GP portfolio entries are deliberately personal — they ask how you felt, what surprised you, and what you would do differently. This is not self-indulgence; it is evidence of self-awareness, which is one of the core qualities of a capable GP. Practise writing one honest, personal reflection per week. It will feel uncomfortable at first. It becomes natural quickly.

Shared decision-making is a philosophy, not just a technique
Patient autonomy and shared decision-making are deeply embedded in UK general practice culture — and in the RCGP capability framework. They are not just communication techniques; they reflect a philosophical approach to the doctor-patient relationship. When you demonstrate genuine shared decision-making in a consultation, write about it explicitly in your log. When you find it challenging, write about that too — it is rich material for the Communication and Consultation Skills capability.

Capability goes further than competency
In many countries, medical training uses competency-based frameworks — a list of specific skills to achieve. UK GP training uses capability-based assessment, which goes further: it asks whether you can adapt and perform well across varied, unpredictable, and unfamiliar situations. A competency says you can examine a chest. A capability says you can examine a chest, adapt your approach for a child, a patient with anxiety, a deaf patient, and a patient who refuses — and explain your thinking in each case. Start practising this kind of flexible, contextualised thinking in your log entries from day one.

Your English does not need to be perfect
The RCGP assesses your professional capabilities — not your accent or grammatical perfection. Your log entries do not need to be literary masterpieces. They need to be clear, honest, and reflective. If you are finding the written English difficult, discuss this with your educational supervisor early — they are there to help, not to judge. Some trainees find that writing bullet points first and then expanding them into prose is a helpful strategy.

🌍 A Note for IMGs on Fairness and Support

The 13 professional capabilities describe qualities that are relevant to doctors everywhere — they are not about British culture or accent. However, it is important to be aware that research has consistently shown differential attainment in MRCGP assessments for doctors from some minority ethnic backgrounds and those who qualified outside the UK. This is a recognised challenge that the RCGP and deaneries are actively working to address. If you are an IMG and you experience difficulties, you are entitled to seek support from your educational supervisor, TPD, or your deanery's educator support team. Raising concerns early is the right thing to do.

Practical Checklists & Quick-Reference Tools

✅ Practical Checklists

Ready-to-use quick checks for trainees and trainers. Print, save, or photograph — whatever works.

📋 Before Every ARCP — Check This List

⚠️ Important: Numbers Change — Always Verify

The numbers below reflect requirements at the time of writing. The RCGP updates mandatory evidence requirements periodically. Always check the current RCGP Mandatory Evidence Summary Sheet at rcgp.org.uk before your ARCP. Less than full time (LTFT) trainees have pro-rata adjustments for most assessments.

MANDATORY NUMBERS

  • 36 Clinical Case Reviews per training year
  • At least 4 CbDs per year (ST1 & ST2); 5 CATs in ST3
  • COTs completed including at least 1 audio COT
  • MSF completed twice in ST1 (once in each half of the year) and once in ST3
  • PSQ completed in ST3
  • QIA in every training year; QIP in ST3 (GP post)
  • Prescribing Assessment in ST3
  • 5 mandatory intimate CEPS completed by end of ST3
  • CPR/AED updated every 12 months
  • Safeguarding certificates plus annual knowledge update
  • Form R completed annually

QUALITY & COVERAGE

  • All 13 capabilities evidenced in every 6-month review period
  • All Clinical Experience Groups covered across training
  • Entries spread across the year — not bunched at the end
  • ESR completed in the window before ARCP — no more than 2 months and no less than 2 weeks beforehand
  • PDP active with SMART entries — not just "pass AKT"
  • Placement planning meeting log done at start of each post
  • At least 1 LEA/SEA per training year
  • Supervisor comments read and responded to
  • Mandatory Evidence Summary Sheet completed and uploaded
  • Compliance Passport complete

✍️ Before Submitting a Log Entry — Ask Yourself

  • Have I described what happened — briefly, but with enough context?
  • Have I written about how the situation made me feel, and why?
  • Have I reflected on what I found difficult or what surprised me?
  • Have I identified a specific learning need — not just "I need to learn more about X"?
  • Have I written what I will do about it — with a timeline?
  • Have I linked this to the right capabilities — with a brief justification?
  • Is this entry about real clinical learning, or is it mainly descriptive?
  • Would a future version of me reading this back understand what I learned and how I grew?

🗣️ Before Every CbD — Prepare Properly

  • Choose a case you managed independently — not one where you sought advice and followed someone else's plan
  • Choose a case with genuine complexity — ethical tension, diagnostic uncertainty, communication challenge, or multimorbidity
  • Map the case to 3 capabilities in advance — share with your assessor at least 3 days before
  • Read the RCGP word pictures for those 3 capabilities before the session
  • Prepare a 2–3 minute summary of the case — what happened, what you did, and why
  • After the CbD, ask your assessor one specific question about a capability you want to understand better
  • Write the log entry within 24 hours while the feedback is still fresh
🎓 For GP Trainers — Teaching This Topic Well

🎓 For GP Trainers & Supervisors

How to use assessment, competence, and capability concepts in your everyday teaching — and how to help your trainee build a portfolio that actually tells a story.

🟣 Teaching the Concepts in a Tutorial

Most trainees have never been taught explicitly what competence and capability mean, or why the RCGP shifted from one language to the other. A 15-minute tutorial early in ST1 — walking through Miller's Pyramid and the Conscious Competence model using your trainee's own clinical examples — pays dividends across the entire three years. It gives them a vocabulary for self-reflection and a framework for understanding feedback.

Tutorial Trigger Questions

To develop capability thinking

  • "You handled that well in that consultation — how would you approach it differently with a patient who spoke no English?"
  • "You knew the guideline. How did you decide when — and how much — to deviate from it for this particular patient?"
  • "What level of Miller's Pyramid were you operating at in that consultation — and what would it look like to move up one level?"
  • "Where would you place yourself on the Conscious Competence ladder for managing multimorbidity right now? What would it take to move one step up?"
  • "If you had to teach this to a medical student, what would you say? What would be the hardest bit to explain?"
  • "What aspect of this case felt like genuine uncertainty — not just lack of knowledge, but real clinical ambiguity?"
Portfolio Coaching Tips

How to coach better reflections

  • When a trainee shows you a thin log entry, don't just say "it needs more reflection." Ask: "What was the hardest moment in that consultation? Write about that moment." Specificity unlocks reflection.
  • After a CbD, before giving your feedback, ask: "What do you think went well? What felt uncertain?" The trainee who can identify their own gaps is demonstrating exactly the self-awareness that capability frameworks require.
  • Regularly check the capability coverage screen in FourteenFish with your trainee. Point to any grey circles and ask: "What are you planning to do about that one?" This trains them to monitor their own breadth actively.
  • Praise honest, uncomfortable reflections explicitly. Say: "This entry shows real self-awareness. That is exactly what good portfolio work looks like." Normalise difficulty — it builds the psychological safety to write honestly.
  • Help trainees reframe hospital experiences through the GP lens. After a hospital post, ask: "What did you see there that you will do differently as a GP?"

⚠️ Common Trainer Mistakes — And How to Avoid Them

Mistake 1 — Upgrading an NFD to be kind
This is one of the most well-documented patterns in GP training. A trainee receives an NFD and asks their assessor (or supervisor) to upgrade it because it feels embarrassing or will "look bad." The assessor, wanting to be kind, agrees. But this actively harms the trainee's portfolio. A portfolio that shows only "competent" and "excellent" from day one is implausible and raises reliability concerns with ARCP panels. An honest NFD followed by a PDP entry and subsequent improvement is far more compelling evidence than a falsely upgraded grade followed by nothing. Be kind — but be honest. Explain this to your trainees explicitly.

Feedback like "good consultation" or "needs more reflection" is common — and nearly useless as a learning tool. Effective feedback identifies a specific moment, names what happened, explains why it matters, and suggests a concrete alternative. Instead of "your communication was good," try: "When the patient became tearful, you paused and acknowledged their emotion before moving forward — that was the right instinct and built real rapport." Instead of "needs more reflection," try: "You described what you did, but I want to understand your thinking. What was going through your mind when you decided not to examine? Write about that uncertainty."

Mistake 3 — Letting tutorials stay at the bottom of Miller's Pyramid
Many tutorials drift towards clinical knowledge review — "what are the causes of X, what is the first-line treatment for Y." This is useful, but it only addresses the bottom two levels of Miller's Pyramid. A balanced tutorial should also explore reasoning, professional judgement, communication challenges, ethical tensions, and the trainee's own development. Aim for at least one question per tutorial that operates at the "Does" level: "How does this actually play out in your consultations?"

Mistake 4 — Saving concerns for the formal review
If you notice a trainee's portfolio is thin, their entries are purely descriptive, or certain capabilities are never evidenced — say so at the next tutorial, not at the ESR six weeks later. Early, informal feedback is always more useful and less stressful than a formal review conversation about inadequate evidence. Check your trainee's portfolio briefly every month. It takes five minutes and prevents a lot of anxiety for both of you.

📝 Before Every CbD You Conduct as an Assessor

  • Has the trainee sent you the case and mapped it to 3 capabilities at least 3 days in advance?
  • Have you read the RCGP word pictures for those capabilities so you know the standard?
  • Are your questions exploring reasoning and judgement — not just testing clinical knowledge?
  • Are you grading honestly against the standard — not against what you imagine the trainee hoped for?
  • Is your written feedback specific, developmental, and actionable?
  • Have you asked the trainee to self-reflect before you give your feedback?
  • Have you agreed a specific learning action or PDP entry to follow up from this assessment?
💡 What Experienced GPs Say — Practical Wisdom


Distilled insights from GP educators, training leads, and experienced clinicians — the kind of advice that is hard to find in formal guidance documents.

The Spectrum from Novice to Expert — Where Are You Right Now?

  • NOVICE — follows rules rigidly; needs close guidance
  • ADVANCED BEGINNER — recognises context; applies guidelines but not yet flexibly
  • COMPETENT — prioritises well; handles complexity with effort
  • PROFICIENT / EXPERT — intuitive; adapts naturally; sees the whole picture

Most ST1 trainees start between Novice and Advanced Beginner. Most ST3 trainees should be around Competent. Expert GPs take years beyond CCT.
💡 On Reflection

"Most trainees describe what happened. The best trainees explore why they did what they did — and whether they'd do it differently. That is the difference between a log entry that documents a case and one that demonstrates learning. One of those is far more valuable."

💡 On Assessment

"WPBA is not designed to catch people out. It is designed to help them grow. The trainee who treats every CbD as an opportunity for a genuine conversation with their supervisor gets far more from the process than the one who treats it as a test to be passed."

💡 On Competence vs Capability

"A competent doctor does the right thing when the situation is familiar. A capable doctor does the right thing — or something close to it — even when the situation is not. That flexibility is what general practice demands, every single day."

💡 On The Portfolio

"Think of your portfolio as a professional autobiography — not a tick-box form. By the end of ST3 it should tell the story of a doctor who started as a hospital-trained registrar and became a GP: curious, reflective, capable of uncertainty, and safe to practise independently."

💡 On NFD Grades

"I always tell trainees: an NFD followed by a PDP entry followed by a later entry showing improvement is the learning cycle made visible. That sequence is more impressive to an ARCP panel than a string of 'competent' grades that generated no visible learning."

⚠️ Warning

"The single biggest mistake I see trainees make is leaving their portfolio until the end of the post. ARCP panels can see when entries were made. A burst of 30 log entries in the final week tells its own story — and it is not a reassuring one."

The Dreyfus Model — Why Expertise Is More Than Experience

The Dreyfus brothers (1980) described five stages of skill development that map closely onto GP training progression. The key insight is that the way a person thinks changes fundamentally at each stage — not just what they know.

The five stages — thinking style, where they sit in GP training, and what the learner needs:

  • Novice — Thinking style: rule-following, rigid, context-free. In GP training: early ST1 — follows guidelines literally. Needs: clear rules; close supervision; permission to ask.
  • Advanced Beginner — Thinking style: recognises patterns; begins to see context. In GP training: mid ST1 / early ST2 — spots common presentations. Needs: increasing exposure; reflection on what differs.
  • Competent — Thinking style: prioritises deliberately; handles complexity. In GP training: late ST2 / early ST3 — manages uncertainty with effort. Needs: challenge; responsibility; complex cases.
  • Proficient — Thinking style: sees the whole picture intuitively. In GP training: mid–late ST3 — approaching CCT readiness. Needs: breadth; autonomy; reflection on edge cases.
  • Expert — Thinking style: intuitive, fluid, holistic; rarely needs rules. In GP training: years beyond CCT — the experienced GP. Needs: ongoing CPD; mentoring others; self-challenge.

💡 Why This Matters for Trainees

Understanding where you are on the Dreyfus scale helps you understand why you sometimes feel like you "should" know something but still feel uncertain. Moving from Competent to Proficient is not just about gaining more knowledge — it is about a fundamental shift in how you process clinical situations. That shift takes time and experience. It cannot be rushed, and it cannot be faked in a portfolio. But it can be accelerated by deliberate reflection — which is exactly what WPBA is designed to support.

Constructive Alignment in Action — A Worked Example

Here is how Biggs' constructive alignment works in real GP training — using one specific capability as an example.

Example: the Medical Complexity (MC) capability

  • Intended Learning Outcome: "Can manage patients with multimorbidity, uncertainty and risk — across varied settings"
  • Teaching & Learning Activity: clinic sessions with complex patients; tutorials on risk; debriefs on uncertainty
  • Assessment Method: CbD on a complex patient; log entries on managing uncertainty; MC self-rating at the ESR

All three point at the same outcome → this is constructive alignment working correctly.

🎯 The "Test for Alignment" Question

For any capability, ask yourself: "If I wanted to demonstrate this capability, what would I do in clinic? And is my assessment actually testing that thing?" If the answer to the second question is no — the system is misaligned. Good assessment is always aligned to what you actually want people to be able to do.


🧠 Memory Aids & Cheat Sheets

Quick frameworks to recall the key concepts under pressure — in tutorials, ARCP prep, and educational discussions.

The "KICKER" Framework — What Assessment Should Do

  • K — Know your ILOs first: always start with what you want people to be able to DO
  • I — Identify the right assessment type: match Miller's level — a knowledge test ≠ a capability assessment
  • C — Check for validity: is this measuring what you actually care about?
  • K — Keep reliability high: multiple assessors, multiple occasions, multiple tools
  • E — Ensure alignment: Biggs — outcomes, teaching, and assessment must all match
  • R — Remember capability > competence: GP training is about performance across varied settings

Miller's Pyramid — In One Line Per Level

  • KNOWS — "I can recall the information" → AKT, written exams, MCQs
  • KNOWS HOW — "I can explain how to apply it" → problem-solving, extended matching questions
  • SHOWS HOW — "I can demonstrate it in a controlled setting" → SCA (simulated consultation); OSCEs in other contexts
  • DOES — "I actually do it, consistently, with real patients" → CbD, COT/MiniCEX (also spans Shows How → Does), Learning Logs

🔑 The One Analogy That Makes Constructive Alignment Stick

Imagine training someone to be a great chef. Your intended outcome is: they can cook excellent food for real diners. Your teaching should involve real cooking practice in a real kitchen (not just lectures about food). Your assessment should involve them actually cooking — not a written test on the history of cuisine. That's constructive alignment. When all three line up, people actually learn what you want them to learn.


❓ Frequently Asked Questions

What is the difference between formative and summative assessment?
Formative assessment is assessment for learning — it gives feedback and supports development, without a pass/fail outcome. Most WPBA tools (CbDs, COTs, MiniCEX) are primarily formative. Summative assessment is assessment of learning — it makes a final judgement. The ARCP is summative. The award of your CCT is summative. Both are necessary — formative assessment without summative checkpoints gives no quality control; summative without formative gives no developmental support.

Why does GP training use capability-based rather than competency-based assessment?
Because GPs need to perform well across enormously varied and unpredictable situations — which is exactly what capability describes. Competency implies a fixed threshold reached in a familiar setting. But no two GP consulting sessions are the same. The RCGP made a deliberate shift to capability-based language to reflect the real demands of general practice — adaptability, professional judgement, and the ability to manage complexity and uncertainty.

Can an assessment be reliable but not valid?
Yes — and this is the more dangerous type of poor assessment. An MCQ test that consistently gives all trainees similar scores (high reliability) but only asks about conditions they will never see in GP practice (low validity) is reliably measuring the wrong thing. It gives false reassurance. A good assessment must be both reliable and valid — consistently measuring what actually matters.

How does the ARCP fit into the assessment system?
The ARCP is the formal summative checkpoint. It sits at the top of the process: an ARCP panel reviews your portfolio of WPBA evidence and makes a judgement about whether you have demonstrated sufficient capability progression to move to the next stage of training (or to qualify). It is explicitly capability-based — the panel assesses evidence across all 13 professional capabilities, not just exam results or procedural skills.

What standard must I reach in the 13 capabilities by the end of ST3?
By the end of ST3, you need to demonstrate "competent for licensing" in each of the 13 professional capabilities — as defined by the RCGP's progression point descriptors. This doesn't mean you're at the top of your game in everything — it means you have demonstrated a sufficient level of capability across all areas to practise safely and independently as a GP. There is also an "excellent" category above this. The RCGP's progression point descriptors (available on the RCGP website) set out exactly what is expected.

Are the 13 capabilities UK-specific?
No — the capabilities are about professional qualities that are universally valuable, such as communication, clinical reasoning, ethical decision-making, and teamworking. However, some capabilities have UK-specific dimensions: for example, understanding the NHS system, UK prescribing guidance, UK safeguarding frameworks, and UK consultation norms. These are not about being British — they are about being safe and effective within the system you're working in. The sooner you actively engage with these, the more natural they become.

Take-Home Points

🏁 Final Take-Home Points — The Complete Picture

  • 🟢Competence ≠ Capability. Competence is performing well in a known setting. Capability is performing well across varied, unfamiliar settings. GP training demands capability — because real patients are endlessly varied.
  • 🧠Miller's Pyramid. Know it, own it. AKT = bottom two levels. SCA = Shows How. WPBA = the apex. All three are needed. None is sufficient alone.
  • 🔗Constructive Alignment (Biggs). Outcomes → teaching → assessment must all point in the same direction. When they do, trainees develop what matters. When they don't, training produces tick-boxes, not capability.
  • 📊Reliability + Validity both matter. Reliable without valid = consistently measuring the wrong thing. Valid without reliable = accurate but inconsistent. Good assessment needs both.
  • 🌟Weaver's 6 Cs. Competence is just one of six. Culture, Comprehension, Communion, Creativity, and Coping matter equally — especially in general practice.
  • 😬NFD is not failure. It is a learning opportunity made visible. NFD → PDP → action → improvement is the learning cycle in action. That sequence impresses ARCP panels.
  • 📱Your FourteenFish portfolio is your story. Write it early, write it often, write it honestly. ARCP panels can see when entries were made. They are looking for a trajectory — not perfection.
  • 🎯Cover all 13 capabilities. Especially the uncomfortable ones: OML, EA, CHES, FtP. Check your coverage screen every six weeks. Don't drift toward the same two or three.
  • 💬Assessments are conversations. The 10-minute feedback discussion after a CbD is worth more than the grade. Trainees who engage genuinely grow faster.
  • 🌍For IMGs: the capabilities are universal qualities. Shared decision-making, patient autonomy, and reflective practice may feel unfamiliar if you trained elsewhere — but they are core to UK GP practice and fully learnable.

💬 A Final Thought

"The goal of training is not a doctor who can perform well when someone is watching. It is a doctor who performs just as well — or better — when nobody is watching, with a patient they have never seen before, in a situation they have never encountered. That is capability. That is what your training is for."
