Workplace-Based Assessment
— WPBA Explained
"Because watching you think is sometimes more revealing than watching you tick boxes."
📥 Downloads
Handouts, summaries, and teaching extras — ready when you are. Download, print, or save for last-minute rescue revision.
🌐 Web Resources
A hand-picked mix of official guidance and real-world GP training resources. Because sometimes the best pearls are not hiding in the official documents.
🏛 Official RCGP Guidance
🏥 FourteenFish & Portfolio Tools
📚 Bradford VTS — Individual Tool Pages
🏫 GP Training Scheme Resources
If You Only Read One Thing
Everything you need to know about WPBA in 60 seconds.
⚡ WPBA in a nutshell
- WPBA is one of three parts of the MRCGP — alongside the AKT (knowledge test) and SCA (simulated consultation)
- It tests your real-world professional practice — the things AKT and SCA can't measure
- Evidence is collected across all three years of training and recorded in your FourteenFish ePortfolio
- You are assessed against 13 professional capabilities from the RCGP curriculum
- There are multiple assessment tools — CbDs, COTs, MiniCEX, CEPS, MSF, PSQ, QIP/QIA, Learning Logs, and more
- There is no pass/fail in individual assessments — they are developmental and formative
- Your portfolio is reviewed at six-monthly ESRs and at least annually at ARCP
- You are responsible for organising your assessments — not your trainer
- NFD ("Needs Further Development") grades in ST1 are completely normal and expected
- Minimum numbers are minimums — do more where you can
This page gives you the big picture overview of WPBA. For detailed guidance on each individual tool (CbD, COT, MSF etc.), Bradford VTS has dedicated pages for each one — links are in the Web Resources section below.
The Three Pillars of MRCGP
MRCGP is a single integrated qualification — but it has three distinct components, each testing different things.
Applied Knowledge Test (AKT)
A computer-based MCQ examination testing your knowledge of clinical medicine, evidence-based practice, and NHS/health policy. Taken at a Pearson VUE test centre.
Simulated Consultation Assessment (SCA)
A 12-case simulated consultation examination testing your ability to integrate clinical, professional, and communication skills. Taken remotely via video.
Workplace-Based Assessment (WPBA)
Ongoing assessment of your real-world professional practice across all three years of training. Recorded in your FourteenFish portfolio and reviewed at ARCP.
The AKT tests what you know. The SCA tests how you perform in a simulated setting. WPBA tests what you actually do — day in, day out, across different settings, under real clinical pressure, with real patients. These three together give a far richer picture of a doctor than any single exam ever could.
📚 AKT — what ST1 needs to know
The AKT is a 3-hour computer-based MCQ exam of applied GP knowledge, covering three domains: clinical knowledge (the majority of marks), evidence-based practice and statistics, and NHS organisation, legislation and prescribing. Most trainees sit it in ST2 or early ST3.
In ST1, your job is:
- Get comfortable using NICE CKS and the BNF for real patients in clinic — the AKT expects guideline-correct, primary-care-appropriate answers, not what your current hospital firm does
- Start noticing GP patterns: first-line versus second-line treatments, red-flag features, referral thresholds
- Consider doing a small number of MCQ-style questions per week later in ST1 (e.g. Passmed, Pastest) to get used to the question style and spot knowledge gaps — not full-on revision yet
- After each clinic, pick one or two presentations and quickly review guideline-based management — this "micro-learning" builds AKT knowledge without feeling like exam prep
Ask yourself after every consultation: "What is the safest, guideline-based GP answer here?" — that's the AKT mindset, built from day one.
👉 Full AKT guidance is on the Bradford VTS AKT page
🎭 SCA — what ST1 needs to know
The SCA (Simulated Consultation Assessment) is the consulting exam — 12 simulated GP cases testing integrated clinical reasoning, data gathering, management, and professional communication. Most trainees sit it in ST3.
In ST1, your job is:
- Build a consultation structure from day one: open → explore ICE → focused history/exam → signposted explanation → shared plan → safety-net
- Practise both face-to-face and remote (telephone/video) consultations — remote feels different and you will be assessed on simulated remote cases too
- Use COTs and feedback to identify early habits to change — common ST1 mistakes include jumping to solutions before understanding the patient's concerns, and giving vague safety-netting
- "Make your reasoning visible" — briefly explain what you are thinking and why you are or are not worried, so the patient can follow your logic (this is exactly what examiners reward in SCA)
Every real consultation in ST1 is SCA preparation if you consult well. Safe, patient-centred, and able to manage risk appropriately — that is exactly what SCA samples.
👉 Full SCA guidance is on the Bradford VTS SCA page
All three MRCGP components are mapped to the same GP curriculum. This means your day-to-day clinical work, your Learning Log reflections, and the feedback from your WPBAs all directly build both AKT knowledge and SCA consulting ability at the same time. Almost everything you do in training can count twice — if you capture it properly. See MRCGP as part of your daily work, not a bolt-on extra.
🌱 Starting ST1 — Mindset, Expectations & Practical Priorities
What to expect when you begin, how to think about your role, and the practical habits that experienced trainees and educators say make the biggest difference.
✅ The right mindset for ST1
- You are not expected to perform like a CCT'd GP. Safe, systematic, and willing to ask for help is the target.
- A motivated learner who asks questions and reflects is valued far more highly than a trainee who pretends to know everything.
- Longer appointments (20–30 minutes) are completely normal early on — use the extra time to look things up and debrief, not to rush.
- Normalise saying "I'm going to check that guideline before I decide" — it's a sign of safe practice, not weakness.
- Debrief tricky consultations the same day whenever possible, while detail is fresh. Brief debriefs build more learning than long end-of-week reviews.
❌ Common ST1 mindset traps to avoid
- Trying to replicate the speed and confidence of your trainer from week one — that takes years, not weeks
- Seeing "I don't know" as failure rather than the start of learning
- Thinking portfolio work is separate from clinical work — they are the same thing if you approach them correctly
- Waiting to be told what to do — ownership of your training starts now
- Benchmarking yourself against other trainees — development trajectories vary enormously and comparison is rarely helpful
| Priority | What to do | Why it matters |
|---|---|---|
| 📁 Portfolio from week 1 | Start Learning Log entries immediately — even two short entries per week build an unforced, credible record | ARCP panels can see entry dates; a sudden cluster before review looks exactly like what it is |
| 🔴 Red flags early | Learn urgent and 2-week-wait referral criteria for the most common cancer pathways and acute presentations | Reduces anxiety significantly; keeps you safe while you're still slow with everything else |
| 📖 NICE/BNF habit | After every uncertainty in clinic, look it up — NICE CKS, BNF, or Bradford VTS pages | Builds AKT knowledge base organically and normalises evidence-based practice |
| ⚡ Micro-learning | Between patients, spend 2–3 minutes reviewing one clinical question that came up in the last consultation | Over months, builds substantial, case-linked knowledge without formal study time |
| 👥 MSF planning | Identify your 10+ raters (minimum 5 clinical) within the first few weeks of each post | Leaving MSF to the final weeks of a post guarantees weaker, less considered responses |
| 🤝 Support network | Join or form a local or online study group with fellow trainees | Accountability, shared resources, and a safe space to debrief difficult cases |
| 😌 Wellbeing | Comfortable shoes, regular breaks, food, hydration, and clear boundaries about extra commitments | GP training is a marathon, not a sprint; avoiding early burnout is a clinical safety issue as much as a personal one |
⚡ The micro-learning technique — how it actually works
Micro-learning means spending a small, focused amount of time on a single clinical question immediately after it arises in practice — rather than waiting for a study session that may never happen. The principle is simple:
During surgery, a patient has AF → you're unsure about rate vs rhythm control in primary care → after that patient, spend 3 minutes on NICE CKS for AF → note the key threshold → carry that forward.
Over 3 years of training, this approach builds an enormous, well-retained, clinically anchored knowledge base. It outperforms reading textbooks in isolation because the learning is connected to a real patient — and connected learning sticks far better than abstract reading.
It also doubles as AKT preparation: every clinical question you look up in NICE/BNF is the exact type of knowledge the AKT tests. And it doubles as SCA preparation: the habit of explaining your reasoning to a patient becomes automatic when it's something you've just confirmed with evidence.
One question, two to three minutes, every gap between patients. That's it. Don't try to learn a whole topic — just answer the one question that actually came up. Stack this habit from week one and it compounds dramatically over three years.
😌 Wellbeing — why it's part of your training, not separate from it
Wellbeing isn't a soft add-on. It appears in the RCGP curriculum under Fitness to Practise, and experienced GPs consistently say that sustainable working habits are one of the most important things to establish in ST1 — when patterns are still forming.
Specific and repeated practical advice from experienced GPs and trainers:
- Footwear — genuinely important. Hospital and GP environments involve a lot more walking and standing than desk-based jobs. Investing in supportive shoes from day one prevents a surprising amount of fatigue and discomfort.
- Regular breaks — a short break between surgeries is not laziness; it's how you maintain cognitive quality through a full clinic day. Errors increase with fatigue.
- Food and hydration — skipping lunch routinely is common among juniors and routinely counterproductive. Keep a water bottle at your desk and something to eat at hand.
- Boundaries around extra commitments — it is normal to feel pressure to say yes to everything in a new job. Set reasonable limits early. You are in a training programme, not a service post, and your development matters.
- Using your support network — your TPD, educational supervisor, and fellow trainees are all there to help. Reaching out when you're struggling is professionalism, not weakness.
What is WPBA, exactly?
A practical guide to what it is, why it works the way it does, and what it's trying to achieve.
✅ What WPBA IS
- Assessment of real clinical practice across your training
- A formative, developmental process — designed to drive learning
- Evidence collected from multiple sources using different tools
- A structured way to identify strengths and developmental needs
- Ongoing across all three years of specialty training
- Reviewed holistically at each ARCP
- A record of your professional journey to independent practice
❌ What WPBA is NOT
- A series of pass/fail hurdles to jump through
- Something your trainer should organise for you
- Something to rush at the end of a post
- A box-ticking exercise (though boxes do exist)
- About scoring "excellent" in everything from day one
- Separate from your learning — it is your learning
- Only for GP posts — hospital assessments count too
🔍 The concept of triangulation
You might wonder why there are so many different assessment types. Part of the answer is coverage — different tools test different things. But there's another reason that's equally important: triangulation.
Triangulation means collecting evidence about your performance from multiple different sources and perspectives — not just one. A single assessment can give a skewed picture. But when your trainer (CbD), your patients (PSQ), your colleagues (MSF), and a direct observation (COT) all tell a consistent story, that creates a reliable, multi-dimensional picture of who you are as a doctor. Think of it like triangulating a location using three GPS signals — each one adds accuracy.
🧰 The WPBA Assessment Tools
An overview of all the tools you'll encounter across your training. Each has a dedicated Bradford VTS page — this is your map of the landscape.
The RCGP now uses the term CATs (Care Assessment Tools) as an umbrella term for several assessments including CbD and COT. You may see both terms used. Always check the RCGP WPBA pages for the most current terminology and minimum numbers.
| Assessment Tool | What it tests | Setting | Bradford VTS Page |
|---|---|---|---|
| CbD — Case-based Discussion CAT | Clinical reasoning & decision-making discussed with a supervisor using a real case you've managed | Any post | CBD page → |
| COT — Consultation Observation Tool CAT | Direct observation of your face-to-face consultation skills in a real GP surgery | GP posts | COT page → |
| Audio COT CAT | Observation of telephone/remote consultation skills using a recorded audio consultation | GP posts | Audio COT → |
| Random Case Review CAT | Structured review of randomly selected cases — tests breadth of clinical practice & documentation | GP posts | RCA page → |
| Prescribing Assessment CAT | Review of your prescribing — safety, appropriateness, and reasoning | GP posts | Prescribing → |
| Referrals Review CAT | Review of a sample of your referral letters — quality, appropriateness, clinical reasoning | GP posts | Referrals → |
| Other CAT types | Lab/radiology results review; document management; duty/triage sessions; digital consultations review; leadership CAT | GP posts | RCGP guidance → |
| MiniCEX — Mini Clinical Evaluation Exercise | Direct observation of a focused clinical encounter (history, examination, communication) — primarily for hospital posts | Hospital posts | CEX page → |
| CEPS — Clinical Examination & Procedural Skills | Observed performance in specific clinical examinations and procedures (e.g. cervical smear, joint injection, fundoscopy) | Any post | CEPS page → |
| MSF — Multi-Source Feedback | Anonymous feedback from colleagues (nurses, receptionists, other doctors) about your professional behaviour & teamwork | Any post | MSF page → |
| PSQ — Patient Satisfaction Questionnaire | Anonymous feedback from patients about the quality of your consultations and interpersonal skills | GP posts only | PSQ page → |
| QIP — Quality Improvement Project | A structured audit cycle in primary care — identify a problem, implement change, re-audit | Primary care (ST1 or ST2) | QI page → |
| QIA — Quality Improvement Activity | Lighter-touch quality improvement engagement (minimum 2 across training) | Any post | RCGP QIA → |
| Learning Log (CCRs — Clinical Case Reviews) | Your personal reflective record — the most common entry type; reflect on cases against the 13 capabilities across different patient groups | All posts | RCGP Portfolio → |
| LEA / SEA — Learning Event Analysis / Significant Event Analysis | Structured reflection on a significant or learning event in practice — important for patient safety culture | Any post | SEA page → |
| PDP — Personal Development Plan | Your self-identified learning plan — demonstrates self-awareness, goal-setting, and reflection on progress | All posts | RCGP PDP → |
| Leadership Activity & Leadership MSF | Evidence of leadership in practice (teaching, leading a project, QI, representing a group) plus 360° feedback specific to leadership | Any post | NOE / Leadership → |
| CSR — Clinical Supervisor's Report | Your clinical supervisor's structured assessment at the end of each post — essential at every post sign-off | End of each post | CSR page → |
| ESR — Educational Supervisor's Review | Six-monthly structured review of your overall portfolio evidence by your educational supervisor — informs ARCP | Every 6 months | RCGP WPBA → |
You won't encounter all of these at once. Different tools appear at different stages, and some are specific to certain posts (e.g. PSQ is GP posts only; MiniCEX is primarily for hospital posts). Your trainer will guide you — but you are responsible for making it happen.
🎯 The 13 Professional Capabilities
Everything in your portfolio — every CbD, every Learning Log, every assessment — maps to these 13 capabilities from the RCGP curriculum. By the end of ST3, you need evidence across all of them.
The 13 capabilities can feel like a wall of labels. This grouping won't appear in official RCGP documents, but it's a genuinely useful memory aid — especially early in training.
Safe Doctor
- Fitness to practise
- Maintaining an ethical approach

Good Consulter
- Communication & consulting
- Data gathering
- Clinical examination & procedural skills
- Decision-making & diagnosis
- Clinical management

Whole-Person GP
- Medical complexity
- Holistic practice, health promotion & safeguarding
- Community health & environmental sustainability

Professional Team GP
- Working with colleagues & teams
- Performance, learning & teaching
- Organisation, management & leadership
When you write a Learning Log or choose a case for a CbD, map it to one or more capabilities deliberately. Aim for breadth across all 13 over time — especially the under-evidenced ones like leadership, teaching, ethics, and community orientation. Think of colouring in a map: your goal by the end of training is to have covered the whole picture. For a plain-English breakdown of what each capability actually means in practice, see the Capability Cheat Sheet below.
📋 What do "progression point descriptors" mean?
Each of the 13 capabilities has progression point descriptors in the RCGP curriculum. These are detailed descriptions of what a doctor at CCT-level looks like in that capability, including:
- Indicators of good practice — behaviours that demonstrate the capability
- Indicators of potential underperformance (IPUs) — early warning signs that something needs addressing
- Indicators of excellence — behaviours that go beyond the expected standard
Your educational supervisor will use these descriptors when reviewing your portfolio and providing feedback at each ESR. Knowing them yourself helps you understand what "good" looks like — and what gaps in your portfolio you need to fill.
📖 Capability Cheat Sheet — What They Actually Mean
Each capability translated into plain English, with examples of good evidence, common trainee mistakes, and practical tips. Especially useful if you're an IMG encountering these terms for the first time.
Do not try to "collect all 13 capabilities" like Pokémon. Your real job is to show, over time, that you are becoming a safe, reflective, organised, patient-centred GP. These descriptions are here to help you understand what each one means — not to be ticked off one by one.
1. Fitness to Practise
What it really means: Are you safe, professional, and honest — and do you recognise when your own health, conduct, or a colleague's might put patients at risk?
What supervisors are looking for: Self-awareness about your own wellbeing, honesty about limitations, and willingness to raise concerns about others.
Good evidence looks like:
- A reflection on recognising you were overwhelmed and sought support appropriately
- A log entry where you identified a colleague whose behaviour was concerning and considered your professional duty
- Any reflection showing honest self-appraisal about your own competence in a challenging situation
Common trainee mistake: Only linking this capability to big formal fitness-to-practise events. It also applies to smaller everyday situations — recognising fatigue, seeking help appropriately, or flagging a safety concern.
Include at least one log entry per rotation showing genuine self-awareness about your own health and performance. It doesn't need to be dramatic — recognising that you were tired and made a decision differently because of that is meaningful evidence.
2. Maintaining an Ethical Approach
What it really means: Can you navigate ethically complex situations — consent, confidentiality, capacity, competing interests — thoughtfully and in line with GMC standards?
What supervisors are looking for: Evidence that you think beyond the clinical problem to the ethical dimensions — patient autonomy, best interests, disclosure, duty of candour.
Good evidence looks like:
- A capacity assessment you conducted or contributed to
- A consent discussion that was genuinely complex (patient refusing recommended treatment, a Gillick competence situation)
- A confidentiality dilemma — and how you thought it through
- A case involving competing interests (e.g. patient welfare vs. their expressed wishes)
Common trainee mistake: Avoiding ethical cases in the portfolio because they feel risky to discuss. Honestly working through a complex ethical situation — even one where you were uncertain — demonstrates far more capability than a smooth straightforward case.
3. Communication and Consultation Skills
What it really means: Can you communicate effectively with patients, adapt your style, explore ICE (ideas, concerns, expectations), and build a therapeutic relationship?
What supervisors are looking for: Active listening, empathy, appropriate language, shared decision-making, and consultation structure.
Good evidence looks like:
- A COT with debrief notes showing how you adapted to a particular patient's needs
- A log about a consultation where communication was the main challenge (language barrier, health literacy, difficult news, angry patient)
- Feedback from MSF or PSQ showing communication strengths or areas for development
Common trainee mistake: Always defaulting to this capability — it becomes the fallback for almost every entry, leaving others empty. It's important, but ration it deliberately.
4. Data Gathering and Interpretation
What it really means: Can you take a focused, relevant history, use appropriate examination, and interpret investigations intelligently in context?
What supervisors are looking for: Knowing what to ask and what to leave out. Recognising what test results actually mean for this patient in this situation.
Good evidence looks like:
- A case where you targeted your history to the most likely differentials and avoided unnecessary investigations
- A reflection on interpreting an unexpected result and how you managed the uncertainty
- A case where you recognised the limits of investigations and managed without rushing to test everything
Common trainee mistake: Describing investigations ordered without explaining the clinical reasoning. The why matters far more than the what.
5. Clinical Examination and Procedural Skills (CEPS)
What it really means: Can you perform the clinical examinations and procedures required for independent GP practice safely and competently?
What supervisors are looking for: Observed competence in mandatory examinations (including all five mandatory intimate examinations by end of ST3) and a range of GP-relevant skills.
Good evidence looks like:
- Formally assessed CEPS entries covering the seven system-focused GP categories
- Reflections on performing procedures — especially ones that didn't go entirely as expected
Common trainee mistake: Leaving CEPS to the final months of training. Some procedures — especially intimate examinations — require specific clinical opportunities that may not arise on demand. Plan ahead from ST1.
By the end of ST3, you must have evidence for all five mandatory intimate examinations and a range of GP-relevant CEPS. This is a CCT requirement. Start ticking these off early — don't assume opportunities will appear naturally in ST3.
6. Making Diagnoses and Decisions
What it really means: Can you make sensible clinical decisions using evidence, probability, and judgement — especially when you're not certain?
What supervisors are looking for: Appropriate, reasoned decision-making — not just the right answer, but the right reasoning process.
Good evidence looks like:
- A CbD where you explain clearly why you chose one management path over another
- A log entry showing how you handled diagnostic uncertainty — without rushing to a premature label
- A case where you safety-netted because the diagnosis was not yet firm
Common trainee mistake: Writing "I suspected viral illness" with no explanation of why red flags were not concerning. The reasoning — not just the conclusion — is what demonstrates this capability.
In GP, good decision-making often means tolerating uncertainty safely rather than rushing to certainty. A reflection that honestly explores what you didn't know yet — and how you kept the patient safe despite that — is strong evidence for this capability.
7. Clinical Management
What it really means: Can you manage patients collaboratively, safely, and proportionately — using self-care, services, and follow-up appropriately?
What supervisors are looking for: Supported self-care, appropriate use of other professionals, urgent care when needed, and management that respects patient autonomy.
Good evidence looks like:
- A management plan with clear rationale — not just "referred to X"
- A case where you chose watchful waiting plus safety-netting rather than over-referring
- A reflection on balancing treatment burden, risk, and patient preference
Common trainee mistake: Either over-referring everything or holding too much in primary care without a safe back-up plan. Both extremes reflect poor management judgement.
Use each case to demonstrate several capabilities simultaneously. A well-written management case can evidence clinical management, decision-making, communication, and ethics all at once — far more efficient than thin evidence scattered across many separate cases.
8. Managing Medical Complexity
What it really means: Can you care for people with multiple problems, long-term conditions, competing priorities, and genuine uncertainty?
What supervisors are looking for: Personalised care that balances multiple conditions together, manages uncertainty well, and coordinates care across different systems.
Good evidence looks like:
- A frailty case or polypharmacy review
- A multimorbidity consultation where one guideline had to be weighed against another
- A case where social context changed what was clinically realistic
Common trainee mistake: Writing complex cases as though they were single-disease textbook problems — ignoring the competing priorities, social context, or the patient's own goals.
Complexity is not just "very sick patient." It includes multimorbidity, uncertainty, safeguarding, language barriers, social vulnerability, poor engagement, or conflict between a patient's wishes and ideal care. These everyday complexities are all valid evidence for this capability.
9. Holistic Practice, Health Promotion, and Safeguarding
What it really means: Do you think like a real GP rather than a narrow disease technician? Do you consider the whole person and protect vulnerable people?
What supervisors are looking for: A generalist mindset, personalised support, and meaningful engagement with safeguarding — not just certificates.
Good evidence looks like:
- A case where mental health, family stress, finances, beliefs, literacy, or work situation changed your management
- A safeguarding reflection linked to an actual patient encounter — not just a certificate upload
- Preventive advice in context, rather than tokenistic "lifestyle counselled"
Common trainee mistake: Adding one sentence about smoking or exercise and calling it holistic care. Also: having safeguarding certificates in the portfolio but no reflective application to cases. ARCP panels look for evidence of applied knowledge, not just completion certificates.
10. Community Health and Environmental Sustainability
What it really means: Do you understand the population and system you work in — and can you think beyond the individual consultation?
What supervisors are looking for: Understanding local health service structures, your role within them, and awareness of population health, inequalities, and sustainability.
Good evidence looks like:
- A reflection on access issues, deprivation, cultural barriers, or local service gaps
- A case where knowledge of the local system improved care
- A project or QIA with population relevance
- A reflection on prescribing stewardship or the environmental impact of clinical choices
Common trainee mistake: Ignoring this capability because it feels abstract. Think of it as the "why general practice matters in the real world" capability — it's about contextual awareness, not just individual clinical encounters.
11. Working with Colleagues and in Teams
What it really means: Can you function effectively within a multi-professional team, coordinate care, and contribute positively to team dynamics?
What supervisors are looking for: Being a reliable, communicative, respectful team member — and progressively someone who helps coordinate care rather than just receiving it.
Good evidence looks like:
- A case involving district nurses, pharmacists, social prescribers, safeguarding leads, or secondary care liaison
- A reflection on handover quality — what made it safe or less safe
- MSF feedback from non-medical colleagues (nursing, reception, admin)
Common trainee mistake: Only writing about what they did personally — forgetting to mention who else was involved. Team-working evidence requires the team to actually appear in the entry.
Asking for help early, learning local systems quickly, and communicating clearly across professional boundaries are the behaviours GP educators most consistently highlight as markers of a safe, effective team player. These same behaviours appear directly in MSF feedback and CbD discussions.
12. Performance, Learning, and Teaching
What it really means: Do you learn from your practice, improve the quality of care, and contribute to others' development?
What supervisors are looking for: Continuous improvement, evidence-informed quality improvement, and willingness to support colleagues' education.
Good evidence looks like:
- A reflection showing a genuine change in future behaviour — not just acknowledging a gap
- Teaching delivered to students, nursing staff, or colleagues — even informal bedside teaching counts
- A QIA or QIP that is simple but meaningful
Common trainee mistake: Reflections that are descriptive but not developmental. The difference is significant:
❌ Weak:
"Interesting case. I learned the importance of communication."
✅ Strong:
"I realised I delayed discussing red flags because I was too focused on symptom relief. Next time I will surface red flags earlier in the consultation and explain to the patient why I'm asking about them."
13. Organisation, Management, and Leadership
What it really means: Can you work effectively within NHS systems, use leadership appropriately, and contribute to the running of a safe and effective practice?
What supervisors are looking for: Medical generalism, practical leadership, sensible use of data and systems, and awareness of the organisational aspects of general practice.
Good evidence looks like:
- A QI or service-improvement piece
- A reflection on triage, workflow, prescribing systems, or improving admin safety
- Teaching or coordinating a small project within a team
- Using data to identify and address a quality gap in practice
Common trainee mistake: Assuming leadership only means "big management project." Small leadership counts just as much.
Organising a teaching session, improving a handover process, spotting a safety problem and escalating it, or running a small audit — these are all genuine leadership evidence. Look for opportunities that arise naturally in your day-to-day work and reflect on them deliberately.
When you finish a case, ask yourself: What happened? What was difficult or interesting? Which 2–3 capabilities does this genuinely show? What did I learn? What will I do differently next time? That simple sequence produces better reflective entries than any template.
📁 The FourteenFish ePortfolio
Your FourteenFish ePortfolio is where all of your WPBA evidence lives. Think of it as your professional development diary — and your proof of progress.
📁 What goes in your portfolio?
- All completed WPBAs (CbDs, COTs, MiniCEX, CEPS, MSF, PSQ etc.)
- Your Learning Log entries (Clinical Case Reviews)
- Your Personal Development Plan (PDP)
- Learning Event Analyses (LEAs) and SEAs
- Quality Improvement Project (QIP) and QIAs
- Your Educational Supervisor Reviews (ESRs)
- Clinical Supervisor Reports (CSRs) from each post
- Mandatory requirements (safeguarding, BLS, Out-of-Hours)
- Leadership activities and any naturally occurring evidence
🔑 Key portfolio facts
- Accessible via fourteenfish.com
- Supervisors and assessors need a (free) FourteenFish account to complete your assessments
- Evidence is reviewed at 6-monthly ESRs and annually at ARCP
- Your TPD and ARCP panel can view your portfolio
- It is your portfolio — you own it and manage it
- RCGP also maintains its own Trainee Portfolio section on its website with guidance documents
- Download the RCGP's Requirements and Mandatory Evidence Summary Sheet to track your progress
Some hospital consultants — especially in ST1 and ST2 — will be less familiar with the FourteenFish system than your GP trainer. They need a free account to complete your WPBAs. Proactively send them the link, explain what you need, and brief them on the grading system before the assessment. Don't assume they'll know what to do. Helping them helps you.
📋 The ARCP — Your Annual Review
ARCP stands for Annual Review of Competence Progression. It happens at least once a year and is essentially the formal checkpoint on your progress towards CCT.
The ARCP cycle in outline:
1. Evidence collection (ongoing): WPBAs, Learning Logs, QIP/QIA, PDP, mandatory requirements — all uploaded to FourteenFish as you go.
2. Educational Supervisor Review (ESR): your educational supervisor reviews your portfolio evidence and rates your progress across the 13 capabilities. Both you and your supervisor complete ratings before meeting to discuss. A Personal Development Plan (PDP) is updated to focus learning for the next period.
3. ARCP panel: a panel (including your TPD and others) reviews all your portfolio evidence, ESRs, CSRs, and mandatory requirements. They make a judgement on whether you're progressing satisfactorily.
4. Outcome: usually Outcome 1 (satisfactory progression). Other outcomes indicate areas needing attention, additional training time, or in rare cases, a concern requiring formal action.
5. CCT: when all three years are complete and all MRCGP components are passed, you receive your Certificate of Completion of Training (CCT) and can apply to the Performers List as an independent GP.
ARCP panels review your overall portfolio picture holistically. They're looking for evidence of progress across all 13 capabilities, engagement with the process, and professional development. An incomplete or thin portfolio is the most common reason for a less-than-ideal ARCP outcome.
💡 Key Concepts Every Trainee Must Understand
These concepts underpin the entire WPBA system. Understanding them changes how you approach your training.
📊 The grading system — what do the grades actually mean?
Individual WPBA assessments use a developmental grading framework. The exact descriptors vary by tool, but broadly reflect a scale from below expectations through to above expectations for your stage of training.
The key principle: you are not being compared to a fully qualified independent GP throughout training. You are being assessed against what's expected at your current stage. An ST1 performing at ST1 level and progressing is exactly what the system wants to see.
In your first posts you will likely receive some developmental grades. Please do not be alarmed. If your assessments in ST1 were all at the top of the scale, your educational supervisor would actually be concerned — because it would suggest either unreliable grading, or that you have no learning needs, which would make the entire training programme pointless. NFD grades are a gift: they tell you exactly where to focus your energy.
🎯 What standard am I being judged against?
Throughout all three years of training, the underlying benchmark is the standard expected of a GP competent for independent practice at CCT level. However, this does not mean you are expected to meet that standard in ST1.
The progression point descriptors in the RCGP curriculum provide guidance on what behaviours are expected at each stage. In ST1 and ST2, your portfolio will naturally show gaps and developmental needs — that is the design of the system. The assessments in these years help identify what needs to be worked on, so that by ST3 you are building towards full competence across all 13 capabilities.
Think of it like a flight path: you're not expected to be at cruising altitude when you've just taken off. But you should be climbing steadily.
🔢 Minimum numbers — the rule about minimums
The RCGP publishes minimum numbers for each type of WPBA in each training year. These are published on the RCGP website and in the Mandatory Evidence Summary Sheet (downloadable from the RCGP WPBA page).
Critically: minimum means minimum. These are the floor, not the target. If you just hit the minimum numbers, you're unlikely to have generated a rich enough body of evidence to clearly demonstrate your capabilities across the breadth of the curriculum. Your educational supervisor may well ask for more, and at ARCP, a thin portfolio of minimum-number assessments may raise questions.
If you are training less than full time, the same number of assessments is required per training year, but the training year itself takes longer in calendar time. For example, a trainee working at 50% will take two calendar years to complete one training year. Check the RCGP guidance for your specific situation.
📋 How does evidence get assessed holistically?
Because WPBA uses triangulation, poor performance in one element doesn't automatically mean failure. The assessors look at the overall picture. If one assessment suggests a weakness in a capability, they'll look across the rest of the portfolio to see if that weakness is confirmed or contradicted elsewhere.
This is why it matters to use different cases, different assessors, and different tools to demonstrate each capability. A portfolio that shows the same supervisor assessing the same type of case, over and over, provides a weak picture — even if all the grades are good.
🙋 Your Responsibility as a Trainee
This is one of the most important things to understand about GP training — and one of the biggest mindset shifts for many new trainees.
The single most important rule about WPBA
YOU are responsible for organising your assessments. Not your trainer. Not your clinical supervisor. You.
Your trainer won't tap you on the shoulder and say "shall we do a CbD today?" If you reach the end of a post without enough assessments, that responsibility sits entirely with you. This design is intentional — as a future independent GP, you'll need to manage your own appraisal, CPD, and professional development without anyone reminding you. WPBA starts building that habit from day one.
Download the RCGP Mandatory Evidence Summary Sheet and keep it somewhere visible. Know what you need and when you need it by.
Approach your trainer or clinical supervisor with a specific request and a suggested date: "Could we do a CbD on that case I saw with Mrs Khan? Would next Tuesday work?" Give adequate notice — not the morning of.
Don't leave everything to the last few weeks. Supervisors dislike being asked to rush through multiple assessments in the final fortnight, and it shows poor self-management at ARCP.
Don't just pick the same type of case every time. Deliberately choose cases that will help demonstrate different capabilities — ethical challenges, clinical complexity, team communication, uncertainty management.
A Learning Log entry that says "this went well" tells an assessor almost nothing. Genuine reflection on what you did, why, what you'd do differently, and what you've learned is what builds a rich portfolio.
Hospital consultants haven't been through GP training — they've been through specialty training. Many will be less familiar with the WPBA system and what grades mean. Brief them, send them the FourteenFish link, and help them help you. A whole row of "excellent" grades from an ST1 hospital post isn't reassuring — it's a reliability flag.
⚠️ Common Pitfalls & Trainee Traps
These are the mistakes that trainees make repeatedly. Learning them second-hand is much cheaper than discovering them yourself.
Leaving all your assessments to the last 2–3 weeks of a post. This is the single most common WPBA mistake. Supervisors hate it. It looks bad at ARCP. And the feedback quality suffers when everyone's rushing. Spread your assessments across the whole post from week one.
Picking only straightforward, uncomplicated cases because they feel "safe." The whole point of WPBA is to capture the breadth of your practice. Choose cases that reflect ethical complexity, clinical uncertainty, multimorbidity, difficult communication, or safeguarding — because that's what being a GP looks like.
"I saw a patient with chest pain. I took a history and examined them. I referred them to cardiology. Outcome: good." That's a clinical note, not a reflection. Good Learning Log entries explore your thinking, your feelings, what challenged you, what you'd do differently, and what it means for your development.
The trainees who do best in WPBA are the ones who genuinely engage with the feedback from every assessment and update their PDP accordingly. Using WPBA purely to tick numbers will give you a thin, unconvincing portfolio that ARCP panels can spot immediately.
Persuading a consultant to change an NFD to "competent" because it feels embarrassing. This actually makes your portfolio worse — not better. A portfolio that shows only "competent" or "excellent" grades from the start of ST1 raises serious questions about the reliability of the assessments, not the quality of your performance.
Reaching the end of training with no evidence for certain capabilities because you never deliberately sought to cover them. Keep a running check of your capability coverage across your portfolio — especially the ones that feel uncomfortable, like leadership, teaching, or ethical complexity.
💎 Insider Pearls
Things trainees consistently wish they had known earlier. Distilled from trainee experience across the years.
💎 What trainees say about WPBA
- Starting your portfolio early — even with just one or two entries a week — creates a rich, natural-looking evidence base. Starting late creates a portfolio that looks artificially assembled. Panels can tell the difference.
- The best CbD cases are often the ones that didn't go perfectly — where you were uncertain, or where you got the diagnosis wrong, or where you made a management decision you later questioned. Honest reflection on a difficult case is worth ten reflections on a straightforward one.
- Supervisors give better feedback in unhurried settings. Asking for a CbD with a cup of tea in a quiet tutorial slot gets you far better quality feedback than a rushed five minutes at the end of surgery.
- The MSF is taken much more seriously at ARCP than many trainees expect. Choose your raters thoughtfully — and include a spread of roles: nurses, receptionists, allied health professionals, not just doctors.
- Writing a good PDP isn't about listing everything you want to learn. It's about identifying a small number of specific, prioritised, realistic learning needs — and showing evidence that you've actually addressed them at each review.
- Your FourteenFish Learning Log is visible to your supervisor, your TPD, and your ARCP panel. Write it as if an intelligent, interested educator is going to read every word. Because they will.
- Don't confuse "naturally occurring evidence" with "evidence that appears by magic." You need to actively upload it and link it to capabilities. It doesn't sort itself.
WPBA is not really about the assessments. It's about the conversations those assessments generate. The most valuable thing is the feedback discussion that follows a CbD or COT — not the grade itself. Trainees who use every assessment as an opportunity for a genuine learning conversation with their supervisor gain far more from the process than those who treat it as a series of hurdles to clear.
🧠 How supervisors actually think when they read your portfolio
Supervisors are not counting entries or tallying grades. At every review they are asking four fundamental questions:
1. Safety: are they recognising risk, seeking help appropriately, and not working outside their competence unsupported?
2. Progress: is there a visible trajectory? Are early developmental needs being addressed? Is the portfolio telling a story of growth?
3. Insight: do they acknowledge difficulties? Do they say "I struggled with X" when they did? Or does everything look suspiciously smooth?
4. Readiness: is there a credible trajectory towards the capability standard? Are gaps being actively addressed — not just noted?
One weak CbD is not a problem. A pattern of weak reasoning across multiple assessments that is never acknowledged or addressed — that is. One honest reflection about struggling is not a problem. Repeated defensive entries that never admit difficulty — that is. Honesty, over time, scores higher than apparent perfection.
🔄 The WPBA Mental Loop — the simplest model there is
The loop is simple: do the work, reflect on what happened, record the learning, and improve your practice as a result. WPBA is only meaningful when all four steps are present. Missing the "IMPROVE" step — acknowledging a learning point but not changing anything — is the most common reason portfolios feel thin despite having lots of entries.
💬 What the Trainee Community Says
Practical advice gathered from GP trainee forums, online communities, and trainee-written resources across the UK. Only included where it aligns with official RCGP guidance and GP educator advice — no speculation, no outliers, only things that trainees repeatedly find valuable.
📝 The Learning Log — what the community has learned the hard way
The Learning Log is consistently described as the most time-consuming and most misunderstood part of WPBA. Here's what trainees consistently wish they had been told at the start.
✅ What good looks like
- Bottom-heavy reflections — spend most of your words on what you learned and what you'll do differently, not on describing the case. Aim for roughly 20% description, 80% reflection.
- "I have done X" not "I will do X." Show completed actions, not intentions. "I will do more reading" is one of the most frequently noted irritants for educators — it demonstrates nothing.
- Link capabilities at the time of writing — don't leave it to your supervisor to guess. Make a clear justification. If you use the actual wording from the RCGP's detailed capability descriptors, supervisors find it much easier to agree with your linkage.
- One case, multiple entries — a single complex clinical encounter can legitimately produce two, three, or even four separate log entries, each exploring a different capability. This is efficient and demonstrates depth of reflection.
- Check your emotions — some of the best prompts for a log entry are the cases that stick in your mind at the end of the day, or the ones that made you feel uncomfortable. That emotional response is a signal that learning happened.
❌ What to avoid
- Writing entries that read like clinical notes ("I saw a patient. I examined them. I prescribed X. They improved.") — this scores nothing.
- Leaving all entries to the final week before your ESR review. ARCP panels can see the dates entries were created — a cluster of entries all dated the same week tells its own story.
- Always linking to the same two or three capabilities (usually Communication and Data Gathering). Leadership, Teaching, Community Orientation, and Fitness to Practise are consistently under-evidenced in trainee portfolios — target them deliberately.
- Writing entries that are so generic they could apply to any doctor at any stage — "I learned more about diabetes management" tells an assessor very little. Be specific about your learning and your behaviour change.
- Sending entries to your supervisor without having linked them to capabilities first. It puts all the interpretive work on them, and they may not validate to the capability you actually intended.
The informal expectation from most deaneries and educational supervisors is around 2–3 Learning Log entries per week — comfortably clearing the formal requirement of roughly 36 Clinical Case Reviews per training year. Writing little and often (ideally within 24–48 hours of the encounter while details are fresh) is consistently better than attempting to batch-write 20 entries in one sitting before a review. When writing is timely, the reflection is richer, more specific, and more useful for both learning and assessment.
📝 Reflection quality — before and after examples
This is what the difference between a weak and a strong reflection actually looks like. Read these once and you'll know instinctively which one your portfolio currently contains.
❌ Weak reflection (avoid this):
"I saw a patient with unexplained headaches. I took a history and arranged a referral to neurology. I learned the importance of thorough history-taking and communication."
Why it fails: Purely descriptive. Generic learning point. No mention of what was actually difficult, how you felt, or what you'll specifically do differently next time.
✅ Strong reflection (aim for this):
"I missed the patient's anxiety initially — I focused on clinical questions and only recognised it when they became tearful. I had been so focused on ruling out sinister causes that I failed to explore how much the headaches were affecting their daily life. Next time, I will explore the impact of symptoms and emotional cues earlier — before moving into investigation mode. This case has also made me reflect on my own tendency to default to clinical problem-solving under time pressure, at the cost of the relational part of the consultation."
Why it works: Specific about what went wrong and why. Shows self-awareness about a personal pattern. Clear, actionable change for next time. Extends beyond the case to broader professional insight.
💬 Useful phrases for reflective writing
Not scripts — just framings that help when you're staring at a blank entry box.
Describing the learning moment:
- "What I did well was…"
- "What I would do differently next time is…"
- "This case highlighted a gap in my knowledge around…"
- "I realised I was focusing on X and missed Y…"
- "This has changed how I approach similar patients by…"
Showing you've acted on it:
- "In response to this, I [completed / read / discussed]…"
- "I subsequently reviewed the NICE guidance on X and found…"
- "I discussed this in my next tutorial and learned…"
- "I applied this learning in a subsequent case — specifically…"
🎯 Capability coverage — the "capability map" mindset
One of the most useful mental frameworks trainees describe is treating the 13 capabilities like a map that you need to colour in. At every review, your supervisor will look at your capability coverage and ask: are there blank areas? The trainee community has identified which capabilities tend to be most neglected:
🔴 Consistently under-evidenced in trainee portfolios:
- Organisation, management and leadership
- Teaching, mentoring and clinical supervision
- Community orientation
- Fitness to practise
- Maintaining an ethical approach
🟢 Typically well-evidenced (but don't over-rely on these):
- Communication and consultation skills
- Data gathering and interpretation
- Making diagnoses / decisions
- Clinical management
Think of your portfolio as a digital photograph built up pixel by pixel. Each log entry and each WPBA is a pixel contributing to the overall image. Not every pixel needs to be perfect — but you need enough pixels, spread across the whole image, for a clear picture to emerge. Some entries add many pixels (a rich, detailed reflection on a complex case covers multiple capabilities); some add only a few. The goal isn't a certain number of entries — it's a clear, detailed picture across the full capability map.
🗣️ CbD (Case-based Discussion) — practical community tips
- Send your case notes at least a week in advance — not the morning of the session. A trainer who has had time to read and think about your case will give you far better, more structured feedback. Sending the case the night before shows poor preparation.
- Choose a genuine balance of case types — across the training year, aim for: complex multimorbidity, children, older adults, mental health, end of life, ethical complexity, clinical uncertainty, and communication challenges. Trainees who repeatedly choose similar cases end up with a portfolio that looks thin at review.
- The case must be one you managed independently — not a case where you received advice from another doctor before acting. The CbD is assessing your clinical reasoning, not your ability to relay someone else's decision-making.
- Hospital CbDs can be assessed by any ST4+ doctor — you're not limited to your named clinical supervisor. Many trainees don't realise this and miss opportunities to get good CbDs done. Your named clinical supervisor should complete at least one, but others can contribute too.
- Prepare the CbD prep sheet with care — supervisors notice the difference between a thoughtfully completed prep sheet and a hurried one. The prep sheet is part of the evidence.
- Map to capabilities before the discussion — you should have identified up to three capability areas for the case before you sit down with your supervisor. This focuses the discussion and helps both of you use the time well.
👁️ COT — tips from trainees who've done them well
- You don't need an exceptional case to get a useful COT — many trainees wait for a dramatic or unusual consultation to "use" for a COT. In reality, an ordinary 10–15 minute consultation that shows good ICE, clear explanation, and appropriate safety-netting often provides more teaching value than a high-drama case where you were just reacting.
- A mix of consultation types counts — COTs should include a mix of face-to-face, telephone, and video consultations across your training. Audio COTs (telephone) count towards your total and are often more achievable during busy GP posts.
- Consent is required for recorded consultations — if the consultation is audio or video recorded, you need the patient's consent. Always have a consent form ready. Your practice will have a process for this — ask your trainer early.
- Brief consultations usually won't do — very short, simple consultations are unlikely to give enough scope to demonstrate your abilities across the COT domains. Equally, very long consultations can lose structure. A focused 8–15 minute consultation is typically the sweet spot.
- Use the debrief conversation — the COT debrief is often where the real learning happens. Come prepared with your own reflections: what you think went well, what you'd do differently. Don't just wait for your supervisor's verdict.
👥 MSF — what the trainee community knows about making it meaningful
- Timing matters for MSF — request MSF when you've been in a post long enough for people to have a genuine, formed impression of you. Requesting it in week two of a six-month post means most respondents won't have much to work with. The second half of a post usually generates richer feedback.
- Plan your raters early — don't leave it to the final weeks — you need a minimum of 10 raters (at least 5 clinical) per MSF cycle. Identifying names in the first few weeks of a post gives you time to build working relationships and send reminders without panic. Trainees who leave MSF to the last fortnight routinely describe stress and thin responses.
- Choose a genuine mix of roles — trainees who submit MSFs with responses only from doctors often get less rich feedback. Include nurses, healthcare assistants, receptionists, pharmacists, physiotherapists — whoever you genuinely work with. In non-primary care posts where five non-clinician respondents are unavailable, you may use more clinicians, but the minimum of 10 total still applies.
- The MSF is taken seriously at ARCP — many trainees underestimate this. A pattern of developmental feedback across multiple MSF respondents is meaningful evidence. Conversely, strong MSF responses demonstrating professionalism, teamworking, and communication are genuinely impressive evidence for the capabilities panels struggle to see elsewhere.
- The Leadership MSF is separate — your ST3 Leadership MSF specifically asks about leadership behaviours and should be completed after you have done your Leadership Activity, so respondents have something specific to reflect on.
💛 Dealing with developmental feedback — advice from the trainee community
One of the topics that generates the most discussion in trainee communities is what to do when you receive feedback that feels unfair, unexpected, or developmentally graded. The collective wisdom is consistent:
When faced with negative WPBA feedback, the instinct is often to find ways to counteract it — rushing to get lots of positive assessments from other people. That's understandable, but it misses the point. The most powerful response to developmental feedback is to engage with it directly: reflect on it honestly in your portfolio, have a conversation with the assessor to understand the specific concerns, and then show through subsequent evidence that you've actually changed something. ARCP panels are experienced at reading portfolios — a pattern of genuine growth following challenging feedback is impressive. Defensive avoidance is not.
A single low-graded assessment is not a problem. ARCP panels look at the totality of evidence, not individual data points. A developmental grade that is followed by clear evidence of reflection, learning, and improvement actually tells a positive story. What concerns panels is a persistent pattern of the same developmental needs across multiple assessments with no evidence of addressing them — or conversely, a portfolio with nothing but high grades that looks too smooth to be credible.
Multiple trainees and trainers note the same thing: if you're more worried about how something "looks" at ARCP than about whether you're actually performing safely and developing well — that's the real issue to address. WPBA is a developmental process. Prioritising learning over appearances is not just philosophically correct — it also, paradoxically, produces a better portfolio.
📋 ESR preparation — what trainees wish they had done from the start
- Self-rate your capabilities before your ESR meeting — the ESR process asks both you and your supervisor to rate your progress across the 13 capabilities. Do your self-ratings thoughtfully, using the RCGP's capability word pictures and detailed descriptors, then use the meeting to guide your supervisor towards your strongest evidence — signpost the specific log entries, CbDs, and COTs that demonstrate each capability.
- Cross-reference your capability coverage regularly during the post — don't wait until ESR preparation to discover that you have nothing for three capabilities. Check your coverage every 2–4 weeks and deliberately seek cases or situations that address the gaps.
- Comment on patterns, not individual entries — when preparing your ESR narrative, your supervisor wants to see how your evidence tells a story of progression. Don't list individual pieces of evidence; show how a cluster of evidence demonstrates development. For example: "Over this rotation, I've addressed my PDP goal around leadership by completing my QIA, leading a team meeting, and reflecting on team dynamics in three log entries."
- Tag and link your evidence to capabilities as you go — don't leave capability tagging to a late-night portfolio session the week before your ESR. Tagging at time of writing means your evidence is always ready to present, and you'll be able to spot gaps much earlier.
- Form R timing — complete and submit your Form R at the right time. Completing it too early can cause problems (it becomes out of date by the time of review). Check your deanery's specific instructions and deadlines carefully. Failing to submit on time is one of the more avoidable ARCP hiccups.
🏥 Hospital posts — getting WPBA right in non-primary care settings
Hospital rotations present particular challenges for WPBA — the environment is different, the assessors are less familiar with GP-specific assessment frameworks, and some WPBA tools (COTs, PSQs) simply don't apply. Here's what the trainee community consistently finds useful:
- Your primary tool in hospital posts is the MiniCEX — use it regularly and across different types of clinical encounter (history-taking, examination, emergency assessment, outpatient review). Variety is what demonstrates breadth.
- Get creative about capability coverage — leadership and teamworking are often easier to demonstrate in busy hospital environments than in GP posts. Use ward rounds, handovers, MDT meetings, and teaching juniors as sources of reflective entries. A log entry about how you managed a deteriorating patient and coordinated the team can cover multiple under-represented capabilities at once.
- Approach your named clinical supervisor proactively in week one — arrange your placement planning meeting immediately. Ask them explicitly to complete at least one CbD and one MiniCEX during the rotation, and agree dates in advance. Don't leave this conversation to week eight.
- Brief your assessors on the grading system — the most common source of unreliable hospital WPBA grades is assessors who don't understand that a GP training "Needs Further Development" grade in ST1 is developmentally appropriate, not a failing grade. A short, tactful briefing makes all the difference.
- Use the hospital to cover Clinical Experience Groups you won't easily get in GP — think about which patient groups your rotation specialises in (e.g. children in paediatrics, older adults in medicine for the elderly) and make sure your Learning Logs reflect learning specific to those groups. This is strategic planning that pays off at ESR.
⚡ Practical ST1 survival tips — from trainees and educators
Things that trainees and UK GP educators consistently highlight as genuinely useful in the first year of training — not always spelled out in official guidance, but entirely consistent with it.
🔴 Learn red flags and 2WW criteria early
Trainees consistently say that learning the urgent and 2-week-wait referral criteria for common cancer pathways and acute presentations in the first weeks of each post reduces anxiety significantly. You won't be fast at everything, but knowing when something needs same-day action protects patients and protects you.
Good starting point: NICE guidance on Suspected Cancer Recognition and Referral (NG12) and your local 2WW criteria.
⚡ Micro-learning: one question per gap
Between patients, spend 2–3 minutes looking up one question that genuinely came up in the last consultation. Not a whole topic — just that one question. Over months, this builds an enormous, clinically anchored knowledge base that directly feeds AKT knowledge and SCA reasoning.
After clinic: UTI → check NICE CKS first-line → done. That's it.
🤝 Study groups and peer support
Many trainees find local or online study groups invaluable — for accountability, shared resources, debriefing difficult cases confidentially, and simply knowing that others are finding things hard too. The social normalisation of difficulty is genuinely protective.
Your VTS Half Day Release group is a built-in peer network — use it actively outside of teaching too.
😌 Wellbeing is a clinical safety issue
Experienced GPs repeatedly flag the basics: supportive shoes, food and hydration through the day, regular breaks between surgeries, and clear boundaries about extra commitments. Cognitive errors increase with fatigue. Protecting your capacity is not self-indulgence — it's part of safe practice and appears in the RCGP curriculum under Fitness to Practise.
Be safe. Seek supervision early. Use the curriculum and portfolio actively. Build evidence-based clinical habits from week one. Treat MRCGP as part of your daily work rather than a separate exam. Almost everything you do in training can count several times over — as WPBA evidence, AKT knowledge, and SCA skill — if you approach it with that mindset.
🎓 For Trainers & TPDs
Teaching points, tutorial prompts, and practical guidance for supporting trainees with WPBA.
🧠 Common trainee blind spots on WPBA
- Not understanding the developmental grading philosophy — and feeling ashamed of NFD grades
- Selecting only "safe" or straightforward cases for CbDs and COTs
- Treating the Learning Log as a clinical summary rather than a genuine reflection
- Failing to spread assessments across the post
- Not appreciating the strategic importance of capability coverage — leaving major gaps
- Over-reliance on a single assessor across multiple assessments
- Not updating or acting on their PDP between reviews
💬 Useful tutorial discussion prompts
- "Looking at your portfolio — which of the 13 capabilities do you have the least evidence for? What cases might help fill that gap?"
- "Tell me about an assessment where the grade surprised you. What did you take from the feedback?"
- "If the ARCP panel saw your Learning Log today, what would they think about the depth of your reflection?"
- "What's the most challenging case you've seen this week that we haven't yet reflected on?"
- "How are you going to demonstrate leadership in your portfolio this rotation?"
- "What's in your PDP right now — and what evidence do you have that you've worked on those needs?"
Grading calibration matters — especially across different assessors. If you are ever uncertain what a grade should mean at a given stage of training, the RCGP's progression point descriptors give detailed guidance for each capability. Regular calibration discussions with fellow trainers and TPDs at VTS days help ensure trainees receive consistent, fair, and useful feedback across the scheme.
❓ Frequently Asked Questions
The questions that come up most often from ST1 and ST2 trainees.
Will I fail WPBA if I get a bad grade in one assessment?
My consultant gave me "Excellent" for everything in ST1. Is that okay?
Can I use the same case for multiple assessments?
What if I'm in a hospital post and there aren't enough opportunities for GP-style WPBAs?
I'm training less than full time (LTFT). Does anything change?
What's the difference between an ESR and an ARCP?
How do I show progress if the standard never changes?
🏁 Final Take-Home Points
The bits to remember tomorrow — and every day of your training.