For the Healthcare Sector
AI tools like Epic AI, Google Health, and IBM Watson Health promise faster diagnosis and reduced admin burden. But when these systems make decisions or suggestions that clinicians accept without scrutiny, two problems emerge: junior doctors never learn the reasoning that underpins good judgement, and liability falls on your organisation when an AI-assisted decision goes wrong but was never properly documented or reviewed. The real question is not whether to use AI, but how to use it in a way that keeps human judgement in control.
These are suggestions. Your situation will differ. Use what is useful.
When a consultant uses IBM Watson Health to suggest a differential diagnosis, or when a GP practice relies on Microsoft Azure Health to flag a patient for follow-up, that decision must be recorded in the clinical note. Your current clinical governance policies likely do not specify what "AI-assisted" means in documentation, leaving clinicians unsure whether to note that an algorithm was involved. Without this record, you cannot defend the decision in a complaint, and you cannot learn from cases where the AI suggestion was wrong but was followed anyway. Start now by updating your clinical documentation standards to require notation of which AI tool was consulted and whether the clinician agreed with its output.
When a registrar always starts with Google Health's suggested diagnoses rather than building their own differential, they never develop the pattern recognition that senior clinicians rely on. Over five years, your junior doctors become less able to spot the unusual case that the algorithm would miss. This is not harmless efficiency. It creates a pipeline of clinicians who cannot practise safely if the AI system fails. Require that junior doctors develop their own diagnostic thinking first, before they consult any AI tool. The AI becomes a check on their reasoning, not a replacement for it.
Many NHS trusts have built their patient safety systems around human checkpoints. A radiology report goes to a consultant before results are released. A medication flag triggers a pharmacy review. These processes assume human judgement at key points. When AI tools start automating parts of these pathways, your safety system erodes without anyone noticing. The process looks the same on paper, but the human check has vanished. Before deploying any AI tool, map exactly which human decision points it will touch and decide in advance whether each checkpoint stays, moves, or becomes a secondary review.
When a patient is told 'the AI found your condition' rather than 'your symptoms and these test results suggest this diagnosis', something important shifts. The clinician becomes a messenger for the algorithm rather than the person who understands the patient's particular situation. This erodes trust, especially in long-term conditions where the relationship between clinician and patient is itself therapeutic. Patients need to hear their clinician explain the reasoning, in language they understand, even if an AI tool helped generate that reasoning. Make it explicit in your consent processes that AI will assist diagnosis but the clinician remains responsible for the explanation and the decision.
A one-hour induction on how to use Epic AI or Microsoft Azure Health is not training. It teaches someone how to input data and read results. Real training means understanding what the tool can and cannot do, recognising the signs that it has made an error, and knowing when to override it. Clinicians who fear the AI system will avoid it; clinicians who trust it blindly will over-rely on it. Those who understand its limits will use it well. Build ongoing education into your AI adoption plan. Include case studies where the AI was wrong, and discuss what the clinician should have caught. Make it clear that the person who knows when to ignore the AI is the most valuable person in the room.