For the Healthcare Sector
Healthcare organisations adopt AI diagnostic tools like Epic AI and Google Health without updating clinical governance protocols that were built around human judgment alone. This creates liability gaps, deskills junior clinicians, and erodes the deliberative reasoning that patient safety depends on.
These are observations, not criticism. Recognising the pattern is the first step.
When Epic AI flags a diagnosis or IBM Watson Health suggests a treatment pathway, clinicians begin checking their own thinking against the tool instead of using their own assessment first. The cognitive direction reverses: the algorithm becomes the anchor, and human judgment becomes the verification step.
The fix
Require clinicians to document their own provisional diagnosis or clinical reasoning before viewing any AI recommendation, then record whether they agreed, disagreed, or modified the suggestion.
NHS trusts add a checkbox or text field stating that an AI tool was consulted, but do not record why the clinician accepted or rejected its output. This creates a liability gap: if an adverse event occurs, the trust cannot show that human judgment was actually applied.
The fix
Mandate that clinicians document their reasoning for accepting or rejecting each AI recommendation, including what clinical context or patient factors the AI tool did not account for.
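Taken together, these two fixes amount to an ordering constraint (reason first, then look) plus a mandatory rationale field. Here is a minimal sketch in Python of what such an audit record could look like; the class, field names, and decision labels are illustrative assumptions, not drawn from Epic, Watson Health, or any real EHR schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical audit record: every name here is illustrative, not a real
# EHR field. The point is the ordering: the AI output is only revealed
# after the clinician's own reasoning is on the record.
@dataclass
class AIConsultationRecord:
    clinician_id: str
    patient_ref: str
    provisional_diagnosis: str            # documented BEFORE viewing the AI
    ai_tool: str                          # e.g. "diagnostic-support-v2"
    ai_recommendation: Optional[str] = None
    decision: Optional[str] = None        # "agreed" | "disagreed" | "modified"
    rationale: Optional[str] = None       # incl. context the tool missed

    def reveal_ai_output(self, recommendation: str) -> None:
        """Release the AI output only once a provisional diagnosis exists."""
        if not self.provisional_diagnosis.strip():
            raise ValueError("Document your provisional diagnosis first")
        self.ai_recommendation = recommendation

    def record_decision(self, decision: str, rationale: str) -> None:
        """Both the decision and the reasoning behind it are mandatory."""
        if decision not in {"agreed", "disagreed", "modified"}:
            raise ValueError("decision must be agreed, disagreed or modified")
        if not rationale.strip():
            raise ValueError("A rationale is required for the audit trail")
        self.decision = decision
        self.rationale = rationale
```

In practice this gate would sit inside the EHR workflow itself; the sketch only shows the invariant worth enforcing.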
DeepMind's diagnostic tools and similar systems perform well on typical presentations, but junior and mid-level doctors start to suppress doubt when a patient does not quite fit the algorithm's pattern. The tool's confidence becomes their confidence, even when their experience suggests caution.
The fix
Train clinicians to recognise when a patient presentation is atypical and to escalate to senior review before accepting an AI recommendation, rather than after.
When Google Health diagnostics are integrated into a GP practice, the order of investigations changes or the timing of referrals shifts, but the protocol document still describes the old process. Junior staff and locums follow the written protocol, not the actual AI-informed workflow, leading to delays and safety risks.
The fix
Review and rewrite all diagnostic and referral protocols within 30 days of deploying a new AI tool, and train all staff on the new sequence before go-live.
Hospital systems assign AI tool access only to consultant-level staff to reduce risk, so junior doctors and registrars never develop the reasoning skills to challenge or contextualise an AI output. When they eventually reach senior roles, they cannot think independently of the tool.
The fix
Use AI as a teaching tool for junior clinicians: have them form their own diagnosis first, then review the AI output together with a senior, with the senior explaining why they agree or disagree.
A Microsoft Azure Health alert raises a medication interaction warning or a patient safety flag, but because it came from a trusted system, the alert is treated as a decision rather than a signal. The clinician assumes the tool has already considered all context and does not perform independent verification.
The fix
Define in your patient safety policy which AI outputs require mandatory human review before action is taken, and which are informational only.
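One lightweight way to make that policy machine-readable is a simple lookup that fails safe. The alert types below are invented for the sketch and are not Azure Health categories or any vendor's taxonomy.

```python
# Illustrative policy table: alert types are invented for this sketch,
# not taken from Azure Health Data Services or any vendor's schema.
REVIEW_POLICY = {
    "medication_interaction": "mandatory_review",
    "sepsis_risk_flag": "mandatory_review",
    "appointment_no_show_risk": "informational",
    "clinical_coding_suggestion": "informational",
}

def requires_human_review(alert_type: str) -> bool:
    # Fail safe: anything not explicitly classified gets mandatory review.
    return REVIEW_POLICY.get(alert_type, "mandatory_review") == "mandatory_review"
```

The deliberate choice here is the default: an unclassified alert type is treated as requiring review, so new tool outputs cannot silently bypass the policy.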
When administrative and clinical AI tools automate appointment reminders, test result notifications, and care plan updates, patients experience the NHS as a series of algorithm outputs rather than care from a known clinician. Trust in the organisation declines even when clinical outcomes are sound.
The fix
Reserve AI for administrative tasks and flagging only; ensure that all clinical communication and clinical decision explanations come directly from the clinician caring for that patient.
A hospital system uses IBM Watson Health to flag treatment options, but does not tell patients that an algorithm contributed to the recommendation. If the patient later experiences an adverse outcome, they feel misled about how their care was planned, damaging trust in the organisation.
The fix
Add a standard statement to all consent forms and care plan discussions: identify which AI tools informed the recommendation and explain what they do and what their limits are.
Epic AI is validated on diverse datasets in trials, but your NHS trust serves a population with different age, ethnicity, and comorbidity patterns. The tool may perform worse on your patients, but you do not know because you did not run a local validation study before full deployment.
The fix
Before go-live, audit the AI tool's performance on a sample of at least 200 cases from your own patient population, stratified by age, ethnicity, and condition severity.
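As a sketch of what that audit might compute, assume a local extract with one row per case, the tool's binary flag, and the confirmed outcome; all column names and the file name are hypothetical.

```python
import pandas as pd

# Hypothetical local extract: >= 200 cases, one row each, with the tool's
# binary flag and the confirmed outcome. Column names are illustrative.
cases = pd.read_csv("local_validation_sample.csv")

def stratified_performance(df: pd.DataFrame, stratum: str) -> pd.DataFrame:
    """Sensitivity and specificity of the AI flag within each stratum."""
    def metrics(g: pd.DataFrame) -> pd.Series:
        tp = ((g["ai_flag"] == 1) & (g["outcome"] == 1)).sum()
        fn = ((g["ai_flag"] == 0) & (g["outcome"] == 1)).sum()
        tn = ((g["ai_flag"] == 0) & (g["outcome"] == 0)).sum()
        fp = ((g["ai_flag"] == 1) & (g["outcome"] == 0)).sum()
        return pd.Series({
            "n": len(g),
            "sensitivity": tp / (tp + fn) if (tp + fn) else float("nan"),
            "specificity": tn / (tn + fp) if (tn + fp) else float("nan"),
        })
    return df.groupby(stratum).apply(metrics)

# Stratify exactly as the fix suggests: age, ethnicity, severity.
for stratum in ["age_band", "ethnicity", "condition_severity"]:
    print(stratified_performance(cases, stratum))
```

A marked drop in sensitivity within any stratum is the signal to delay go-live for that group, not a footnote in the deployment report.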
If a clinician acts on an incorrect AI suggestion and a patient is harmed, the trust's liability defence is weak if there is no record that a qualified person reviewed the output before it was actioned. The AI vendor is not responsible; the trust is.
The fix
Establish a formalised review process for high-consequence AI outputs, with a named reviewer, documented approval, and clear escalation thresholds before any action is taken.
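A sketch of the gate itself, under the assumption that each high-consequence output carries a risk score and that the escalation threshold is set by local policy; the 0.7 figure, the roles, and all names below are arbitrary illustrations.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative approval record; roles and thresholds are assumptions,
# not any trust's actual policy.
@dataclass
class ReviewApproval:
    reviewer_name: str        # a named individual, not a team inbox
    reviewer_role: str        # e.g. "consultant" (illustrative)
    approved: bool

def action_permitted(risk_score: float,
                     approval: Optional[ReviewApproval],
                     escalation_threshold: float = 0.7) -> bool:
    """High-consequence outputs need a named, documented approval on file."""
    if risk_score < escalation_threshold:
        return True           # below threshold: informational tier
    return approval is not None and approval.approved
```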
Staff attend a one-hour workshop on how to use Google Health diagnostics or Microsoft Azure alerts, then are expected to integrate the tool into daily practice for years. They never learn to recognise when the tool is performing outside its range or when their own clinical experience should override it.
The fix
Run a three-month learning programme: teach tool use in month one, then run supervised cases and group reflection sessions in months two and three to build confidence in knowing when to trust and when to question the output.
DeepMind's triage tools or similar systems are given authority to triage patients or flag high-risk cases for escalation, but the criteria and performance of these decisions are not regularly audited. Some patients are systematically deprioritised or missed because the algorithm has learned patterns that no one reviews.
The fix
Audit all patient prioritisation decisions made or recommended by AI tools monthly, stratified by ethnicity, age, and socioeconomic status, to check for bias or systematic error.
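A monthly audit along those lines can be as simple as comparing escalation rates across groups. In the sketch below, the column names, the IMD quintile as a socioeconomic proxy, and the 10-percentage-point alert band are all assumptions, not a real system's schema or a validated threshold.

```python
import pandas as pd

# Hypothetical monthly export of triage decisions; column names are
# illustrative, not a real system's schema.
decisions = pd.read_csv("triage_decisions_month.csv")

def escalation_rates(df: pd.DataFrame, group_col: str) -> pd.DataFrame:
    """Share of patients flagged for escalation, per group vs overall."""
    rates = df.groupby(group_col)["escalated"].agg(["mean", "count"])
    rates = rates.rename(columns={"mean": "escalation_rate", "count": "n"})
    rates["vs_overall"] = rates["escalation_rate"] - df["escalated"].mean()
    return rates

for col in ["ethnicity", "age_band", "imd_quintile"]:
    report = escalation_rates(decisions, col)
    print(report)
    # The 10-percentage-point band is an arbitrary illustration; set your
    # own threshold and investigate every group that crosses it.
    outliers = report[report["vs_overall"].abs() > 0.10]
    if not outliers.empty:
        print(f"Investigate {col}: {list(outliers.index)}")
```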
Younger staff who are comfortable with Epic AI or IBM Watson Health become the de facto authorities on how decisions should be made, while experienced clinicians who built their reasoning without AI feel sidelined. The organisation loses access to the judgment it most needs.
The fix
Pair AI-experienced and AI-naive clinicians on rounds and case reviews for three months, ensuring both perspectives are heard when decisions are made.
A GP or registrar suspects that a Google Health recommendation is wrong but worries that questioning the AI will mark them as not keeping up or lacking confidence. They stay silent, and the risky decision proceeds unchallenged.
The fix
Establish a monthly case discussion forum where clinicians can present cases where they disagreed with an AI tool, with no penalty, and audit whether their concerns were justified.