For Doctors and Clinicians

The Most Common AI Mistakes Doctors Make

Doctors using AI diagnostic support are losing the habit of forming their own differential diagnosis first, then checking it against the AI output. This inversion of reasoning weakens clinical judgement faster than any single tool ever could.

These are observations, not criticism. Recognising the pattern is the first step.


Diagnostic Reasoning Under AI

When Glass Health or ChatGPT generates a differential diagnosis list, the human brain anchors to the first few items before consciously reviewing the clinical picture. Your pattern recognition never gets trained because you are pattern-matching against the AI list instead of the patient.

The fix

Generate your own differential diagnosis before opening the AI tool, write it down, then compare what the AI adds or reranks.

Google MedPaLM and similar tools show confidence scores that feel like diagnostic probability, but they reflect training data frequency, not the likelihood in your specific patient with their specific comorbidities and presentation. You make a treatment decision based on an 87% confidence score that means something entirely different in context.

The fix

When you see a probability score from any AI tool, ask yourself what clinical finding would change your mind, then examine whether the AI saw that finding in the note.
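
To make that concrete, here is a minimal sketch, in Python, of why the same displayed score can imply very different probabilities in different populations. The prevalence figures and the calibration assumption are invented for illustration; real tools do not expose their calibration this way.

```python
# Minimal sketch: the same AI "confidence score" implies different
# probabilities in populations with different base rates.
# All numbers are hypothetical, chosen only for illustration.

def odds(p: float) -> float:
    return p / (1 - p)

def prob(o: float) -> float:
    return o / (1 + o)

score = 0.87          # the score the tool displays
training_prev = 0.30  # assumed prevalence in the training data
clinic_prev = 0.05    # assumed prevalence in your patient population

# Treat the score as a posterior calibrated to the training population
# and back out the strength of evidence it implies (a likelihood ratio).
implied_lr = odds(score) / odds(training_prev)

# Apply that same evidence to your local base rate.
local_probability = prob(implied_lr * odds(clinic_prev))

print(f"Displayed score:    {score:.0%}")              # 87%
print(f"At local base rate: {local_probability:.0%}")  # roughly 45%
```

Evidence that reads as 87% against a common condition works out to under half when the condition is six times rarer in your clinic, which is why the question worth asking is what finding would change your mind, not how high the number is.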

AI differential diagnoses are ranked by statistical likelihood in the training set, not by acuity or harm if missed. A life-threatening condition that is uncommon in the training data will appear lower on the list than a benign condition that is common.

The fix

After reviewing the AI output, manually scan for the three most dangerous diagnoses that fit the presentation, regardless of where they appear on the ranked list.
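
As a thought experiment, the sketch below re-weights a ranked list by harm if missed. The diagnoses, probabilities, and harm weights are all invented; the point is the reordering, not the numbers.

```python
# Thought experiment: re-rank a differential by expected harm
# (probability x harm if missed) rather than probability alone.
# Probabilities and harm weights are invented for illustration.

candidates = [
    # (diagnosis, model probability, harm if missed on a 1-100 scale)
    ("Costochondritis",    0.40,   2),
    ("GERD",               0.30,   5),
    ("Stable angina",      0.15,  30),
    ("Pulmonary embolism", 0.04,  90),
    ("Aortic dissection",  0.02, 100),
]

by_probability   = sorted(candidates, key=lambda c: c[1], reverse=True)
by_expected_harm = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)

print("By probability:  ", [name for name, p, h in by_probability])
print("By expected harm:", [name for name, p, h in by_expected_harm])
# Stable angina (0.15 * 30 = 4.5) and pulmonary embolism (0.04 * 90 = 3.6)
# move to the top, while the benign items the model ranked first drop
# to the bottom.
```

The three-most-dangerous scan in the fix above is the bedside version of this re-weighting.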

ChatGPT and other generative tools synthesise medical literature into fluent summaries that feel authoritative but may be condensing studies you have never read or misinterpreting the actual findings. You cite the AI summary to a patient or colleague without knowing if the underlying evidence actually supports it.

The fix

When an AI summary influences a clinical decision, identify the specific studies it references and read the abstract of at least one yourself.

Epic AI can surface previous notes and generate summaries, which saves time but trains you to rely on documentation quality rather than asking the patient directly about prior presentations, medication side effects, or details that did not make it into old records. Your interviewing skills atrophy.

The fix

In any case where diagnosis is unclear, spend five minutes asking the patient about the one thing the AI summary does not explain.

Clinical Judgement and Decision-Making

IBM Watson Health and similar decision-support tools rank treatment options and flag contraindications, but they cannot weigh your patient's values, functional goals, or tolerance for side effects the way you can. You present the AI recommendation as the right answer instead of as one option to discuss.

The fix

Treat every AI treatment recommendation as a starting point, not a conclusion. Ask your patient which outcome matters most to them before explaining what the AI ranked first.

When a patient's physical exam, vital signs, or imaging do not fit the AI differential diagnosis, you experience cognitive dissonance and often resolve it by trusting the AI over your direct observation. This is the opposite of how clinical medicine works.

The fix

If the clinical picture contradicts the AI output, that is signal. Document the contradiction and use it to reframe the problem, not to doubt your own assessment.

Epic AI flags interactions and contraindications automatically, which reduces cognitive load but means you stop memorising the common interactions in your own specialty. When the AI tool is unavailable or slow, your error rate climbs.

The fix

Each month, pick one interaction or contraindication that the AI flagged for you and learn it well enough to catch it without the tool.

Templated AI-assisted notes in Epic sometimes generate phrases and findings that sound right but do not match what you actually examined or what the patient reported. You review the note quickly and submit it because it is mostly accurate, then later it becomes the legal record of what you did.

The fix

Before you sign any AI-drafted note, read the physical exam and assessment sections word for word and edit any finding you did not directly assess yourself.

When Glass Health delivers a diagnosis suggestion quickly, the satisfaction of resolution is neurologically real. Your brain stops searching for alternative explanations or gathering more data. You feel like you have answered the question when you have only acknowledged the AI's suggestion.

The fix

Set a rule that you do not move to the next patient until you can explain the diagnosis in a sentence without reading the AI output.

Learning and Professional Development

Junior doctors and residents who learn to use ChatGPT and Epic AI before seeing thirty cases of chest pain with a supervisor never build the intuition that makes pattern recognition fast and reliable. Their diagnostic reasoning stays slower and more dependent on AI because they never internalised the patterns.

The fix

Restrict trainees to supervised practice without AI until they have worked up at least ten cases in your specialty area independently, then introduce the tools as a comparison, not as a starting point.

When MedPaLM or ChatGPT answers a clinical question with citations, the answer feels research-backed without your needing to read the research. Your journal reading habit declines because the AI has already synthesised the literature for you.

The fix

Read one paper per week that contradicts or complicates something an AI tool told you in the past week, not to prove it wrong but to understand what it missed.

When you use ChatGPT to draft a patient explanation of their condition, you might explain a mechanism or side effect that sounds plausible but that you have never thought through yourself. Your patient education becomes less grounded in your actual knowledge.

The fix

Never deliver an AI-generated patient explanation until you have first explained the concept aloud yourself, with the AI output out of sight.

If a patient presentation is rare or unusual, the AI tool may not include the correct diagnosis in the top ranks because it was not common in the training data. You stop considering the diagnosis because the AI did not prime you to think of it, and you miss an unusual case.

The fix

When a patient does not improve on the first two diagnoses the AI suggested, add one diagnosis that is rare in your population but fits the presentation, and test for it.
