By Steve Raju

For Doctors and Clinicians

Cognitive Sovereignty Checklist for Clinicians Using AI Diagnostic Tools

About 20 minutes · Last reviewed March 2026

When you use AI diagnostic support tools, your brain can shift into reaction mode rather than reasoning mode. You start working backward from the AI's list instead of forward from what you observe in the patient. This habit weakens the diagnostic judgement you need most when the case is unusual or the AI fails.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Build Your Differential Before You Check the AI

Write your own differential before opening the AI tool (beginner)
Spend two minutes constructing your differential from the history, exam, and any results you have gathered. This protects your reasoning pathway. When you see the AI output second, you compare rather than defer.
Name the two diagnoses you would not want to miss (beginner)
State these aloud or write them down before the AI generates anything. These are the anchors that keep you connected to what matters clinically. The AI might miss them if they are rare or if the presentation is atypical.
Identify the one finding that would change your thinking most (intermediate)
Ask yourself what result or observation would shift your diagnosis completely. This question tests whether your reasoning is built on this patient or on the AI's probability list. If you cannot answer, the working diagnosis is not solid enough yet.
Explain your logic to a colleague before consulting the AI (intermediate)
Speaking your reasoning aloud strengthens it and makes gaps visible. You catch your own assumptions before the AI confirms them. This works especially well during teaching rounds or case discussions.
Set a threshold for what diagnostic confidence you need before using AI (intermediate)
If you are very uncertain, the AI can anchor you to the wrong possibility too easily. Build your thinking to at least 60 percent confidence before checking the tool. Below that, you need more data first.
Document your reasoning separately from the AI output (beginner)
Keep your clinical note distinct from what the AI suggested. This protects your record and forces you to stand by your own thinking. It also lets you see later whether your reasoning was sound.

Test AI Output Against the Patient You Actually See

Ask whether the AI differential fits the severity of this patient's illness (beginner)
An AI tool like ChatGPT or MedPaLM ranks by frequency in its training data, not by how sick your patient looks right now. A common diagnosis might be last on your list because this person is critically unwell. You must override the ranking based on acuity.
Check if the highest probability diagnosis explains the full clinical picture (intermediate)
AI outputs a probability for each condition, but it has not examined your patient. A 75 percent match might leave 25 percent of the findings unexplained. If the patient has three key features and the top diagnosis explains only two, question it.
Verify that the AI suggestions fit your patient's age, comorbidities, and context (beginner)
Tools like Epic AI can miss that this is a 28-year-old with no surgical history or a 78-year-old on five medications. You know the context the AI cannot see. Diagnoses that fit the demographics matter more than the algorithm's ranking.
Reject any diagnosis that requires you to ignore a major exam finding (intermediate)
If the AI-suggested diagnosis does not account for a clear physical sign or lab abnormality, do not force it to fit. This is where many diagnostic errors start. The AI may have incomplete information about what you observed directly.
Look for what the AI did not suggest and ask why (advanced)
Tools like Glass Health or Watson Health do not show you what they ruled out. Spend 30 seconds asking whether an important diagnosis is missing from the list. Sometimes the omission reveals a gap in the AI's training or in what you told it.
Compare the AI's reasoning path with your own (advanced)
Good AI tools show you how they weighted each finding. If you and the tool reached the same diagnosis through different logic, one of those reasoning paths may be flawed. Work through the difference rather than accepting agreement at face value.
Demand a reason if the AI proposes a diagnosis you had not considered (intermediate)
This is valuable, but only if you understand why. Ask the tool to explain what features led it there. Then check those features against what you actually found. This transforms the AI from an oracle into a consultant.

Protect Your Diagnostic Development and Accountability

Spend time on cases without AI support at least once per week (beginner)
If you use Epic AI or Glass Health for every patient, your pattern recognition skills will atrophy. You need the struggle of building a differential with incomplete information. This struggle builds the intuition you need in emergencies.
Teach one diagnostic case per week without showing the AI output first (intermediate)
When you train residents or students, ask them to reason through a case before using the AI tool. This maintains the apprenticeship model where they see your thinking. Their diagnostic skills will develop faster than if they only react to AI suggestions.
Review cases where the AI ranked a diagnosis lower than you chose (intermediate)
These cases teach you the most. Write down why you ranked differently. Over time, you will see patterns in where AI fails and where you have an advantage because of your direct clinical exposure.
Keep responsibility for the diagnosis explicit in your documentation (beginner)
Never write that the diagnosis was based on AI suggestion as if that removes your accountability. You chose to accept or reject what the tool offered. Your licence rests on your judgement, not the machine's ranking.
Use AI tools for literature synthesis, not as the first step in diagnosis (intermediate)
ChatGPT and MedPaLM work better for finding recent evidence on a diagnosis you have already formed. Use them to test whether current practice supports your thinking. Do not use them to generate the diagnosis itself.
Record whether your diagnosis matched the AI output and what changed your mind if it did not (advanced)
Over months, this record shows you whether you trust the AI too much or too little. It also creates an audit trail if a case goes wrong. This is protection for you and improvement for your practice.


Prompt Pack

Paste any of these into Claude or ChatGPT to pressure-test your own judgment. They work best when you respond honestly before reading the AI reply.

Test your independent clinical reasoning

I have just reviewed an AI-assisted diagnostic recommendation for a patient. Before I accept it, ask me questions that would reveal whether my own clinical reasoning supports the same conclusion, or whether I am anchored to the AI output.

Check for automation bias before a major decision

I am about to follow a clinical decision support recommendation for [describe situation]. Challenge me: what is my own clinical reasoning? What does the patient presentation tell me that the algorithm cannot see?

Rebuild your unassisted diagnostic instinct

Present me with a clinical vignette and ask me to reason through it and reach a differential diagnosis without seeing any AI analysis. Then compare my reasoning to standard clinical guidance and tell me what I considered or missed.

Audit the limits of an AI tool you use regularly

I use [name the AI clinical tool] regularly. Help me think through what patient populations or presentations it is least reliable for, and where my clinical judgment is most needed to compensate for its blind spots.

Prepare for a difficult patient conversation

I need to discuss [describe situation] with a patient. Do not write a script. Instead, ask me questions that help me think through what this particular patient needs to hear, how they might respond, and what concerns I have not yet considered.


Reading List

Five books that give this topic the depth it deserves. Each one is genuinely worth reading, not just citing.

1. Being Mortal, by Atul Gawande
A direct examination of where clinical judgment and genuine patient-centred care diverge from what algorithms optimise for. More relevant than ever.

2. The Checklist Manifesto, by Atul Gawande
How structured process and human judgment work together in high-stakes clinical settings. The model for thinking about AI tools in medicine.

3. Thinking, Fast and Slow, by Daniel Kahneman
The cognitive biases driving clinical error (availability, anchoring, representativeness) are the same ones AI tools can reinforce rather than correct.

4. How Doctors Think, by Jerome Groopman
A clinician's account of the reasoning errors behind diagnostic mistakes, and why the solution is better human thinking, not offloading judgment to algorithms.

5. Cognitive Sovereignty, by Steve Raju
A framework for protecting independent clinical judgment as AI decision-support becomes embedded in every part of medical practice.




Common questions

Can AI diagnose patients accurately?

AI diagnostic tools can match or exceed specialist accuracy on narrow tasks, such as reading chest X-rays or classifying skin lesion images, in controlled conditions. But diagnosis in practice involves integrating patient history, context, clinical presentation, and patient-reported symptoms that fall outside what any algorithm processes. The tools are useful; they are not a substitute for clinical judgment.

What are the risks of AI in clinical decision-making?

The main risks are automation bias (accepting AI recommendations without independent verification), dataset bias (AI trained on non-representative patient populations), and alert fatigue from systems that generate too many low-confidence flags. Doctors who override AI without documenting their reasoning also face growing medico-legal exposure.

Should doctors follow AI treatment recommendations?

AI clinical decision support should inform, not replace, your clinical judgment. Evidence-based recommendations are useful for prompting consideration of options you might have missed. But the right treatment for a specific patient, accounting for their values, comorbidities, social situation, and preferences, requires the kind of contextual reasoning that AI systems do not provide.

How is AI changing medical practice?

AI is accelerating imaging analysis, flagging deteriorating patients in hospital systems, supporting drug discovery, and reducing documentation burden. The shift for clinicians is learning to work alongside tools that are highly accurate in narrow domains but brittle at the boundaries: knowing when to trust the algorithm and when your clinical judgment should override it.

Will AI replace doctors?

Specific diagnostic tasks will be increasingly automated. But the practice of medicine (building a therapeutic relationship, navigating uncertainty with a patient, making decisions with incomplete information) is not a pattern-matching problem. Clinicians who remain skilled at these fundamentals will remain essential.

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
