40 Questions Doctors Should Ask Before Trusting AI Diagnostic Outputs
When Epic suggests a differential diagnosis or ChatGPT summarises a research paper, you make the final call on patient care. Asking the right questions about AI outputs keeps your judgement sharp and your patients safe.
These are suggestions. Use the ones that fit your situation.
Questions About the Patient Data the AI Actually Saw
1. Did the AI see the full clinical picture, or only lab results and vital signs without the patient's actual presentation?
2. What information did I notice during the examination that the AI cannot access because it was not documented in Epic?
3. Does the AI know this patient's comorbidities, or is it working from incomplete medication reconciliation?
4. Has the AI been trained on patients who look like my patient, or mostly on populations from different settings?
5. If the AI is analysing scans or imaging, did a radiologist review the same images before the AI gave its output?
6. Does the AI know what I know about this patient's previous presentations and what turned out to be red herrings?
7. Is the AI output based on current diagnostic criteria, or older data that reflected previous clinical standards?
8. What percentage of this patient's documented history did the AI actually process before generating its recommendation?
9. Did the AI see the timeline of symptoms, or just a static snapshot of the current state?
10. Has the AI been told about atypical presentations in this patient's ethnic or demographic group?
Questions About How the AI Arrived at Its Answer
11. When Glass Health lists a probability for each diagnosis, what is the margin of error on that number, and does the tool show it?
12. Is the differential diagnosis ranked by statistical likelihood in the AI's training data, or by how common each condition is in actual practice?
13. Does the AI's explanation match my understanding of the pathophysiology, or is it rationalising after the fact?
14. If I disagree with the AI's top diagnosis, does its second choice make more clinical sense to me?
15. Has the AI weighted rare but serious diagnoses appropriately, or is it biased towards common conditions?
16. When ChatGPT cites a study, did it actually read the methods section, or just the abstract and conclusion?
17. Is the AI showing me its reasoning step by step, or just the final answer?
18. What would happen to the AI's recommendation if I changed one key clinical finding?
19. Is the AI using diagnostic criteria that match current guidelines in my country and setting?
20. Does the AI's confidence level match how certain I should actually be given the evidence?
Questions About What You Should Do Next
21. What specific test does the AI recommend, and can I explain to my patient why we need it instead of just following the tool?
22. If I ordered every test the AI suggests, would I be over-investigating this patient?
23. Does the AI's recommendation take into account that this patient cannot tolerate the first-line medication it proposed?
24. Am I ordering this test because the AI said so, or because I independently think it answers my clinical question?
25. If the AI recommends watchful waiting, can I articulate what red flags would make me escalate care?
26. Does the AI know about my local prescribing guidelines and drug availability, or is it suggesting something unavailable here?
27. What is my alternative diagnostic strategy if the AI's top recommendation turns out to be wrong?
28. Is the AI's recommendation safe in this specific patient, or am I adding risk by following it without modification?
29. Could I defend this decision to a colleague if the outcome is bad, or does it feel like I am just doing what the tool said?
30. Have I told the patient what the AI suggested, or am I presenting it as my own clinical reasoning?
Questions About Your Own Clinical Reasoning
31. Before I looked at the AI output, what was my own differential diagnosis?
32. Has the AI's diagnosis changed my mind, or just confirmed what I already thought?
33. Am I trusting this AI output more because it came from a prestigious source like MedPaLM?
34. What pattern am I recognising in this patient that makes me hesitant about the AI's top choice?
35. If a trainee presented this case to me without using AI, what would I recommend they do?
36. Is my hesitation about the AI output based on sound clinical reasoning, or am I just uncomfortable with automation?
37. What would I do differently if the AI tool were not available to me right now?
38. Am I developing diagnostic skills more slowly because I always have an AI differential diagnosis to react to?
39. Can I spot the diagnosis the AI missed, or am I now dependent on these tools to think comprehensively?
40. What would I tell my medical school students about how to use this tool without losing their own judgement?
How to Use These Questions
Write down your own clinical impression before opening the AI tool. Then compare. If you find yourself automatically deferring to the AI version, pause and ask why.
When an AI gives you a probability like 73% for diagnosis X, ask yourself: would I manage this patient the same way at 60% certainty? At 85%? The number alone should not change your practice.
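One rough way to ground that instinct is the classic treatment-threshold idea from clinical decision theory (a deliberate simplification, and the numbers below are invented purely for illustration). You act when the probability of disease exceeds a threshold

P* = H / (H + B)

where H is the expected harm of treating a patient who does not have the disease and B is the expected benefit of treating a patient who does. If, on some utility scale, H = 1 and B = 4, then P* = 1 / (1 + 4) = 20%. At that threshold, an AI probability of 60%, 73%, or 85% all imply the same management; what matters is whether the number is on the right side of your threshold, and whether you trust it enough to believe it is.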
Trainees: use AI to verify your thinking, not to replace it. If the tool disagrees with you, spend time understanding why before accepting its answer.
Your liability protection comes from showing your work. Document what the patient told you, what you found on examination, what the AI suggested, and why you agreed or disagreed. Never document just the AI output.
If you cannot explain the AI's recommendation to your patient in plain language without the AI present, you do not understand it well enough to act on it.