For the Healthcare Sector

40 Questions Healthcare Organisations Should Ask Before Trusting AI Diagnostic and Clinical Tools

When an AI system recommends a diagnosis or clinical action, your clinicians need to know whether that recommendation is safe to act on. These 40 questions help you build the judgement needed to use AI tools without letting them replace human accountability.

These are suggestions. Use the ones that fit your situation.


Clinical Governance and Audit

1 When Epic AI flags a diagnostic possibility, does your system record which clinician reviewed the output and what they decided to do with it?
2 If a patient is harmed after an AI recommendation was followed, can you retrieve the exact training data and decision logic that produced that recommendation?
3 Does your clinical governance process require sign-off from a named clinician before any AI-assisted diagnosis becomes part of the patient's official record?
4 Has your trust explicitly documented which clinical decisions can be made with AI support and which ones still require independent human assessment?
5 When Google Health or Microsoft Azure Health makes a recommendation, do your protocols specify the level of evidence needed before a junior clinician can act on it alone?
6 Does your AI implementation track how often clinicians override AI recommendations, and do you review those cases for patterns?
7 Are your patient safety incident reports set up to flag cases where AI output was a factor in adverse outcomes?
8 Does your trust have a process for communicating changes to AI tool accuracy or behaviour to all clinicians using them?
9 When IBM Watson Health suggests treatment pathways, is there a documented fallback protocol if the clinician disagrees?
10 Has your clinical governance lead reviewed the liability implications of each AI tool you are using, in writing?
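Questions 1, 3, and 6 all depend on having a durable record of who reviewed each AI output and what they decided. A minimal sketch of what such an audit record and an override-rate check might look like — the field names and the `AIReviewRecord` structure are illustrative assumptions, not any vendor's actual logging API:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical audit record; fields are illustrative, not an EHR vendor schema.
@dataclass(frozen=True)
class AIReviewRecord:
    case_id: str
    tool_name: str            # e.g. the diagnostic tool that produced the flag
    ai_recommendation: str
    reviewing_clinician: str  # the named clinician who signed off (Q1, Q3)
    decision: str             # "accepted", "overridden", or "escalated"
    rationale: str
    reviewed_at: datetime

def override_rate(records: list[AIReviewRecord]) -> float:
    """Fraction of AI recommendations clinicians overrode (Q6).
    A rate near zero may signal automation bias; a review of the
    overridden cases is where the patterns Q6 asks about show up."""
    if not records:
        return 0.0
    overridden = sum(1 for r in records if r.decision == "overridden")
    return overridden / len(records)

records = [
    AIReviewRecord("c1", "dx-tool", "sepsis risk", "Dr Smith",
                   "accepted", "consistent with labs",
                   datetime.now(timezone.utc)),
    AIReviewRecord("c2", "dx-tool", "sepsis risk", "Dr Jones",
                   "overridden", "imaging contradicts flag",
                   datetime.now(timezone.utc)),
]
print(override_rate(records))  # 0.5
```

The point of the sketch is the shape of the record, not the code: every AI-assisted decision carries a named reviewer, an explicit decision, and a rationale that can be audited later.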

Patient Safety and Clinical Reasoning

11 Can a junior doctor in your trust explain why DeepMind AlphaFold made a particular protein structure prediction, or do they just accept the output?
12 If an AI diagnostic tool is right 95 percent of the time, do your clinicians know what the 5 percent of errors look like?
13 Does your training programme for new staff include scenarios where AI recommendations are confidently wrong?
14 When Epic AI suggests a diagnosis, do your protocols require the clinician to generate at least one alternative hypothesis themselves?
15 Are your senior clinicians still doing diagnostic reasoning work, or has AI pushed that work away from them and toward junior staff?
16 Does your trust measure whether clinicians are building diagnostic skills at the same rate they did before AI implementation?
17 When Google Health or Azure Health analyses imaging, is a consultant radiologist required to review the output before it shapes treatment decisions?
18 Do your protocols specify which patient populations the AI tool was tested on, and whether your local population differs in ways that might change its accuracy?
19 If Watson Health recommends a treatment that conflicts with local clinical guidelines, is that flag highlighted to the clinician?
20 Has your trust documented the cognitive load on clinicians who must review and validate AI outputs during routine clinical work?
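Questions 12 and 13 assume you can actually pull the cases where the tool was wrong and put them in front of trainees. A minimal sketch of that retrieval step, assuming each reviewed case carries a boolean AI flag and a confirmed final diagnosis — the dictionary fields are illustrative assumptions:

```python
# Assumed case shape: {"case_id": str, "ai_flag": bool, "confirmed": bool}
def error_cases(cases: list[dict]) -> tuple[list[dict], list[dict]]:
    """Split AI mistakes into false positives (AI flagged, diagnosis not
    confirmed) and false negatives (AI missed a confirmed diagnosis),
    ready for case review and training scenarios (Q12, Q13)."""
    false_positives = [c for c in cases if c["ai_flag"] and not c["confirmed"]]
    false_negatives = [c for c in cases if not c["ai_flag"] and c["confirmed"]]
    return false_positives, false_negatives

cases = [
    {"case_id": "a", "ai_flag": True,  "confirmed": True},
    {"case_id": "b", "ai_flag": True,  "confirmed": False},
    {"case_id": "c", "ai_flag": False, "confirmed": True},
    {"case_id": "d", "ai_flag": False, "confirmed": False},
]
fps, fns = error_cases(cases)
print([c["case_id"] for c in fps])  # ['b']
print([c["case_id"] for c in fns])  # ['c']
```

False negatives are usually the more dangerous half: they are the cases where a confidently silent tool can lull a junior clinician, which is exactly what the training scenarios in question 13 should rehearse.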

Transparency, Data, and Bias

21 Does your vendor provide the demographic breakdown of patients in the training data for your diagnostic AI tools?
22 If Epic AI performs better for some patient groups than others, have you documented which groups and why?
23 Can you access a plain-language explanation of what features the AI tool is using to make its recommendations, or is it a black box?
24 Does Microsoft Azure Health or Google Health flag when it is operating outside the distribution of data it was trained on?
25 Have you tested your AI diagnostic tools on your own patient population to see if performance matches the vendor's published figures?
26 When IBM Watson Health makes a recommendation, does your system show the confidence level and what that confidence is based on?
27 Has your trust reviewed whether any of your AI tools were trained on data that includes documented historical biases in diagnosis or treatment?
28 Do your clinicians know what happens if they submit a patient case that the AI tool has never seen before?
29 Has your organisation negotiated contractual rights to audit the performance of Epic AI, Azure Health, or other tools on your own data?
30 When DeepMind AlphaFold makes a prediction about protein structure, does your research team have access to confidence scores and uncertainty estimates?
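Questions 21, 22, and 25 come down to one exercise: measure the tool's sensitivity and specificity on your own patients, broken down by demographic group, and compare the result to the vendor's published figures. A minimal sketch under the assumption that each case records a patient group, the AI flag, and the confirmed diagnosis — the field names are illustrative:

```python
from collections import defaultdict

def sensitivity_specificity(pairs):
    """pairs: (ai_flag, confirmed) booleans for one patient group.
    Returns (sensitivity, specificity); None where undefined."""
    tp = sum(1 for ai, truth in pairs if ai and truth)
    fn = sum(1 for ai, truth in pairs if not ai and truth)
    tn = sum(1 for ai, truth in pairs if not ai and not truth)
    fp = sum(1 for ai, truth in pairs if ai and not truth)
    sens = tp / (tp + fn) if (tp + fn) else None
    spec = tn / (tn + fp) if (tn + fp) else None
    return sens, spec

def performance_by_group(cases):
    """Per-group performance on your local population, to set against
    the vendor's headline figures (Q22, Q25)."""
    groups = defaultdict(list)
    for c in cases:
        groups[c["group"]].append((c["ai_flag"], c["confirmed"]))
    return {g: sensitivity_specificity(p) for g, p in groups.items()}

cases = [
    {"group": "A", "ai_flag": True,  "confirmed": True},
    {"group": "A", "ai_flag": False, "confirmed": True},
    {"group": "A", "ai_flag": False, "confirmed": False},
    {"group": "B", "ai_flag": True,  "confirmed": True},
    {"group": "B", "ai_flag": True,  "confirmed": False},
]
print(performance_by_group(cases))
# group A: sensitivity 0.5, specificity 1.0; group B: sensitivity 1.0, specificity 0.0
```

If the per-group numbers diverge from each other, or from the vendor's figures, that is the documented evidence questions 22 and 25 ask for — and the basis for the contractual audit rights in question 29.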

The Therapeutic Relationship and Patient Trust

31 Does your consent process tell patients when AI has been used in their diagnosis or treatment recommendation?
32 If a patient asks whether their diagnosis came from a clinician or an AI tool, can your staff give a clear answer?
33 Have you measured whether patients are less likely to trust treatment recommendations when they know AI was involved?
34 Does your trust have a policy on whether clinicians may describe an AI tool to patients as infallible, or as more accurate than human judgement?
35 When Epic AI assists with diagnostic reasoning, does the clinician spend the same amount of time talking with the patient about the diagnosis?
36 Are your clinicians trained to explain to patients what AI did and did not do in their care?
37 If a patient is diagnosed using Google Health or Microsoft Azure Health, does your organisation offer them the option of a second opinion from a human clinician?
38 Has your trust considered whether AI-assisted care could discourage patients from raising concerns with clinicians?
39 Do your patient information leaflets and consent forms mention the AI tools used in your diagnostic or treatment pathways?
40 When Watson Health automates administrative decisions about care pathways, does your trust track whether patients feel they had adequate involvement in those decisions?

