For Nurses and Clinical Staff

40 Questions Nurses Should Ask Before Trusting AI

AI tools in clinical settings generate alerts, summaries, and suggestions that feel authoritative but lack the context you gathered at the bedside. Asking the right questions before you act on AI output is how you keep your judgement in charge and your patients safe.

These are suggestions. Use the ones that fit your situation.


Questions about AI alerts and monitoring

1 How many times this shift has this same alert fired for this patient, and how many times did it matter clinically?
2 What specific vital sign or lab value triggered this alert right now, and does it fit with what I observed when I was at the bedside?
3 Could this alert be firing because the patient moved, the sensor shifted, or the equipment needs recalibration rather than because something changed in the patient's condition?
4 What did the patient look like, sound like, and feel like when I last assessed them five minutes ago, and does that match what the monitor is telling me?
5 Is this alert based on a single reading or a pattern, and if it is a single reading, how often do I see false positives for this threshold?
6 Have I dismissed this type of alert before without consequence, and if so, am I at risk of dismissing a real problem today?
7 What is the time lag between when the patient's condition changed and when this alert appeared on my screen?
8 Does the alert tell me what to do, or does it just tell me something is abnormal?
9 If I ignore this alert, what is the worst outcome that could happen, and how likely is it?
10 What information about this patient is missing from the data the monitoring system can see?

Questions about AI-generated documentation and summaries

11 Does this AI summary of the patient's status include the detail about pain location, behaviour, or response that I documented and that the oncoming shift needs to know?
12 Has the AI summary removed or changed any information about what the patient told me in their own words?
13 If the incoming nurse reads only this AI summary and not my full assessment, will they miss anything that changes how they approach this patient?
14 Did the AI include the family's concerns or requests that I documented, or has it reduced them to a clinical fact?
15 Does this summary reflect the order in which things happened, or has the AI reordered the events in a way that changes their meaning?
16 Has the AI filled in a gap in my documentation with an assumption, and if so, is that assumption correct?
17 What is missing from this AI-generated note that I would normally say aloud in a proper handoff conversation?
18 Does the summary capture why I made a particular decision, or does it only show what the decision was?
19 If a safety incident happened later, would this AI-generated documentation show that I assessed and made a deliberate choice, or would it look like I missed something?
20 Have I actually reviewed and verified this AI summary before it went into the medical record, or did I assume it was accurate?

Questions about clinical decision support and recommendations

21 Does this AI suggestion fit with the clinical picture I built by talking to the patient and examining them, or does it contradict something I found?
22 What data about this patient did the AI see, and what did it not see because I have not documented it yet?
23 Is the AI recommending treatment based on what this specific patient needs, or based on what works statistically for patients like this one?
24 If I follow this recommendation and it goes wrong, can I explain to a clinician or regulator why I chose to act on an AI suggestion?
25 Does this AI recommendation account for the patient's previous bad experience, their stated preference, or their cultural or religious needs?
26 Who is responsible if I act on this recommendation and it causes harm, and who will investigate what went wrong?
27 Has the AI recommendation been checked against the patient's current medications, allergies, and recent procedures?
28 What would I do if the AI tool was offline right now, and is that plan better or worse than what the AI is suggesting?
29 Is the AI recommending something I can actually do, or does it require a doctor's order, equipment, or time I do not have?
30 How confident is the AI in this recommendation, and how is that confidence score calculated?

Questions about your own clinical judgement and skills

31 When was the last time I made a clinical assessment without checking what the AI said first, and am I still confident in that skill?
32 Do I know what normal looks like for this patient based on my own observations, or am I relying entirely on what the monitor or the AI tells me?
33 If the AI system goes down in the middle of a shift, what can I still assess and monitor using only my own senses and training?
34 Have I caught a real problem that the AI missed recently, and if so, what was I noticing that the AI could not see?
35 Do I feel pressure to document or act in a way that makes sense to the AI system rather than in a way that makes sense for the patient?
36 Can I explain to a junior nurse why I made a clinical decision without pointing to what the AI said or showed?
37 What part of my nursing assessment am I doing less thoroughly because I trust the AI to catch problems in that area?
38 If I had to hand over this patient to another nurse without any AI tools available, what critical information would I tell them in person?
39 Do I recognise the signs that a patient is deteriorating, or have I become dependent on the alert to tell me?
40 When I disagreed with an AI alert or suggestion before, was I right, and did I document that so I can learn from the pattern?

