40 Questions Nurses Should Ask Before Trusting AI
AI tools in clinical settings generate alerts, summaries, and suggestions that feel authoritative but lack the context you gathered at the bedside. Asking the right questions before you act on AI output is how you keep your judgement in charge and your patients safe.
These are suggestions. Use the ones that fit your situation.
Questions about alerts and monitoring
1. How many times this shift has this same alert fired for this patient, and how many times did it matter clinically?
2. What specific vital sign or lab value triggered this alert right now, and does it fit with what I observed when I was at the bedside?
3. Could this alert be firing because the patient moved, the sensor shifted, or the equipment needs recalibration, rather than because something changed in the patient's condition?
4. What did the patient look like, sound like, and feel like when I last assessed them five minutes ago, and does that match what the monitor is telling me?
5. Is this alert based on a single reading or a pattern, and if it is a single reading, how often do I see false positives for this threshold?
6. Have I dismissed this type of alert before without consequence, and if so, am I at risk of dismissing a real problem today?
7. What is the time lag between when the patient's condition changed and when this alert appeared on my screen?
8. Does the alert tell me what to do, or does it just tell me something is abnormal?
9. If I ignore this alert, what is the worst outcome that could happen, and how likely is it?
10. What information about this patient is missing from the data the monitoring system can see?
Questions about AI-generated documentation and summaries
11. Does this AI summary of the patient's status include the detail about pain location, behaviour, or response that I documented and that the oncoming shift needs to know?
12. Has the AI summary removed or changed any information about what the patient told me in their own words?
13. If the incoming nurse reads only this AI summary and not my full assessment, will they miss anything that changes how they approach this patient?
14. Did the AI include the family's concerns or requests that I documented, or has it reduced them to a clinical fact?
15. Does this summary reflect the order in which things happened, or has the AI reordered the events in a way that changes their meaning?
16. Has the AI filled in a gap in my documentation with an assumption, and if so, is that assumption correct?
17. What is missing from this AI-generated note that I would normally say aloud in a proper handoff conversation?
18. Does the summary capture why I made a particular decision, or does it only show what the decision was?
19. If a safety incident happened later, would this AI-generated documentation show that I assessed the patient and made a deliberate choice, or would it look like I missed something?
20. Have I actually reviewed and verified this AI summary before it went into the medical record, or did I assume it was accurate?
Questions about clinical decision support and recommendations
21. Does this AI suggestion fit with the clinical picture I built by talking to the patient and examining them, or does it contradict something I found?
22. What data about this patient did the AI see, and what did it not see because I have not documented it yet?
23. Is the AI recommending treatment based on what this specific patient needs, or based on what works statistically for patients like this one?
24. If I follow this recommendation and it goes wrong, can I explain to a clinician or regulator why I chose to act on an AI suggestion?
25. Does this AI recommendation account for the patient's previous bad experience, their stated preference, or their cultural or religious needs?
26. Who is responsible if I act on this recommendation and it causes harm, and who will investigate what went wrong?
27. Has the AI recommendation been checked against the patient's current medications, allergies, and recent procedures?
28. What would I do if the AI tool was offline right now, and is that plan better or worse than what the AI is suggesting?
29. Is the AI recommending something I can actually do, or does it require a doctor's order, equipment, or time I do not have?
30. How confident is the AI in this recommendation, and how is that confidence score calculated?
Questions about your own clinical judgement and skills
31. When was the last time I made a clinical assessment without checking what the AI said first, and am I still confident in that skill?
32. Do I know what normal looks like for this patient based on my own observations, or am I relying entirely on what the monitor or the AI tells me?
33. If the AI system goes down in the middle of a shift, what can I still assess and monitor using only my own senses and training?
34. Have I caught a real problem that the AI missed recently, and if so, what was I noticing that the AI could not see?
35. Do I feel pressure to document or act in a way that makes sense to the AI system rather than in a way that makes sense for the patient?
36. Can I explain to a junior nurse why I made a clinical decision without pointing to what the AI said or showed?
37. What part of my nursing assessment am I doing less thoroughly because I trust the AI to catch problems in that area?
38. If I had to hand over this patient to another nurse without any AI tools available, what critical information would I tell them in person?
39. Do I recognise the signs that a patient is deteriorating, or have I become dependent on the alert to tell me?
40. When I disagreed with an AI alert or suggestion before, was I right, and did I document that so I can learn from the pattern?
How to use these questions
Keep a small log of times you dismissed an AI alert and what actually happened. Use this log to calibrate your own alert fatigue and to spot patterns in false positives.
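If your unit keeps that log in a spreadsheet or export, the calibration step is simple to automate. Here is a minimal Python sketch; the alert names, log structure, and example entries are illustrative, not real clinical data:

```python
# Illustrative alert log: (alert_type, clinically_significant) pairs,
# recorded each time an alert was dismissed or acted on.
alert_log = [
    ("low_spo2", False),
    ("low_spo2", False),
    ("low_spo2", True),
    ("tachycardia", False),
    ("tachycardia", False),
]

def false_positive_rate(log, alert_type):
    """Fraction of a given alert type that turned out not to matter clinically.

    Returns None when there is no data for that alert type.
    """
    outcomes = [significant for kind, significant in log if kind == alert_type]
    if not outcomes:
        return None
    return outcomes.count(False) / len(outcomes)

print(f"low_spo2: {false_positive_rate(alert_log, 'low_spo2'):.0%} false positives")
# prints "low_spo2: 67% false positives"
```

A high false-positive rate for one alert type is exactly the pattern question 5 asks about; a rate near zero tells you that alert deserves your attention every time.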
Before you use an AI-generated summary in a handoff, read your original assessment out loud to the oncoming nurse. Note what is missing from the AI version.
When an AI suggests a clinical action, ask the ordering clinician what they would do if the AI tool was not available. Use that as your baseline.
Teach one junior nurse per week to do a focused assessment without looking at the monitor first. Help them build confidence in what their own eyes and hands tell them.
Document the reasoning behind your decisions, not just the decisions themselves. This protects you and makes it harder for AI to erase your judgement from the record.