40 Questions Pharmacists Should Ask Before Trusting AI
Your judgement is the final safety layer between an AI recommendation and a patient outcome. When you stop asking questions about why an alert appeared or what a system assumed, you lose the ability to catch the errors that matter most.
These are suggestions. Use the ones that fit your situation.
Drug Interaction Alerts from Lexi-Interact and Epic AI
1. Does this alert assume the patient is taking the full listed dose of both drugs, or could their actual dose be lower than the interaction threshold?
2. Has the AI flagged this interaction because it appears in the database, or because there is evidence it causes harm in this patient's age group and renal function?
3. What is the mechanism of this interaction, and do the patient's current lab values suggest they are actually at risk from it today?
4. If I override this alert, what specifically am I responsible for monitoring, and do I have the clinical information to do that monitoring?
5. Is the AI showing me the strength of evidence behind this interaction, or just that it exists somewhere in the literature?
6. Could this alert be firing because of a similar drug name or code rather than the drug the patient is actually taking?
7. Does the system know that the patient stopped taking one of these drugs two weeks ago, or is it only reading the current medication list?
8. Has the AI considered the temporal sequence: did the patient take both drugs without incident already, or is this a new combination?
9. What is this alert not telling me that a pharmacist would need to know before deciding to counsel or block the prescription?
10. If ten pharmacists in my organisation see the same alert today, how many will override it, and what does that pattern tell me about its reliability?
Clinical Decision Support from Cerner AI and Micromedex
11. Does the system have access to this patient's liver function tests, and if not, is it safe to follow a recommendation that assumes normal hepatic metabolism?
12. What information did the AI use to recommend this dose, and what critical patient details am I adding that the system could not see?
13. If the recommendation seems unusual, have I verified it against the current evidence in the BNF or journal articles, or am I deferring to the system?
14. Is the AI recommending a drug based on diagnosis alone, without knowing whether the patient has already tried it and failed or had an adverse reaction?
15. Does this system know what other specialists are treating this patient for, or is it only seeing the primary care indication?
16. Could the system be recommending a medication because the patient's allergy flag is incomplete or uses different terminology than the AI's database?
17. If I cannot articulate why the AI suggested this instead of the alternative, am I equipped to counsel the patient or defend this choice to another clinician?
18. Does the recommendation account for this patient's specific social situation, such as ability to afford the drug or manage multiple daily doses?
19. Is the AI recommending based on average patient data, and does this patient fall outside the range where that evidence applies?
20. What would happen if I used my own clinical judgement instead of accepting this recommendation, and could that outcome be better or worse?
Alert Fatigue and the Normalisation of Risk
21. How many alerts did the system generate for this patient's medication profile, and am I actually reading all of them or scanning for the red ones?
22. When was the last time I actually stopped a prescription based on an alert from this system, rather than routinely overriding?
23. If I override 95 percent of alerts in a week, what does that tell me about the threshold at which the system raises a flag?
24. Have I caught a real medication error recently because of an AI alert, or have all my near-misses come from my own clinical review?
25. Could I defend to a patient safety investigator why I ignored this specific alert, or would I struggle to explain it?
26. Does my organisation track which alerts actually prevent harm and which are false positives, and am I using that data to calibrate my attention?
27. If an alert is correct but I have seen it hundreds of times before, am I still giving it the same clinical weight, or has my brain learned to dismiss it?
28. What is the consequence for the system if it generates too many alerts, and who bears the cost if I start ignoring real ones?
29. Am I more likely to override a familiar alert than one I have not seen before, even if the clinical risk is the same?
30. Could my organisation reduce alert fatigue by changing system settings, and if so, why haven't we?
Patient Counselling and the Loss of Clinical Conversation
31. Is the counselling text the system generated based on the actual medication the patient is receiving, or a template that assumes a standard patient?
32. Does the AI-generated guidance mention the specific side effect this patient asked about, or does it only cover the most common ones?
33. If I read the system's counselling notes word-for-word, would the patient understand what they need to do, or would they need me to translate?
34. Has the system flagged something the patient already knows from previous fills, and am I reinforcing their existing knowledge or wasting their time?
35. When I counsel this patient, am I starting with what the AI told me to say, or am I starting with what they actually need to know?
36. Could the patient's question reveal that the AI missed something clinically important, such as a contraindication the system did not flag?
37. If the patient misunderstands the counselling, is it because the language was unclear, or because the system gave advice that did not fit their situation?
38. Am I counselling this patient differently because an AI system told me to, and would my clinical judgement alone suggest a different emphasis?
39. Does the system's guidance mention interactions with foods or supplements this patient actually uses, or only the ones in the generic database?
40. When I finish counselling, could I explain to a patient safety officer that every piece of advice I gave came from my independent assessment, not just repetition of system text?
How to use these questions
Keep a log of alerts you override and why. After one month, review it to see which alerts genuinely helped your judgement and which created noise.
When an AI system recommends something that contradicts your initial instinct, stop and ask whether the system has information you missed, or whether it is making an assumption you would not make.
Counsel patients first on what matters to them, then fill in the system's standard advice. You will often find the AI missed what they actually needed to hear.
If you cannot explain the clinical reasoning behind an AI recommendation in plain language, you are not ready to apply it yet.
Treat your ability to question and override AI systems as a professional skill that needs practice. Use it regularly or it atrophies.