For Pharmacists
Protecting Your Judgement: AI Tools for Pharmacists Without Alert Fatigue
Clinical decision support built into EHR systems such as Epic and Cerner flags hundreds of drug interactions each week, but most are not clinically important for your specific patients. When you override the same alert repeatedly, your brain stops seeing it as a real warning, and the one dangerous combination gets missed. The risk is not that AI makes bad recommendations. The risk is that you stop thinking about why you are accepting or rejecting them.
These are suggestions. Your situation will differ. Use what is useful.
Recognise Alert Fatigue Before It Changes Your Decisions
Your pharmacy system generates interaction alerts that are technically correct but clinically irrelevant. A warfarin and ibuprofen flag appears for a patient on a stable dose of both for six months. You click through it. Tomorrow you see the same alert and click through again. By the tenth time, your brain treats it as noise. The moment this happens, you have lost the ability to catch a real problem when it appears. Alert fatigue is not laziness. It is how your attention works under load.
- Track which alerts you override most often in your own practice. If you override the same interaction more than twice a week, ask why the system is flagging it and whether the threshold needs adjustment in your EHR settings; a tracking sketch follows this list.
- Separate alerts into two categories: ones you always act on, and ones you always override. For the override pile, document the clinical reason once and request a system configuration change rather than clicking through repeatedly.
- When you feel the urge to click through an alert without reading it, stop and note what you are about to skip. This moment is your warning sign that fatigue is setting in.
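If your EHR cannot report override counts directly, a personal log and a few lines of script will do. Here is a minimal sketch in Python, assuming a simple (date, alert name) log you keep yourself; the alert names and the two-per-week threshold are illustrative, not an export format from any real system.

```python
from collections import Counter
from datetime import date, timedelta

# Hypothetical personal log: one (date, alert name) entry each time
# you click through an alert. Entries here are illustrative only.
override_log = [
    (date(2024, 5, 6), "warfarin + ibuprofen"),
    (date(2024, 5, 7), "warfarin + ibuprofen"),
    (date(2024, 5, 9), "warfarin + ibuprofen"),
    (date(2024, 5, 8), "simvastatin + amlodipine"),
]

def alerts_to_review(log, as_of, days=7, threshold=2):
    """Alerts overridden more than `threshold` times in the last `days` days."""
    cutoff = as_of - timedelta(days=days)
    counts = Counter(name for when, name in log if cutoff <= when <= as_of)
    return sorted(name for name, n in counts.items() if n > threshold)

# Anything this prints is a candidate for an EHR configuration request,
# not for another week of clicking through.
for alert in alerts_to_review(override_log, as_of=date(2024, 5, 10)):
    print(f"Review threshold for: {alert}")
```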
Check AI Recommendations Against Patient Context That the System Cannot See
Interaction checkers such as Lexi-Interact and Micromedex give you drug interaction severity ratings based on population data. They do not know if your patient has renal impairment, whether they are taking the doses listed in the database, or if they have tolerated this exact combination before. A major interaction warning might be correct for a young patient with normal kidney function but irrelevant for a 72-year-old with stage 3 CKD on a lower dose. Your judgement about whether the recommendation applies to this person, in this moment, is the real safety layer.
- Before accepting a clinical decision support recommendation from your EHR system, ask yourself: does this system know the patient's renal function, liver function, age, and current medication doses? If the answer is no, you must add that context yourself; a checklist sketch follows this list.
- Document one recent example where patient-specific information changed whether you followed an AI recommendation. Use this as a reference when you feel pressure to trust the system over your assessment.
- For patients on multiple interacting drugs long-term, flag their file so the alert does not reset your thinking each time they refill. You have already done the thinking. The alert should not force you to repeat it.
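That first question can be made mechanical. A minimal sketch, assuming hypothetical field names (renal_function, liver_function, age, current_doses) rather than any real EHR or interaction-checker schema; the point is the habit of listing what the system did not see.

```python
# Context the decision-support system typically cannot see. The field
# names below are assumptions for illustration, not a real EHR schema.
REQUIRED_CONTEXT = ["renal_function", "liver_function", "age", "current_doses"]

def missing_context(patient_record: dict) -> list[str]:
    """Fields the system had no value for when it made its recommendation."""
    return [f for f in REQUIRED_CONTEXT if patient_record.get(f) is None]

patient = {
    "age": 72,
    "renal_function": "CKD stage 3",
    "liver_function": None,   # not on file: supply it yourself
    "current_doses": None,    # database default doses, not actual doses
}

gaps = missing_context(patient)
if gaps:
    print("Add this context before accepting the recommendation:", ", ".join(gaps))
```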
Practise Patient Counselling Decisions, Do Not Automate Them
ChatGPT and similar tools can generate patient-facing language about a medication in seconds. The text is clear and usually medically accurate. It is not, however, tailored to what this patient needs to know right now. One patient needs to know the drug causes dizziness. Another needs permission to stop a medication that is making them miserable. A third needs reassurance that a side effect will fade. A templated counselling script written by AI covers none of these conversations. Your role is not to deliver information. Your role is to close the gap between what the patient thinks they need and what they actually need to take the medication safely.
- Use AI-generated counselling points as a checklist of basics, not as your talking script. Read the AI output, then close it and have the real conversation with the patient.
- After each counselling encounter, note one thing the patient asked or said that no template could have predicted. Over time, this collection shows you where your judgement adds the most value.
- When you are writing patient-facing text for your pharmacy, use AI as a first-draft editor, not a first-draft author. Start with what you know this population of patients actually needs to hear.
Keep Your Dispensing Safety Role by Questioning AI, Not Deferring to It
Your position as the last professional check before a patient receives a medication exists because computer systems have limits. A patient comes in with a new epilepsy prescription and a list of over-the-counter supplements. The EHR flags one interaction. You know epilepsy patients, you know which supplements actually matter, and you recognise a dosing pattern that concerns you for this particular person. The AI did its job. You do something harder: you apply experience and context that no system can hold. The moment you stop doing this thinking and accept what the system says, you have given away the role that matters most.
- Whenever an AI system recommends accepting a prescription without change, spend 30 seconds asking yourself what clinical reason would make you contact the prescriber. If you can name one, contact them.
- Set a rule for your practice: one prescription per shift where you actively question the AI recommendation even if the system shows green. This keeps your critical thinking muscle in use.
- Document the rare occasions when you catch something the AI missed. These become proof that your judgement still matters, and they reveal patterns that might improve your system settings; a simple catch log is sketched after this list.
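The log does not need to be elaborate. A minimal sketch, with hypothetical category labels; repeated categories are the patterns worth taking to whoever configures your system.

```python
from collections import Counter

# Hypothetical log of cases where your check caught something the
# system missed. Dates and category labels are illustrative only.
catches = [
    {"date": "2024-05-02", "category": "dose for renal impairment"},
    {"date": "2024-05-16", "category": "OTC supplement interaction"},
    {"date": "2024-06-03", "category": "dose for renal impairment"},
]

# Categories that recur are evidence for a system-settings request.
for category, n in Counter(c["category"] for c in catches).most_common():
    print(f"{category}: caught {n} time(s)")
```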
Protect the Skills That Make You Irreplaceable
Some of your clinical skills will improve with AI support. You will spot patterns faster and have wider access to information. Other skills will atrophy if you do not use them. The ability to assess a patient's adherence from their behaviour in the pharmacy. The instinct to recognise when someone is taking a medication wrong, and why. The conversational skill that helps a patient say what they are actually worried about. These are not tasks you can automate and still retain. They are the thinking you must do to stay sharp. When you hand them to AI, you lose them.
- Identify three clinical judgements you make most often. For each one, spend two weeks per year without AI support. Do the assessment or counselling from your own knowledge and experience first, then check what the system would have recommended.
- Record what prescribers ask you about most often. These conversations are where your expertise is most visible and most valuable. Protect time for them and do not let AI-generated responses replace your voice.
- Ask a colleague what they see you do best in practice. If it is something no AI system is currently trying to do, you have found your core irreplaceable skill. Build your practice around it.
Key principles
1. Alert fatigue is a safety problem, not a time management problem. You must actively manage which alerts change your thinking and which ones you have decided not to act on.
2. AI recommendations are correct about the interaction or the dosing. Your judgement is correct about whether the recommendation applies to this patient at this moment.
3. Patient counselling scripted by AI covers what patients need to know. Your conversation covers what this patient needs to hear.
4. Your dispensing safety role exists because systems have limits. The moment you stop questioning those limits, you have given away your role.
5. Skills you do not use will fade. The skills that matter most to your practice are the ones no AI system can replace yet.
Key reminders
- When you override the same alert more than twice in a week, request a system settings change rather than continuing to click through. This protects your attention for real risks.
- Before following clinical decision support, spend 10 seconds listing what the system does not know about your patient. If the list matters, your judgement matters.
- Generate AI counselling language, then put it away and have the real conversation. Use the generated text only if you forget what you wanted to say.
- Catch yourself dispensing without thinking by asking: what would I need to call the prescriber about right now? If you cannot answer, you are not thinking deeply enough.
- Spend two weeks per quarter working without AI decision support for one common scenario. The thinking muscle you maintain might be the one that catches the serious error.