For Pharmacists

The Most Common AI Mistakes Pharmacists Make

Pharmacists are ignoring legitimate drug interaction alerts because they have seen too many false positives from Epic AI and Cerner AI. They are also accepting clinical recommendations from Lexi-Interact without checking whether the AI knows the patient's renal function, age, or reason for taking the medication.

These are observations, not criticism. Recognising the pattern is the first step.


Alert Fatigue and Missed Safety Signals

Epic AI and Cerner AI generate alerts for every possible interaction, including many that cause no clinical harm in practice. After overriding the same alert fifty times, you stop reading it carefully and override it automatically.

The fix

Before overriding any alert, write down the two drugs, the interaction mechanism, and one reason this specific patient is safe from it.

Most pharmacists check only whether an alert exists, not whether it is rated 'severe' or 'monitor'. You are treating all alerts the same when the tool already categorises them differently.

The fix

Only proceed with a prescription if the alert is marked 'minor' or 'monitor' and you can name the monitoring step you will tell the patient about.

If you do not recognise a drug name or brand, you may assume the AI alert is overcautious rather than recognising you do not have enough information. This is especially risky with newer medications where your experience is limited.

The fix

When you do not know a drug well, treat the alert as higher priority and check a pharmacology reference before dispensing, not after.

IBM Micromedex tells you to 'monitor' an interaction but does not tell you how, how often, or what sign to look for. You tick the box and assume responsibility without a real plan.

The fix

Write the specific monitoring task and timeframe for the patient before you counsel them, so you know what you are actually asking them to watch for.

When Cerner AI suggests you can suppress an alert because similar prescriptions passed before, you assume the system has checked the evidence. It has only checked that other pharmacists did the same thing.

The fix

Only suppress an alert if you personally understand why the interaction is safe for this patient, not because the system has seen it before.

Outsourcing Clinical Judgement to Decision Support

Lexi-Interact gives standard dosing but may not know the patient's eGFR or whether they are on dialysis. You counsel the patient with the AI's dose rather than the dose that is safe for them.

The fix

Check the patient's renal function in their notes before you use any dosing recommendation, and adjust the AI's suggestion if needed.

ChatGPT is trained on general text; it is not trained on this patient's allergies, contraindications, or medication history. A more detailed question does not change what the tool actually knows.

The fix

Use ChatGPT only for questions about general pharmacology or patient education language, never for individual medication decisions.

If Epic AI suggests an antibiotic class and your patient has documented allergy to that class, you may still follow the recommendation because the AI sounds authoritative. You are checking the tool rather than checking your patient data.

The fix

Always verify that the AI tool has access to the patient's full allergy list and documented contraindications before accepting any recommendation.

IBM Micromedex gives you standard counselling points but your patient's real question is about whether the drug will affect their fertility, their job, or their driving. You deliver the script and miss the question.

The fix

Ask the patient one open question about what concerns them most before you start counselling, then tailor the AI guidance to their actual worry.

An 80-year-old on five medications with liver disease is not the same risk as a healthy 30-year-old taking the same two drugs, but Epic AI applies the same alert logic to both. You follow the system's categorisation instead of your clinical sense.

The fix

If you think a patient's age, weight, organ function, or medication count changes the risk, override the alert only after you have documented your reasoning.

Eroding Professional Competence and Patient Safety

When you know Cerner AI will catch any problems, you skim the prescription instead of working through it yourself. Your ability to spot interactions gets weaker each year.

The fix

Spot-check five prescriptions each week without looking at the AI alert first, to keep your own pattern recognition sharp.

You send a patient home on a drug with a 'monitor for' alert, but the AI has not said how you will check in or what threshold would mean stopping the drug. You have shifted responsibility to the patient without a real safety net.

The fix

For every alert marked 'monitor', decide before dispensing whether you will call the patient in three days, send a text, or schedule a follow-up appointment.

Your instinct says something is wrong with a prescription but Epic AI marked it as safe, so you override your doubt. You are using the tool to outsource responsibility rather than to support your own judgement.

The fix

If your gut tells you a prescription is risky, stop and check it manually, even if the AI says it is fine. Your uncertainty is data.

Neither Lexi-Interact nor Micromedex knows that your patient told you last week they are not taking their other medications, or that they asked about stopping one. You treat the AI's picture of the patient as complete.

The fix

Before you use any AI recommendation, mentally list three things you know about this patient that the AI does not, and consider whether they change the answer.
