By Steve Raju

For Pharmacists

Cognitive Sovereignty Checklist for Pharmacists

About 20 minutes · Last reviewed March 2026

AI systems like Epic, Cerner, and Lexi-Interact generate so many interaction alerts that you stop reading them carefully. Your clinical judgment about which interactions matter for each patient atrophies because the system gives you answers without asking about the patient's actual situation. If you rely on AI counselling prompts, you may never discover what your patient actually needs to know.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Managing Drug Interaction Alerts

Set a daily time to review your override patterns in your AI system (beginner)
Look at which interaction alerts you dismissed yesterday or last week. If you overrode the same interaction type three times, you are experiencing alert fatigue. This is when you stop reading alerts carefully and start clicking past real risks.
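If your alert system can export override events, the tally itself takes seconds. A minimal sketch, assuming a CSV-style export with an interaction field — the records and column names below are illustrative, not a real Epic or Cerner schema:

```python
from collections import Counter

# Hypothetical override log, as exported from your alert system.
# Field names are illustrative only.
overrides = [
    {"date": "2026-03-02", "interaction": "simvastatin+clarithromycin"},
    {"date": "2026-03-02", "interaction": "warfarin+ibuprofen"},
    {"date": "2026-03-03", "interaction": "simvastatin+clarithromycin"},
    {"date": "2026-03-04", "interaction": "simvastatin+clarithromycin"},
]

counts = Counter(row["interaction"] for row in overrides)

# Any interaction type overridden three or more times is a candidate
# sign of alert fatigue and deserves a deliberate second look.
fatigued = [pair for pair, n in counts.items() if n >= 3]
print(fatigued)  # → ['simvastatin+clarithromycin']
```

The point is not the script but the habit: a mechanical count surfaces the repeated dismissals your memory smooths over.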
Before accepting an AI interaction recommendation, write down what you already know about the patient (intermediate)
Epic and Cerner alerts do not know your patient's kidney function, age category, or current dose changes. Writing down what you know first forces you to compare it against what the AI saw. You may find the AI missed critical context that changes whether this interaction matters.
When an alert seems low risk, name the actual harm that could happen (beginner)
Do not dismiss an interaction just because it seems minor. Say it aloud or write it down. If you cannot name a real patient harm, the alert deserves dismissal. If you can name a harm, the alert deserves serious thought even if it feels familiar.
Check whether your AI tool flagged this interaction during the last fill for the same patient (intermediate)
If Lexi-Interact flagged a simvastatin and clarithromycin interaction six months ago and the patient is still on both drugs, the alert lost meaning. Your job is to decide if the interaction mattered then and whether it matters now. Do not let past acceptance become automatic future acceptance.
Assess interaction severity without looking at the AI severity rating first (advanced)
Read the mechanism and clinical evidence before you see the system's major/moderate/minor tag. Your independent judgment about severity may differ from the algorithm, especially for individual patients. Then compare your rating to the AI rating and notice where they diverge.
Document your reasoning when you override a significant interaction alert (intermediate)
Write a note explaining why you dismissed it. This prevents alert fatigue from becoming invisible. If you cannot write a clear reason, you may not have a clear reason. Quarterly review of your override notes shows patterns in your own judgment.
Ask the patient directly whether they are taking the medicines the AI thinks they are (beginner)
Patients skip doses, take drugs differently than prescribed, or use older stock. If the patient says they take simvastatin twice weekly instead of daily, the interaction severity changes. The AI checked the prescription record, not the patient's actual behaviour.

Protecting Your Clinical Judgment in Dispensing Decisions

When an AI system recommends you reject a prescription, identify what patient information the system cannot see (intermediate)
Cerner and Epic decision support operate on visible data only. The system does not know about the patient's recent hospital discharge, conversation with their GP about stopping a drug class, or allergy to a related medicine. These unseen facts often make a flagged prescription the right choice.
Keep a manual list of three prescriptions you have accepted despite AI rejection this month (intermediate)
Review them weekly. Why did your judgment differ from the system's judgment? If you cannot articulate a reason, you may have overridden for convenience rather than patient benefit. If you can articulate it, you are maintaining active judgment.
Test one AI recommendation per week by thinking through the decision alone first (beginner)
Before reading the system's suggestion, decide what you would do. Then look at the recommendation. If they match, you are thinking alongside the AI. If they differ, investigate why. This prevents your reasoning from dissolving into the system's reasoning.
When age-related dosing seems wrong for a specific patient, check their actual renal function (advanced)
IBM Micromedex and similar tools apply population rules. An 82-year-old with normal creatinine clearance may tolerate a standard dose that the AI flagged as risky. Your judgment about this patient's individual physiology is still your responsibility.
Document when you choose not to follow an AI dispensing recommendation (beginner)
Pharmacists who silently disagree with AI systems develop hidden judgment that never gets checked. Writing it down creates accountability. It also builds a record of when your judgment proved right or wrong.
Ask yourself whether you would make this decision the same way without the AI system (intermediate)
If the AI recommendation suddenly made you confident about a decision you felt uncertain about, ask why. Did the AI teach you something new, or did it just resolve your doubt through automation? These feel different and matter differently for your safety role.

Maintaining Real Counselling Conversations

Before you use an AI counselling prompt, talk to the patient first without it (beginner)
Listen to what they ask about. What worries them? What have they tried before with similar medicines? Then check the AI prompt. If the prompt covers the same ground the patient raised, you know what matters. If it does not, you know the patient's actual concern was not on the system's list.
Record one patient question per shift that the AI counselling guidance did not address (beginner)
After a week, review these questions. They reveal gaps between what patients need to know and what the system thinks they need to know. This prevents your own judgment about patient education from atrophying inside AI-generated scripts.
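The weekly review can be as simple as counting repeats. A minimal sketch, assuming you keep the shift log in any machine-readable form — every field name and entry below is hypothetical:

```python
from collections import Counter
from datetime import date

# Hypothetical shift log: one patient question per shift that the AI
# counselling prompt did not cover. Entries are illustrative.
log = [
    {"day": date(2026, 3, 2), "question": "Can I take this with my herbal sleep remedy?"},
    {"day": date(2026, 3, 3), "question": "What if I miss a dose on dialysis days?"},
    {"day": date(2026, 3, 5), "question": "Can I take this with my herbal sleep remedy?"},
]

# A question that comes up more than once in a week is a strong
# candidate for your own counselling routine, whatever the AI script says.
repeats = [q for q, n in Counter(row["question"] for row in log).items() if n > 1]
print(repeats)
```

A paper notebook works just as well; what matters is that the gaps get counted rather than forgotten.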
When using ChatGPT for patient counselling background, verify the dose information against your pharmacy system (beginner)
ChatGPT generates plausible-sounding information that can be wrong. If you use it to prepare talking points for a patient, cross-check the dose, frequency, and interaction warnings against Lexi-Interact or your verified source. Do not let the convenience of the chatbot replace your verification step.
Explain your clinical reasoning aloud to patients when you counsel them (intermediate)
Instead of delivering the AI counselling prompt as fact, say why this medicine matters for their condition and why you are watching for specific side effects. This keeps your own judgment active and lets patients understand your thinking, not just receive instructions.
When a patient disagrees with AI-guided counselling, investigate before you defend the system (intermediate)
If a patient says they tried this medicine before and had a side effect the AI prompt did not mention, their experience is real data. Their individual response to the medicine matters more than the population data in the system. Acknowledge what they are telling you.
Adapt your counselling based on the patient's health literacy and language ability (intermediate)
An AI-generated counselling script uses the same words for every patient. Your patient may not understand medical terminology. They may speak English as a second language. Your judgment about how to explain this medicine to this patient cannot be automated.
Test whether your counselling changed the patient's understanding by asking them to tell you back what matters (advanced)
Ask the patient to explain when they take the medicine or what sign of a side effect means they should call their GP. If they cannot say it back, your counselling did not work, whether the AI prompt was thorough or not. Your job includes measuring whether the patient actually understood.

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
