For Healthcare Administrators

40 Questions Healthcare Administrators Should Ask Before Trusting AI

Your AI tools show efficiency gains that look real on a spreadsheet but feel hollow when you walk the unit. The questions you ask before acting on an AI recommendation determine whether you build a faster system or a fragile one.

These are suggestions. Use the ones that fit your situation.


Questions About the Data Your AI Was Trained On

1 When Epic AI recommends patient flow changes based on historical throughput data, does that training data include months when your ED was understaffed or operating below capacity?
2 If Cerner's bed management AI was trained on your organisation's data from 2019 to 2021, how does it account for the permanent staffing changes you made in 2022?
3 Does the vendor tell you what happened to patients whose cases were unusual or rare in the training dataset? Did the AI simply ignore them?
4 When Azure Health suggests optimal scheduling based on past patterns, are those patterns built on data from before your nurses started taking mental health leave?
5 Has the vendor shown you the actual patient outcomes for cases the AI flagged as low-priority versus cases it flagged as high-priority during the training period?
6 If your organisation changed its admission criteria or discharge protocols during the training period, did the vendor rebuild the model or just note it as a limitation?
7 Does the AI system know the difference between a cancelled procedure that was genuinely unnecessary and one that was cancelled because a surgeon called in sick?
8 When IBM Watson Health analyses your resource allocation, does it know which departments have informal workarounds that made the official process faster?
9 Has anyone checked whether the training data overrepresents your best-performing units and underrepresents your struggling ones?
10 If the vendor trained the model on data from other health systems, how different are those organisations' case mix, payer mix, and discharge destinations from yours?

Questions About What Gets Measured and What Gets Hidden

11 When your dashboard shows bed turnaround time improved by 18 percent, does it also show whether post-discharge readmissions changed in the same period?
12 If Epic AI recommends discharging patients earlier based on throughput optimisation, who is tracking whether those patients are readmitted within 30 days?
13 Does your vendor contract require them to report when their recommendations correlate with increases in adverse events or safety incidents?
14 When Cerner's scheduling AI reduces idle time in operating theatres, are you measuring whether staff are rushing through post-operative documentation?
15 Has anyone asked your clinical staff whether the efficiency gains feel real or whether they are simply working faster without better systems?
16 If your organisation is now hitting all efficiency targets, what is the baseline you are comparing against? Was it set by operations staff or by clinical staff?
17 Does your data governance process flag when an AI recommendation contradicts what your experienced ward managers know about patient flow?
18 When Azure Health suggests staffing reductions based on demand forecasting, does the model account for the staff illness and turnover that happens when people are overworked?
19 Has your organisation measured the cost of staff turnover in clinical departments where the AI made big scheduling changes?
20 Are you tracking complaints from patients or families separately from formal incident reports? Problems caused by AI-driven efficiency often surface first as patient experience complaints, well before they appear in incident data.

Questions About Clinical Judgment and Deskilling

21 When you implement an AI recommendation for bed allocation, which clinicians stopped making that decision themselves and how will you know if their judgment atrophies?
22 If a senior nurse or doctor disagrees with what Epic AI recommends, do you have a process where they can override it without it being logged as a deviation?
23 Does your clinical governance committee review AI-generated operational recommendations, or are those classified as administrative and never reach clinical eyes?
24 When IBM Watson Health suggests which patients to prioritise for certain services, has your medical director reviewed the logic or just accepted the outcome?
25 If you implement Cerner's recommendations to consolidate certain functions, are you planning for the cost of retraining or rehiring if you need to undo it?
26 Which operational decisions made by your AI systems would be nearly impossible to reverse in six months if the recommendations turned out to be harmful?
27 Do your junior doctors and nurses see the clinical reasoning behind decisions, or do they only see the AI output and learn to trust the system without understanding it?
28 When Azure Health flags certain types of cases as lower priority, have you asked whether that affects which cases junior clinicians get to work on?
29 Has anyone measured whether your clinical staff still know how to do the operational tasks the AI now handles, or would they struggle if the system failed?
30 If the AI system is making resource allocation decisions that affect which wards get staffed first, are those decisions informed by clinical input or just throughput data?

Questions About Governance, Contracts, and Your Real Costs

31 Does your contract with the vendor require them to disclose when the AI recommendation correlates with worse clinical outcomes?
32 If Epic AI is making recommendations that affect patient safety, is that classified as a clinical tool requiring clinical governance review or an operational tool that skips that step?
33 Has anyone calculated the cost of the extra management oversight needed to monitor what the AI is recommending versus the savings it claims to deliver?
34 When you signed the contract for Cerner's AI module, did the vendor commit to retraining your staff if the system is withdrawn or fails?
35 Does your organisation own the data being fed into Microsoft Azure Health, and can you extract it and move to another system if the vendor relationship ends?
36 If the AI system makes a recommendation that harms a patient, do you have clarity on whether the liability falls to your organisation or the vendor?
37 When your board reviews the AI savings projection, are you comparing it to the cost of increased auditing, clinical governance, and staff retraining?
38 Has your organisation negotiated the right to audit the AI model and the data it was trained on, or is that considered proprietary by the vendor?
39 Do you have a plan for what happens operationally if the AI system makes a series of bad recommendations whose effects take months to show up in outcomes?
40 When the vendor shows you their savings projections, can you ask which health systems those projections came from and whether their case mix is similar to yours?



The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
