For COOs and Operations Leaders

40 Questions Chief Operating Officers Should Ask Before Trusting AI

Your supply chain optimisation tool can tell you what to do faster than your team can. It cannot tell you whether what it optimises for still serves your business. The difference between an AI recommendation and a good decision is the questions you ask before you act.

These are suggestions. Use the ones that fit your situation.


Before You Act on SAP AI or Oracle AI Supply Chain Recommendations

1 What cost did the AI model assign to a stockout, and does that match what an actual stockout costs your organisation?
2 If the model recommends reducing safety stock for a supplier, what happens to your operations when that supplier misses a delivery by one week?
3 Has anyone checked whether the AI is optimising for average performance or for your worst-case scenarios?
4 When Palantir flags a vendor as high risk, what specific data points triggered that flag, and do those points match your actual experience with that vendor?
5 Does the AI model include the cost of switching suppliers, or only the unit cost of the new option?
6 What happens to the AI recommendation if your biggest customer changes their order volume by 20 percent?
7 Has the model ever been tested against a year when demand behaved unusually, or does it assume past patterns repeat?
8 If you followed this recommendation six months ago, would you have made more money or less?
9 Who in your organisation knows a supplier well enough to spot that the AI recommendation ignores a real constraint the data does not capture?
10 What does the model do when two of its objectives conflict, and is that trade-off the one you would make?
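Question 1 has a concrete test. A model's safety-stock recommendation depends directly on the stockout cost it assumes, and the standard newsvendor critical ratio makes that sensitivity visible. The sketch below is purely illustrative: the cost figures are invented, not drawn from any real tool, and a real supply chain model will be more elaborate. The point is only that a modest change in the assumed stockout cost moves the target service level substantially.

```python
# Illustrative only: how the assumed stockout cost changes the service
# level a model would target, via the newsvendor critical ratio.
# All cost figures below are invented for illustration.

def target_service_level(stockout_cost, holding_cost):
    """Critical ratio: the optimal probability of not stocking out."""
    return stockout_cost / (stockout_cost + holding_cost)

holding_cost = 2.0  # assumed cost of carrying one unit per period

# A model might assume a modest per-unit stockout cost...
modelled = target_service_level(stockout_cost=8.0, holding_cost=holding_cost)
# ...while a real stockout (expediting, lost orders, penalties) costs far more.
actual = target_service_level(stockout_cost=38.0, holding_cost=holding_cost)

print(f"Service level at the modelled cost: {modelled:.0%}")  # 80%
print(f"Service level at the actual cost:   {actual:.0%}")    # 95%
```

If the tool will not tell you what stockout cost it used, that alone is an answer to question 1.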

When Microsoft Copilot or ChatGPT Summarises Your Operational Data

11 Can you point to the actual source data that led to this summary, or are you working from what the AI claimed to find?
12 If this summary is wrong, how would you know before you made a decision based on it?
13 Does the AI summary highlight what is unusual, or only what is biggest?
14 When the AI says performance improved, improved compared to what baseline?
15 Has the AI picked up on seasonal patterns, or is it treating January the same as August?
16 If you have seen this type of operational problem before, does the AI summary match what you learned from that experience?
17 What metrics did the AI choose to highlight, and what metrics did it leave out?
18 Is the AI reporting what happened, or what the data happened to show this particular month?
19 Does this summary explain why something changed, or just that it changed?
20 If a team member disagreed with this analysis, what would you need to see to trust the team member over the AI?
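Questions 14 and 15 interact: "improved" can be true against one baseline and false against another once seasonality is in play. The figures in this sketch are invented for illustration, but the mechanism is general. The same month's on-time rate reads as an improvement against the prior month and as a decline against the same month last year.

```python
# Invented figures: why "performance improved" depends on the baseline.
on_time_rate = {
    "Jan last year": 0.94,  # seasonal low-demand month
    "Dec": 0.88,            # holiday peak dragged performance down
    "Jan this year": 0.91,
}

current = on_time_rate["Jan this year"]
vs_prior_month = current - on_time_rate["Dec"]                      # +0.03
vs_same_month_last_year = current - on_time_rate["Jan last year"]   # -0.03

print(f"vs prior month:          {vs_prior_month:+.2f}")
print(f"vs same month last year: {vs_same_month_last_year:+.2f}")
```

An AI summary that says "on-time delivery improved three points" is reporting the first comparison and staying silent about the second.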

When Process Optimisation Tools Recommend Removing Steps or Decision Points

21 What types of orders or situations fall outside the AI model's training data, and how will your process handle those?
22 If you remove this approval step, who loses the ability to catch problems that do not fit the standard case?
23 How often in the past year did someone use their judgement to override a standard process, and was that override the right call?
24 Does the AI recommendation assume the same quality of supplier performance next year as this year?
25 What happens to cycle time if the automated decision occasionally needs human review, and does the model account for that?
26 When the AI says a step adds no value, what did it measure to reach that conclusion?
27 Who on your team handles the situations where the standard process breaks, and are they aware their role might disappear?
28 If compliance or audit requirements change, how quickly can you add a decision step back into an automated process?
29 Has anyone traced what happens when the AI makes the wrong decision at this point in your process?
30 Does the optimisation model include the cost of getting a decision wrong, or only the cost of making the decision?
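Question 30 can be made arithmetic. Removing a review step saves a visible per-decision cost but adds an expected error cost that only appears when something goes wrong. The figures below are invented for illustration; the break-even depends entirely on your own error rates and error costs, which is exactly what the question asks the model to show.

```python
# Invented figures: the saved review cost versus the expected error cost.
review_cost = 4.00        # assumed cost of the human check per decision
error_cost = 500.00       # assumed cost when a wrong decision slips through
error_rate_auto = 0.012   # assumed error rate with the step removed
error_rate_reviewed = 0.002  # assumed error rate with the step in place

expected_cost_auto = error_rate_auto * error_cost
expected_cost_reviewed = review_cost + error_rate_reviewed * error_cost

print(f"Fully automated:  {expected_cost_auto:.2f} per decision")
print(f"With review step: {expected_cost_reviewed:.2f} per decision")
```

With these assumptions the "slower" process is cheaper per decision. A model that counts only the 4.00 review cost recommends removing the step; a model that also counts the error cost may not.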

When Your Team Questions an AI Recommendation They Think Is Wrong

31 What operational knowledge does this person have that the AI does not, and have you checked whether they are right?
32 Has this team member caught problems that the AI would have missed in previous months?
33 If you override the AI and it turns out the person was wrong, what actually happens to your operation?
34 What would change in your data if you incorporated what this person is telling you?
35 Is this person questioning the recommendation because it conflicts with past practice, or because they see a real risk?
36 How much institutional knowledge would you lose if you removed the discretionary judgement step where this person typically intervenes?
37 When the AI recommendation succeeds, does your team understand why, or do they just accept the result?
38 Have you ever asked the person who knows this process best what constraints or risks the AI model might not capture?
39 If you had to explain to a customer why you made this decision, would you need to mention that your team disagreed with the AI?
40 What would it look like for you to find the right balance between trusting the AI and preserving the judgement that handled edge cases well?


