40 Questions Chief Operating Officers Should Ask Before Trusting AI
Your supply chain optimisation tool can tell you what to do faster than your team can. It cannot tell you whether what it optimises for still serves your business. The difference between an AI recommendation and a good decision is the questions you ask before you act.
These are suggestions. Use the ones that fit your situation.
Before You Act on SAP AI or Oracle AI Supply Chain Recommendations
1. What cost did the AI model assign to a stockout, and does that match what an actual stockout costs your organisation?
2. If the model recommends reducing safety stock for a supplier, what happens to your operations when that supplier misses a delivery by one week?
3. Has anyone checked whether the AI is optimising for average performance or for your worst-case scenarios?
4. When Palantir flags a vendor as high risk, what specific data points triggered that flag, and do those points match your actual experience with that vendor?
5. Does the AI model include the cost of switching suppliers, or only the unit cost of the new option?
6. What happens to the AI recommendation if your biggest customer changes their order volume by 20 percent?
7. Has the model ever been tested against a year when demand behaved unusually, or does it assume past patterns repeat?
8. If you had followed this recommendation six months ago, would you have made more money or less?
9. Who in your organisation knows a supplier well enough to spot that the AI recommendation ignores a real constraint the data does not capture?
10. What does the model do when two of its objectives conflict, and is that trade-off the one you would make?
When Microsoft Copilot or ChatGPT Summarises Your Operational Data
11. Can you point to the actual source data that led to this summary, or are you working from what the AI claimed to find?
12. If this summary is wrong, how would you know before you made a decision based on it?
13. Does the AI summary highlight what is unusual, or only what is biggest?
14. When the AI says performance improved, improved compared to what baseline?
15. Has the AI picked up on seasonal patterns, or is it treating January the same as August?
16. If you have seen this type of operational problem before, does the AI summary match what you learned from that experience?
17. What metrics did the AI choose to highlight, and what metrics did it leave out?
18. Is the AI reporting what happened, or what the data happened to show this particular month?
19. Does this summary explain why something changed, or just that it changed?
20. If a team member disagreed with this analysis, what would you need to see to trust the team member over the AI?
When Process Optimisation Tools Recommend Removing Steps or Decision Points
21. What types of orders or situations fall outside the AI model's training data, and how will your process handle those?
22. If you remove this approval step, who loses the ability to catch problems that do not fit the standard case?
23. How often in the past year did someone use their judgement to override a standard process, and was that override the right call?
24. Does the AI recommendation assume the same quality of supplier performance next year as this year?
25. What happens to cycle time if the automated decision occasionally needs human review, and does the model account for that?
26. When the AI says a step adds no value, what did it measure to reach that conclusion?
27. Who on your team handles the situations where the standard process breaks, and are they aware their role might disappear?
28. If compliance or audit requirements change, how quickly can you add a decision step back into an automated process?
29. Has anyone traced what happens when the AI makes the wrong decision at this point in your process?
30. Does the optimisation model include the cost of getting a decision wrong, or only the cost of making the decision?
When Your Team Questions an AI Recommendation They Think Is Wrong
31. What operational knowledge does this person have that the AI does not, and have you checked whether they are right?
32. Has this team member caught problems that the AI would have missed in previous months?
33. If you override the AI and it turns out the person was wrong, what actually happens to your operation?
34. What would change in your data if you incorporated what this person is telling you?
35. Is this person questioning the recommendation because it conflicts with past practice, or because they see a real risk?
36. How much institutional knowledge would you lose if you removed the discretionary judgement step where this person typically intervenes?
37. When the AI recommendation succeeds, does your team understand why, or do they just accept the result?
38. Have you ever asked the person who knows this process best what constraints or risks the AI model might not capture?
39. If you had to explain to a customer why you made this decision, would you need to mention that your team disagreed with the AI?
40. What would it look like for you to find the right balance between trusting the AI and preserving the judgement that handled edge cases well?
How to Use These Questions
Keep a record of AI recommendations that turned out to be wrong. Review them quarterly to spot what the model consistently misses.
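One lightweight way to keep such a record is a structured log that can be filtered by quarter and grouped by whatever the model missed. A minimal Python sketch; the field names and example entries are illustrative assumptions, not anything the article prescribes:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MissedRecommendation:
    logged_on: date
    tool: str            # e.g. "SAP AI", "Copilot summary" (illustrative labels)
    recommendation: str  # what the AI advised
    outcome: str         # what actually happened
    missed_factor: str   # the constraint or signal the model did not capture

def quarterly_review(log: list[MissedRecommendation], year: int, quarter: int):
    """Return one quarter's misses, grouped by the factor the model missed,
    so recurring blind spots stand out in the review."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    in_quarter = [m for m in log
                  if m.logged_on.year == year and m.logged_on.month in months]
    by_factor: dict[str, list[MissedRecommendation]] = {}
    for m in in_quarter:
        by_factor.setdefault(m.missed_factor, []).append(m)
    return by_factor
```

The grouping is the point: if "supplier lead-time variance" shows up in three consecutive quarterly reviews, that is a factor the model consistently misses.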
Assign one person to ask the sceptical question in every meeting where an AI recommendation drives a major decision. Rotate who has this role so it does not become their identity.
When an AI tool recommends removing a step, require the team to run parallel operations for at least one quarter. Compare outcomes before you make the change permanent.
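The end-of-quarter comparison can be as simple as a side-by-side check on one outcome metric. A minimal sketch, assuming a lower-is-better metric such as defect rate; the function name, metric, and tolerance are all assumptions:

```python
from statistics import mean

def compare_parallel_runs(with_step: list[float], without_step: list[float],
                          tolerance: float = 0.02) -> str:
    """Compare a lower-is-better outcome metric between the current process
    (approval step kept) and the AI-trimmed process run in parallel.
    Returns a recommendation string, not a decision."""
    current, trimmed = mean(with_step), mean(without_step)
    if trimmed <= current * (1 + tolerance):
        return "trimmed process within tolerance: candidate for permanent change"
    return "trimmed process degraded outcomes: keep the step"
```

A tolerance is worth making explicit: it forces the team to say in advance how much degradation, if any, the saved cycle time is worth.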
Set a rule that any AI recommendation affecting more than one supply chain must be validated by someone who remembers what happened the last time something similar went wrong.
If the AI model optimises for cost alone, ask what would happen to that metric if your definition of acceptable risk changed. Use the answer to reset the optimisation weights.
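The weight reset can be made concrete with a toy scoring example. The supplier data, the weights, and the stockout-risk penalty below are all illustrative assumptions; the point is only that the ranking can flip once risk carries weight:

```python
def score(option: dict, w_cost: float, w_risk: float) -> float:
    # Lower is better: weighted sum of unit cost and a stockout-risk penalty.
    return w_cost * option["unit_cost"] + w_risk * option["stockout_risk_cost"]

suppliers = [
    {"name": "A", "unit_cost": 9.0, "stockout_risk_cost": 6.0},   # cheap, fragile
    {"name": "B", "unit_cost": 10.0, "stockout_risk_cost": 1.0},  # dearer, reliable
]

cost_only = min(suppliers, key=lambda s: score(s, w_cost=1.0, w_risk=0.0))
risk_aware = min(suppliers, key=lambda s: score(s, w_cost=1.0, w_risk=0.5))
# Under cost-only weights supplier A wins; once stockout risk is priced in,
# supplier B wins. The weight on risk is the definition of acceptable risk.
```

If changing the risk weight never changes the recommendation, the model was not really trading cost against risk in the first place.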