For COOs and Operations Leaders
Chief Operating Officers often treat AI optimisation outputs as answers rather than starting points for human judgement. This creates fragile operations that fail when real conditions deviate from the data the model learned from.
These are observations, not criticism. Recognising the pattern is the first step.
SAP AI optimises for lowest cost per unit on measurable parameters. It does not see supplier reliability during disruption, relationship capital built over years, or the cost of switching when a crisis hits. You adopt the recommendation because it is quantified and defensible to the board.
The fix
Before implementing any SAP recommendation that cuts costs by more than 15 percent, ask your procurement team which suppliers would be lost and what happens to lead times during the next shortage.
Palantir finds the most common path through your data. It does not account for the specialist exceptions you handle monthly that never appear in aggregate statistics. You standardise around what the model sees and lose the flexibility that made your operation resilient.
The fix
Map every exception your team handles in a typical quarter and verify that Palantir's recommended sequence accommodates at least 95 percent of them without manual override.
Microsoft Copilot generates plausible-sounding demand forecasts with no visibility into what it weighted or why. When you cannot explain the recommendation to your supply chain manager, you either blindly follow it or reject it entirely. Both responses waste the tool.
The fix
Ask Copilot to show you the three largest demand signals it identified in the last six months, then have your demand planner confirm whether those signals match what they saw in customer conversations.
Oracle AI optimises headcount to current workload using historical labour data. It does not know that your manufacturing ramp-up starts two months before peak season demand hits, or that training new operators takes eight weeks. You end up short during the period when you need the most capacity.
The fix
For any operation with clear seasonal patterns, manually override Oracle's recommendation by adding buffer staff two months ahead of your historical peak, then review whether you were right after three full cycles.
ChatGPT generates smooth, readable process steps based on training data. It omits the conditional logic and edge-case decisions that experienced operators actually make. Your team treats the AI-written procedure as gospel and loses the nuanced judgement they used to apply.
The fix
After ChatGPT drafts a process, have the person who currently does that work mark every place where they make a choice or decide differently based on context, then embed those decision trees into the final document.
Your AI tool measures inventory turns month to month and recommends lower stock to improve the metric. Stockouts are invisible in routine reporting until a customer cancels an order or you pay expedited shipping. The measurable metric improved while the unmeasured cost grew.
The fix
Before implementing any inventory reduction, calculate what one day of stockout costs you in lost revenue or emergency logistics, then set a maximum acceptable stockout frequency that your AI recommendation must not exceed.
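As a rough illustration of that trade-off, here is a minimal sketch in Python. Every figure in it is an assumption invented for the example, not output from any tool, so substitute your own numbers.

    # Illustrative sketch: weigh a proposed inventory reduction against stockout exposure.
    # Every figure here is an assumption invented for the example.
    annual_inventory_saving = 120_000    # carrying-cost saving the recommendation projects per year
    stockout_cost_per_day = 45_000       # lost revenue plus emergency logistics for one day out of stock
    expected_stockout_days = 4           # stockout days you expect per year at the lower stock level
    max_acceptable_stockout_days = 2     # the ceiling you set before adopting any recommendation

    expected_stockout_cost = stockout_cost_per_day * expected_stockout_days
    net_effect = annual_inventory_saving - expected_stockout_cost

    print(f"Expected stockout cost: {expected_stockout_cost:,}")
    print(f"Net effect of the recommendation: {net_effect:,}")
    print("Within the stockout ceiling:", expected_stockout_days <= max_acceptable_stockout_days)

With these illustrative numbers, the projected saving is more than wiped out by expected stockout cost, and the recommendation also breaches the stockout ceiling, so it would be rejected.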
Palantir optimises for orders leaving on schedule, but does not measure how much overtime or expediting went into meeting that target. Your on-time delivery rate rises, and your labour costs and equipment strain climb with it. The metric hides the real operational cost.
The fix
Add a second metric: the percentage of on-time deliveries achieved without unplanned overtime or rush fees, and do not let that metric drop below your baseline from before the AI tool was deployed.
SAP AI scores your processes on speed and cost per transaction. Rework, customer complaints, and warranty claims do not register in the efficiency score as it is built. You chase efficiency gains that actually shift costs to your customer or quality function.
The fix
For any process where the AI recommends a change that increases speed, require that defect rate and rework percentage stay flat or improve before you adopt the change.
High utilisation looks good in reports and is easy to measure with Oracle AI. It ignores the capacity buffer needed for maintenance, changeovers, and the flexibility to absorb urgent orders. When demand suddenly shifts, your high-utilisation operation cannot respond.
The fix
Set a maximum utilisation target for each production line based on the lead time flexibility your customers need, and treat that as a constraint the AI cannot optimise away.
Copilot or SAP AI shows you saved five percent per unit on procurement. You declare a margin win. But if profit would have moved anyway, because you sold more at the old margin or shifted product mix toward higher-margin items, the unit cost saving is noise. You celebrate the measurable number and miss what actually moved profit.
The fix
For any cost reduction recommendation, calculate the absolute profit impact on your actual sales mix for the next quarter, not just the per-unit saving.
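As a rough illustration, here is a minimal sketch in Python of the gap between the headline per-unit saving and the profit impact on the mix you actually plan to sell. All prices, costs, and volumes are assumptions invented for the example.

    # Illustrative sketch: headline per-unit saving versus profit impact on the planned sales mix.
    # All prices, costs, and volumes are assumptions invented for the example.
    old_unit_cost = 100.0
    new_unit_cost = 95.0            # the five percent per-unit saving the tool reports
    last_quarter_volume = 10_000    # units of this item sold last quarter
    next_quarter_volume = 8_000     # units planned next quarter after a mix shift to other items

    per_unit_saving = old_unit_cost - new_unit_cost
    headline_saving = per_unit_saving * last_quarter_volume       # what the per-unit number implies
    actual_profit_impact = per_unit_saving * next_quarter_volume  # what the planned mix delivers

    print(f"Headline saving: {headline_saving:,.0f}")
    print(f"Profit impact on the planned mix: {actual_profit_impact:,.0f}")

On these illustrative numbers, the headline saving overstates the real profit impact by 10,000 because the planned mix buys fewer of the affected units.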
You deploy Palantir's new sequence and tell your team the AI has optimised their workflow. They do not understand that the old way relied on local knowledge they no longer need, or that the new way trades speed for resilience. They resist because you have not given them permission to let go of the old judgment.
The fix
Before rolling out any AI recommendation, have your process owner explain in writing what the old method optimised for and what changed that makes the new method better for today's constraints.
Your experienced supply chain manager used to decide whether to accept a slightly late delivery based on inventory buffer and demand forecast. Now Palantir automatically rejects anything past the target date. Your best people stop thinking and start following rules. When an exception requires judgement, no one has it anymore.
The fix
Keep the final decision authority with your most experienced person for any transaction that falls outside normal parameters, and track how often they override the AI recommendation so you can tell whether the tool is learning from their judgement.
Copilot recommends a new supplier based on cost and lead time data. The supplier fails during the first order. You lose faith in the tool and go back to manual decisions. You do not investigate whether the failure was a one-time event or a pattern, or whether Copilot's recommendation was actually wrong or you were working with incomplete data.
The fix
When an AI recommendation fails, spend one hour with your subject matter expert diagnosing whether the model was given bad input, the real world changed after the recommendation, or the model itself is unreliable before deciding to stop using it.
Your operations team uses SAP AI without understanding that it minimises cost in the current period while your strategy prioritises supply chain resilience. They see it as a tool and do not apply any strategic filter. When the recommendation conflicts with resilience, they follow the tool instead of the strategy.
The fix
Spend 30 minutes with each team that uses an AI tool explaining what metric it optimises for, what it ignores, and under what conditions you would override it in favour of a different goal.
Your scheduling relies on Oracle AI to assign work to equipment and people. When the tool goes down for maintenance or produces an error, your operation stops because no one remembers how to schedule manually or understands the current constraints. You have optimised away resilience.
The fix
For any AI tool that makes decisions affecting daily operations, maintain a manual decision process that your team practises at least quarterly so you can run for one week without the tool.
Worth remembering