For COOs and Operations Leaders

How Chief Operating Officers Can Protect Judgment While Using AI for Operations

AI optimisation tools excel at finding patterns in normal conditions but struggle with the unusual situations that happen regularly in real operations. Your value as a COO comes partly from recognising when a supply chain decision or process change needs to deviate from what the model recommends. The risk is not that AI will make bad decisions on its own, but that you will stop noticing when the situation demands human judgment instead of algorithmic efficiency.

These are suggestions. Your situation will differ. Use what is useful.


Recognise What Your Optimisation Models Are Actually Optimising For

When SAP AI or Oracle AI recommends a procurement schedule or warehouse allocation, you need to know which metrics drove that recommendation. Models optimise for what you measure. If your model optimises for cost per unit, it may recommend supply chain moves that leave you fragile when suppliers fail. Ask your operations analytics team to map every major recommendation back to the specific metrics in the model. Then ask yourself whether those metrics capture what matters operationally when things go wrong.
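If it helps to make that request concrete, here is a minimal sketch of the kind of traceability record you might ask your analytics team to produce for each major recommendation. The structure and field names are illustrative assumptions, not features of SAP AI or Oracle AI.

```python
from dataclasses import dataclass, field

@dataclass
class RecommendationTrace:
    """Illustrative record linking one AI recommendation to the metrics
    that drove it. All names here are assumptions for the sketch, not
    part of any vendor's actual API."""
    recommendation: str                 # e.g. "shift 60% of volume to Supplier B"
    driving_metrics: dict[str, float]   # metric name -> weight in the objective
    unmeasured_risks: list[str] = field(default_factory=list)  # what the model cannot see

def review(trace: RecommendationTrace) -> None:
    # The two questions for every major recommendation:
    # which metrics produced it, and what do they leave out?
    print(f"Recommendation: {trace.recommendation}")
    for metric, weight in sorted(trace.driving_metrics.items(), key=lambda kv: -kv[1]):
        print(f"  driven by {metric} (weight {weight:.0%})")
    for risk in trace.unmeasured_risks:
        print(f"  NOT captured: {risk}")

review(RecommendationTrace(
    recommendation="consolidate orders with lowest-cost supplier",
    driving_metrics={"cost_per_unit": 0.7, "lead_time": 0.3},
    unmeasured_risks=["single-supplier dependency", "surge capacity"],
))
```

The point of the record is not the code itself but the discipline: if a recommendation cannot be mapped back to named metrics and named blind spots, you do not yet understand what it is optimising for.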

Keep Your Operational Intuition Close to the Ground

The discretionary judgment you use to handle edge cases comes from being close to the work. When you rely on Copilot summaries of supply chain data or Palantir dashboards as your only view of operations, you lose the feel for which situations are genuinely unusual. You stop noticing the small signals that precede larger problems. Schedule time each week to look at raw operational data directly, or talk to the people running the processes your AI tools oversee. You do not need to do this for every decision. You need to do it enough to recognise when a situation falls outside what the model was built to handle.

Build Decision Checkpoints Where Judgment Stays Required

The pressure to scale efficiency is real, but fully automating decisions removes the moment where you catch errors before they cascade. Instead of letting SAP AI or Oracle AI make routine procurement decisions without review, keep human approval at the point where consequences become large. This might mean keeping sign-off on orders above a certain value, or on changes to long-term supplier contracts, or on any recommendation that contradicts recent operational experience. The checkpoint slows some decisions slightly. It prevents the brittle failure mode where the AI system makes dozens of small optimisations that each seem fine in isolation but together leave no margin for error.
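As a sketch of what such a checkpoint can look like in an approval workflow, the rule below routes a recommendation to human sign-off when any of the thresholds mentioned above is crossed. The thresholds and field names are hypothetical placeholders to be set from your own risk appetite, not defaults from any vendor system.

```python
from dataclasses import dataclass

# Hypothetical thresholds -- set these from your own risk appetite.
VALUE_REQUIRING_SIGNOFF = 250_000    # order value above which a human approves
LONG_TERM_CONTRACT_MONTHS = 12       # contract length that triggers review

@dataclass
class ProcurementRecommendation:
    order_value: float
    contract_months: int
    contradicts_recent_experience: bool  # flagged by the people closest to the work

def needs_human_approval(rec: ProcurementRecommendation) -> bool:
    """Keep judgment in the loop where consequences become large.
    Routine, low-stakes recommendations pass through; anything that
    crosses a threshold waits for sign-off."""
    return (
        rec.order_value > VALUE_REQUIRING_SIGNOFF
        or rec.contract_months >= LONG_TERM_CONTRACT_MONTHS
        or rec.contradicts_recent_experience
    )

rec = ProcurementRecommendation(order_value=40_000, contract_months=3,
                                contradicts_recent_experience=True)
print(needs_human_approval(rec))  # True: operators flagged a contradiction
```

Note the third condition: a small order can still demand review if the people running the process say the recommendation does not match what they are seeing.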

Measure What Matters, Not Just What the Model Can Measure

Over-indexing on measurable metrics causes operational fragility. Your process optimisation tool measures on-time delivery, cost per transaction, and inventory turns because those are easy to quantify. It cannot measure supplier relationship resilience, staff morale during change, or how well a process handles the unexpected situation that occurs once a year. When you review performance against AI recommendations, track the metrics the model does not see. If your supplier diversification score dropped after following cost optimisation, or if your team is making more manual exceptions to the process, those are signals the model is missing something important about your operation.
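One way to operationalise this is a simple side-channel report that tracks the signals named above alongside the model's own metrics. The data sources, names, and the 5% alert threshold below are assumptions for illustration, not recommended values.

```python
def shadow_metrics_report(manual_exceptions: int, total_decisions: int,
                          suppliers_before: int, suppliers_after: int,
                          exception_alert_rate: float = 0.05) -> list[str]:
    """Flag fragility signals the optimisation model does not measure.
    The exception-rate threshold is an illustrative default."""
    warnings = []
    exception_rate = manual_exceptions / total_decisions
    if exception_rate > exception_alert_rate:
        warnings.append(
            f"Manual exception rate {exception_rate:.1%}: staff are routinely "
            "overriding the process, so the model may be missing something."
        )
    if suppliers_after < suppliers_before:
        warnings.append(
            f"Supplier base shrank from {suppliers_before} to {suppliers_after} "
            "after cost optimisation: resilience may have been traded for savings."
        )
    return warnings

for warning in shadow_metrics_report(manual_exceptions=38, total_decisions=500,
                                     suppliers_before=14, suppliers_after=9):
    print(warning)
```

Neither signal appears on a standard optimisation dashboard, which is exactly why they belong in your review.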

Design Change Management Around Preserving Judgment, Not Just Adopting the Tool

When you deploy Palantir, Copilot, or new SAP AI capabilities, the change management effort matters more than the tool's power. Staff working close to the processes often see problems the AI will miss. If your programme treats adoption as a one-way shift from human decision-making to AI-driven decisions, you lose access to that ground-level judgment exactly when you need it most. Instead, structure the change so that workers understand when and why they should follow the AI recommendation, and when they should question it. Train people to recognise the edge cases their role handles well. Make it safe for them to raise concerns about a recommendation without that being treated as resistance to the tool.

Key principles

  1. Know which specific metrics your optimisation model maximises and whether those metrics still represent what matters when operations face disruption.
  2. Stay operationally close enough to recognise when a situation differs from the normal conditions your AI tool was built to handle.
  3. Keep human judgment active in the decisions where edge cases occur regularly and consequences scale quickly.
  4. Track the outcomes your metrics cannot measure, because fragility often appears there first.
  5. Structure change management around preserving and improving judgment, not around removing it.

