By Steve Raju

For COOs and Operations Leaders

Cognitive Sovereignty Checklist for Chief Operating Officers

About 20 minutes · Last reviewed March 2026

SAP AI, Palantir, and Oracle AI can optimise your processes faster than your team can understand them. This speed creates a cognitive trap: you begin trusting the model's logic more than the operational intuition you built from years of handling edge cases. Cognitive sovereignty means keeping your judgement in charge while these tools do their work.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Protect Your Operational Intuition Before It Fades

Document what your best operators do when rules break (beginner)
Before you hand off a decision to AI, capture how your experienced team handles the 5 percent of situations that fall outside normal process. These are the moments where human judgement creates value. Once you stop practising this skill, you lose it.
Ask your AI tool what it is actually optimising for (beginner)
When Palantir or Oracle AI recommends a supply chain change, get the vendor to name the specific metric it improved. If it optimised cost per unit without measuring supply chain resilience or quality failures, you now know where the blind spot is. This is not about rejecting the recommendation. It is about seeing what the model cannot see.
Run a manual process in parallel to one AI-driven process (intermediate)
Pick a non-critical operational decision and have your team make it without the AI recommendation for one full cycle. Compare outcomes. You will either gain confidence in the AI or spot the failure modes that numbers alone would not reveal.
Assign one person to challenge the AI recommendation each week (intermediate)
Give someone on your ops team explicit permission to question why the model recommended what it did. This person should present an alternative decision based on operational intuition. Even if you choose the AI recommendation, this practice keeps your team engaged in actual thinking rather than approval-stage rubber stamping.
Measure what happens to decisions the AI says it is uncertain about (intermediate)
Most AI tools flag low-confidence recommendations. Track whether these decisions fail more often than high-confidence ones. If they do not, your model may be poorly calibrated. If they do, ensure you have a human override process in place before the model makes these calls alone.
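The comparison above can be run with nothing more than a spreadsheet export. As a minimal sketch in Python, with made-up record fields (your tool's export format will differ):

```python
# Minimal sketch: compare failure rates of low- vs high-confidence
# AI recommendations. The field names "confidence" and "failed" are
# illustrative assumptions, not any vendor's actual schema.
decisions = [
    {"confidence": "low", "failed": True},
    {"confidence": "low", "failed": False},
    {"confidence": "high", "failed": False},
    {"confidence": "high", "failed": False},
]

def failure_rate(records, confidence):
    """Share of tracked decisions at a given confidence level that failed."""
    subset = [r for r in records if r["confidence"] == confidence]
    return sum(r["failed"] for r in subset) / len(subset) if subset else 0.0

# A well-calibrated model should show low-confidence recommendations
# failing more often. If the two rates are similar, question the
# calibration before trusting the confidence flags.
print(f"low-confidence failure rate:  {failure_rate(decisions, 'low'):.0%}")
print(f"high-confidence failure rate: {failure_rate(decisions, 'high'):.0%}")
```

The point is not the code but the habit: keep the two rates side by side, and escalate when low-confidence calls stop failing more often than high-confidence ones.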
Rotate the people who review AI recommendations (beginner)
If the same person approves AI decisions month after month, they become a rubber stamp. Rotate the approvers so fresh eyes catch the patterns of thinking that have become invisible.
Create a log of AI recommendations your team rejected and why (advanced)
Track the decisions where humans said no to the model and what happened as a result. This log is your evidence that human judgement still adds value. It also shows you which AI recommendations your team trusts least, which tells you where the model has a credibility problem.
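One lightweight way to keep such a log is a plain CSV file that anyone on the team can append to. The columns below are illustrative assumptions, not a prescribed schema; adapt them to your own review process:

```python
import csv
import datetime

# Illustrative rejection-log columns; rename to fit your process.
FIELDS = ["date", "recommendation", "rejected_by", "reason", "outcome"]

def log_rejection(path, recommendation, rejected_by, reason, outcome="pending"):
    """Append one rejected AI recommendation to a CSV log, writing the
    header row only when the file is new."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:
            writer.writeheader()
        writer.writerow({
            "date": datetime.date.today().isoformat(),
            "recommendation": recommendation,
            "rejected_by": rejected_by,
            "reason": reason,
            "outcome": outcome,
        })
```

The "outcome" column starts as "pending" on purpose: the log only earns its keep when someone goes back later and records whether the human call turned out better than the model's.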

Stay in Control of What Metrics Drive Your Organisation

List the metrics you use and name what each one actually represents (beginner)
When you tell Copilot or ChatGPT to optimise your warehouse operations, write down whether you care more about throughput, quality, safety, staff retention, or cost. Then check if the AI tool has access to all of these. Most models optimise for what is easy to measure, not what matters most to you.
Audit your KPI dashboard for metrics that went up while something important went down (intermediate)
Process times may improve while first-contact resolution rates drop. Inventory turns may increase while supplier relationships weaken. These inversions are normal and important. They tell you your metrics are incomplete, not that your operations are better.
Ask operations leaders from your best-performing sites what metrics they ignore (beginner)
The people closest to the work know which measurements the business watches but do not actually predict success. Before you hand control to an AI system, capture this tacit knowledge about what matters in your environment.
Set explicit boundaries on what the AI can optimise (intermediate)
Tell your SAP AI or Oracle AI implementation team which metrics are off limits. If cost reduction cannot come at the expense of safety or quality, say so up front. This is your chance to encode your values into the system before the model learns to work around these constraints.
Track one vanity metric and one real-world outcome in parallel (advanced)
Pick a process where the AI improved a metric you care about. Also measure something harder to quantify: did staff engagement improve, did customers complain less, did you sleep better knowing the operation was more resilient? These unglamorous observations often catch what dashboards miss.
Review how your metrics have shifted since you deployed AI (intermediate)
Compare your KPI priorities from before the AI system arrived to now. If your focus has shifted toward what the AI optimises well and away from what it struggles with, you have let the tool rewire your strategy. This is not always wrong, but it should be intentional.

Prepare Your Organisation for When AI Recommendations Fail

Stress test your AI recommendation in isolation before you deploy it (intermediate)
Before Palantir's supply chain recommendation changes your supplier mix, run it against your worst recent scenarios. What would have happened to this recommendation during the last supply shock, quality crisis, or demand spike you experienced? If the model would have made things worse, you need a human gate.
Name the failure mode you are most afraid of (beginner)
What outcome would damage your organisation most: cost overruns, safety incidents, customer service collapse, or loss of supply chain flexibility? Once you name it, check whether your AI tool is even designed to prevent it. Many models optimise away from risk that is measurable and toward risk that is not.
Build a manual override process that does not punish people who use it (beginner)
If your culture treats AI rejections as failures of individual judgement, people will stop trying. Make it safe to say the AI recommendation feels wrong, even if you cannot explain why. Some of the best operational decisions come from this instinct.
Run a post-mortem on the last major AI-driven decision that underperformed (advanced)
Do not wait until the system causes a crisis. Pick a smaller decision where the AI recommendation did not work as well as expected. Walk through what the model missed, what assumptions proved wrong, and what your team should have questioned. This conversation is your insurance policy against overconfidence.
Keep one decision-making process deliberately manual and AI-free (intermediate)
Choose one process where you ban AI recommendations entirely. This becomes your control group. It also keeps a core group of people practised at making complex decisions without model support. If your AI system ever fails, you will need these people to think clearly without it.
Document what your team will do if the AI system goes offline for a week (beginner)
Palantir goes down, SAP has a data quality crisis, your data feed breaks. Can your operations run without AI recommendations? If the answer is no, your team has already lost the skills to manage manually. This is a fragility risk disguised as progress.


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
