By Steve Raju

For CFOs and Finance Leaders

Cognitive Sovereignty Checklist for Chief Financial Officers

Reading time: about 20 minutes. Last reviewed March 2026.

AI forecasting tools give you probabilities so fast that your team stops questioning the inputs. Your board receives polished narratives generated from templates rather than genuine insight. FP&A stops building scenarios from first principles because the model always provides a starting point. Protecting your judgement means knowing when to ignore what the tool says.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Audit the Assumptions Before You Trust the Output

Demand the input variables from your Anaplan model before reading the forecast (beginner)
AI forecasting tools hide their arithmetic inside confidence intervals. Force your team to list every assumption about revenue growth, margin compression, and headcount before looking at the model output. This breaks the anchoring effect that happens when you see the number first.
Stress-test one assumption per month by hand (beginner)
Pick one key variable from your Bloomberg Terminal AI output or your ChatGPT financial model. Recalculate the impact manually using only what you know from operational data. This keeps your intuition sharp and catches when AI has misunderstood your business.
Ask your FP&A lead where the AI tool disagreed with last quarter's actual results (intermediate)
Every AI forecast is wrong. The useful question is whether it was wrong in ways that matter to your strategy. If Copilot predicted 8% revenue growth and you got 6%, understand why before you build next quarter's plan on the same assumptions.
Require a written assumption list for every board forecast, separate from the narrative (intermediate)
Your Tableau AI dashboard produces beautiful visuals with embedded assumptions baked in. Before that chart goes to the board, your FP&A team must list the five most sensitive assumptions as a separate document. The board needs to see what the model is betting on.
Run a scenario where the AI's base case is wrong in ways that hurt you most (advanced)
If your AI tool forecasts stable working capital, build a scenario where it balloons. If it predicts flat interest rates, stress the downside. This is not about paranoia. This is about understanding whether your organisation can survive if the AI was optimistic.
Compare AI assumptions to what your operations team sees on the ground (intermediate)
Your Bloomberg Terminal AI might assume customer churn stays at 4%. Your head of sales knows two major clients are in danger. These inconsistencies are not data errors. They are signs that the AI is working from stale or aggregated data. Surface them before they become board surprises.
Lock assumptions in writing before you run the model (beginner)
The temptation with ChatGPT or Copilot is to iterate: try different numbers until the output looks right. This is backwards. Write down your assumptions first. Then run the model once. Then examine the gap between what you predicted and what the AI calculated.

Build Board Narratives from Your Own Analysis, Not from Templates

Write your board narrative before you generate any AI summary (beginner)
Copilot and ChatGPT can produce polished board language in seconds. But a template narrative obscures what you actually know. Write your own story first. Then use AI only to tighten the language. Your board hired you for your judgement, not for access to an AI language model.
Identify the one decision your board needs to make from this forecast (intermediate)
If your narrative could support multiple conclusions, it is not a narrative. It is a data dump. Before Tableau AI shapes your commentary, decide whether you are asking the board to increase investment, hold steady, or exit a business. Build the story around that one choice.
Reject any board slide that Anaplan generated without your rewrite (beginner)
AI-generated slides follow patterns. They prioritise completeness over relevance. Every slide your FP&A team produces for the board should have your fingerprints on it. Rewrite, condense, and redirect until the slide says what you believe matters.
State what you do not know as clearly as what you do know (intermediate)
AI tools tend to smooth uncertainty into confidence intervals that sound certain. Your board needs to hear where your forecast is weak. If revenue guidance depends entirely on a contract renewal you won't know about until March, say so. This is not hedging. This is honesty.
Call out the scenarios the AI model did not consider (advanced)
Your Tableau dashboard shows a downside case that assumes 10% margin compression. But your real downside risk is regulatory change. The AI cannot see that because it only knows history. Tell your board which risks the model ignores.
Replace AI confidence language with your own conviction level (intermediate)
Do not say the forecast is 85% likely. Instead say you are confident in the guidance unless macro conditions shift or a major customer contract fails. Use language that lets the board know what would change your mind.

Rebuild Scenario-Building and Risk Assessment as Core Skills

Run one scenario per quarter by hand without letting the model suggest the variables (intermediate)
When you always start with the AI's base case, your creativity atrophies. Once a quarter, have your FP&A team build a scenario using only a whiteboard and operational knowledge. This trains your team to think about cause and effect instead of pattern-matching.
Test your M&A due diligence assumptions by comparing them to what the AI would forecast (advanced)
Before you acquire a company, your team builds a synergy model. Run that same revenue and cost data through ChatGPT or Copilot separately. If the AI forecast is wildly different from your diligence numbers, that gap is information. It might mean your assumptions are too aggressive or that the AI has missed something real about the target.
Make your FP&A team explain why they disagree with the Anaplan forecast (intermediate)
If your team's bottom-up forecast differs from your top-down model, that tension is valuable. Do not smooth it away. Make them argue both sides. This debate is how you catch AI overconfidence and premature consensus.
Build a risk register separate from your AI scenario analysis (beginner)
Your Tableau AI dashboard produces downside, base, and upside cases. These are variations, not risks. A risk register asks what could break your plan. A cyber attack on your billing system. A key person leaving. A supply chain rupture. These are not probability distributions. They are contingencies that AI tools rarely surface.
When the Bloomberg Terminal AI output surprises you, rebuild the logic yourself before accepting it (advanced)
AI tools sometimes find real insights in data. Sometimes they find patterns that do not exist. If the AI flags an unusual correlation in your working capital or cash conversion cycle, do not put it in your board pack until you understand the mechanics yourself.
Assign someone on your team to be sceptical of every model output (beginner)
That person's job is to find flaws, question assumptions, and propose alternative scenarios. This is not a waste of time. This is how you avoid the groupthink that happens when a room full of people trusts the same AI tool.
Run your stress test scenarios through multiple AI tools and compare the results (advanced)
If Copilot, ChatGPT, and your Anaplan tool all give you different answers to the same stress test question, that inconsistency matters. It tells you how much of the output depends on the tool rather than on economics. Use that knowledge to decide how much weight to give any single forecast.
