For CFOs and Finance Leaders
CFOs are anchoring their board narratives to AI confidence intervals without testing the assumptions underneath. When your FP&A team stops building scenarios from first principles and starts accepting Anaplan's starting point as gospel, your risk assessment becomes a performance of certainty rather than an act of judgement.
These are observations, not criticism. Recognising the pattern is the first step.
Your Bloomberg Terminal AI or Copilot generates a 75% confidence interval on next quarter's revenue, and that number feels like fact because it comes with a decimal point. Your board sees that confidence interval and believes you have done rigorous stress testing. The real cost is that you have moved your judgement one step further from the underlying business drivers.
The fix
Before you present any AI forecast to the board, manually rebuild the three most critical assumptions (customer churn rate, average deal size, sales cycle length) from your own data and ask whether the AI result still holds.
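A minimal sketch of what that manual rebuild can look like; the file names, column names, and quarter label are hypothetical stand-ins for your own CRM and billing exports, not a real schema.

```python
import pandas as pd

# Hypothetical exports; file and column names are stand-ins for your own data.
deals = pd.read_csv("closed_deals.csv")          # one row per closed-won deal
customers = pd.read_csv("customer_quarters.csv") # one row per customer per quarter

# 1. Customer churn rate: share of the quarter's customers who churned.
in_quarter = customers[customers["quarter"] == "2024Q1"]
churn_rate = in_quarter["churned"].mean()  # assumes a boolean "churned" column

# 2. Average deal size: straight mean of closed-won contract values.
avg_deal_size = deals["contract_value"].mean()

# 3. Sales cycle length: days from created to closed; median resists outliers.
cycle_days = (pd.to_datetime(deals["closed_date"])
              - pd.to_datetime(deals["created_date"])).dt.days.median()

print(f"Churn {churn_rate:.1%}, deal size {avg_deal_size:,.0f}, "
      f"cycle {cycle_days:.0f} days")
```

If the AI forecast survives contact with these hand-built inputs, you can defend it. If it does not, you have found the conversation to have before the board does.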
These tools excel at producing scenario variants quickly, which creates the illusion that you have explored the decision space thoroughly. Your FP&A team stops asking hard questions about which scenarios matter and starts clicking buttons to generate options. After six months, nobody on your team remembers how to build a scenario by hand.
The fix
Define your scenario hypotheses in plain language before you open the tool. Write down what changes in each scenario and why that change matters to the board decision.
Copilot or ChatGPT produces a forecast, your team sees it as the baseline, and then all subsequent discussion circles around adjustments to that number. The first number you see becomes the gravity well for all thinking that follows. Your actual best estimate might be different, but it never surfaces.
The fix
Present your forecast alongside the AI output as two separate lines on the same chart, with explicit notes on where they diverge and why your judgement differs.
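A minimal matplotlib sketch of that chart, with placeholder figures; the annotation is the part that matters, because it forces you to say where and why you disagree.

```python
import matplotlib.pyplot as plt

quarters = ["Q1", "Q2", "Q3", "Q4"]
ai_forecast = [44.0, 45.5, 47.0, 48.5]   # AI output, £m -- placeholder figures
own_forecast = [44.0, 44.5, 45.0, 46.0]  # your independent estimate, £m

fig, ax = plt.subplots()
ax.plot(quarters, ai_forecast, marker="o", label="AI forecast")
ax.plot(quarters, own_forecast, marker="s", label="CFO judgement")
# State the divergence explicitly so the board sees the disagreement,
# not just two lines.
ax.text(0.03, 0.92,
        "Divergence from Q2: model assumes flat churn; we expect attrition",
        transform=ax.transAxes, fontsize=8)
ax.set_ylabel("Revenue (£m)")
ax.legend()
plt.show()
```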
Anaplan tells you that a 10% revenue decline produces a 15% impact on operating margin, and you cite that ratio in your risk narrative. You have not verified that the model is holding fixed costs constant at the right level or that your pricing power assumption is realistic under stress. The number sounds precise so it sounds true.
The fix
For any stress scenario you will present to the board, manually recalculate the three most material line items and compare your result to the model output before you use the number.
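A back-of-the-envelope version of that recalculation, using a simple fixed/variable cost split; every figure below is illustrative, not taken from any real model.

```python
# Sanity-check the model's claim that a 10% revenue decline produces a 15%
# impact on operating margin. Illustrative figures only.
revenue = 100.0
variable_costs = 40.0   # assumed to scale one-for-one with revenue
fixed_costs = 45.0      # assumed genuinely fixed under stress

def operating_margin(rev, var_ratio=variable_costs / revenue, fixed=fixed_costs):
    return (rev - rev * var_ratio - fixed) / rev

base = operating_margin(100.0)
stressed = operating_margin(90.0)
print(f"Base margin {base:.1%}, stressed {stressed:.1%}, "
      f"impact {base - stressed:.1%} points")
# If this lands nowhere near the model's figure, find out which cost lines
# the model is actually holding fixed before you cite the ratio.
```

With these illustrative numbers the envelope gives a five-point margin impact; whether that agrees with the model's fifteen depends on what the model holds fixed, which is exactly the question to resolve before the meeting.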
Your forecasting tool extrapolates three years of growth data and produces a smooth curve, which becomes your strategic plan because it looks authoritative. The model cannot see structural changes in your industry, competitive threats, or customer behaviour shifts that have not yet appeared in the historical data. You have turned pattern-matching into strategy.
The fix
For any forecast beyond two quarters, overlay the AI trend with a separate scenario that reflects your own view of market disruption or capability changes in your organisation.
Copilot generates a boilerplate summary of your financial results, and it appears in the board deck because it is finished and sounds competent. Every other CFO at your peer conference is presenting the same template language because they used the same tool. Your board no longer hears your specific insight about what moved the needle in your business.
The fix
Write the first draft of your board narrative yourself without AI assistance, then use Copilot only to tighten the language and check for clarity.
You show the board a forecast with a 90% confidence band, derived from AI statistical methods they do not understand. The board treats the band as a guarantee because it has the appearance of mathematical rigour. When actual results fall outside the band, your credibility falls with them.
The fix
In your board materials, write one sentence explaining whether the confidence interval comes from historical volatility, model sensitivity analysis, or expert judgement about your specific risks.
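If the band does come from historical volatility, the arithmetic behind it is simple enough to state in that one sentence. A sketch of the calculation, with placeholder growth figures and an assumption of normally distributed growth:

```python
import statistics

# Last eight quarters of revenue growth -- placeholder figures.
growth = [0.04, 0.06, 0.03, 0.07, 0.05, 0.02, 0.06, 0.04]

mean = statistics.mean(growth)
sd = statistics.stdev(growth)

# A 90% band from historical volatility alone: mean +/- 1.645 standard
# deviations, assuming normality. By construction it knows nothing about
# risks that never appeared in those eight quarters.
low, high = mean - 1.645 * sd, mean + 1.645 * sd
print(f"Expected growth {mean:.1%}, 90% band {low:.1%} to {high:.1%}")
```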
Your AI tool forecasts revenue at 47.3 million pounds, and you cite that figure in board materials without rounding. The precision is illusory because your actual forecast uncertainty is probably plus or minus 5 million. You have confused decimal output with decimal accuracy.
The fix
Round all financial forecasts to the nearest 0.5 million in board materials and note the range of uncertainty separately.
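If you want to enforce the convention in a reporting script rather than rely on habit, the rounding is one line; a trivial sketch:

```python
def to_half_million(forecast_millions: float) -> float:
    """Round a forecast to the nearest 0.5 million for board materials."""
    return round(forecast_millions * 2) / 2

print(to_half_million(47.3))  # 47.5 -- presented as "47.5m, plus or minus 5m"
```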
A board member asks how you stress-tested an assumption in your AI forecast, and you become defensive because the forecast is already published and changing it looks bad. You answer with the model output rather than your actual thinking. The board concludes you do not understand your own numbers.
The fix
Prepare one scenario table before every board meeting that shows what happens if your three biggest assumptions shift by 20%, so you can answer this question with your own analysis.
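One way to build that table, using a deliberately toy revenue model; the drivers, relationships, and figures below are placeholders for your own.

```python
# One-page sensitivity table: shift each of the three biggest assumptions
# by +/-20% and record the revenue impact. Toy model, placeholder figures (£m).
base = {"churn_rate": 0.05, "avg_deal_size": 0.25, "deals_per_quarter": 180}

def quarterly_revenue(churn_rate, avg_deal_size, deals_per_quarter):
    existing_base = 40.0  # recurring revenue carried into the quarter, £m
    return existing_base * (1 - churn_rate) + avg_deal_size * deals_per_quarter

baseline = quarterly_revenue(**base)
print(f"{'Assumption':<20}{'-20%':>8}{'+20%':>8}   (revenue impact, £m)")
for key in base:
    impacts = []
    for shift in (0.8, 1.2):
        shifted = {**base, key: base[key] * shift}
        impacts.append(quarterly_revenue(**shifted) - baseline)
    print(f"{key:<20}{impacts[0]:>+8.1f}{impacts[1]:>+8.1f}")
```

Ten minutes with a table like this means your answer to the stress-testing question is your own analysis, not the model's.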
Your Tableau dashboard surfaces a correlation between two metrics as an AI-generated insight, and it is interesting enough to mention to the board. You have not checked whether the correlation is real or whether it reflects a data quality issue or a timing coincidence. You are now defending a board statement that your business team cannot explain.
The fix
Before you cite any insight from Tableau AI in a board conversation, ask your business unit head to confirm whether the pattern matches their experience of how the business actually works.
Your organisation uses Bloomberg Terminal AI or another risk tool that assigns a risk score to a forecasting assumption or an M&A target. The score feels objective so it feels safe. You have not questioned whether the model is using current data about your specific business or generic historical benchmarks. The confidence in the number has become a substitute for your own risk judgement.
The fix
For any AI risk rating above a medium threshold that will influence a board decision, document what three data inputs the model weighted most heavily and assess whether those inputs reflect your current situation.
Copilot or a specialised AI tool rapidly produces a preliminary assessment of a target's financial health, growth trajectory, or integration risk. You pass the summary to your investment committee because the analysis is clean and fast. You have not assigned anyone to manually verify the top ten conclusions against the source documents. The speed has become a substitute for thoroughness.
The fix
Assign one team member to manually verify the five most material conclusions from AI due diligence analysis before the investment committee votes.
Your forecasting tool or risk model learned patterns from the past five years, which does not include a period of sustained inflation, a credit shock, or a major customer loss in your industry. The model cannot warn you about risks that have not yet appeared in the training data. You present a risk assessment that looks comprehensive but is blind to the unprecedented.
The fix
After you run your standard AI risk model, spend 30 minutes writing down three scenarios that were not present in your historical data, and estimate their probability yourself.
Anaplan recommends a specific cost reduction path because it optimises for a metric you specified, like EBITDA margin. The recommendation looks objective because a machine produced it. You have not acknowledged that the model is ignoring factors like employee retention, customer satisfaction, or supplier relationship strength because those are not in the spreadsheet.
The fix
Before you present any cost reduction or restructuring recommendation from an AI model, ask your Chief People Officer and Chief Customer Officer to identify the risks the model cannot see.
Your organisation uses AI to flag potential control failures or anomalous transactions in your financial close process. The system is efficient and it catches real problems. You have gradually reduced manual audit steps because the AI coverage feels comprehensive. Then the AI misses a class of transactions it was never trained to flag, and you have a control gap in your annual audit.
The fix
Document the specific control objectives your AI tool is designed to monitor, then map those to your annual audit scope and confirm that manual procedures still cover the gaps.