For CFOs and Finance Leaders

40 Questions CFOs Should Ask Before Trusting AI Forecasts and Board Reports

Your board expects you to own every number you present, yet your FP&A team increasingly relies on AI to generate forecasts and narratives. The gap between what you present and what you actually understand has become a blind spot that matters.

These are suggestions. Use the ones that fit your situation.


Financial Forecasting and Scenario Building

1 When Anaplan generates a revenue forecast with a confidence interval, what historical period did it train on, and does that period match your current market conditions?
2 Can your FP&A team manually rebuild the top three scenarios Copilot suggests without the tool's help, or have they forgotten how to challenge the starting assumptions?
3 If Bloomberg Terminal AI recommends a commodity price assumption for your cost model, do you know whether it weighted recent volatility more heavily than longer-term trends, and is that weighting correct for your hedging horizon?
4 When you stress-test a forecast by changing one variable, does your AI tool automatically adjust related assumptions, and are those adjustments based on your actual business relationships or purely on historical correlations?
5 Has your team ever run a forecast where they manually set all assumptions first, then compared their result to what the AI produced without those constraints?
6 For your working capital forecast, does the AI model account for changes you made to payment terms with your top three customers, or is it purely extrapolating historical behaviour?
7 When ChatGPT writes a scenario narrative for your board pack, have you traced back each claim to the actual numbers in your model, or is the prose creating confidence that the numbers alone do not support?
8 Do you know the exact assumptions behind your three-year EBITDA forecast, or would you need to ask your FP&A manager to explain what the AI started with?
9 If you had to defend one of your AI-generated forecasts to a sceptical board member in five minutes without slides, could you do it?
10 When Tableau AI creates a trend line on your dashboard, what is the underlying model, and have you verified it against a simple regression you could explain to your CFO peer group?
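The verification in question 10 does not require the tool itself. A minimal sketch, using hypothetical monthly revenue figures, of the ordinary least-squares trend line you could rebuild and explain without the dashboard:

```python
# Hypothetical monthly revenue figures ($m) that an AI dashboard draws a trend line over.
revenue = [10.2, 10.8, 10.5, 11.1, 11.6, 11.4, 12.0, 12.3]

n = len(revenue)
xs = list(range(n))
x_mean = sum(xs) / n
y_mean = sum(revenue) / n

# Ordinary least-squares slope and intercept: the "simple regression
# you could explain to your CFO peer group".
slope = (
    sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, revenue))
    / sum((x - x_mean) ** 2 for x in xs)
)
intercept = y_mean - slope * x_mean

print(f"trend: {slope:.3f} per month from a base of {intercept:.2f}")
```

If the dashboard's trend line differs materially from this two-parameter fit, that difference is exactly the model choice you should be able to name before presenting the chart.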

Board Reporting and Narrative Risk

11 Does your AI-generated board narrative use language and structure that resembles every other company's board report, and if so, what is actually distinctive about your position?
12 When Copilot drafts your CEO commentary on financial performance, does it flag areas where the numbers contradict the narrative, or does it simply make the narrative fit the numbers?
13 Have you asked your board members whether your written commentary reads differently now that AI drafts it than it did in the years you wrote it yourself?
14 If your board decides to challenge one of your strategic assumptions at the meeting, will you be able to walk them through your actual thinking, or will you be defending what the AI wrote?
15 Does your AI tool know which risks your board actually cares about, or is it generating generic risk narratives based on industry templates?
16 When your dashboard uses AI to highlight anomalies or exceptions, have you manually verified that the flagged items are actually material to your board's decisions, or are you just reporting what the algorithm surfaced?
17 If you removed all the AI-generated narrative from your board pack and kept only the tables and charts, would the numbers still tell a coherent story, or does the AI narrative do the actual work?
18 Does your CFO letter to the board contain any insights that would surprise someone who only read the financial tables, or is it all post-hoc explanation?
19 Has anyone on your board actually asked you to show your working on a key forecast, and if they did today, could you?
20 When you present a quarterly variance to plan, is the explanation AI-generated or built from conversations with your business unit leaders about what actually happened?
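The manual check in question 16 can be a five-minute exercise. A sketch, with hypothetical flagged items and a hypothetical threshold, of separating board-material variances from algorithm noise:

```python
# Hypothetical variances an AI anomaly detector flagged on the monthly dashboard.
flagged = [
    {"line": "travel_expense", "variance_pct": 0.18, "amount": 40_000},
    {"line": "cloud_costs", "variance_pct": 0.09, "amount": 1_200_000},
    {"line": "recruiting", "variance_pct": 0.35, "amount": 15_000},
]

# Materiality is a board decision, not an algorithm's: this $500k threshold
# is illustrative, not a recommendation.
MATERIAL_AMOUNT = 500_000

material = [f["line"] for f in flagged if f["amount"] >= MATERIAL_AMOUNT]
noise = [f["line"] for f in flagged if f["amount"] < MATERIAL_AMOUNT]

print("Material to the board:", material)
print("Flagged but immaterial:", noise)
```

The point of the exercise is that the algorithm surfaces percentage outliers, while your board cares about absolute impact; reconciling the two is human work.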

Risk Assessment and M&A Due Diligence

21 In your most recent acquisition due diligence, did ChatGPT or another AI tool write the risk summary, and if so, did it flag any risks that the actual deal team had already identified and dismissed?
22 When you run financial stress tests on a target company's model, are you changing the assumptions yourself based on post-acquisition realities, or is the AI extrapolating pre-acquisition historical data?
23 Does your AI risk tool know about material contracts, customer concentration, or regulatory changes that are not yet visible in the historical financial data?
24 If you asked your FP&A team to rank the top five risks to your revenue forecast without using any AI tools, would their list match the risks that Tableau AI or Copilot flagged?
25 When Bloomberg Terminal suggests a credit spread that affects your cost of capital assumption, do you understand what drove that suggestion, or are you simply accepting it because Bloomberg is authoritative?
26 In a due diligence process, has your AI tool ever surfaced a red flag that conflicted with what management told you, and if so, how did you resolve the conflict?
27 Do you have a separate, human-led risk assessment that runs in parallel with your AI-generated risk models, or is your risk view entirely dependent on what the tools produce?
28 When you model the downside case for an acquisition, are you using AI-suggested stress levels, or are you stress-testing based on scenarios you have actually experienced in past downturns?
29 Has your audit committee specifically asked you to explain how you validate AI-generated risk assessments, and do you have a credible answer?
30 If a material risk materialised that your AI tool did not flag, would you be able to explain to your board why you relied on that tool despite the miss?
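The downside case in question 28 can be built from shocks you have actually lived through rather than AI-suggested defaults. A minimal sketch, with all figures hypothetical, of applying experienced downturn stresses to a target's base case:

```python
# Hypothetical base case for an acquisition target ($m; all figures illustrative).
base = {"revenue": 120.0, "gross_margin": 0.42, "opex": 31.0}

# Stress levels drawn from downturns you actually experienced,
# e.g. revenue -15%, gross margin -3 points, opex sticky.
stressed = {
    "revenue": base["revenue"] * 0.85,
    "gross_margin": base["gross_margin"] - 0.03,
    "opex": base["opex"],  # costs rarely fall as fast as revenue does
}

def ebitda(case):
    """EBITDA = revenue x gross margin - opex (simplified for the sketch)."""
    return case["revenue"] * case["gross_margin"] - case["opex"]

print(f"base EBITDA: {ebitda(base):.1f}, stressed: {ebitda(stressed):.1f}")
```

The value of owning the stress levels is that when a board member asks "why 15%?", the answer is a downturn you can describe, not a tool's default.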

Model Integrity and Assumption Ownership

31 Can you name, without looking at the model, the three most sensitive assumptions in your annual forecast, and do you know whether your team regularly reviews them or leaves them to the AI?
32 When you hand a model to a business unit leader for input, do you ask them to validate the AI-suggested starting assumptions, or do you assume they will correct anything obviously wrong?
33 Does your FP&A team document why they accept or reject AI recommendations for specific line items, or do they only document the final numbers?
34 If you ran your current forecast model with 2024 data instead of 2023 data, would the AI-suggested assumptions change materially, and if so, why should you trust either version?
35 Have you ever caught your FP&A team treating an AI tool's output as the answer rather than as a sanity check on logic they rebuilt from scratch?
36 When Copilot suggests a formula or calculation structure in your spreadsheet, do you verify it against your established methodology before accepting it?
37 Does your team version-control the assumptions in your model, and can you trace whether an assumption came from your business leaders or from AI automation?
38 If you needed to restate your forecast next quarter, would your team be able to explain which assumptions were human judgements and which came from the AI tool?
39 When you hire new FP&A analysts, do you still teach them how to build a forecast from first principles, or do you assume they will learn the tool first?
40 Has your external auditor ever asked you how you validate the assumptions that AI tools generate, and if not, should they?
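The provenance tracking in questions 37 and 38 does not need specialist software. A sketch of a hypothetical assumption register (names and figures invented for illustration) that records whether each input came from a business leader or from a tool:

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One forecast input, tagged with where the number came from."""
    name: str
    value: float
    source: str      # "human" or "ai"
    rationale: str

# Hypothetical register: in a restatement, this is how you separate
# human judgements from AI-suggested values.
register = [
    Assumption("revenue_growth", 0.07, "human",
               "Agreed with sales leadership in the Q3 review"),
    Assumption("churn_rate", 0.04, "ai",
               "Tool-suggested from 2022-2024 history; not yet challenged"),
    Assumption("dso_days", 52, "ai",
               "Extrapolated by tool; ignores renegotiated payment terms"),
]

unreviewed_ai = [a.name for a in register if a.source == "ai"]
print("AI-sourced assumptions needing sign-off:", unreviewed_ai)
```

Whether this lives in a spreadsheet column or a version-controlled file matters less than the discipline: every assumption carries an owner and a rationale before it reaches the board pack.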


