40 Questions Risk Managers Should Ask Before Trusting AI Risk Models

Your board relies on you to spot the risks that models miss. When AI generates risk scenarios, stress tests, or monitoring alerts, asking the right questions now prevents catastrophic surprises later.

These are suggestions. Use the ones that fit your situation.

Questions About Model Assumptions

1 What historical period did the AI train on, and does that period include the type of stress we are now facing?
2 Which data sources did the AI exclude or downweight, and why might those exclusions matter for emerging risks?
3 What happens to this model's output if correlation structures shift between asset classes or business units?
4 Can you list every assumption the AI embedded in the scenario, or are some assumptions hidden in the model weights?
5 Has anyone stress-tested the AI model against scenarios where its training data behaves opposite to what it predicts?
6 Which risks did the AI assign low probability to, and do those low probabilities feel intuitively wrong to your senior risk practitioners?
7 If the AI is using synthetic data to fill gaps, how much of this risk model is real history versus generated guesses?
8 What tail risk behaviours did the historical training period miss because they happen once every fifty years?
9 Does the model assume that relationships between variables remain stable, or does it account for regime changes?
10 Who in your organisation owns the decision to accept these assumptions, and have they signed off in writing?
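Questions 3 and 9 above have a quick quantitative sanity check: recompute portfolio volatility under a stressed correlation matrix and compare it with the calm-regime number. The sketch below uses a hypothetical three-asset portfolio with illustrative weights and volatilities; the point is the method, not the figures.

```python
import numpy as np

# Hypothetical three-asset portfolio: weights and annualised volatilities
# (all figures illustrative, not real positions).
w = np.array([0.5, 0.3, 0.2])
vols = np.array([0.15, 0.20, 0.30])

def portfolio_vol(corr):
    """Portfolio volatility from a correlation matrix and fixed vols/weights."""
    cov = np.outer(vols, vols) * corr
    return float(np.sqrt(w @ cov @ w))

# Calm regime: the modest positive correlations the model was trained on.
calm = np.array([[1.0, 0.3, 0.2],
                 [0.3, 1.0, 0.4],
                 [0.2, 0.4, 1.0]])

# Stressed regime: correlations converge towards 1, as they have in past crises.
stressed = np.array([[1.00, 0.90, 0.85],
                     [0.90, 1.00, 0.90],
                     [0.85, 0.90, 1.00]])

print(f"calm:     {portfolio_vol(calm):.1%}")
print(f"stressed: {portfolio_vol(stressed):.1%}")
```

If the stressed number is materially higher than the calm one, and the AI model only ever saw the calm regime, you have a concrete talking point for the assumption owner in question 10.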

Questions About AI-Generated Risk Scenarios

11 Can you manually recreate one of the AI-generated scenarios by hand, or are the internal mechanics too complex to follow?
12 Does the AI scenario include contagion paths that your organisation would actually experience, or does it model textbook contagion?
13 Which scenarios did the AI reject as implausible, and do those rejected scenarios include your organisation's actual failure modes?
14 If you ran this same AI tool six months ago, would it have generated the same scenarios, or do the outputs drift?
15 Are the AI-generated scenarios clustered around outcomes the model found easy to compute, rather than outcomes your board actually fears?
16 Does this scenario analysis account for the fact that your three largest counterparties use the same AI risk tools you do?
17 What would happen to the organisation in a scenario where the AI tool itself fails or produces correlated bad guidance across the industry?
18 Does the scenario include impacts that your business would feel but that do not show up in the variables the AI monitors?
19 How did the AI decide which variables matter most in the scenario, and did anyone check whether those priorities match your regulatory obligations?
20 If you removed the five variables the AI weighted most heavily, how much does the scenario change?
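Question 20 can be tested mechanically whenever the model exposes its weights: zero out the most heavily weighted inputs and measure how far the output moves. The sketch below assumes a simple linear scoring model with hypothetical weights and inputs; real AI risk models are rarely this transparent, which is itself part of the answer.

```python
import numpy as np

# Hypothetical linear risk score over eight input variables
# (weights and inputs are invented for illustration).
weights = np.array([0.05, -0.10, 0.60, 0.02, -0.45, 0.08, 0.30, -0.01])
x = np.array([1.2, -0.5, 0.8, 2.0, 1.1, -0.3, 0.4, 0.9])

baseline = weights @ x

# Zero out the five variables with the largest absolute weight.
top5 = np.argsort(np.abs(weights))[-5:]
w_reduced = weights.copy()
w_reduced[top5] = 0.0
reduced = w_reduced @ x

shift = abs(baseline - reduced) / abs(baseline)
print(f"score shift after removing the top-5 weighted variables: {shift:.0%}")
```

A large shift means the scenario stands or falls on a handful of variables, so those are the ones your senior practitioners should scrutinise first.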

Questions About Board Risk Reporting

21 If the AI summary says risk is stable, can you point to the raw data that supports that claim, or have you only seen the smoothed AI summary?
22 Does your board report include the uncertainty range around AI-generated risk numbers, or only the point estimates?
23 Have you shown your board the scenarios where the AI model failed worst in backtesting, or only the best-performing ones?
24 Who is responsible if the board acts on an AI risk alert that turns out to be a false signal from the model?
25 Does your board report identify which risks are invisible to the AI because they do not appear in historical patterns?
26 Can you explain to the board in plain language why the AI ranked risk A higher than risk B, or is the reasoning opaque?
27 What did your risk radar tell you about emerging risks in the past year that the AI did not detect until much later?
28 Does the board know how often the AI changes its risk rankings week to week, and whether that volatility is signal or noise?
29 Have you disclosed to the board that this risk report is built from AI summaries that were never independently validated by a human risk team?
30 If a major risk event hits the organisation next quarter, would the board remember that you told them the AI said the risk was low?
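Question 22 is straightforward to operationalise: instead of reporting a single value-at-risk number, bootstrap the loss history and report an interval alongside the point estimate. The sketch below uses a synthetic P&L series purely for illustration; in practice you would feed in your real daily series.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily P&L history, stand-in for a real series.
pnl = rng.normal(loc=0.0, scale=1.0, size=500)

def var_95(sample):
    """95% value-at-risk: the 5th-percentile loss, reported as a positive number."""
    return -np.percentile(sample, 5)

point = var_95(pnl)

# Bootstrap: resample the history to see how unstable the estimate is.
boot = np.array([var_95(rng.choice(pnl, size=pnl.size, replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"point estimate: {point:.2f}")
print(f"95% interval:   [{lo:.2f}, {hi:.2f}]")
```

If the interval is wide, the single number on the board pack is conveying more precision than the data supports; that width belongs in the report.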

Questions About Your Own Judgement

31 Which risks have you stopped monitoring closely because the AI is now monitoring them for you?
32 If an AI alert contradicts your intuition about a risk, do you investigate the gap or do you assume the AI is right?
33 Can you name three emerging risks that do not yet pattern-match to historical data, and why the AI would miss them?
34 How many risk decisions have you made in the past month based on AI output alone, without input from someone who understands the business?
35 If the AI tool broke tomorrow, which risks would your organisation become blind to overnight?
36 Have you asked your risk team whether they feel their expertise is being replaced by the AI, rather than amplified?
37 What signals do you notice in conversations with business leaders that the AI is not picking up from the data?
38 When was the last time you rejected an AI recommendation because your judgement told you it was wrong?
39 Does your organisation have a backup plan for risk management if the AI vendors we rely on change their models or go out of business?
40 What would happen to your risk culture if risk managers became people who monitor AI alerts rather than people who make risk judgements?
