40 Questions Risk Managers Should Ask Before Trusting AI Risk Models
Your board relies on you to spot the risks that models miss. When AI generates risk scenarios, stress tests, or monitoring alerts, asking the right questions now prevents catastrophic surprises later.
These are suggestions. Use the ones that fit your situation.
Questions About Training Data and Hidden Assumptions
1. What historical period did the AI train on, and does that period include the type of stress you are now facing?
2. Which data sources did the AI exclude or downweight, and why might those exclusions matter for emerging risks?
3. What happens to this model's output if correlation structures shift between asset classes or business units? (One way to test this is sketched after this list.)
4. Can you list every assumption the AI embedded in the scenario, or are some assumptions hidden in the model weights?
5. Has anyone stress-tested the AI model against scenarios where the world behaves in the opposite way to its training data?
6. Which risks did the AI assign low probability to, and do those low probabilities feel intuitively wrong to your senior risk practitioners?
7. If the AI is using synthetic data to fill gaps, how much of this risk model is real history versus generated guesses?
8. What tail risk behaviours did the historical training period miss because they happen once every fifty years?
9. Does the model assume that relationships between variables remain stable, or does it account for regime changes?
10. Who in your organisation owns the decision to accept these assumptions, and have they signed off in writing?
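Question 3 is one you can make concrete with a few lines of code. Here is a minimal sketch, assuming a simple two-asset parametric VaR; the weights, volatilities, correlations, and the 99% confidence level are illustrative, not drawn from any real model.

    import numpy as np

    def portfolio_var(weights, vols, corr, z=2.326):
        """Parametric 99% value-at-risk as a fraction of portfolio value."""
        cov = np.outer(vols, vols) * corr   # covariance from vols and correlations
        return z * np.sqrt(weights @ cov @ weights)

    weights = np.array([0.6, 0.4])
    vols = np.array([0.15, 0.25])           # annualised volatilities
    calm_corr = np.array([[1.0, 0.2],
                          [0.2, 1.0]])      # correlation the model trained on
    stress_corr = np.array([[1.0, 0.9],
                            [0.9, 1.0]])    # correlations converging in a crisis

    print(f"VaR under calm correlations:     {portfolio_var(weights, vols, calm_corr):.1%}")
    print(f"VaR under stressed correlations: {portfolio_var(weights, vols, stress_corr):.1%}")

Even in this toy portfolio, VaR rises by roughly a quarter when the correlations converge. That is exactly the behaviour a model trained only on calm periods will understate.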
Questions About AI-Generated Risk Scenarios
11. Can you recreate one of the AI-generated scenarios by hand, or are the internal mechanics too complex to follow?
12. Does the AI scenario include contagion paths that your organisation would actually experience, or does it model textbook contagion?
13. Which scenarios did the AI reject as implausible, and do those rejected scenarios include your organisation's actual failure modes?
14. If you had run this same AI tool six months ago, would it have generated the same scenarios, or do the outputs drift?
15. Are the AI-generated scenarios clustered around outcomes the model found easy to compute rather than outcomes your board actually fears?
16. Does this scenario analysis account for the fact that your three largest counterparties use the same AI risk tools you do?
17. What would happen to the organisation in a scenario where the AI tool itself fails or produces correlated bad guidance across the industry?
18. Does the scenario include impacts that your business would feel but that do not show up in the variables the AI monitors?
19. How did the AI decide which variables matter most in the scenario, and did anyone check whether those priorities match your regulatory obligations?
20. If you removed the five variables the AI weighted most heavily, how much would the scenario change? (A sketch of this ablation check follows this list.)
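If you have access to the model's inputs, question 20 becomes a straightforward ablation test. Below is a minimal sketch using scikit-learn on synthetic data; the twelve "risk drivers", the model choice, and the coefficients are all hypothetical, and a real test would refit your production model on its actual inputs.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 12))          # 12 hypothetical risk drivers
    y = (X[:, :3] @ np.array([2.0, 1.5, 1.0])
         + 0.5 * X[:, 3:].sum(axis=1)
         + rng.normal(size=2000))

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    full = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
    top5 = np.argsort(full.feature_importances_)[-5:]   # the five heaviest drivers

    # Refit with the top five drivers removed and compare predictive power.
    keep = np.setdiff1d(np.arange(X.shape[1]), top5)
    ablated = GradientBoostingRegressor(random_state=0).fit(X_train[:, keep], y_train)

    print(f"R^2 with all drivers:     {full.score(X_test, y_test):.2f}")
    print(f"R^2 without the top five: {ablated.score(X_test[:, keep], y_test):.2f}")

If the scenario barely changes without its heaviest drivers, the model's stated priorities are not doing the work its vendor claims.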
Questions About Board Risk Reporting
21. If the AI summary says risk is stable, can you point to the raw data that supports that claim, or have you only seen the smoothed AI summary?
22. Does your board report include the uncertainty range around AI-generated risk numbers, or only the point estimates?
23. Have you shown your board the scenarios where the AI model failed worst in backtesting, or only the best-performing ones?
24. Who is responsible if the board acts on an AI risk alert that turns out to be a false signal from the model?
25. Does your board report identify which risks are invisible to the AI because they do not appear in historical patterns?
26. Can you explain to the board in plain language why the AI ranked risk A higher than risk B, or is the reasoning opaque?
27. What did your own risk radar tell you about emerging risks in the past year that the AI did not detect until much later?
28. Does the board know how often the AI changes its risk rankings week to week, and whether that volatility is signal or noise? (A simple stability check is sketched after this list.)
29. Have you disclosed to the board that this risk report is built from AI summaries that were never independently validated by a human risk team?
30. If a major risk event hits the organisation next quarter, would the board remember that you told them the AI said the risk was low?
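For question 28, a simple rank-correlation check can separate signal from churn. Here is a minimal sketch, assuming you keep each week's ranking as an ordered list; the risk categories are hypothetical. A Kendall tau near 1.0 means the rankings are stable; values swinging toward zero suggest the volatility is noise.

    from scipy.stats import kendalltau

    # Each entry is one week's AI risk ranking, highest priority first.
    weekly_rankings = [
        ["credit", "liquidity", "cyber", "fraud", "conduct"],
        ["credit", "cyber", "liquidity", "fraud", "conduct"],
        ["cyber", "credit", "fraud", "liquidity", "conduct"],
    ]

    def rank_vector(ranking, universe):
        """Map a ranking to positional ranks over a fixed risk universe."""
        return [ranking.index(risk) for risk in universe]

    universe = sorted(weekly_rankings[0])
    for prev, curr in zip(weekly_rankings, weekly_rankings[1:]):
        tau, _ = kendalltau(rank_vector(prev, universe), rank_vector(curr, universe))
        print(f"Week-over-week rank agreement (Kendall tau): {tau:.2f}")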
Questions About Your Own Judgement
31. Which risks have you stopped monitoring closely because the AI is now monitoring them for you?
32. If an AI alert contradicts your intuition about a risk, do you investigate the gap, or do you assume the AI is right?
33. Can you name three emerging risks that do not yet pattern-match to historical data, and explain why the AI would miss them?
34. How many risk decisions have you made in the past month based on AI output alone, without input from someone who understands the business?
35. If the AI tool broke tomorrow, which risks would your organisation become blind to overnight?
36. Have you asked your risk team whether they feel their expertise is being replaced by the AI rather than amplified?
37. What signals do you notice in conversations with business leaders that the AI is not picking up from the data?
38. When was the last time you rejected an AI recommendation because your judgement told you it was wrong?
39. Does your organisation have a backup plan for risk management if the AI vendors you rely on change their models or go out of business?
40. What would happen to your risk culture if risk managers became people who monitor AI alerts rather than people who make risk judgements?
How to Use These Questions
Schedule a quarterly stress test where your team manually builds one scenario and compares it to what the AI generated for the same inputs. The gaps reveal where the AI is hiding assumptions.
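If your scenarios live in machine-readable form, that comparison can be partly automated. Here is a minimal sketch, assuming each scenario is a simple mapping of variable names to stressed values; the variables, numbers, and the 25% divergence threshold are all illustrative.

    # Hypothetical scenario outputs: variable -> stressed value.
    manual_scenario = {"gdp_growth": -3.0, "unemployment": 4.5,
                       "credit_spreads": 250, "property_prices": -20.0}
    ai_scenario = {"gdp_growth": -2.5, "unemployment": 2.0, "credit_spreads": 180}

    for var in sorted(set(manual_scenario) | set(ai_scenario)):
        ours = manual_scenario.get(var)
        theirs = ai_scenario.get(var)
        if theirs is None:
            print(f"{var}: manual scenario only -- what assumption made the AI drop it?")
        elif ours is None:
            print(f"{var}: AI scenario only -- can anyone explain why it appears?")
        elif abs(ours - theirs) > 0.25 * max(abs(ours), abs(theirs)):
            print(f"{var}: manual {ours} vs AI {theirs} -- a gap worth investigating")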
Ask your AI risk vendor, whether that is SAS, Palantir, or another provider, to show you the sensitivity analysis for their top three risk drivers. If they cannot explain it clearly, you have a model risk problem.
Create a log of risks your team flagged that the AI did not flag. When you hit three or more gaps in a category, you have found an emerging risk the model is blind to.
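A plain structured log is enough to make this work. Below is a minimal sketch; the categories, dates, and the threshold of three are illustrative and should match however your team already classifies risk.

    from collections import Counter
    from datetime import date

    # Hypothetical gap log: risks the team flagged that the AI missed.
    gap_log = [
        {"date": date(2024, 3, 4),  "category": "supply chain", "risk": "single-source chip supplier"},
        {"date": date(2024, 4, 11), "category": "supply chain", "risk": "port congestion pass-through"},
        {"date": date(2024, 5, 2),  "category": "conduct",      "risk": "sales incentive drift"},
        {"date": date(2024, 6, 19), "category": "supply chain", "risk": "logistics contract concentration"},
    ]

    counts = Counter(entry["category"] for entry in gap_log)
    for category, n in counts.items():
        if n >= 3:
            print(f"'{category}': {n} gaps -- likely an emerging risk the model is blind to")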
Before presenting an AI-generated risk report to the board, ask one trusted senior risk officer to challenge one conclusion. If they cannot find anything wrong, you may be in an echo chamber.
Document the date you accepted each major assumption in your AI risk models. When regulators or auditors ask who decided to rely on this assumption, you need a clear answer.
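Even a lightweight register makes that answer retrievable on demand. Here is a minimal sketch; the fields, the example entry, and the named owner are hypothetical, and the review date is an assumed discipline rather than anything the questions above mandate.

    from dataclasses import dataclass
    from datetime import date

    @dataclass
    class AssumptionRecord:
        assumption: str       # what the model takes for granted
        accepted_by: str      # the named owner who signed off
        accepted_on: date     # when the sign-off happened
        review_due: date      # when it must be revisited

    register = [
        AssumptionRecord(
            assumption="Counterparty default correlations stay below 0.4",
            accepted_by="Head of Credit Risk",
            accepted_on=date(2024, 1, 15),
            review_due=date(2025, 1, 15),
        ),
    ]

    for rec in register:
        if rec.review_due <= date.today():
            print(f"OVERDUE REVIEW: {rec.assumption} (owner: {rec.accepted_by})")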