For Risk Managers

Protecting Your Judgement: Risk Managers Using AI Without Losing Their Edge

Your risk models now run faster because AI generates scenarios and stress tests in hours instead of weeks. The danger is that speed creates confidence, and confidence can mask embedded assumptions that nobody has actually questioned. The AI tools you use (SAS Risk AI, IBM OpenPages, Azure) bring real pattern-matching power and statistical rigour, but they cannot tell you what risks your organisation has never seen before.

These are suggestions. Your situation will differ. Use what is useful.


Stop Treating AI Risk Models as Validated Just Because They Run

When SAS Risk AI or Azure generates a scenario set, it is performing statistical interpolation within the bounds of historical data. Your board will read the output as rigorous because it looks precise. But precision is not the same as validity. Every assumption buried in that model (correlation structures between market factors, tail behaviour of credit spreads, recovery rates in a systemic event) was chosen by someone, and that someone was usually the AI training process, not a human expert who lived through a cycle where that assumption broke. Before you report any model output to the board, you must name the three assumptions that matter most and test them against what you actually know from your career.
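
What that test can look like in practice, as a minimal sketch: a toy two-factor loss simulation in which the correlation and recovery-rate assumptions are deliberately broken and the tails compared. The model and every number in it are illustrative, not drawn from any of the tools named above.

# Sketch: re-run a loss simulation with its embedded assumptions deliberately broken.
# The correlation and recovery-rate values are illustrative placeholders.
import numpy as np

def simulate_losses(corr, recovery_rate, n=100_000, seed=0):
    # Toy two-factor portfolio loss model: correlated market and credit shocks.
    rng = np.random.default_rng(seed)
    cov = np.array([[1.0, corr], [corr, 1.0]])
    shocks = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    market_loss = np.maximum(shocks[:, 0], 0)                        # losses only on adverse moves
    credit_loss = np.maximum(shocks[:, 1], 0) * (1 - recovery_rate)
    return market_loss + credit_loss

base = simulate_losses(corr=0.3, recovery_rate=0.4)
stressed = simulate_losses(corr=0.9, recovery_rate=0.1)              # how the assumptions break in a systemic event

for name, losses in [("base assumptions", base), ("stressed assumptions", stressed)]:
    print(f"{name}: 99% loss = {np.quantile(losses, 0.99):.2f}, 99.9% loss = {np.quantile(losses, 0.999):.2f}")

If the tail moves materially when you break a single assumption, that assumption belongs in the board report by name.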

Your Board Reporting Needs to Show Uncertainty, Not Hide It

AI summary tools smooth out the rough edges of risk reporting. ChatGPT and internal AI summarisers turn a messy scenario analysis into a neat narrative, and your board gets a story instead of a warning. The scenarios where your model breaks down, where correlations fail, where historical data lies to you: these become footnotes or disappear entirely. When you build board reporting from AI-generated summaries, you are asking the AI to choose what uncertainty matters most. Your job is to choose that yourself, and then force it into the report even when it makes the narrative less clean.
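
One way to carry the uncertainty forward rather than summarise it away, sketched with made-up figures: report the range across scenarios and name the scenarios the model could not price with confidence, not just the central estimate. The loss numbers and scenario names below are hypothetical.

# Sketch: a board summary that carries uncertainty forward instead of smoothing it out.
import numpy as np

scenario_losses = np.array([12.0, 15.5, 18.2, 31.0, 55.4])   # modelled loss per scenario, hypothetical, in millions
low_confidence_scenarios = ["clearing-system outage", "correlated sovereign downgrades"]

print(f"Median modelled loss: {np.median(scenario_losses):.1f}m")
print(f"Range across scenarios: {scenario_losses.min():.1f}m to {scenario_losses.max():.1f}m")
print("Not reliably modelled:", ", ".join(low_confidence_scenarios))

The last line is the one an AI summariser tends to drop, and it is the one the board most needs to read.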

Emerging Risks Do Not Pattern-Match to Your Training Data

Your AI models train on historical data. They are brilliant at finding patterns in what already happened. They are blind to risks that have never happened before or that look different this time. If you have never modelled a cyberattack on the core clearing system, your scenario AI will not generate it because there is no historical distribution to learn from. Your intuitive risk radar, developed from years of conversations with traders, technologists, and other risk managers, is actually superior at spotting novel threats. But you are probably ignoring it because the AI models did not flag it, so it feels less rigorous.
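
One rough way to make that blind spot visible, assuming you can express a proposed scenario as the same risk-factor moves the model trains on: measure how far it sits from anything in the historical record, and treat distant scenarios as extrapolation rather than evidence. The data, the threshold, and the scenario below are all illustrative.

# Sketch: flag scenarios that sit far outside the historical data the model learned from.
import numpy as np

rng = np.random.default_rng(1)
history = rng.normal(0, 1, size=(2500, 3))        # stand-in for historical risk-factor moves
novel_scenario = np.array([6.0, -5.0, 7.5])       # a move the organisation has never seen

nearest = np.min(np.linalg.norm(history - novel_scenario, axis=1))                  # distance to the closest historical observation
typical = np.median(np.linalg.norm(history - np.median(history, axis=0), axis=1))   # typical spread of the history itself

if nearest > 3 * typical:
    print("Scenario sits far outside historical support: model output here is extrapolation, not evidence.")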

Stress Test Your AI Tools Against Real Past Failures

Take a major risk event that actually happened in your organisation or industry in the last ten years. Rewind your AI models to before that event and ask: would they have caught it? If they would not have, then you know where your models fail. This is not about blaming the AI. This is about knowing which risks sit in the blind spot. When you run this test on SAS Risk AI or Azure AI scenarios, you will often find that the model was technically correct but the event happened in a way the model had not been trained to expect. That is exactly the kind of failure mode your board needs to know about.
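
A skeleton of that replay test, assuming you can restrict the data feed to what was known before the event and re-run scenario generation as of that date. The generate_scenarios function, the event date, and the realised loss are all placeholders for your own stack and your own history.

# Sketch: with only pre-event data, would the scenario set have contained the realised loss?
from datetime import date
import numpy as np

def generate_scenarios(as_of: date, n: int = 1000) -> np.ndarray:
    # Placeholder for your scenario engine, restricted to data available at as_of.
    rng = np.random.default_rng(0)
    return rng.lognormal(mean=2.0, sigma=0.5, size=n)   # hypothetical loss distribution, in millions

event_date = date(2020, 3, 1)     # the real event you are replaying
realised_loss = 95.0              # what actually happened, hypothetical, in millions

scenarios = generate_scenarios(as_of=event_date)
percentile = (scenarios < realised_loss).mean() * 100

print(f"Worst generated loss: {scenarios.max():.1f}m, realised loss: {realised_loss:.1f}m")
print(f"Realised loss sits at the {percentile:.1f}th percentile of the pre-event scenario set")
if realised_loss > scenarios.max():
    print("Blind spot: the event sat outside everything the model would have generated.")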

Build a Manual Override Process Before You Need It

You will have a moment where your intuition and the AI model disagree, and the disagreement will matter enough to report it to the board or to stop a trade. If you do not have a process for that moment before it arrives, you will either override the model based on gut feeling (which looks irresponsible) or trust the model over your judgement (which is the whole problem). Your override process should be simple: name the specific concern, explain why the AI model might have missed it, show what you would do differently, and let the board or your risk committee decide. This process makes your override transparent and defensible, and it also keeps you honest about when you are really seeing a risk versus when you are just uncomfortable with what the data says.
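
A minimal structure for recording that override so it stays transparent and defensible; the field names and the example are a suggestion, not a standard your tools impose.

# Sketch: a structured override record, documented before the decision rather than reconstructed after it.
from dataclasses import dataclass
from datetime import date

@dataclass
class ModelOverride:
    raised_by: str
    raised_on: date
    specific_concern: str         # the risk you see, named concretely
    why_model_missed_it: str      # no history, broken assumption, out of scope
    proposed_action: str          # what you would do differently
    committee_decision: str = ""  # filled in by the risk committee, not by you

override = ModelOverride(
    raised_by="Head of Market Risk",
    raised_on=date.today(),
    specific_concern="Clearing-member concentration not reflected in the AI scenario set",
    why_model_missed_it="No clearing-member default in the training window",
    proposed_action="Cap exposure to the two largest clearing members pending review",
)
print(override)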

Key principles

  1. AI risk models are fast and precise but not necessarily valid: your job is to find and test the assumptions that could be wrong.
  2. Board reporting must show uncertainty, not hide it: if your summary is too clean, you have probably removed the risks that matter most.
  3. Emerging risks do not pattern-match to historical data: your intuitive risk radar is a legitimate input, not a bias to be removed.
  4. Test your AI models against real past failures to find where they would have been blind: this is how you know which future events might escape detection.
  5. Build an override process before you need it so that your judgement stays sovereign even when the AI system is working perfectly.
