By Steve Raju


Cognitive Sovereignty Checklist for Risk Managers Using AI

Reading time: about 20 minutes. Last reviewed: March 2026.

Risk models built with SAS, Palantir, or ChatGPT can look bulletproof because they are verbose and mathematical. You can lose the ability to spot when those models rest on untested assumptions or when they miss entire categories of emerging risk. Your board depends on you to know what your models do not know.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Stress-test the assumptions inside your AI risk models before they reach the board

List every assumption your AI model made about market regimes, correlation structures, and tail behaviour (beginner)
Do not accept the model output as given. Force the AI tool to show its work. Write down which assumptions came from the training data and which were set by default in your software. Many risk managers skip this step because the output looks authoritative.
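A simple register keeps these assumptions visible. Below is a minimal sketch in Python; the fields and the two example entries are illustrative, not pulled from any specific tool.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    """One model assumption, recorded with enough context to challenge it later."""
    description: str  # what the model assumes
    source: str       # "training data", "software default", or "analyst override"
    tested: bool      # has this assumption been individually stress-tested?

# Illustrative entries; replace with the assumptions your own model actually makes.
register = [
    Assumption("Equity correlations stay below 0.8", "training data", tested=False),
    Assumption("EM volatility is normally distributed", "software default", tested=False),
]

for a in register:
    print(f"{a.description} | source: {a.source} | tested: {a.tested}")
```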
Run a stress test where each major assumption fails individually (intermediate)
If your Azure AI model assumes equity correlations stay below 0.8, test what happens at 0.95. If it assumes emerging market volatility follows a normal distribution, test fat tails. Record which assumption breaks your capital buffers.
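The loop below sketches a one-at-a-time stress test, assuming you can re-run your model as a function of named parameters. The loss function, shock values, and buffer figure are placeholders for your own risk engine and limits.

```python
# One-at-a-time stress test: fail each assumption individually and record
# which one breaches the capital buffer. The loss function is a stand-in
# for a call into your actual risk engine.
def portfolio_loss(equity_corr: float, em_vol: float) -> float:
    return 40.0 * equity_corr + 30.0 * em_vol  # placeholder loss, in millions

baseline = {"equity_corr": 0.80, "em_vol": 0.20}
stressed = {"equity_corr": 0.95, "em_vol": 0.50}  # fat-tail style shocks
capital_buffer = 45.0  # millions; substitute your own figure

for param, shock in stressed.items():
    scenario = dict(baseline, **{param: shock})  # fail one assumption at a time
    loss = portfolio_loss(**scenario)
    print(f"Stress {param} -> {shock}: loss {loss:.1f}m, "
          f"buffer breached: {loss > capital_buffer}")
```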
Ask the AI tool what data it was trained on and whether that data covers the current market regime (beginner)
If your IBM OpenPages scenario model was trained on 2010 to 2022 data, it has never seen the 2024 interest rate environment or recent geopolitical fragmentation. Be explicit about this blindness when reporting to the board.
Identify which assumptions your organisation's risk professionals disagree on (intermediate)
Bring your scenario analysts, traders, and compliance heads into the room. Ask them directly whether the AI model's correlation matrices and volatility forecasts match what they see in markets. Document their objections.
Test whether small changes to assumptions cause the model to produce opposite conclusions (intermediate)
Sensitivity analysis matters most when stakes are high. If tweaking one parameter from 0.5 to 0.6 flips your recommendation from accept risk to reject risk, your model is too fragile to present to the board without heavy caveats.
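A sweep like the one below makes that fragility visible. It is a minimal sketch: the risk score and decision threshold are hypothetical stand-ins for your model's actual metric and risk appetite.

```python
import numpy as np

# Sweep one parameter across a narrow range and check whether the
# accept/reject decision flips. risk_score and threshold are stand-ins.
def risk_score(param: float) -> float:
    return 10.0 * param  # placeholder for the model's risk metric

threshold = 5.5  # accept below, reject at or above

decisions = {p: ("reject" if risk_score(p) >= threshold else "accept")
             for p in np.arange(0.45, 0.66, 0.05).round(2)}
print(decisions)
if len(set(decisions.values())) > 1:
    print("Decision flips inside a narrow parameter range: "
          "too fragile for the board without heavy caveats.")
```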
Require the AI tool to quantify the uncertainty around each parameter it estimates (advanced)
Most AI risk models give you a single number for volatility, correlation, or default probability. Push back. Ask for confidence intervals. If the AI cannot tell you whether a parameter sits at the 5th percentile or the 95th percentile of reasonable values, you do not have a usable model.
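One way to extract an interval rather than a point estimate is a bootstrap, sketched below on synthetic return data. The data, sample sizes, and percentile choices are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
returns = rng.normal(0.0, 0.02, size=250)  # synthetic stand-in for a year of daily returns

# Bootstrap: re-estimate volatility on resampled data to get a 90% interval
# instead of accepting a single point estimate.
boot_vols = [np.std(rng.choice(returns, size=returns.size, replace=True))
             for _ in range(2000)]
low, high = np.percentile(boot_vols, [5, 95])
print(f"Volatility point estimate: {np.std(returns):.4f}")
print(f"90% bootstrap interval:    [{low:.4f}, {high:.4f}]")
```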
Compare your AI model output to the outputs of older, simpler models you trust (beginner)
Run the same scenario through a spreadsheet-based model from three years ago. If results diverge sharply, understand why before you present either one to the board. Reconciliation reveals which assumptions changed and which behaviours are specific to the AI model.
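A reconciliation can be as simple as the sketch below: run each scenario through both models and flag material gaps. Both model functions are hypothetical placeholders for calls into your real systems.

```python
# Run the same scenarios through the AI model and a legacy model,
# then flag any scenario where the two diverge materially.
def ai_model_var(scenario: dict) -> float:
    return scenario["vol"] * 120.0  # stand-in for the AI platform's VaR

def legacy_model_var(scenario: dict) -> float:
    return scenario["vol"] * 100.0  # stand-in for the old spreadsheet model

scenarios = [{"name": "base", "vol": 0.15}, {"name": "crisis", "vol": 0.45}]
for s in scenarios:
    ai, legacy = ai_model_var(s), legacy_model_var(s)
    gap = abs(ai - legacy) / legacy
    flag = "INVESTIGATE" if gap > 0.10 else "ok"
    print(f"{s['name']}: AI {ai:.1f} vs legacy {legacy:.1f} ({gap:.0%} gap) {flag}")
```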

Protect your ability to sense emerging risks that AI systems cannot pattern-match

Create a monthly emerging risk log that exists outside your AI platforms (beginner)
Use a simple spreadsheet or document. Note signals your team picks up from meetings, research, and intuition that your SAS or Palantir models do not yet flag. These are the risks your historical data does not recognise.
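The log does not need to be sophisticated. A sketch of a plain CSV appender follows; the file name, columns, and example entry are all illustrative.

```python
import csv
from datetime import date

# Append one signal to a plain CSV file that lives outside any AI platform.
def log_emerging_risk(path: str, signal: str, source: str, owner: str) -> None:
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow([date.today().isoformat(), signal, source, owner])

log_emerging_risk("emerging_risks.csv",
                  "Counterparty management turnover rising",
                  "quarterly review meeting", "credit risk team")
```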
Ask your risk team to identify one risk per quarter that was not in any AI-generated scenario (intermediate)
This keeps your team's risk radar active. They must write down why they think it matters and what would make it real. Over time, this log becomes your organisation's record of blindspots in AI-driven risk management.
Challenge your board's comfort with AI summaries of risk by asking pointed follow-up questions (intermediate)
If the AI report says operational risk is stable, ask what changed operationally in the last quarter. If market risk appears contained, ask about tail correlation across asset classes. Force the report writer to justify conclusions rather than accept the AI's smooth narrative.
Review which risks your AI tools have never flagged as material in the last three years (intermediate)
This absence is a sign. If credit concentration, geopolitical supply chain shock, or regulatory change never appears in your AI reports, it is not because those risks disappeared. It is because your training data does not teach the AI to see them.
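A set difference over your report history makes the gap explicit. The taxonomy and flagged lists below are illustrative; in practice you would pull them from your risk taxonomy and three years of AI-generated reports.

```python
# Compare the standing risk taxonomy against everything the AI tools
# actually flagged as material in the last three years.
taxonomy = {"credit concentration", "geopolitical supply chain shock",
            "regulatory change", "market risk", "operational risk"}
flagged_by_ai = {"market risk", "operational risk"}  # from report history

for risk in sorted(taxonomy - flagged_by_ai):
    print(f"Never flagged as material: {risk} "
          f"(check whether the training data can see it)")
```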
Conduct quarterly scenario planning sessions without AI assistance (advanced)
Bring together your most experienced risk managers, traders, and business leaders. Walk through plausible tail events by hand. Compare these human-generated scenarios to what your Azure or ChatGPT scenarios produced. Gaps reveal where your team's judgement differs from the AI's learned patterns.
Map which external signals your AI models ignore because they cannot be quantified (advanced)
Regulatory intent, management stability at counterparties, shifts in customer behaviour, and technological disruption often matter to risk but do not appear in historical data. Note these explicitly in board reports. Tell the board what your models cannot see.

Build safeguards so AI failures do not become catastrophic system failures across your organisation

Identify which risk decisions your organisation now makes almost entirely through AI outputs (beginner)
List them. Capital allocation decisions, stress test scenarios, emerging risk alerts, counterparty limits. The more decisions that flow through one AI system, the greater the systemic risk if that system fails or produces correlated errors.
Test what happens to risk reporting when your primary AI platform goes offline (intermediate)
Can you run board reports? Can you respond to a market shock? If the answer is no, you have built a dangerous concentration of risk. You need manual fallback processes that work within hours, not days.
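The fallback can follow a simple try-then-degrade pattern, sketched below. Every function name here is a hypothetical stand-in; the point is that the manual path exists, is rehearsed, and is reached automatically.

```python
# Try the primary AI platform first; fall back to the manual process
# if it is unreachable. All function names are hypothetical stand-ins.
def run_board_report() -> str:
    try:
        return fetch_from_ai_platform(timeout_seconds=30)  # primary path
    except ConnectionError:
        return run_manual_spreadsheet_process()  # rehearsed manual fallback

def fetch_from_ai_platform(timeout_seconds: int) -> str:
    raise ConnectionError("platform offline")  # simulates the outage drill

def run_manual_spreadsheet_process() -> str:
    return "Board report built from the manual fallback pack"

print(run_board_report())  # the drill: confirm reporting survives the outage
```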
Require all AI-generated risk models to have a manual review checkpoint before board distribution (beginner)
One senior risk manager must read the full model documentation and output before it reaches executives. They sign off on the limitations and assumptions. This person is accountable if the model misleads the board.
Document the failure modes of each AI tool your organisation uses for risk work (intermediate)
SAS Risk AI could fail if its data feeds are corrupted. ChatGPT could hallucinate if fed ambiguous scenario descriptions. IBM OpenPages could miss edge cases in policy interpretation. Write these down. Share them with the board as part of your risk appetite statement.
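Even a flat register like the sketch below is enough to start; the entries paraphrase the examples above and should be replaced with your own tools and observed failure modes.

```python
# A plain failure-mode register, one entry per AI tool used for risk work.
failure_modes = {
    "SAS Risk AI": "silent degradation if upstream data feeds are corrupted",
    "ChatGPT": "hallucinated scenarios when fed ambiguous descriptions",
    "IBM OpenPages": "missed edge cases in policy interpretation",
}
for tool, mode in failure_modes.items():
    print(f"{tool}: {mode}")
```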
Create an independent risk challenge function that does not rely on the same AI platforms as your primary risk team (advanced)
Your second line of defence cannot use the same tools as your first line. If both SAS and your model validation process run on the same Azure infrastructure, a failure takes both down. Separate your tools. Use spreadsheets, different vendors, or manual processes for your challenge function.
Run a worst-case scenario where multiple AI systems across your industry fail simultaneously (advanced)
If most large banks use similar Palantir or SAS configurations, correlated failure is possible. Ask your board whether your organisation could operate risk management for one week if all major AI platforms were unavailable. If the answer is unclear, you have a systemic weakness.
