Cognitive Sovereignty Self-Audit for Risk Managers
This audit measures whether your risk judgement remains independent when you rely on AI systems for modelling, scenario generation, and board reporting. A high score means you retain the capacity to challenge AI outputs and catch risks that do not fit historical patterns.
Build a 'model assumptions register' that documents what each AI system assumes about the business. Review it quarterly and flag assumptions that no longer match reality.
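A minimal sketch of what such a register could look like in code; every field and function name here is illustrative, not prescribed by this audit. The idea is simply that each assumption is a dated record, and the quarterly review surfaces anything stale or known to be broken:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Assumption:
    """One documented assumption an AI system makes about the business."""
    system: str          # which AI system relies on this assumption
    text: str            # the assumption, stated plainly
    recorded: date       # when it was first documented
    last_reviewed: date  # updated at each quarterly review
    holds: bool = True   # set to False when reality no longer matches

def flagged_for_review(register: list[Assumption], today: date,
                       max_age_days: int = 90) -> list[Assumption]:
    """Assumptions that are overdue a quarterly review, or already known broken."""
    return [a for a in register
            if not a.holds or (today - a.last_reviewed).days > max_age_days]
```

Whether this lives in code, a spreadsheet, or a risk system is immaterial; what matters is that each assumption has an owner-visible review date and a validity flag.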
Hold a monthly 'risk radar' meeting where you and experienced colleagues identify emerging threats without using AI. Compare your list to what the AI systems flagged. Investigate mismatches in both directions.
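The comparison step is just a set difference in each direction; a sketch (the function and labels are illustrative):

```python
def radar_mismatches(human_risks: set[str], ai_risks: set[str]) -> dict[str, set[str]]:
    """Compare the team's AI-free risk list with the AI-flagged list.

    Items only the humans raised may be threats the models cannot see;
    items only the AI raised may be blind spots on the team, or noise.
    Both warrant investigation.
    """
    return {
        "human_only": human_risks - ai_risks,
        "ai_only": ai_risks - human_risks,
    }
```

The overlap is reassuring but uninformative; the audit value is entirely in the two difference sets.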
When you reject an AI recommendation, write down your reasoning and file it. Six months later, check whether you were right. Use this feedback to calibrate your scepticism.
Run a 'failure scenario' exercise once per year. Assume all your AI systems fail simultaneously. What risks would you miss? What manual processes would you need?
Never present a risk to your board as a percentage or score without explaining which AI-generated assumptions underpin it and which ones you have independently stress-tested.