Cognitive Sovereignty Self-Audit for Policy Analysts
This audit measures whether your policy judgement remains your own or is being shaped by the AI tools you use for research and briefing production. It identifies where you have ceded thinking to algorithms and where your civil service experience still guides your decisions.
When you see agreement across multiple sources, ask whether the sources are independent or whether they are all citing the same upstream research. AI tools often create the appearance of consensus by treating secondary citations as primary evidence.
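The independence check above can be made mechanical. This is a minimal sketch, not a prescribed tool: it assumes you can list, for each source, the upstream studies it cites (the `sources` mapping and the function name are illustrative, not part of any standard workflow). Sources that all rest on the same upstream study are grouped, exposing consensus that reduces to a single evidence base.

```python
from collections import defaultdict

def shared_upstream(sources):
    """Given {source_name: set of upstream studies it cites}, return the
    studies cited by more than one source, with the sources that rely on
    them. A large cluster here means the apparent agreement across those
    sources may trace back to one piece of primary research."""
    backing = defaultdict(set)
    for source, cited in sources.items():
        for study in cited:
            backing[study].add(source)
    # Keep only studies that underpin multiple sources.
    return {study: names for study, names in backing.items() if len(names) > 1}

# Example: three briefing sources that look like independent agreement.
result = shared_upstream({
    "Think-tank report": {"Smith 2019"},
    "News analysis": {"Smith 2019", "Lee 2021"},
    "AI summary": {"Smith 2019"},
})
# All three rest on Smith 2019 - one evidence base, not three.
```

The value is not in the code but in the discipline it encodes: before counting sources as corroboration, trace each one upstream.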
Keep a written record of predictions you made about policy implementation. Review them annually. This creates accountability for your own judgement and helps you recognise where your intuition leads you astray.
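A written prediction record can be as simple as a spreadsheet, but the annual review benefits from one summary number. The sketch below assumes a log where each resolved prediction carries a stated confidence and a true/false outcome (the field layout and function names are illustrative); it compares average stated confidence with the fraction of predictions that came true, which is the crudest possible calibration check.

```python
def calibration(rows):
    """rows: [date_made, claim, confidence (0-1 as str), review_date, outcome].
    Among resolved predictions (outcome 'true' or 'false'), compare average
    stated confidence with the observed hit rate. A large gap between the
    two is where your intuition is leading you astray."""
    resolved = [r for r in rows if r[4] in ("true", "false")]
    if not resolved:
        return None  # nothing reviewed yet
    avg_confidence = sum(float(r[2]) for r in resolved) / len(resolved)
    hit_rate = sum(r[4] == "true" for r in resolved) / len(resolved)
    return avg_confidence, hit_rate

# Example annual review over two resolved predictions.
log = [
    ["2024-01-10", "Scheme X reaches 80% uptake by Q4", "0.8", "2025-01-10", "true"],
    ["2024-02-03", "Regulator clears backlog this year", "0.6", "2025-02-03", "false"],
]
```

If your average confidence consistently sits well above your hit rate, the record is telling you to hedge future briefings more than feels natural.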
Before you brief a minister, read the briefing on the same topic from three years ago. Notice what has changed in the evidence and what has changed in the political context. This protects you against presenting old analysis in new language.
When a new AI tool is introduced, use it for one week while keeping detailed notes on what it changes about how you think. Do not assume that faster is better. Ask whether speed is replacing thinking.
Protect one hour per month for conversations with colleagues in enforcement, regulation, or implementation. These conversations are the hardest for AI tools to replicate and the most valuable for your judgement.