By Steve Raju

For Policy Analysts and Public Servants

Cognitive Sovereignty Checklist for Policy Analysts

Reading time: about 20 minutes. Last reviewed March 2026.

When you delegate policy synthesis to AI, you inherit its blind spots. ChatGPT and Claude flatten conflicting evidence into consensus. Perplexity and Copilot optimise for the measurable and miss the irreducibly political. Your role is to rebuild the complexity that AI removes so that Ministers can make judgement calls, not follow recommendations.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Protect the disagreement in your evidence base

List the schools of thought before you ask AI to summarise (beginner)
Write down which economists, commentators, or research teams disagree on your policy question before you paste anything into an AI tool. This forces you to know what exists before the algorithm decides what matters.
Extract the specific studies an AI cites and check their conclusions yourself (beginner)
AI tools reference real papers but often misrepresent them. When Copilot or Claude names a study, spend five minutes reading the abstract and methods. You will catch instances where the tool has inverted the finding or ignored the study's own limitations.
Flag where an AI summary erases uncertainty (intermediate)
AI tends to present findings as settled fact. When you see phrases like 'evidence shows' or 'research demonstrates', go back to the original sources and note what the authors actually claim about confidence, sample size, or replicability. Your brief must carry that uncertainty forward.
Identify which stakeholder positions are missing from the AI output (intermediate)
AI tools train on published sources and miss the views of groups without media platforms. For regulatory analysis, this means you lose the concerns of frontline delivery staff, smaller firms, or communities affected by implementation. Ask colleagues in implementation teams what the AI missed.
Compare AI summaries across different tools on the same policy question (intermediate)
Run the same prompt through Claude, ChatGPT, and Perplexity. The differences reveal what each tool's training data emphasises. One may focus on economic impact, another on equity. You now see which version was shaped by what the tool prioritises.
Document which evidence the AI refused to synthesise (advanced)
Some tools decline to summarise contested topics. Note what the AI said it would not touch. That refusal itself is data about what the tool's designers decided was too contested or controversial. You need to cover it yourself.
Reconstruct the causal chain behind any AI risk assessment (advanced)
When Copilot or Claude identifies a policy risk, ask it to show you the step-by-step reasoning. Then test each step against what you know about implementation. AI often chains together plausible-sounding statements that experienced civil servants would recognise as naive about how institutional resistance actually works.

Preserve institutional knowledge and political judgement

Record what you know about this policy area that the AI cannot know (beginner)
Before writing your brief, write down what you know from previous attempts at similar reforms, from relationships with delivery partners, or from past policy failures. This becomes your check against AI recommendations that ignore why something was tried before and abandoned.
Interview a colleague who was in post five years ago (beginner)
If you are analysing a policy area that has a recent history, spend an hour with someone who worked on it before. Ask what the stakeholder map looked like then, what changed, what politicians learned. AI has no access to this institutional memory.
Test AI recommendations against known implementation barriers (intermediate)
When Claude or ChatGPT suggests a policy approach, ask yourself which delivery team will have to make it work. Who will resist it? What resources are missing? AI recommends solutions that look rational on paper but crash against resource constraints, turf wars, or staff scepticism that you know about.
Map stakeholder positions yourself instead of asking AI to do it (intermediate)
Create your own stakeholder map before you consult an AI tool. Mark who benefits, who loses, who has leverage, who will comply reluctantly. This exercise preserves your own political judgement about what is implementable, which AI tools cannot assess.
Identify where the AI answer differs from what Ministers previously decided (advanced)
Cross-reference any AI recommendation against the decision record from the last time this issue came before Cabinet or Parliament. If the AI suggests the opposite, investigate why. The answer may be new evidence or it may be that the tool missed the political constraints that shaped the earlier choice.
Name the civil servant who should sign off on this analysis and ask them to read it (advanced)
Before you file a policy brief, decide who is accountable for it. Then ask that person to read what you have written and challenge any AI-generated passage that feels politically or operationally naive. Their accountability forces you to justify claims to someone who knows the territory.

Maintain accountability and avoid cognitive homogenisation

Record which AI tool you used and what you asked it (beginner)
Keep a log of prompts and tools. This creates a trace. If a decision later falls apart, you can show what you asked, what the AI said, and where you added your own judgment. Without this record, your brief becomes unaccountable.
Write the dissenting view yourself before you delegate synthesis to AI (intermediate)
For any contested policy area, write down the strongest case against the position you are about to brief. Doing this yourself forces you to engage with the strongest opposing argument rather than having AI flatten it into one summary that your Minister then relies on.
Identify which evidence base all the AI tools share (intermediate)
ChatGPT, Claude, and Copilot draw on overlapping training data. This means they often arrive at similar conclusions not because those conclusions are correct but because all three tools have seen the same sources. Check whether other research communities or countries have studied the same question and reached different findings.
Specify what counts as success before you ask AI to assess options (intermediate)
Before you prompt an AI tool to evaluate policy options, define what you are measuring. Is it cost, speed, equity, public trust, or something else? Different weightings produce different recommendations. By choosing your success metric first, you keep control of what the AI optimises toward.
Challenge the AI when its answer matches what you expected to hear (advanced)
If an AI summary perfectly supports the position you already held, that is a warning sign. Push back on it. Ask the tool to argue the opposite position. A brief that merely confirms what you thought is likely too simple to brief a Minister on.
Trace any policy recommendation back to its original source, not the AI's summary of it (advanced)
If Perplexity or Claude recommends a specific policy approach, find the original research or policy document it cites. Read it yourself. AI summaries of academic papers often lose the conditions under which the recommendation applies or the populations it was tested on.

