For Policy Analysts and Public Servants

Policy Analysis Without Losing Judgement: Using AI Tools Responsibly

AI tools can summarise evidence fast, but they flatten disagreement into consensus and hide the political choices buried in every policy question. When you delegate complexity reduction to Claude or ChatGPT, you lose the sense of what experts actually dispute and why that dispute matters to implementation. Your job is to keep your own judgement in the loop and use AI as a research assistant, not a replacement for the thinking that only you can do.

These are suggestions. Your situation will differ. Use what is useful.

Keep the Disagreement Visible in Your Briefs

AI tools trained on broad evidence bases will smooth over genuine expert disagreement and present settled consensus where none exists. When you use ChatGPT or Claude to summarise literature on a regulatory question, the output often reflects the statistical centre of gravity across sources rather than the real fault lines that should inform your brief. You need to read the original sources yourself on contested points, then write those disagreements into your brief explicitly so the decision maker sees what is actually at stake.

Do Your Own Risk Assessment Before Asking AI for One

When you ask ChatGPT or Copilot to assess the risks of a policy proposal, the tool will highlight risks that are measurable, quantifiable and documented in training data. Political risk, implementation friction and the resistance you will face from specific stakeholder groups are often too context-specific and human-dependent for AI to see. You should do your own rapid risk scan first, drawing on your experience and conversations with delivery teams, then use AI to check whether you have missed technical risks or precedent-based concerns.

Protect Institutional Memory by Documenting What the AI Missed

Your experience of what has been tried before, what stakeholders will resist quietly rather than publicly, and which delivery teams have the capacity to move fast is institutional knowledge that your organisation cannot afford to lose. When you use AI tools to speed up research synthesis, you run a real risk that this knowledge becomes invisible to your colleagues, because it never appears in the AI output. You need to supplement every AI-assisted brief with an annotation of your own judgement so the next analyst can see how you thought about the problem.

Use AI to Find Gaps in Your Thinking, Not to Fill Your Thinking

The most useful way to work with Claude or Copilot is to form your own initial judgement on a policy question, then ask the AI tool to stress test it. This protects you from outsourcing the hard thinking while still getting the benefit of rapid access to comparable examples and research. If you use AI to generate your first draft analysis, you will tend to accept its framing without noticing how many political dimensions it left out or how it resolved uncertainty in ways that suit the training data rather than your actual policy context.

Track What Decisions Trace Back to AI Recommendations

Accountability gaps open up when a policy brief becomes a chain of recommendations that originated in an AI tool but now carry the weight of official analysis. If a decision maker acts on a Copilot-generated risk assessment or a Claude summary that you did not independently validate, and that decision produces a failure, the accountability story becomes unclear. You need to be able to trace every significant recommendation back either to original sources you have read yourself or to clearly labelled AI assistance, so the accountability chain stays visible.

Key principles

  1. Read primary sources on contested questions rather than accepting AI summaries as consensus.
  2. Assess political and implementation risk using your own judgement first, then use AI to check for technical risks you may have missed.
  3. Document the institutional knowledge and context that AI tools cannot see so your colleagues can learn from your thinking.
  4. Form your own analysis before using AI to stress test it rather than using AI to generate your first thinking.
  5. Make the source of every significant recommendation traceable so accountability does not dissolve into the AI system.
