For Policy Analysts and Public Servants
Policy Analysis Without Losing Judgement: Using AI Tools Responsibly
AI tools can summarise evidence fast, but they flatten disagreement into consensus and hide the political choices buried in every policy question. When you delegate complexity reduction to Claude or ChatGPT, you lose the sense of what experts actually dispute and why that dispute matters to implementation. Your job is to keep your own judgement in the loop and use AI as a research assistant, not a replacement for the thinking that only you can do.
These are suggestions. Your situation will differ. Use what is useful.
Keep the Disagreement Visible in Your Briefs
AI tools trained on broad evidence bases will smooth over genuine expert disagreement and present settled consensus where none exists. When you use ChatGPT or Claude to summarise literature on a regulatory question, the output often reflects the statistical centre of gravity across sources rather than the real fault lines that should inform your brief. You need to read the original sources yourself on contested points, then write those disagreements into your brief explicitly so the decision maker sees what is actually at stake.
- After using Claude for a summary on a contentious topic, manually check three sources the AI cited to see if it suppressed minority positions
- In your brief, write a separate section labelled 'Where experts disagree' rather than relying on AI to signal uncertainty naturally
- Use Perplexity to identify which academic or policy schools hold different views on your question, then read their original work rather than AI paraphrases
Do Your Own Risk Assessment Before Asking AI for One
When you ask ChatGPT or Copilot to assess the risks of a policy proposal, the tool will highlight risks that are measurable, quantifiable and documented in training data. Political risk, implementation friction and the resistance you will face from specific stakeholder groups are often too context-specific and human-dependent for AI to see. You should do your own rapid risk scan first, drawing on your experience and conversations with delivery teams, then use AI to check whether you have missed technical risks or precedent-based concerns.
- Before prompting Claude with 'What are the risks of this policy?', write down three risks you predict based on similar programmes you have handled
- Ask your AI tool specifically about implementation resistance in one sector at a time rather than asking for a general risk assessment
- Flag to the decision maker which risks came from AI analysis and which came from your own knowledge of how the civil service actually works
Protect Institutional Memory by Documenting What the AI Missed
Your experience of what has been tried before, what stakeholders will resist quietly rather than publicly, and which delivery teams have the capacity to move fast is institutional knowledge that your organisation cannot afford to lose. When you use AI tools to speed up research synthesis, you run a real risk that this knowledge becomes invisible to your colleagues because it never appears in the AI output. You need to supplement every AI-assisted brief with an annotation of your own judgement so the next analyst can see how you thought about the problem.
- Add a short section to briefings that says 'From experience, we should watch for' and list the political or implementation realities the AI tool would not surface
- When Copilot or ChatGPT produces a stakeholder analysis, rewrite it to reflect the actual power dynamics in your specific regulatory landscape rather than generic stakeholder theory
- Keep a running note of what each AI tool consistently gets wrong in your policy domain so you know where to invest your own reading time
Use AI to Find Gaps in Your Thinking, Not to Fill Your Thinking
The most useful way to work with Claude or Copilot is to form your own initial judgement on a policy question, then ask the AI tool to stress-test it. This protects you from outsourcing the hard thinking while still getting the benefit of rapid access to comparable examples and research. If you use AI to generate your first-draft analysis, you will tend to accept its framing without noticing how many political dimensions it left out or how it resolved uncertainty in ways that suit the training data rather than your actual policy context.
- Write your own two-paragraph analysis of a problem first, then ask ChatGPT 'What am I not considering here?' rather than asking it to analyse the problem from scratch (a reusable template is sketched after this list)
- Use Perplexity to find three case studies from other jurisdictions, then decide yourself whether they apply to your situation rather than letting the AI draw the comparison
- When you disagree with an AI summary, treat that disagreement as a sign that you need to read the sources yourself to understand why
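If you want to make this habit routine, it can help to keep the scaffold somewhere reusable. The Python sketch below is one hypothetical way to do that; the template wording, function name and field name are illustrative, not a prescribed format.

```python
# Minimal sketch of a reusable stress-test prompt. The template wording
# and names here are illustrative assumptions; adapt them to your workflow.

STRESS_TEST_TEMPLATE = """I am a policy analyst. Here is my own initial analysis:

{own_analysis}

Do not rewrite or extend this analysis. Instead, list the considerations,
stakeholder positions and implementation risks I may have overlooked,
and note where experts would disagree with my framing."""


def build_stress_test_prompt(own_analysis: str) -> str:
    """Wrap your own two-paragraph analysis in a prompt that asks the AI
    to find gaps rather than produce the analysis for you."""
    return STRESS_TEST_TEMPLATE.format(own_analysis=own_analysis.strip())


# Example usage:
# print(build_stress_test_prompt("Paragraph one...\n\nParagraph two..."))
```

The point of the fixed wording is that the AI never sees an open-ended 'analyse this' request; it only ever sees your analysis plus an instruction to critique it.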
Track What Decisions Trace Back to AI Recommendations
Accountability gaps open up when a policy brief becomes a chain of recommendations that originated in an AI tool but now carry the weight of official analysis. If a decision maker acts on a Copilot-generated risk assessment or a Claude summary that you did not independently validate, and that decision produces a failure, the accountability story becomes unclear. You need to be able to trace every significant recommendation back either to original sources you have read yourself or to clearly labelled AI assistance so the accountability chain stays visible.
- In your brief, distinguish between findings that came from your reading of primary sources and findings that came from AI tools by using footnotes that say 'Source identified via ChatGPT search'
- Do not let an AI-generated recommendation become a policy decision unless you have personally assessed whether it fits your political and operational context
- Keep a record of which AI tools you used for each brief so you can explain your methodology if the decision is later questioned (a minimal logging sketch follows this list)
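If you want that record to be systematic rather than ad hoc, an append-only log is enough. The Python sketch below is a minimal, hypothetical example; the file name and the fields are assumptions you should adapt to your own records policy.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical file name and fields; adapt to your own records policy.
LOG_PATH = Path("ai_usage_log.csv")
FIELDS = ["date", "brief_id", "tool", "purpose", "validated_against_sources"]


def log_ai_usage(brief_id: str, tool: str, purpose: str, validated: bool) -> None:
    """Append one record of AI assistance so the methodology behind a
    brief can be reconstructed if a decision is later questioned."""
    is_new = not LOG_PATH.exists()
    with LOG_PATH.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "brief_id": brief_id,
            "tool": tool,
            "purpose": purpose,
            "validated_against_sources": validated,
        })


# Example (hypothetical brief ID):
# log_ai_usage("BRF-042", "Claude", "literature summary", validated=True)
```

One row per use of a tool, written at the moment you use it, is far more reliable than trying to reconstruct your methodology months later.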
Key principles
1. Read primary sources on contested questions rather than accepting AI summaries as consensus.
2. Assess political and implementation risk using your own judgement first, then use AI to check for technical risks you may have missed.
3. Document the institutional knowledge and context that AI tools cannot see so your colleagues can learn from your thinking.
4. Form your own analysis before using AI to stress-test it rather than using AI to generate your first thinking.
5. Make the source of every significant recommendation traceable so accountability does not dissolve into the AI system.
Key reminders
- When Claude or ChatGPT summarises a contentious policy area, manually read the sources on whichever side you are less familiar with to check whether the AI flattened real disagreement
- Ask your AI tool to explain why experts disagree on your question rather than asking it to resolve the disagreement for you
- Use Microsoft Copilot for locating comparable case studies across jurisdictions, but reserve your own judgement on whether those cases actually apply to your political context
- Create a prompt template (as sketched earlier) that starts with your own problem statement and ends with 'What have I overlooked in this analysis?' rather than asking the AI to do the analysis from scratch
- Before any AI-assisted brief goes to a decision maker, write one paragraph in your own voice explaining what the analysis depends on and where you did the work yourself