For Policy Analysts and Public Servants

40 Questions Policy Analysts Should Ask Before Trusting AI Summaries

When you send a policy brief or risk assessment to ministers, your professional judgment is what matters, not the AI tool that helped you prepare it. Asking the right questions of your AI outputs protects both your credibility and the quality of political decision-making.

These are suggestions. Use the ones that fit your situation.


Questions about what the AI actually read

1 When Claude or ChatGPT summarised those consultation responses, when was its training data cut off? Did it see responses filed in the last six months?
2 If you asked Perplexity to identify the key disagreements in recent regulatory guidance, can you verify that it found the actual documents, or did it reconstruct a plausible-sounding debate?
3 Your AI tool flagged three major studies supporting a particular policy position. Can you check whether those studies actually exist or whether the tool generated convincing citations?
4 When the AI recommended focusing on stakeholder X over stakeholder Y, what evidence base did it use? Did it have access to internal government correspondence, or only published sources?
5 If you fed the AI a government policy paper and it produced a summary, has it missed entire sections because they were formatted as tables or appendices?
6 You asked the AI to compare how three different EU countries implemented a regulation. Is it comparing the official texts, or versions it saw quoted in secondary sources?
7 When Microsoft Copilot pulled in evidence about implementation costs, was it referencing government evaluations or news articles written by people without access to actual programme data?
8 The AI suggested that a particular approach has no precedent. Have you checked government archives and previous departmental work that might not be available online?
9 If the AI analysed stakeholder positions from a set of documents you uploaded, did you include all relevant stakeholders or only the ones you remembered to search for?
10 When the AI produced a timeline of policy changes, does it reflect what actually happened in your department or what was announced and reported in the media?

Questions about what the AI left out

11 Your risk assessment now flags three main implementation barriers. What barriers did the AI not mention because they are political rather than logistical?
12 The AI summary of stakeholder opinion treats the responses as data points. Which organisations will oppose this policy based on funding relationships or ideological positions rather than technical arguments?
13 When you asked the AI to identify the strongest objection to a proposed regulation, did it find the real objection that powerful actors are making privately, or the objection that appears most often in published statements?
14 The AI recommended a policy approach as lower-cost than the alternative. Does this account for the cost of change management and staff retraining in your own organisation?
15 Your brief now says there is consensus on a particular point. Have you checked whether silence from certain stakeholders means agreement or simply that they have not yet mobilised?
16 The AI identified best practice from another department. What was tried in your own department before that was abandoned for reasons that might not be documented?
17 When the AI listed options for managing this issue, did it miss the option that is politically impossible even though it would be technically effective?
18 The risk analysis now ranks certain outcomes as low-probability. Have you checked whether those outcomes simply have not happened in the available data because no one has tried this approach before?
19 Your brief presents a regulatory obligation as straightforward to implement. What assumptions about staff capacity and capability is the AI making?
20 The AI summary treats the evidence for a policy approach as settled. Which experienced civil servants in your own organisation have seen similar policies attempted and can tell you what went wrong?

Questions about how the AI reasoned

21 When ChatGPT recommended this approach, did you ask it to explain which studies it weighted most heavily and why? Does that weighting match your department's actual priorities?
22 The AI produced a cost-benefit analysis that favours option A. Have you tested whether flipping the assumptions (different cost estimates, different timescales) would change the recommendation?
23 Your risk assessment lists implementation difficulty as a major concern. Is the AI assessing difficulty objectively or reproducing the difficulty that other departments reported when they faced similar mandates?
24 When the AI compared this policy to similar policies elsewhere, what made it choose those comparisons? Would a different set of comparisons support a different conclusion?
25 The AI suggested that a particular stakeholder concern is unlikely to materialise. Is that based on evidence or on the frequency with which that concern appears in the documents you fed it?
26 Your brief now concludes that option B is more feasible than option A. Can you separate what the AI found in the sources from what it inferred?
27 When Perplexity synthesised different expert positions, did it weight them equally or did it weight them by how often they appeared in the sources it could access?
28 The AI identified a particular risk as critical. Is this because the risk is genuinely large or because it is the type of risk that gets discussed in policy papers and academic journals?
29 Your summary now says that most respondents to the consultation supported this approach. Did the AI count responses equally or weight them by the size of the organisation making them?
30 When the AI drew a conclusion about what will work, was it reasoning from evidence about what has worked before or was it generating a plausible-sounding answer?

Questions about what happens next with your judgment

31 If ministers act on this AI-informed brief and it goes wrong, can you trace the failure back to the AI's reasoning, or will it appear to be a failure of your departmental judgment?
32 Are you using this AI summary because it is the best way to understand the issue or because it is faster than reading the source documents yourself?
33 Which parts of this brief would you defend in front of Parliament if you had to explain the reasoning in detail?
34 If a select committee asked you to produce the evidence base for this policy recommendation, which parts of that evidence base came from the AI tool rather than from your own research?
35 Have you shared this AI-generated analysis with colleagues who have direct experience in this policy area and asked them whether the framing is missing something important?
36 When you hand this brief to your director, are you confident enough in the analysis to defend it if the assumptions turn out to be wrong?
37 If this policy is implemented and creates unexpected resistance from staff or the public, will you be able to identify whether the resistance was unpredictable or whether the AI simply did not flag it?
38 Have you identified which parts of this analysis depend on the AI's reasoning and which parts depend on your own expertise and institutional knowledge?
39 If you were to brief on this policy without using the AI summary, what would you say differently?
40 Is there someone in your organisation who should see the raw source documents before this brief goes forward, even though the AI has already summarised them?
