For Policy Analysts and Public Servants
Policy analysts often treat AI summaries as authoritative condensations of evidence when they are actually lossy translations that strip away the disagreement you need to see. When you rely on Claude or ChatGPT to synthesise regulatory positions or risk factors, you outsource the political judgement that only you can make.
These are observations, not criticism. Recognising the pattern is the first step.
ChatGPT and Claude present conflicting research as balanced bullet points, which makes disagreement disappear. Your briefing then presents settled facts when the policy disagreement is actually the point.
The fix
Read the original sources yourself and use AI only to help you locate which sources disagree on which specific claims, then build your brief around those fractures.
When you ask Perplexity or Claude what experts think about a policy question, you get a centrist summary that reflects what is most quoted online, not what your actual stakeholders believe. This hides the organised minorities whose resistance will derail implementation.
The fix
Ask AI to list the named individuals and organisations that hold each distinct position, then verify those positions yourself through their own publications.
Large language models optimise for comprehensiveness and academic authority, so they favour peer-reviewed studies over parliamentary records, stakeholder testimony, or previous failed attempts that civil servants remember. Your brief then lacks the institutional memory that prevents repeating old mistakes.
The fix
Create a list of non-academic sources you want included (previous implementation reports, stakeholder submissions to consultations, departmental post-mortems) and tell AI explicitly to weight those sources.
When you ask Claude to summarise a regulatory issue, it frames the problem in terms of the evidence it has access to, not in terms of political feasibility or implementability. Your brief then treats the problem as technical when it is actually political.
The fix
After AI produces a summary, rewrite the problem statement yourself based on what you know about which parts are genuinely contested among stakeholders and which parts ministers care about.
When you feed Microsoft Copilot or ChatGPT a batch of consultation submissions and ask for themes, the AI groups responses by linguistic similarity, not by the strategic positions people were actually advancing. A response saying regulation 'should not be too burdensome' gets grouped with a response saying regulation 'must remain proportionate' when one group actually opposes the policy entirely.
The fix
Use AI to index the responses (show me which submission mentioned X) but read the full text of at least the strongest contrary positions yourself.
ChatGPT and Claude assess risk by reference to regulatory frameworks and documented case law. They miss the political risk that a technically compliant decision will face organised opposition from stakeholders with power. Your assessment then ignores the kind of risk that actually derails policy.
The fix
After AI produces its compliance analysis, add a separate section in which you list which groups will object to this decision and what power they have (media reach, legal resources, electoral significance, veto points).
When you ask Claude to estimate how many citizens fall into a regulatory category, it uses published statistics and applies them mechanically. It cannot see the gap between the official category and the lived reality that previous caseworkers knew. Your impact assessment then overstates how accurately you can predict what will actually happen on the ground.
The fix
Take AI's number as a starting point, then ask caseworkers in the operational teams whether that matches what they see on the ground, and adjust based on their tacit knowledge.
Perplexity and Claude rank risks by frequency of mention in available literature. They assign high significance to risks that are well documented in policy papers and low significance to risks that are known through experience but never published. Your assessment then treats documented risks as more serious than the undocumented ones that experience shows matter most.
The fix
Ask AI to separate risks by source type (published evidence, regulatory precedent, stakeholder concern) and then add your own weighting based on which ones your predecessors actually had to manage.
When you ask ChatGPT what the reputational risks of a policy decision are, it draws on media analysis and activist messaging from published sources. It cannot see which stakeholder groups have influence with your specific ministers or which media outlets your ministers read. Your risk assessment then treats all criticism as equally significant.
The fix
Use AI to identify what criticisms are being made, then assess the reputational risk yourself by checking which of those critics have access to the decision-makers you serve.
When you ask Claude for risks in a regulatory change, it produces a list based on similar historical cases in its training data. Risks that are novel, or that happened in your department specifically but were never published, simply do not appear. Your risk register then misses the failure modes that matter most.
The fix
After AI generates the list, meet with the senior civil servants who were in post during the last similar attempt and ask what actually went wrong, then add those items to your register.
When you ask Claude whether a policy will face implementation resistance instead of asking the programme director who has managed three failed attempts at similar reform, you lose the situated knowledge that only comes from having lived through the system. Over time, that knowledge leaves the organisation entirely.
The fix
Use AI to prepare for conversations with experienced staff: ask AI what questions you should ask them, then talk to them.
ChatGPT and Copilot assess implementability by reference to regulatory requirements and best practice guidance. They do not know that your IT systems cannot handle a particular data requirement, or that frontline staff have always found workarounds to a rule because the rule was written by people who did not understand the workflow. Your decision then assumes implementation is feasible when it is not.
The fix
When AI says a regulation is implementable, ask the teams who will actually operate it whether they see obstacles that are not in any official documentation.
When you ask Perplexity to suggest how to engage a particular stakeholder group, it produces generic engagement tactics from its training data. It does not capture the relationship history you have with that group, or what the last engagement director learned about which people in that organisation are actually empowered to make commitments. Your strategy then repeats mistakes.
The fix
Use AI to draft engagement plans, but have the previous engagement lead review them and add the specific relationships and previous commitments that matter.
When you feed a stakeholder submission to Claude and ask it to identify the underlying interests, the AI treats the written position as reliable. It does not know that this organisation always takes a hard public position but will negotiate behind closed doors, or that this other group is being coordinated by a third party not mentioned in their submission. Your understanding of the actual landscape is then superficial.
The fix
Use AI to flag what stakeholders say explicitly, but supplement it with a conversation with the stakeholder engagement officer who knows what each group will actually do.
When you ask ChatGPT what a decision-maker should know about a policy question, you are asking the AI to guess what matters to your specific ministers in your specific context. You miss the information about internal politics, about what the minister has already committed to other colleagues, or about what the secretary of state overheard in a corridor that is driving actual priorities. Your briefing then addresses the wrong question.
The fix
Talk to the private office or the minister's previous policy advisers to understand what they actually care about, then ask AI to help you write a briefing that addresses those concerns.