For Politicians and Elected Officials
AI can write a grammatically perfect speech that your constituents will never believe came from you. Tools like Claude and ChatGPT can generate policy positions so coherent that you forget they were never tested against the real concerns you hear in town halls. The risk is not that these tools are bad at their job. The risk is that they sound so authoritative that they can replace the actual work voters expect from their representatives: forming judgement through argument, community knowledge, and deliberation.
These are suggestions. Your situation will differ. Use what is useful.
When you run a policy proposal through Perplexity or Claude to find supporting evidence, you are outsourcing your research but not your reasoning. The danger comes when you adopt the AI's framing of the issue as your own position without testing it against what you know from your local context, your constituents' actual lived experience, and your own values. Policy positions that emerge from AI analysis alone will lack the conviction that comes from honest disagreement and argument. Use the AI output as a starting point for the deliberative work you should be doing anyway: asking whether this position fits your community's priorities, whether it survives scrutiny from people who disagree with you, and whether you could defend it in a room full of sceptical constituents.
AI can assemble a competent draft from your talking points and policy briefs. What it cannot do is capture the moment that made you run for office, the specific family story that drove your commitment to a particular cause, or the authentic connection you feel to the place you represent. Constituents hear the difference between a speech that begins with a real memory and one that opens with a plausible-sounding anecdote crafted by ChatGPT. The sections of your speech that matter most for trust and authenticity are exactly the sections you need to write yourself. Use AI to help you structure your argument or fill in background detail. Write the parts that only you can write.
Scaling personalised emails to constituents using AI tools creates a genuine problem for electoral trust. When voters discover that their heartfelt message received a response generated by an algorithm, it reveals that the relationship they thought existed with their representative was a computational fiction. This is not just a reputational risk. It damages the social foundation that representative democracy rests on. You can use AI to help you process the volume of constituent mail you receive. You cannot use it to replace your own attention to the concerns that matter most to people in your constituency.
AI can analyse media coverage and suggest talking points with impressive speed. Microsoft Copilot can help you spot patterns in how your positions are being reported. The risk is that you begin to let the AI's assessment of what is newsworthy or what matters override your own political judgement about which issues deserve your attention. You know your constituency better than any algorithm. You understand which fights matter to the people who voted for you and which ones are distractions. Use AI to help you see what the media ecosystem is doing. Use your own judgement to decide what you should do in response.
Political figures who use AI without acknowledging it face credibility collapse when the use becomes public. Voters are increasingly sceptical of communications that feel constructed rather than authentic. Transparency about where you use AI and where you do not is a form of honesty that builds trust rather than eroding it. This does not mean announcing every use of Copilot or Claude. It means being clear with your constituents about the role of technology in your communication and ensuring that the work that requires your personal judgement visibly comes from you.