The Most Common AI Mistakes Politicians and Elected Officials Make

Politicians often use AI to write speeches and policy briefs without inserting their own convictions first, then wonder why voters detect inauthenticity. The real risk is outsourcing the deliberative work that makes you a representative rather than a relay for algorithmic output.

These are observations, not criticism. Recognising the pattern is the first step.

Speech and Communication Mistakes

You feed the AI a topic and a constituent concern, but you have not yet written down what you actually believe about the issue. The AI generates something coherent that sounds like you, but it reflects the training data, not your judgement. Your speech sounds prepared but not authentic.

The fix

Write three to five sentences of your genuine view before you open Claude or ChatGPT. Paste that as context, then ask the AI to develop speech material around your stated conviction.

ChatGPT or Perplexity generates a touching anecdote about your town's founding or a local business mentioned in your brief. You use it because it fits the speech arc. But the detail is wrong. A constituent who knows the real history spots it immediately and questions your credibility on issues that genuinely matter.

The fix

Every local reference that came from AI must be checked against local archives, long-term residents, or the local historical society before it goes into your final speech.

You ask the AI to rewrite a draft for tone and flow before you have settled what you really think about the issue. The polished version presents a position clearly, and you go with it because it reads well. You have outsourced your deliberation to the tool.

The fix

Lock your actual position in writing before you ask any AI tool to refine language or tone.

You use Claude to generate personalised responses to twenty constituent emails. The AI reads their concern and writes a reply that references their issue. It is fast. But you have not actually considered what they said or what your answer should be. Trust erodes when constituents realise they received a template response.

The fix

Write a genuine one- or two-sentence response to each constituent yourself, then ask Claude only to expand that response into full letter form.

Quorum flags a trending concern across constituent emails and suggests a framing for your response. You adopt it because it is data-driven. But you have not actually spoken to constituents about what they care about on that issue. Your response misses what matters locally.

The fix

Before you use Quorum's suggested framing, call three to five constituents who raised that concern and ask them what aspect matters most to them.

Policy and Judgement Mistakes

You ask Perplexity to summarise the case for and against a bill you are voting on. The tool produces a balanced, well-sourced brief. You read it, decide the brief is persuasive, and vote accordingly. You have not actually read the bill or thought through how it affects your district. Your vote reflects the AI's weighting of evidence, not your judgement.

The fix

Read Perplexity's brief only after you have read the bill itself and formed a preliminary view. Use the brief to stress-test your thinking, not to replace it.

You ask Claude to analyse how a proposed environmental regulation would affect local manufacturers. Claude produces a coherent analysis based on general principles. You cite it in your policy memo. The analysis misses a crucial detail about your district's specific supply chain that you should have known.

The fix

After Claude produces its analysis, ask your staff to verify each major claim against interviews with at least two local business operators in that sector.

You ask ChatGPT to outline your position on housing policy. The output is thoughtful but contradicts something you said last year and does not reflect your party's commitments. You publish it anyway because it reads well. You create a record of inconsistency that opponents will use.

The fix

Paste your party platform and your own last three relevant public statements into ChatGPT as context before you ask it to draft a new position paper.

You ask Copilot to research the economic impacts of a proposed tax change. It produces a well-structured brief with citations. You use the brief in a parliamentary debate without verifying the sources. Later, you discover Copilot omitted research that contradicted its framing. You looked unprepared.

The fix

Click through and verify at least three citations from any Copilot policy brief before you use it in a public statement.

A housing crisis is brewing in your electorate. You ask Claude to outline potential policy responses. The output is thoughtful and comprehensive. You use it to shape your public position. When you finally meet constituents, you discover their biggest concern is one the AI ranked as minor. You have been representing an AI's priorities instead of your constituents' priorities.

The fix

Before you ask any AI to analyse or suggest policy on an issue your constituents raise, hold at least one public meeting or conduct at least five one-on-one conversations to understand what they actually need.

Constituent Relations and Trust Mistakes

You use ChatGPT to write variations of a campaign message for a thousand constituents, with the AI inserting their names and localising the message algorithmically. Each email reads as if written for that person. Each one is entirely synthetic. Constituents who recognise the pattern feel manipulated. Trust breaks.

The fix

Hand-craft genuine responses to constituents who write to you directly. Use AI only to handle routine acknowledgements of mass campaigns.

Your AI tools flag trending topics among constituent emails. You respond to those and leave others unanswered. You are no longer deciding what matters to your electorate. The tool is. Constituents whose concerns were not flagged feel unheard.

The fix

Read a sample of unanswered constituent emails yourself each week and decide which deserve a response. Do not rely on the AI's ranking of what is important.

A constituent writes about a problem with a local service. You paste their email into Claude and ask for a response. Claude generates something helpful. You send it. The response does not address the specific detail of their situation because you never told Claude that detail. The constituent feels the response is generic.

The fix

Before you ask Claude to draft a response, write two sentences of your own understanding of that specific constituent's situation, then paste that into the tool.

Quorum's tools identify constituents most likely to agree with you based on sentiment analysis of their emails. You focus your outreach there. You strengthen relationships with people who already agree with you and ignore constituents you need to persuade. Your electorate becomes less representative.

The fix

Use AI tools to find constituents raising concerns you have not yet engaged with, not constituents who already agree with you.

A constituent emails a detailed concern. You run it through Claude for a response. Claude produces something kind and comprehensive. Nowhere in the response does it acknowledge the constituent by name or reference the specific effort they made to reach out. The response feels robotic.

The fix

Before any AI tool touches a constituent response, add a sentence that names the constituent and shows you understood the specific step they took to contact you.
