For Journalists and Reporters

The Most Common AI Mistakes Journalists Make

Journalists are using AI to compress research and speed up story development, but this is creating a new kind of reporting error: stories built on AI summaries instead of primary sources. The risk is not inaccuracy so much as disconnection from what your sources actually meant and from what your instinct tells you matters.

These are observations, not criticism. Recognising the pattern is the first step.


Research and Source Engagement

When you feed a report or interview transcript to an AI and ask for the key points, you skip the friction that actually builds your understanding. That friction is where you notice contradictions, spot what sources avoided saying, and develop the questions that lead to better reporting.

The fix

Read primary sources yourself first, then use AI only to flag what you might have missed, not to replace your own reading.

Perplexity or Google Gemini can quickly show you what has already been written about a person or organisation. But this is not the same as knowing what you do not yet know. Coming prepared with AI summaries makes you sound informed but keeps you from asking the questions that come from genuine curiosity.

The fix

Use AI research to fill gaps in public context, then spend time on your own thinking about what a source would know that nobody has asked them yet.

Your younger colleagues are especially at risk here. The instinct for what makes a story comes from talking to people and doing the groundwork. AI summaries of previous coverage feel like reporting, but they are actually just reading other people's work faster.

The fix

Limit AI summaries to the final 20 percent of your research, after you have done interviews and your own reporting.

Otter is fast and mostly accurate, but it regularly misses context cues, mishears names, and skips the pauses that indicate a source is uncertain or choosing their words carefully. If you do not listen to the original audio yourself, you lose crucial reporting details.

The fix

Always listen to at least 15 minutes of original audio from any Otter transcript before quoting from it.

ChatGPT is good at spotting narrative momentum, but it optimises for engagement and obvious angles, not for what your reporting has taught you actually matters. Using AI to find your lede before you have finished reporting pushes you toward stories that confirm what AI already sees as important.

The fix

Write down your lede yourself once you have done the reporting, then ask AI where its version differs and why, so you can choose deliberately.

News Judgement and Story Development

When you use Claude or ChatGPT to brainstorm story angles, these tools will suggest angles optimised for clicks and shares. But news judgement is not about what will perform best online. It is about what your readers need to know and why this story matters now.

The fix

Ask yourself why this story matters to your audience before consulting AI about angles, then test AI suggestions against your own judgement.

A ChatGPT summary of data can look bulletproof because the tool sounds confident and connects dots clearly. But if you have not independently verified the data itself and understood what limitations or context surrounds it, you are reporting on the AI's reasoning, not the facts.

The fix

Verify any statistic or data point that AI highlights before you use it as a foundation for your story argument.

Gemini or Claude can cross-check facts against their training data, but they cannot tell you whether a source changed their answer between your first and second conversation. They also miss the nuance of what a source now regrets saying or has walked back unofficially.

The fix

When you have a factual dispute with a source, call them back yourself. Do not ask AI to arbitrate.

If you hand your notes to Claude or ChatGPT before your reporting is complete, the draft will have gaps that shape how you finish reporting. You will end up asking sources questions that fill holes created by the AI version, not holes you genuinely found.

The fix

Wait until your reporting is finished and you have written a strong outline before using AI to help shape the draft.

Sources choose their words for reasons. When Claude smooths out a quote or makes a source sound more articulate, you are changing what they said, even if the fact is technically preserved. This is especially dangerous with sources from marginalised communities whose actual voices matter to the story.

The fix

Leave all source quotes and attributions untouched. If a quote needs editing for length or clarity, do it yourself and note the edit.

Investigation and Source Relationships

Perplexity can quickly show you who has commented publicly on a topic. But sources who matter most, especially those with sensitive information, do not appear in search results. If you only contact sources that AI suggests, you will miss the people who actually know what happened.

The fix

Use AI-generated source lists only as a baseline. Spend your time finding sources nobody has quoted yet by asking your existing sources for referrals.

When you have interviewed someone before, asking ChatGPT to remind you what they said feels efficient. But you lose the chance to notice patterns in what they emphasise or avoid. You also cannot hear their tone or remember the conversational context that shapes how you approach them next.

The fix

Go back to your original notes or listen to the recording from a previous interview instead of asking AI to summarise it.

It is faster to ask Claude a follow-up question than to call a source back. This trains you away from the habit of sustained contact with sources. You lose access to the small moments where trust builds and sources feel comfortable telling you things off the record.

The fix

If you need clarification from a source, call them. The conversation itself matters more than getting the answer fast.

Gemini can write a polished source pitch, but if you have not done the thinking yourself about why you are contacting this person and what you are really asking them, the AI version will sound generic. Sources respond to reporters who sound like they have done the work.

The fix

Write one version of your source request yourself first, then use AI only to check the tone, not to create the pitch.
