For Media and Publishing
News organisations are outsourcing editorial judgement to AI because the algorithm optimises for engagement metrics, not for what readers actually need to know. The cost is a slow collapse of the investigative muscle that broke important stories and the public trust that separates journalism from marketing.
These are observations, not criticism. Recognising the pattern is the first step.
Editors run story ideas through ChatGPT or Claude to forecast clicks before assignment, then kill important stories that score low. This reverses the chain of command: the algorithm picks the news instead of journalists.
The fix
Judge story importance first by newsworthiness and public interest, then use AI predictions only to shape how you present stories that already matter.
Reporters use Perplexity or Claude to compile background instead of calling sources and building relationships. Sources teach you what questions to ask and what matters. AI teaches you what it already found.
The fix
Use AI only to identify gaps in your knowledge before you reach out to sources, not as a substitute for the conversations that reveal story leads.
Platforms like AP Automated Insights generate headlines that Google Gemini confirms are optimised, so editors publish them without testing how real readers interpret them. Optimisation is not clarity.
The fix
Read every AI-generated headline aloud to a colleague who has not seen the story and ask what they think it means.
Claude and ChatGPT produce fluent summaries of background facts, so editors use these as pre-article context without checking whether the summaries reflect what actually happened. Fluency hides inaccuracy.
The fix
Compare every AI summary against your original sources and remove any detail that does not appear in those sources.
When Gemini or Claude presents information with no hedging language, editors assume it is checked and publish it. AI tools generate confident-sounding sentences whether or not they are true.
The fix
Treat every factual claim in AI output as unverified until you have confirmed it yourself against a reliable source.
When AP Automated Insights or Claude handles the research and first drafting, junior reporters never learn how to find patterns in documents or build trust with sources. Skills atrophy before they develop.
The fix
Reserve AI for finishing tasks that a reporter has already started, not for beginning the work of investigation.
Most outlets use ChatGPT or Claude for story research and angle development, so they all discover the same angles and miss the same blind spots. Your AI tools are now your competitors' tools.
The fix
Assign one reporter each week to work without AI on one story, so your newsroom develops at least one angle that the algorithm did not suggest.
AI tools that analyse reader behaviour show that your audience engages most with celebrity news and outrage. Chasing those metrics trains your audience to expect entertainment instead of information.
The fix
Track two metrics: what your audience engages with, and what stories your audience says changed how they understood the world.
Editors use AI tools to run final checks on stories, expecting the tool to catch what human reviewers missed. AI catches patterns it has seen before, not new kinds of error.
The fix
Have a human editor who did not write the story read it for accuracy before publication, and use AI only to check for consistency within the story itself.
Some organisations use Claude or ChatGPT to produce story drafts, then publish them under a reporter's byline without noting AI involvement. Readers cannot assess how the story was reported and assume a human made editorial judgements that no human made.
The fix
Disclose which parts of reporting involved AI tools and which parts involved human reporting and source conversations.
When Claude or Gemini hallucinates expert perspectives or historical statements, editors sometimes publish them without checking whether the person actually said the words or the event actually happened. The quote looks real because the AI wrote it in quote marks.
The fix
Never use a quote from any source you have not directly consulted, and check every quote against a recording or contemporary record.
When five outlets use the same AI tools to cover a breaking news event, readers see five articles that say nearly the same thing and assume the story is solid. They are seeing algorithmic consensus, not journalism.
The fix
Assign at least one reporter to do original reporting on events your competitors are covering, even if it delays your story.
ChatGPT and Claude excel at explaining what is already public knowledge. Investigative reporting requires asking uncomfortable questions about what powerful people want to hide. You cannot train AI to do that.
The fix
Reserve investigation for reporters who ask questions whose answers might damage someone powerful, and do not ask AI to shortcut that part of reporting.