For Media and Publishing

40 Questions Media and Publishing Should Ask Before Trusting AI

When Claude drafts a story outline or ChatGPT suggests your next headline, you need to know what questions separate good editorial judgment from algorithmic convenience. These 40 questions help editors and publishers catch the moments when AI outputs look right but undermine the reporting that built your audience's trust.

These are suggestions. Use the ones that fit your situation.


Editorial Independence and News Judgment

1 If this AI-generated story structure had been rejected by your editor five years ago, what reporting work would have forced you to discover why?
2 Does your engagement metric for this piece measure whether readers learned something new, or only whether they clicked?
3 What sources does Perplexity cite for its summary of this topic, and did your reporters contact any of those sources directly?
4 When Claude suggests a different angle on a story, can you identify whether it found something your reporting missed or whether it is optimising for a pattern in training data?
5 Who at your organisation has the authority to reject an AI-generated headline because it is literally accurate but editorially misleading?
6 If you published this AI-drafted explainer without a reporter checking the factual chain, how would you know if a single link broke?
7 What story have you killed in the last year because it did not fit your audience metrics, and would an AI system have killed it faster?
8 When your newsroom uses the same AI tools as four competitors in your region, what stops all of you from publishing the same story with the same emphasis?
9 Does your editorial team have a named person responsible for deciding when a story is too sensitive or complex to draft with AI assistance?
10 How would you explain to readers why they should trust your judgment on a story when you used an AI tool to research and draft it but not to write your masthead?

Audience Trust and Transparency

11 If you disclose AI use for a story, how much of your audience will stop trusting the reporting because they think AI replaced human judgment rather than assisted it?
12 What is the difference between your audience trusting your brand and your audience not being able to tell which stories had human reporting behind them?
13 When AP Automated Insights generates earnings stories for your finance section, how would a reader know whether this was automated reporting or human analysis?
14 Has your organisation tested whether readers trust a story less when you say an AI tool helped research it, even though a human reported it?
15 If a reader quotes your AI-assisted story in an argument and later it turns out to be slightly wrong, who does your audience blame?
16 What does your style guide say about when not to use AI tools, and who enforces it?
17 When you use Google Gemini to generate audience analysis about what topics drive engagement, how do you know that recommendation reflects what readers actually care about versus what the algorithm thinks they should see?
18 Does your corrections policy change when an error came from an AI tool that a human approved?
19 How would you verify a story if your only sources are other media outlets that used the same AI tool to research it?
20 If competitors' AI tools start generating similar stories faster than your reporters can report, does that pressure change your standards for what counts as verified reporting?

Journalist Development and Reporting Skills

21 Which skills from your newsroom are you no longer teaching junior reporters because AI now handles that task?
22 When a young reporter uses ChatGPT for initial research instead of learning to find and read primary sources, what do they miss about how to spot a weak argument?
23 If your investigative team uses Perplexity to summarise a complex issue instead of spending days understanding it, how would they recognise a missing piece that breaks the story?
24 What reporting instinct develops from rejection, and how does a reporter get that instinct if an AI tool pre-filters what stories look promising?
25 When you hire, can you tell the difference between a reporter who knows how to think and a reporter who knows how to prompt an AI tool?
26 If your training programme used to teach reporters how to work a source over months, but now emphasises how to use Claude efficiently, what have you lost?
27 How many of your current senior reporters developed their judgment by doing the work that AI now automates?
28 When a reporter shows you a story drafted by an AI tool, how do you know whether they understand the subject or whether they just got lucky with the prompt?
29 What does an apprentice journalist learn about judgment from watching an experienced editor reject AI outputs?
30 If a talented reporter leaves because they spend more time managing AI tools than doing journalism, what did you actually gain?

Content Quality and Cognitive Risks at Scale

31 When five news organisations in your market all use the same AI tool to cover the same earnings announcement, are you still offering readers different reporting or just different formatting?
32 What story would your reporters find if they had to spend a week reporting on a topic instead of asking Gemini for a summary?
33 If your audience intelligence tool says engagement is up when you use more AI-drafted content, how do you know that means trust is up?
34 When Claude suggests a narrative frame for a developing story, how do you check whether that frame is the most accurate one or just the most coherent one?
35 Does your newsroom track how many stories you publish based on AI-synthesised information versus original reporting you did?
36 If AI tools tend to produce centrist or consensus outputs by design, what stories about real conflict or complexity are you not seeing?
37 When you use AP Automated Insights for local sports results, how much of your sports section now contains only information that every other outlet also publishes?
38 What happens to news quality in your region if every newsroom optimises for engagement metrics using the same AI tools?
39 If your AI tool hallucinates a fact and your editor misses it because they trusted the tool, how is that different from a reporter making an error, and how do you catch it?
40 When readers cannot tell the difference between human-reported stories and AI-drafted ones, have you measured whether that affects how they act on the information?


The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You
