40 Questions Media and Publishing Should Ask Before Trusting AI
When Claude drafts a story outline or ChatGPT suggests your next headline, you need to know what questions separate good editorial judgment from algorithmic convenience. These 40 questions help editors and publishers catch the moments when AI outputs look right but undermine the reporting that built your audience's trust.
These are suggestions. Use the ones that fit your situation.
1. If this AI-generated story structure had been rejected by your editor five years ago, what reporting work would have forced you to discover why?
2. Does your engagement metric for this piece measure whether readers learned something new, or only whether they clicked?
3. What sources does Perplexity cite for its summary of this topic, and did your reporters contact any of those sources directly?
4. When Claude suggests a different angle on a story, can you identify whether it found something your reporting missed or whether it is optimising for a pattern in training data?
5. Who at your organisation has the authority to reject an AI-generated headline because it is literally accurate but editorially misleading?
6. If you published this AI-drafted explainer without a reporter checking the factual chain, how would you know if a single link broke?
7. What story have you killed in the last year because it did not fit your audience metrics, and would an AI system have killed it faster?
8. When your newsroom uses the same AI tools as four competitors in your region, what stops all of you publishing the same story with the same emphasis?
9. Does your editorial team have a named person responsible for deciding when a story is too sensitive or complex to draft with AI assistance?
10. How would you explain to readers why they should trust your judgment on a story when you used an AI tool to research and draft it but not to write your masthead?
Audience Trust and Transparency
11. If you disclose AI use for a story, how much of your audience will stop trusting the reporting because they think AI replaced human judgment rather than assisted it?
12. What is the difference between your audience trusting your brand and your audience not being able to tell which stories had human reporting behind them?
13. When AP Automated Insights generates earnings stories for your finance section, how would a reader know whether this was automated reporting or human analysis?
14. Has your organisation tested whether readers trust a story less when you say an AI tool helped research it, even though a human reported it?
15. If a reader quotes your AI-assisted story in an argument and it later turns out to be slightly wrong, who does your audience blame?
16. What does your style guide say about when not to use AI tools, and who enforces it?
17. When you use Google Gemini to generate audience analysis about what topics drive engagement, how do you know that recommendation reflects what readers actually care about versus what the algorithm thinks they should see?
18. Does your corrections policy change when an error came from an AI tool that a human approved?
19. How would you verify a story if your only sources are other media outlets that used the same AI tool to research it?
20. If competitors' AI tools start generating similar stories faster than your reporters can report, does that pressure change your standards for what counts as verified reporting?
Journalist Development and Reporting Skills
21. Which newsroom skills are you no longer teaching junior reporters because AI now handles those tasks?
22. When a young reporter uses ChatGPT for initial research instead of learning to find and read primary sources, what do they miss about how to spot a weak argument?
23. If your investigative team uses Perplexity to summarise a complex issue instead of spending days understanding it, how would they recognise a missing piece that breaks the story?
24. What reporting instinct develops from rejection, and how does a reporter get that instinct if an AI tool pre-filters what stories look promising?
25. When you hire, can you tell the difference between a reporter who knows how to think and a reporter who knows how to prompt an AI tool?
26. If your training programme used to teach reporters how to work a source over months, but now emphasises how to use Claude efficiently, what have you lost?
27. How many of your current senior reporters developed their judgment by doing the work that AI now automates?
28. When a reporter shows you a story drafted by an AI tool, how do you know whether they understand the subject or whether they just got lucky with the prompt?
29. What does an apprentice journalist learn about judgment from watching an experienced editor reject AI outputs?
30. If a talented reporter leaves because they spend more time managing AI tools than doing journalism, what did you actually gain?
Content Quality and Cognitive Risks at Scale
31. When five news organisations in your market all use the same AI tool to cover the same earnings announcement, are you still offering readers different reporting or just different formatting?
32. What story would your reporters find if they had to spend a week immersed in a topic instead of asking Gemini for a summary?
33. If your audience intelligence tool says engagement is up when you use more AI-drafted content, how do you know that means trust is up?
34. When Claude suggests a narrative frame for a developing story, how do you check whether that frame is the most accurate one or just the most coherent one?
35. Does your newsroom track how many stories you publish based on AI-synthesised information versus original reporting you did?
36. If AI tools tend to produce centrist or consensus outputs by design, what stories about real conflict or complexity are you not seeing?
37. When you use AP Automated Insights for local sports results, how much of your sports section now contains only information that every other outlet also publishes?
38. What happens to news quality in your region if every newsroom optimises for engagement metrics using the same AI tools?
39. If your AI tool hallucinates a fact and your editor misses it because they trusted the tool, how is that different from a reporter making an error, and how do you catch it?
40. When readers cannot tell the difference between human-reported stories and AI-drafted ones, have you measured whether that affects how they act on the information?
How to use these questions
Before publishing any story drafted with AI assistance, ask a senior reporter who did not work on it: would you have reported this angle differently? Their answer tells you whether the AI output was thorough or just adequate.
Track which stories came from AI-assisted research versus original reporting for three months. Compare engagement, trust signals, and corrections needed. Let the data show whether your judgment or the AI's was better.
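If you already log story metadata, the comparison itself is simple. Here is a minimal sketch in Python, assuming a hypothetical stories.csv that records each story's provenance tag, an engagement figure, and a corrections count; the column names and file are illustrative, not a prescribed schema:

```python
# Minimal sketch: compare AI-assisted and original-reporting cohorts.
# Assumes a hypothetical stories.csv with columns: provenance
# ("ai_assisted" or "original"), engagement, corrections.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"stories": 0, "engagement": 0.0, "corrections": 0})

with open("stories.csv", newline="") as f:
    for row in csv.DictReader(f):
        cohort = totals[row["provenance"]]
        cohort["stories"] += 1
        cohort["engagement"] += float(row["engagement"])
        cohort["corrections"] += int(row["corrections"])

for name, t in sorted(totals.items()):
    n = t["stories"]
    print(f"{name}: {n} stories, "
          f"avg engagement {t['engagement'] / n:.1f}, "
          f"{t['corrections'] / n:.2f} corrections per story")
```

Corrections per story is usually the most telling of the three numbers, because it is the one your audience experiences directly.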
Create a specific editorial rule about which story types require human reporting start to finish. Investigative pieces, breaking news with public safety implications, and coverage of your own organisation are obvious ones. Find yours.
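Once you have your list, it helps to encode it somewhere your publishing workflow can check it, so the rule is enforced rather than remembered. A minimal sketch, with hypothetical story-type labels you would replace with your own taxonomy:

```python
# Minimal sketch: encode which story types must be human-reported
# start to finish. The type labels are hypothetical; substitute your own.
HUMAN_ONLY_TYPES = {
    "investigative",
    "breaking_public_safety",
    "coverage_of_own_organisation",
}

def requires_human_reporting(story_type: str) -> bool:
    """True when newsroom policy bars AI-assisted drafting for this type."""
    return story_type in HUMAN_ONLY_TYPES

# Example: a CMS hook could refuse AI-drafted copy for protected types.
if requires_human_reporting("investigative"):
    print("Policy: human reporting required start to finish.")
```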
When an AI tool suggests a story structure, ask your team whether that structure serves the reader's need to understand or serves algorithmic engagement. Make the distinction explicit in your meetings.
Set up a monthly review where editors assess whether junior reporters are developing judgment or developing prompt-writing skills. If it is the latter, change how you use the tools in training.