For Editors and Editorial Directors
Editors using AI for copy work often outsource the thinking that made their publication distinctive in the first place. The tools feel objective and fast, which makes it easy to stop noticing when they erase the very voice readers came for.
These are observations, not criticism. Recognising the pattern is the first step.
Grammarly flags repetition as an error, so editors accept its suggestions to vary sentence structure. But your writers developed their voice through patterns, and Grammarly's changes make them sound like everyone else. The cost is that readers stop recognising your publication's tone.
The fix
When Grammarly flags repetition, ask whether the repetition serves the writer's voice before changing it, and override the suggestion if it does.
Hemingway flags passive voice and long sentences as mistakes. But your best writers use passive voice for a reason, and sometimes a long sentence builds meaning better than three short ones. Accepting every flag turns writing into compliance instead of craft.
The fix
Read each flagged sentence aloud and ask why the writer chose it before deciding whether the suggestion improves the piece.
Adobe Express AI generates headlines optimised for clicks by making them sensational or generic. Your publication's headline style is part of your brand and your trust with readers, and AI-generated headlines often sacrifice accuracy for novelty.
The fix
Use Adobe Express AI only to generate options you can read alongside your own, never as a replacement for your headline writer's judgment.
When a writer submits text that was generated by ChatGPT or heavily AI-assisted, editors assume it is finished because it reads smoothly. But smooth prose can hide weak thinking, and you lose the chance to push the writer toward their own ideas.
The fix
Ask writers to flag AI-assisted sections and read them with extra scepticism about whether the argument actually works.
Claude produces coherent alternatives quickly, so editors sometimes approve them without noticing the tool has changed your publication's conventions for voice, tone, or structure. Small changes across many pieces add up to a different publication.
The fix
Create a one-page style reference for voice and tone, and check any Claude rewrite against it before accepting.
Editors run text through Grammarly, see green checkmarks, and assume the editing is done. But grammar checks catch surface errors, not whether the story is structured well, whether the evidence supports the claims, or whether the piece should exist at all. Your deeper editorial eye atrophies.
The fix
Read the full manuscript once for structure, argument, and fit before running it through any tool.
AI tools measure engagement signals like keyword density, readability score, and time on page. Chasing these metrics means writing for algorithms instead of the people who subscribe to your publication. You stop making editorial choices based on what your readers actually need.
The fix
Set editorial priorities first, then use AI tools only to check whether a finished piece meets those priorities.
ChatGPT and Claude can generate multiple angles for a story, and you can A/B test them. The winning angle is whatever the AI predicted would engage most readers, not what your editors believe is most true or important. Editorial judgment becomes data-driven guessing.
The fix
Generate angles with your team first, then use AI tools to test them if you want, but decide on the angle based on editorial merit.
Some publications feed story ideas through AI tools to predict which ones will perform. But AI has no sense of what your publication stands for or what readers trust you to cover. You outsource the editorial identity that makes you worth reading.
The fix
Your editors should pitch and choose stories. Use AI afterwards to test headlines or structure, never before to decide whether to run the story.
Editors send writers a document with fifty Grammarly corrections highlighted and think the feedback is complete. Writers never learn why a sentence matters or how to develop their voice because they are just fixing flagged items. Your publication loses the next generation of strong writers.
The fix
Write an editorial letter that explains the two or three most important changes and why they matter, separately from line edits.
Instead of telling a writer that a section does not work, an editor runs it through Claude, shows the polished version, and says this is how it should be done. The writer learns nothing about their own thinking and becomes dependent on tools instead of developing judgement.
The fix
When a section needs major work, write comments explaining what is wrong and what you want to see, then let the writer rewrite it.
Editors assume writers will figure out AI on their own. Some writers use it as a crutch for thinking, some avoid it entirely out of anxiety, and none of them develop a clear sense of when AI helps and when it gets in the way. You raise a generation confused about their own craft.
The fix
Create a simple policy that says what AI writers can use for what (research, yes; first drafts of analysis, no) and why, then discuss it in a team meeting.
A writer submits text that reads like ChatGPT and the editor's instinct is to rewrite it rather than send it back. The writer never gets feedback that this is unacceptable and keeps doing it. Your publication's voice becomes increasingly hollow.
The fix
When you recognise AI-generated first-draft prose, send it back to the writer with a note saying it needs to be their own thinking and voice.