For CMOs and Marketing Leaders
Chief marketing officers are using AI to compress the work of insight teams and creative directors into the output of a single tool, which erodes the actual judgement those roles existed to provide. The cost is a brand voice that sounds like every other brand, and a leadership team that can no longer tell the difference between good creative and competent AI.
These are observations, not criticisms. Recognising the pattern is the first step.
When you ask ChatGPT or Perplexity to distil focus group transcripts or survey data into key insights, the AI identifies patterns that are statistically obvious and already visible to any reader. You lose the moments where a consumer says something that contradicts their own behaviour, or where a small comment reveals something your brand has been missing for years.
The fix
Read primary research yourself first, write three sentences about what surprised you, then use AI to find supporting evidence for that surprise.
Claude can produce copy that sounds professional and on-brand within the limits of its training data, but it averages your voice toward the middle of what exists online. You end up with a brand that sounds like a blend of your three biggest competitors, because that is statistically what sits in the centre of your category.
The fix
Write one paragraph in your brand voice yourself, show it to your team, ask them to spot the specific word choices and sentence rhythms that feel distinctly yours, then write a brief for Claude that includes those specific elements.
AI image tools excel at producing aesthetic output that fits genre conventions. If you brief Midjourney with your strategic direction and then launch what it generates, you are launching what thousands of other marketers might launch. Your visual distinctiveness comes from creative choices that feel slightly wrong or unexpected at first.
The fix
Have your creative director produce rough sketches or mood boards first, then use AI tools to execute variations on what the human already chose.
HubSpot AI and similar tools can organise customer behaviour data and flag trends, but they smooth out the friction points and contradictions that often signal where your actual competitive advantage lives. You end up with a strategy that addresses the obvious need that every competitor is already addressing.
The fix
Ask your insight team to flag three data points that seem to contradict each other, then build your campaign strategy around understanding why those contradictions exist.
Personalisation tools in HubSpot and similar platforms can divide your audience into hundreds of micro-segments, but if the underlying logic is flawed, you end up with different versions of a mediocre message rather than one strong message that resonates across segments. You also lose the ability to recognise when a segment actually wants something different from what the data appears to show.
The fix
Before you ask the tool to personalise, have one conversation with customers in each major segment to test whether the AI's segmentation logic matches how those people actually think about your category.
When Claude or ChatGPT consistently produces your first drafts, your team stops practising the thinking that goes into an opening line or an argument structure. After six months of editing AI output rather than creating from blank pages, your senior writer's ability to recognise exceptional copy atrophies. You become a proofreader instead of a creative director.
The fix
Require your team to write the first draft on hard briefs, then use AI to generate alternatives alongside what they produced, and compare the differences.
AI tools produce work that is almost never bad. This means your team loses the ability to distinguish between competent and exceptional because the floor has risen so high. You end up launching the first version of AI output that meets the brief, rather than pushing it to be distinctive.
The fix
Before you review AI output, write down the one specific thing you want this piece of content to do that is different from what competitors are doing, then judge every version against that one criterion.
Midjourney is exceptional at producing polished images that fit your brief, but polish and distinctiveness are not the same thing. A distinctive campaign often looks slightly unfinished or uses colour or composition in a way that breaks convention. AI tools optimise for the visually coherent, not the visually brave.
The fix
Have your art director write down three specific visual rules your campaign will follow (colour restriction, compositional choice, unusual technique) and include those rules explicitly in your Midjourney brief.
Headline writing is where creative judgement matters most for campaign impact, but it is also where AI tools are most persuasive because they can produce ten grammatically perfect options instantly. You stop generating your own options and start picking from AI options, which means you are always picking from within what the algorithm considers safe.
The fix
Write five headlines by hand before you ask ChatGPT for options, then compare your five against the AI's five and notice which ones you wrote that AI would not have produced.
HubSpot AI and Claude make it possible to produce hundreds of variations of blog posts, email series, and social content. You can publish at a volume that would have taken months of human work. This creates a powerful incentive to fill the calendar with content, regardless of whether it fills an actual gap in your audience's thinking.
The fix
Before you use AI to scale a content series, interview three target customers and ask them what they actually wish your brand would publish about, then measure whether your scaled content answers that question.
AI research tools aggregate existing information and can miss the specific details that matter: the exact targeting choices competitors are making, the timing of their campaign pushes, or the subtle positioning shifts happening across their content. You end up building strategy based on what competitors claim they are doing, not what they are actually doing.
The fix
Spend one hour per week actually visiting three competitor websites and reading their latest campaigns before you ask an AI tool what those competitors are likely doing.
When you approve campaigns based on AI summaries of testing data or consumer response, you are missing the outliers and contradictions that often point to bigger opportunities. The summary is designed to clarify, which means it smooths over the messy parts where real insight lives.
The fix
Require your team to show you the raw data or raw responses for any campaign decision that involves spending above a certain threshold.
HubSpot AI and similar tools can project campaign performance based on historical data, but they cannot account for a specific competitor move you know is coming, or a regulatory change, or the loss of a key distribution channel. You set targets that fail to reflect your actual operating environment.
The fix
Take the AI projection, then meet with your sales and operations teams to list three external factors that could shift those numbers, and adjust your target accordingly.
Tools that use AI to score content quality against a rubric can spot obvious flaws and compliance issues, but they cannot judge whether a campaign is distinctively on-brand or whether it will land with the specific audience you care about most. You end up with campaigns that pass the checklist but feel generic.
The fix
Before you publish anything created or significantly shaped by AI, have at least one person who has been with your brand for over two years review it and confirm it sounds like you.