40 Questions Marketing and Advertising Teams Should Ask Before Trusting AI
AI tools now generate creative, plan media, and segment audiences faster than any human team. The risk is that speed replaces judgement, and what gets measured improves while what makes a brand memorable gets lost. Asking the right questions before you act on an AI recommendation is the difference between a campaign that works and one that merely runs.
These are suggestions. Use the ones that fit your situation.
Creative Output, Copy, and Brand Distinctiveness
1. When Midjourney or Adobe Firefly generates three visual directions, how many of them look like they could belong to a competing brand in the same category?
2. Does the AI-generated copy for this campaign say something about the client's brand that only this brand could say, or could it run unchanged for a competitor?
3. If you removed the logo from the creative output Claude or ChatGPT produced, could a consumer still recognise the brand?
4. What craft decision did the AI make about tone, visual style, or message priority, and is that decision based on what works for this specific brand or on statistical averages across similar brands?
5. Has anyone on your team articulated the strategic reason why this particular audience would care about this message, or has the AI recommendation skipped straight to message variants without strategy?
6. When you prompted the AI tool, did you give it your brand strategy document and competitive context, or only the product brief?
7. Does this creative idea depend on a human insight about your audience that the AI could not have discovered on its own?
8. If the AI generated ten creative variations, are they genuinely different ideas, or surface-level rephrasings of the same one?
9. Can you explain to the client why you chose this direction over the others using brand strategy language, or only using efficiency metrics?
10. What would this campaign look like if you had spent two days on strategy before you opened the AI tool?
Media Buying, Targeting, and Audience Judgement
11. When Google Performance Max recommends an audience segment, what is it optimising for: your stated conversion goal, or its own model of user behaviour that may not match your brand's actual customer?
12. Does the media plan reflect thinking about who your brand can genuinely win with, or has it automated toward the largest addressable audience?
13. If the AI tool recommends bidding more heavily on audiences that are statistically similar to past converters, do those audiences overlap with ones you actively want to avoid?
14. What audience segment is the algorithm excluding, and would you lose brand equity by being absent from that segment?
15. Does the media buying recommendation account for the fact that your brand may need to reach people who have never heard of you, or only people who already look like existing customers?
16. When Performance Max or a programmatic buying AI allocates budget across channels, is it favouring short-term conversion metrics over channels that build brand memory?
17. Has anyone checked whether the AI's audience definition matches what your client actually knows about their customer, or does it contradict what your account team learned in the last strategy meeting?
18. If you removed AI-driven audience targeting and ran the media plan on a human strategist's selection of three core audiences, how different would the results be?
19. Does the AI recommendation explain why this audience matters for this brand, or does it present the numbers without the reasoning?
20. What happens to brand distinctiveness if you let the algorithm reach whoever is cheapest to convert rather than whoever is most valuable for building brand equity?
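Question 13 can be checked with a few lines of arithmetic rather than taken on faith. The sketch below is purely illustrative: the `audience_overlap` helper and the user IDs are invented for the example, not features of any buying platform, though most platforms let you export segment membership in some form.

```python
def audience_overlap(recommended_ids, avoid_ids):
    """Return the fraction of the recommended audience that also
    appears on the avoid list (0.0 = no overlap, 1.0 = identical)."""
    recommended = set(recommended_ids)
    if not recommended:
        return 0.0
    return len(recommended & set(avoid_ids)) / len(recommended)


# Hypothetical example: 2 of the 5 users the algorithm wants to
# bid on are also on the client's avoid list.
overlap = audience_overlap(
    ["u1", "u2", "u3", "u4", "u5"],
    ["u2", "u5", "u9"],
)
print(f"{overlap:.0%} of the recommended audience is on the avoid list")
```

If that fraction is material, the question for the team is not how to tune the bid but why the model thinks those people look like your best customers.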
Measurement, Effectiveness, and What Actually Gets Counted
21. The metric the AI is optimising for (clicks, conversions, efficiency ratio) is what you are measuring. What are you not measuring that your client actually cares about?
22. Has anyone run this campaign against a control group that received no AI optimisation, so you know whether the AI is actually improving results or just shifting spend toward lower-hanging fruit?
23. If this campaign hits its efficiency targets but brand awareness or brand preference declines in tracking studies, will the AI-driven approach still be considered successful?
24. The AI tool reports that creative variation B outperformed variation A. On what metric, over what timeframe, and among which audience subset?
25. Is the campaign being measured on what the brand strategy said success looks like, or on whatever metric the AI platform makes easiest to track?
26. When you compare this AI-optimised campaign with the last campaign a human planner optimised, are you comparing them on the same metrics or on different ones?
27. Has anyone audited the AI's reporting to check whether it attributes results fairly across channels, or is it taking credit for activity that would have happened anyway?
28. If short-term conversion improves but your client's brand is becoming less distinctive than its competitors, does your measurement framework catch that decline before the client does?
29. The AI recommends cutting spend on an audience segment that is expensive to convert. What is the lifetime value of that audience, and is the AI factoring it in?
30. What would you need to measure to prove that this campaign built brand equity, as opposed to simply shifting conversions from expensive channels to cheap ones?
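The control-group test in question 22 reduces to simple arithmetic once the holdout exists: compare the conversion rate of the AI-optimised group against the rate in the unoptimised holdout. The numbers below are invented for illustration, and `incremental_lift` is a hypothetical helper, not a platform metric.

```python
def incremental_lift(test_conversions, test_size,
                     control_conversions, control_size):
    """Relative lift of the optimised group over the holdout.
    0.0 means the AI added nothing; negative means it hurt."""
    test_rate = test_conversions / test_size
    control_rate = control_conversions / control_size
    if control_rate == 0:
        return float("inf")
    return (test_rate - control_rate) / control_rate


# Hypothetical numbers: 240 conversions from 10,000 optimised
# impressions versus 200 from a 10,000-impression holdout.
lift = incremental_lift(240, 10_000, 200, 10_000)
print(f"Incremental lift over holdout: {lift:.0%}")
```

A dashboard that only shows the optimised group's conversion rate cannot answer question 22; only the holdout comparison can.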
Team Judgement, Craft Knowledge, and Decision-Making Power
31. When the account lead makes a recommendation that contradicts what the AI tool suggests, what happens to that recommendation?
32. Does your team still have a strategy document that exists before the AI recommendations, or has the strategy become whatever the AI outputs?
33. Can the person running the AI tools explain in plain language why they prompted the model the way they did, or are they following a standard template?
34. If the AI recommends a direction that feels wrong to your most experienced strategist, do you have permission to reject it, or does the client expect you to follow the algorithm?
35. Who decides what counts as a good outcome: the metrics dashboard, or a human conversation about what the brand needed to achieve?
36. Has anyone on your team spent time recently learning what actually makes a campaign effective in your client's category, or has that knowledge been replaced by prompt engineering?
37. When you onboard a new team member, do you teach them media strategy and creative principles, or do you teach them how to use the tools?
38. If the client asks 'Why did you choose this direction?', can your team give an answer based on brand thinking, or only on what the AI generated?
39. What is one campaign decision from the last three months that was made by a human account person instead of by an algorithm, and what was the result?
40. Does your retainer agreement with the client include time for strategy and judgement, or has it been squeezed down to time for generating and testing variations?
How to use these questions
Before you prompt any AI tool, write down your brand strategy, your audience insight, and what makes this brand different. If the AI output contradicts your strategy, the tool is not the problem. The problem is that you skipped the thinking.
When the AI generates ten options, resist the temptation to test all ten. Pick three based on strategy, then test those. You will learn more and stay in control of what the brand stands for.
Set a rule: no AI-generated creative goes to a client without a human explanation of the strategic thinking behind it. The moment your account lead becomes a prompt validator instead of a strategist, your brand distinctiveness starts to decline.
Audit your metrics quarterly against what your client said success looked like at the beginning. If you have drifted toward measuring only what the AI platform makes easy to track, correct it immediately.
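That quarterly audit can be as simple as a set comparison between the metrics the client agreed to at kickoff and the metrics the dashboard now reports. Everything here is illustrative: the metric names and the `metric_drift` helper are assumptions made for the example.

```python
def metric_drift(agreed_metrics, tracked_metrics):
    """Compare kickoff metrics with what is tracked today.
    Returns (dropped, added): what quietly disappeared from
    reporting, and what the platform introduced in its place."""
    agreed, tracked = set(agreed_metrics), set(tracked_metrics)
    return sorted(agreed - tracked), sorted(tracked - agreed)


# Hypothetical kickoff metrics versus what the platform now reports.
dropped, added = metric_drift(
    ["brand awareness", "brand preference", "conversions"],
    ["conversions", "cost per click", "click-through rate"],
)
print("No longer tracked:", dropped)
print("Added by the platform:", added)
```

If the "no longer tracked" list contains anything the client named as success at kickoff, that is the drift the paragraph above warns about.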
Block time each month for at least one campaign decision or creative direction to be made by human judgement alone, without AI input. The cognitive muscle you keep exercising is the one that will save you when the algorithm gets it wrong.