For Management Consultants
Management consultants are outsourcing their research verification to AI summaries without checking source quality or currency. This creates a gap between the rigour clients pay for and the rigour you actually deliver.
These are observations, not criticism. Recognising the pattern is the first step.
When you ask an AI to summarise a published study, you get a plausible summary that omits limitations, sample sizes, or publication dates that matter for your argument. You then cite this in your deck without knowing if the AI misread the methodology or the findings.
The fix
Always read the abstract and methodology section of any study you cite, even if the AI summary seemed complete.
AI research tools aggregate web information with a lag and cannot access paywalled industry reports, earnings call transcripts, or recent strategy announcements. Your competitive landscape becomes a two-month-old snapshot that you present as current.
The fix
Use these tools only to identify which reports exist and where to find them, then source the actual documents yourself.
ChatGPT excels at producing plausible problem trees and issue frameworks that sound strategic. You then present these to the client as if they reflect the actual situation, when they are really just a starting point that needs validation.
The fix
After generating a problem structure, spend one hour mapping it against the specific data the client gave you, and mark which parts rest on assumption and which on evidence.
You upload a CSV to ChatGPT or Notion AI and ask for trends. The AI finds patterns, but misses data quality issues, outliers you know are real, or the business context that changes how to interpret a correlation. You then defend findings you do not fully understand.
The fix
Before using AI on any dataset, spend thirty minutes understanding its source, date range, and what the numbers actually represent in the client's business.
When you ask Claude to build a revenue forecast model or cost scenario, it produces detailed outputs with decimal places and growth curves that look authoritative. You include these in your deck because the AI presented them with such conviction, even though you did not validate the assumptions.
The fix
Question every assumption in any financial projection an AI produces, and change the numbers that do not match what you know about the client's business.
You use AI to generate initial language for a key recommendation, refine the wording a bit, and put it in the deck. The recommendation is plausible and well-written, but it is generic enough to fit five different clients. The insight that would actually change how the client thinks is missing.
The fix
After getting an AI draft, rewrite the recommendation statement using a specific fact or constraint from this client's situation that the AI did not know.
You ask ChatGPT or Copilot to suggest slide structures for your analysis, and you use them because they look polished. But the slides are arranged by topic, not by the sequence of choices the client actually faces. The story does not move them toward action.
The fix
Sketch out the three to five client decisions you need to influence before you write any slides, then design your slide sequence around those decisions.
When you ask Claude to generate three strategic options or scenarios, it produces three equally polished options with no indication of which are actually viable given the client's constraints. You present all three with equal weight, diluting the one credible recommendation.
The fix
After generating options with AI, rank them by feasibility using what you know about the client's appetite, budget, and capability.
A ChatGPT suggestion about organisational change conflicts with a constraint the client explained in week two. You miss this because the AI generated the recommendation in isolation, and you did not check it against your own notes.
The fix
Before putting any recommendation in a client deck, search your engagement notes for any constraint or stated priority that contradicts it.
You ask Perplexity about a competitor's recent moves and it returns results from six weeks ago, or it conflates two different announcements. You include this in your client briefing as though it is current, and the client makes a decision on outdated or incorrect information.
The fix
Always note the date of the information Perplexity returned, and tell the client if the information is more than four weeks old.
A junior consultant has a question about customer segmentation, asks ChatGPT, gets a segmentation framework, and builds on it. They never develop the ability to think through segmentation from first principles. When they hit a client situation that does not fit the template, they cannot adapt.
The fix
Require junior staff to complete the analytical work manually first, then use AI to check their logic or expand their thinking.
Different consultants use different tools at different stages of the work. One person uses Copilot for all research, another refuses to use AI at all. The standards for what counts as verified are unclear, so some deliverables are thin and others are rigorous.
The fix
Write a one-page guide for your team that specifies which tasks get AI help (literature review, initial brainstorm, draft writing) and which require independent work (primary source validation, client recommendation, data analysis).
You complete one engagement faster because you used AI extensively. The next time you pitch similar work, you lower your fee because the work now seems quicker. But the stakes of the client's decisions, and the risk you carry if the work is wrong, are unchanged. Your fees decline while your risk stays constant.
The fix
Price based on client value and decision risk, not on how fast you can produce output with AI.
A consultant creates a full slide deck with ChatGPT, and you assume they cleaned it up properly. They did not. The deck gets sent with an unsourced statistic, or a recommendation that contradicts a client comment, or a chart that misrepresents the data.
The fix
Review the final deck against the original research and client situation yourself, particularly any slides that were AI-assisted.