For Consulting and Professional Services
Consulting firms are using AI to produce client deliverables faster, then charging the same fees while clients expect lower costs. The real damage happens when junior consultants stop developing independent judgement because AI now handles the analytical work that built their thinking.
These are observations, not criticism. Recognising the pattern is the first step.
A partner runs a question through Claude, cleans up the prose, adds the firm logo, and sends it to the client. The client recognises the generic structure and questions what they are paying for. Your competitive advantage was never the ability to format information.
The fix
Use Claude to surface what you already know and disagree with, then build your view from there.
When you ask Copilot, ChatGPT, and Perplexity the same question, they converge on similar answers. You treat convergence as validation and tell the client what the tools agree on. You have just removed the one thing clients hire consultants for: a view that differs from what any intelligent person could find alone.
The fix
When tools agree, actively argue the opposite position in writing before you recommend anything.
Copilot drafts a competitive analysis in two hours instead of three days. You bill the same amount. Either the client discovers the speed through visible tool use, or cheaper alternatives emerge and undercut your price. You have compressed your value into a timeline nobody trusts.
The fix
Either charge less because you delivered faster, or demonstrate that speed freed your senior people to do deeper work the client actually needs.
Your model relied on junior consultants doing research and analysis that took two weeks. AI does it in one day. You keep the same project timeline and bill the same fees. The client gets worse value because they are paying for time that no longer exists. Eventually they notice.
The fix
Reduce project duration and fees proportionally, then invest the freed capacity into developing stronger recommendations for the same client.
You dump research into Notion and run its summarise feature, expecting the output to become your thinking. Notion produces a readable summary that misses the tension between two sources, the one data point that breaks the pattern, or the recommendation that contradicts what the client hoped to hear. Your insight was never just better organisation of what existed.
The fix
Use Notion AI only to reorganise raw material, then write your own analysis of what matters and why.
Previously a junior consultant spent weeks building a financial model, making assumptions visible, understanding where the sensitivity lay. Now they spend hours checking whether ChatGPT's model is correct. They learn to evaluate AI, not to judge what matters. In three years they have no independent analytical instinct.
The fix
Have junior consultants build their own analysis first, then compare it to what AI produced and write down why they differ.
Teaching a junior to interpret a messy dataset takes time and creates frustration. Using Copilot to clean and analyse it is faster. The junior learns the output format but never builds the scepticism that comes from wrestling with bad data. They become dependent on the tool and lose credibility when the tool fails.
The fix
Have junior consultants hand-clean at least one dataset per engagement, even if it is slow, so they develop doubt about automation.
You ask Claude for five strategic options for a client. It produces five that are syntactically different but strategically similar. A junior consultant who learned strategy by generating options under real constraints (budget, capability, politics, timeline) would have produced one option that actually works. Your junior learns to expect abundance instead of trade-off.
The fix
When you ask AI for options, specify the constraints first, then have junior consultants defend why their constraint interpretation was right or wrong.
A partner uses ChatGPT to draft talking points about why the client's favourite idea will not work. The junior consultant delivers them as findings. The client dismisses them as generic concern-raising rather than expert caution. The junior never learns that the real skill is reading the room and timing the hard message. They treat the analysis as separate from the relationship.
The fix
Have junior consultants sit in the room when difficult recommendations land, then debrief what worked and what did not in how it was received.
A partner uses ChatGPT to draft a regulatory analysis. The output goes into a client deliverable. Nobody records that AI generated part of it. If the analysis is later questioned, you cannot explain your reasoning. You have created a compliance and liability gap in the name of speed.
The fix
Require partners to log which AI tool they used, what they asked, and whether they modified the output before it entered a deliverable.
A consultant pastes sensitive client information into ChatGPT to analyse it faster. The data enters OpenAI's training pipeline. A competitor's analyst later sees something close to your client's information in a generated output. You have violated confidentiality through a tool you did not read the terms for.
The fix
Restrict ChatGPT and Perplexity to non-confidential work, and use only paid tiers of Copilot and Claude when client data is involved.
Your senior partner believes Claude is as reliable as Google Scholar. They cite its output in a client presentation without checking sources. The client finds the source and discovers Claude fabricated a statistic. Your firm loses credibility in that relationship and the partner does not understand why the tool was not honest.
The fix
Run a two-hour session with all partners on which AI tools hallucinate consistently, and which categories of output require manual verification.
Your firm bans AI use across the board because one partner used ChatGPT irresponsibly on a client project. You lose the real productivity benefit of Copilot for internal drafting or Notion AI for structuring research. You have confused the tool with the behaviour.
The fix
Create different policies for different tools and use cases. Allow Copilot for internal drafting. Restrict ChatGPT and Perplexity to public information only. Require documentation for Claude outputs in client work.