The Most Common AI Mistakes Investment Bankers Make

Investment bankers are outsourcing the thinking that built their judgement to AI, then treating the output as rigorous because it looks complete. The real risk is not bad models but the atrophy of the instinct needed to spot when a model is subtly wrong.

These are observations, not criticism. Recognising the pattern is the first step.

Financial Modelling and Valuation

AI valuation tools produce a range that feels authoritative because it cites comparable companies and precedent transactions. You skip the step of manually checking which variable moves the output most because the tool ranked them for you.

The fix

Before presenting any valuation to a client, rebuild the sensitivity table yourself using only the three variables you believe matter most, then compare it to the AI output to see what the tool weighted differently.
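Rebuilding that table is a few minutes of work. The sketch below shows a two-variable sensitivity grid on a simple perpetuity valuation; the cash flow, discount rates, and growth rates are illustrative placeholders, not guidance for any real deal:

```python
# Minimal two-variable sensitivity grid for a perpetuity-based valuation.
# All inputs are illustrative placeholders.

def perpetuity_value(fcf, wacc, growth):
    """Gordon growth value of next year's cash flow (requires wacc > growth)."""
    return fcf * (1 + growth) / (wacc - growth)

fcf = 100.0                      # next-year free cash flow, illustrative
waccs = [0.08, 0.09, 0.10]       # discount rate scenarios
growths = [0.01, 0.02, 0.03]     # terminal growth scenarios

# Build the grid: rows are WACC scenarios, columns are growth scenarios
grid = {
    (w, g): round(perpetuity_value(fcf, w, g), 1)
    for w in waccs for g in growths
}

for w in waccs:
    row = "  ".join(f"{grid[(w, g)]:8.1f}" for g in growths)
    print(f"WACC {w:.0%}: {row}")
```

Seeing which row or column moves the grid most is exactly the intuition the AI's pre-ranked sensitivity output quietly replaces.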

When ChatGPT or Copilot builds a three-statement model, the revenue growth assumption, CAPEX schedule, or tax rate sits inside a cell you inherit rather than choose. Junior bankers cannot interrogate what they did not build.

The fix

Print or export the completed model and manually rewrite every assumption row as a standalone input cell, even if it duplicates the AI's work, so you own the logic.

AI models often handle working capital as a percentage of revenue or use historical averages, which works until a deal involves a customer concentration change or inventory shock that the model cannot see coming.

The fix

For any deal involving supply chain integration or customer transitions, manually recalculate working capital for years one and two based on actual client conversations, then check whether the AI model's forecast is even plausible.
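One way to run that plausibility check is to rebuild net working capital from operating drivers (days sales, inventory, and payables outstanding) instead of a flat percentage of revenue. The figures below are hypothetical:

```python
# Rebuild net working capital from operating drivers rather than a flat
# percentage of revenue. All figures are illustrative placeholders.

def nwc_from_drivers(revenue, cogs, dso, dio, dpo, days=365):
    receivables = revenue * dso / days   # days sales outstanding
    inventory = cogs * dio / days        # days inventory outstanding
    payables = cogs * dpo / days         # days payables outstanding
    return receivables + inventory - payables

revenue, cogs = 1_000.0, 600.0

# Base year vs. a customer-transition year where collections slow (DSO up)
base = nwc_from_drivers(revenue, cogs, dso=45, dio=60, dpo=30)
stressed = nwc_from_drivers(revenue, cogs, dso=75, dio=60, dpo=30)

flat_pct = 0.12 * revenue  # what a "12% of revenue" rule would show

print(f"driver-based base:     {base:7.1f}")
print(f"driver-based stressed: {stressed:7.1f}")
print(f"flat-percentage rule:  {flat_pct:7.1f}")  # blind to the DSO shift
```

A percentage-of-revenue model produces the same number in both scenarios; the driver-based version makes the cash impact of a customer transition visible.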

These tools select comparables based on industry code and market cap, which misses the fact that the client's deal target operates differently in terms of customer base, geography, or margin profile. You inherit a peer group you would never have chosen.

The fix

After the AI generates the peer list, remove two companies that do not belong and add two that do, then justify each choice in writing before using the multiples in your model.

An EV bridge from Bloomberg or a custom AI tool can show the move from EBITDA to equity value across ten lines, but you are not seeing whether the tax assumption or the net debt forecast aligns with the client's actual capital structure.

The fix

Manually recalculate the last two steps of the bridge (working capital to free cash flow, and free cash flow to equity value) so you can explain to a client why you got to this number, not the tool.
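As a minimal sketch of the final step, enterprise value down to equity value is arithmetic you can redo on one line. The figures below are hypothetical placeholders, not a template for any deal:

```python
# Final step of an EV bridge: enterprise value down to equity value.
# All figures are illustrative placeholders.

def equity_value(enterprise_value, net_debt, minority_interest=0.0,
                 preferred=0.0, investments=0.0):
    """Equity = EV - net debt - minorities - preferred + non-core investments."""
    return (enterprise_value - net_debt - minority_interest
            - preferred + investments)

ev = 1_200.0          # e.g. EBITDA x multiple, from earlier in the bridge
net_debt = 350.0      # gross debt less cash, per the latest balance sheet
minorities = 40.0
investments = 25.0    # non-core stakes added back

eq = equity_value(ev, net_debt, minorities, investments=investments)
print(f"equity value: {eq:.1f}")
```

The point is not the arithmetic itself but checking that each deduction (net debt in particular) matches the client's actual capital structure rather than the tool's default line items.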

Deal Sourcing, Due Diligence, and Judgement

ChatGPT or Copilot can produce a three-page investment summary that covers synergy potential, market tailwinds, and management strength. What it cannot do is tell you whether the deal will actually work given the client's appetite for cultural integration or the customer's likelihood of staying post-close.

The fix

After the AI drafts the thesis, spend two hours interviewing your client about the one thing that could derail this deal, then rewrite the risks section yourself so it reflects a conversation you had, not a pattern the AI learned.

Tools can output a 50-item due diligence list covering regulatory, financial, and commercial domains. For your specific deal, perhaps three items matter and the rest are boilerplate. You do not know which is which because you did not build the list.

The fix

Use the AI checklist as a starting point, then delete anything that does not apply to this industry or this client's stated concerns, and add the two or three things you learned from dealing with this client before.

An AI-assisted deal memo can outline cost synergies, revenue synergies, and integration timeline. It cannot tell you whether the operating partner actually believes those synergies are real, which is the only reason they matter.

The fix

Before finalising any synergy estimate, call the operating partner and ask them to give you three examples of deals where they achieved a synergy like the one you are forecasting, then adjust your model based on what you learn.

Copilot can flag unusual year-on-year variance or one-off items in a profit and loss statement. It cannot tell you that your client views a debt covenant ratio above three times as a deal killer, or that this target just crossed that line.

The fix

Give the AI the client's three underwriting rules before asking it to review the financials, then read the red flag report yourself and verify each one against a conversation you have had with the credit team.

An AI tool can research which buyers might be interested in this target and estimate their capacity to bid. It does not know that the most likely buyer exited this sector two years ago, or that the second-place buyer is facing a regulatory review that will tie up their capital.

The fix

After the AI generates the buyer list, cross-reference it against the last two years of your own deal flow and call the partners you know at the three firms the AI rated highest to reality-check whether they are actually in the market.

Client Presentations and Advisory Differentiation

Tools like Copilot and ChatGPT can produce polished presentation decks with charts, narratives, and logical flow. Your client will ask you one specific question about one number, and if that number came from the AI and you cannot defend it, you have lost the advisory relationship.

The fix

Before any client presentation, choose the three slides where the client will push back hardest, and manually recalculate the underlying numbers using only data or logic you can explain out loud.

A language model can construct a compelling narrative around synergies, market positioning, or strategic fit by pattern-matching to other deals. It cannot know that your client mentioned in passing that their real goal is to reduce capex intensity, not grow market share.

The fix

After the AI writes the deal case, map each point back to a specific client conversation or email where the client signalled what matters to them, then rewrite the opening to lead with their actual priority.

Capital IQ or Kensho can show you the median EV/EBITDA paid in similar deals. Your client's deal involves a customer concentration that previous buyers did not have to manage, so the multiples are structurally not comparable.

The fix

When presenting valuation benchmarks to the client, explicitly say which comparable deals you excluded and why, so the client sees that you did the filtering, not the tool.

When Copilot drafts a ten-page summary, you tend to skim it and push send because the structure is correct and the writing is polished. The AI may have used a precedent that assumes this deal is a roll-up when it is actually a bolt-on, and you miss it.

The fix

Read any AI-drafted client document out loud before sending it, and mark any sentence that uses a term or assumption not rooted in your own deal conversations with this client.

A twenty-tab Excel model built with AI assistance looks rigorous to a client and will look rigorous to your managing director. Neither of them will know that the WACC calculation used an industry-average beta without relevering it to the company's actual capital structure.

The fix

Before presenting any model as your own work, manually recalculate the cost of capital and the terminal value, then write a one-paragraph note explaining why you got to that number rather than the alternative approach.
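That recalculation fits in a few lines. The sketch below relevers a peer beta to the company's own capital structure, builds WACC, and computes a Gordon-growth terminal value; every input is an illustrative placeholder:

```python
# Recalculate WACC with a beta relevered to the company's own capital
# structure, plus a Gordon-growth terminal value. Inputs are illustrative.

def relever_beta(unlevered_beta, debt, equity, tax):
    """Hamada relevering: beta_L = beta_U * (1 + (1 - t) * D/E)."""
    return unlevered_beta * (1 + (1 - tax) * debt / equity)

def wacc(equity, debt, cost_equity, cost_debt, tax):
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax)

def terminal_value(fcf_next, wacc_rate, growth):
    return fcf_next / (wacc_rate - growth)

equity, debt, tax = 700.0, 300.0, 0.25
beta = relever_beta(0.9, debt, equity, tax)   # company leverage, not peer average
cost_equity = 0.03 + beta * 0.055             # CAPM: rf + beta * equity risk premium
w = wacc(equity, debt, cost_equity, cost_debt=0.05, tax=tax)
tv = terminal_value(fcf_next=80.0, wacc_rate=w, growth=0.02)

print(f"relevered beta: {beta:.3f}, WACC: {w:.2%}, TV: {tv:.0f}")
```

If your relevered beta differs materially from the one buried in the AI model, that gap is exactly the one-paragraph note the fix asks for.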
