For Financial Advisers and Wealth Managers
Financial advisers often accept Morningstar AI recommendations or BlackRock Aladdin outputs as starting points rather than testing the logic underneath. When you skip the critical evaluation step, you move from adviser to order-taker, and your fiduciary duty becomes theoretical rather than real.
These are observations, not criticism. Recognising the pattern is the first step.
BlackRock Aladdin optimises for risk-adjusted return, typically the Sharpe ratio, using correlation matrices and the constraints you program into it. You assume the output reflects market truth when it reflects the model's weighting of risk factors you may not have set correctly. The cost is recommending allocations that fit the machine's logic, not your client's actual circumstances.
The fix
Before running a client through Aladdin, document the specific constraints and risk parameters you are inputting, then manually walk through why those constraints match this client's situation and timeline.
ChatGPT generates plausible-sounding rationales for sector exposure or fund selection from training data with a fixed knowledge cutoff, often many months in the past. You quote the thesis to a client without noticing it excludes recent earnings, interest rate moves, or regulatory changes since that cutoff. You then defend a recommendation based on analysis that is already outdated.
The fix
Copy ChatGPT's thesis into your research notes, then spend ten minutes checking whether the underlying data points and market conditions it cites still hold today using Bloomberg or your data provider.
Morningstar AI flags a fund dropping from Gold to Silver based on updated performance and risk metrics. You immediately begin the conversation with a client about repositioning without asking what specific threshold moved the rating, or whether the move reflects real deterioration in the fund manager's approach or just recent underperformance in a single asset class. You act on the signal without understanding its meaning.
The fix
When Morningstar AI changes a rating, pull the detailed methodology report for that fund and identify the exact metric that shifted, then compare that metric to the fund's three-year and five-year history before discussing any change with the client.
Bloomberg AI shows you a snapshot correlation between equities and bonds of -0.4. You assume your portfolio is correctly diversified based on this screen-level figure. Your client actually holds three of the same emerging market bonds and two of the same equity funds, creating hidden concentration that the headline correlation does not capture.
The fix
When Bloomberg AI shows a correlation or risk decomposition, manually map it back to the exact holdings in your client's portfolio to confirm the AI model is measuring what you think it is measuring.
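That manual mapping can be sketched in a few lines of Python. The fund names, underlying holdings, and weights below are entirely hypothetical; a real check would use your custodian's or data provider's look-through holdings data.

```python
# Minimal sketch: look-through concentration check across a client's funds.
# Every fund, security, and weight here is a hypothetical illustration.
from collections import defaultdict

# Client portfolio: each fund's weight in the portfolio, plus the fund's
# largest underlying holdings (weight of the security within that fund).
portfolio = {
    "EM Bond Fund A": (0.20, {"EM_BOND_X": 0.30, "EM_BOND_Y": 0.20}),
    "EM Bond Fund B": (0.15, {"EM_BOND_X": 0.40, "EM_BOND_Z": 0.20}),
    "Equity Fund C":  (0.35, {"MEGACAP_1": 0.10, "MEGACAP_2": 0.08}),
    "Equity Fund D":  (0.30, {"MEGACAP_1": 0.12, "MEGACAP_3": 0.07}),
}

# Sum each security's exposure across all funds at the portfolio level.
exposure = defaultdict(float)
for fund_weight, holdings in portfolio.values():
    for security, sec_weight in holdings.items():
        exposure[security] += fund_weight * sec_weight

# Flag any single security above an illustrative 5% look-through threshold.
for security, weight in sorted(exposure.items(), key=lambda kv: -kv[1]):
    flag = "  <-- concentrated" if weight > 0.05 else ""
    print(f"{security}: {weight:.1%}{flag}")
```

Two funds each holding 30 to 40 percent of the same bond quietly add up to a double-digit single-security exposure, even while the headline equity-bond correlation looks comfortable.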
Vanguard AI projects a 7% annualised return on a balanced portfolio over the next decade. You present this projection to your client without checking whether the model assumes your client's 15-year investment horizon or a generic 10-year horizon. You recommend a strategy based on return assumptions misaligned with when your client actually needs the money.
The fix
Before presenting any AI-generated return projection to a client, confirm the exact time horizon the model used and state that assumption clearly in your written advice.
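A quick way to see why the horizon assumption matters: the same flat annualised return compounds to very different outcomes over 10 versus 15 years. The 7% rate and the starting balance below are illustrative only, not a projection.

```python
# Minimal sketch: the same flat annualised return over two different horizons.
# The rate and starting balance are hypothetical illustrations.
def projected_value(principal: float, annual_return: float, years: int) -> float:
    """Compound a lump sum at a flat annualised return for a given horizon."""
    return principal * (1 + annual_return) ** years

start = 100_000
for years in (10, 15):
    print(f"{years} years at 7%: {projected_value(start, 0.07, years):,.0f}")
```

The gap between the two horizons is roughly 40% of the ten-year outcome, which is why a projection built on a generic 10-year horizon cannot simply be presented to a client who needs the money in 15 years.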
You copy Morningstar AI's market commentary or ChatGPT's fund summary into your quarterly client letter without editing. The language is generic, the examples are generic, and the client no longer hears a distinct adviser perspective. The client begins to wonder whether they could get the same report from an automated service.
The fix
Write your own two-sentence interpretation of each AI-generated insight, including one specific reference to your client's portfolio or goals, before including anything in client communication.
A client asks why you are recommending a particular fund. You reply that ChatGPT analysis showed strong alpha potential or that Aladdin flagged it as underweight. The client then asks you to explain what alpha means or what underweight means relative to their goals. You cannot articulate the reasoning in plain language because you never did the translation from algorithm output to actual advice.
The fix
For every significant investment decision influenced by AI output, write down the reason in one sentence you could explain to someone with no finance background before discussing it with your client.
You set up ChatGPT or similar tools to generate responses to client questions about portfolio performance. The responses are timely but generic. A client feels contacted but not understood. Over time, the client comes to see you as a service provider rather than a trusted adviser, and starts price-comparing you against robo-advisers.
The fix
Reserve AI-generated templates only for initial acknowledgement or scheduling responses; write your own response to any client question that touches on investment strategy, performance explanation, or life changes.
You recommend moving a client's allocation based on analysis from Aladdin or Morningstar AI but present the recommendation as your own independent judgement. The client believes they are paying for your expertise. If the recommendation underperforms, the client loses trust not just in the recommendation but in your honesty about your process.
The fix
In any recommendation memo or conversation, state clearly whether and how an AI tool informed your analysis, and why you chose to act on its output.
An AI planning tool generates a comprehensive financial projection and recommendation. You present it to the client as 'the plan' rather than as your first draft based on the assumptions the tool made. When the client's situation changes or the client disagrees with an assumption, you cannot adapt the plan without re-running the entire AI analysis. You become an operator of the tool rather than a planner.
The fix
After running any AI financial plan, spend one hour documenting the three to five key assumptions the tool made, then discuss those assumptions with your client before treating the plan as agreed.
Aladdin recommends a portfolio shift. You forward the recommendation to the client with your signature. Your compliance file contains the AI output but no separate memo showing your analysis, testing, or independent judgement applied to that output. If the trade performs poorly and a regulator asks how you reached your recommendation, your evidence is an algorithm's output, not your reasoning.
The fix
Create a one-page evaluation memo for any recommendation significantly influenced by AI, stating what the AI said, what you tested or questioned about it, and why you agreed or disagreed before passing it to the client.
You use Bloomberg AI to summarise earnings reports and financial health. Over time, you stop reading actual 10-K filings or balance sheets yourself. When a client asks a specific question about a company's debt structure or earnings quality, you cannot answer it without running another AI query. Your interpretive skill has atrophied.
The fix
Once a quarter, pick one holding in a significant client portfolio and read the company's actual quarterly filing or annual report without using AI summary tools first.
You ask ChatGPT about suitability requirements or best execution rules. The response is generally accurate but misses a recent regulatory update or misses a detail specific to your jurisdiction. You file advice or make a recommendation based on outdated or incomplete guidance. You have a compliance exposure.
The fix
Never use ChatGPT or similar tools as your primary source for compliance or regulatory questions; use your compliance officer, your industry body guidance, or direct regulatory statements only.
Morningstar AI or Vanguard AI flags that a portfolio has drifted 5% beyond target allocation. You rebalance automatically without considering whether that drift reflects genuine market opportunity, whether the client has a near-term cash need, or whether tax consequences of selling make rebalancing inefficient. You have ceded discretion to the algorithm.
The fix
When any AI tool flags a rebalancing opportunity, pause and manually check three factors before rebalancing: tax consequences, client cash needs in the next 12 months, and whether the drift reflects a market view you want to maintain.
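The three-factor pause can be written down as a simple gate that must pass before any flagged drift becomes a trade. The thresholds and figures below are illustrative assumptions, not a firm's policy.

```python
# Minimal sketch: pause-and-check gate before acting on an AI drift flag.
# All thresholds and figures are illustrative assumptions, not advice.
def rebalance_check(drift: float,
                    est_tax_cost: float,      # estimated tax from selling
                    portfolio_value: float,
                    cash_needed_12m: float,   # known withdrawals, next 12 months
                    deliberate_tilt: bool) -> tuple[bool, list[str]]:
    """Return (proceed, reasons_to_pause) for a flagged allocation drift."""
    reasons = []
    if deliberate_tilt:
        reasons.append("drift reflects a market view you intend to keep")
    if cash_needed_12m > 0:
        reasons.append("near-term cash need: plan withdrawals before trading")
    if est_tax_cost / portfolio_value > 0.005:  # illustrative 0.5% materiality bar
        reasons.append("tax cost of selling may outweigh the drift")
    return (len(reasons) == 0, reasons)

# Example: a 6% drift where selling would crystallise a 0.8% tax cost.
proceed, reasons = rebalance_check(drift=0.06, est_tax_cost=4_000,
                                   portfolio_value=500_000,
                                   cash_needed_12m=0, deliberate_tilt=False)
```

In this example the gate halts the trade on tax grounds alone; the point is that the algorithm's drift threshold is one input to your decision, not the decision itself.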
You have recommended a particular fund based on years of experience and recent performance analysis. Bloomberg AI or Aladdin suggests an alternative with a different risk profile. You second-guess your own recommendation rather than trusting your analysis. You either reverse your decision or present the client with unresolved doubt. Either way, you have weakened your adviser credibility.
The fix
When AI output conflicts with your recommendation, write down the specific reasons you disagree with the AI suggestion; if you cannot articulate them clearly, then reconsider your recommendation, but if you can, present your reasoning to the client alongside the AI alternative and explain why you chose differently.
Worth remembering