For the Finance Sector
Protect Your Judgement: AI in Finance Without Losing Your Edge
Your regulators demand you explain every credit decision, every trade rationale, every risk assessment. Your AI tools cannot explain themselves. Meanwhile, efficiency mandates push you toward automated consensus outputs that look identical across your entire sector. When everyone uses the same model, everyone fails the same way.
These are suggestions. Your situation will differ. Use what is useful.
Know What Your AI Tool Cannot See
Bloomberg AI and Aladdin excel at pattern recognition across structured data. They will miss the sector-wide exposure that emerges from conversations with your counterparties, from reading earnings call transcripts for tone shifts, from noticing which fund managers have quietly reduced positions. Your models train on historical data and cannot weight unprecedented market conditions. A credit analyst who reads 20 prospectuses per week will spot a legal phrase that signals hidden leverage. A Palantir dashboard will not.
- After Aladdin recommends a trade, ask your most sceptical analyst to find what the model missed. Make that dissent part of your audit trail.
- Run your Bloomberg AI output past someone who still reads raw documents. Document what they flagged that the tool did not.
- Before you automate a decision, run the process with and without the AI tool for six months. Record the differences; those differences are your judgement premium. One way to record them is sketched below.
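That record does not need infrastructure. A minimal sketch follows, assuming nothing more than a small script and a CSV file; every field name here is an illustrative assumption, not a prescribed schema.

```python
# Illustrative sketch only: log each decision made during a parallel-run period,
# once with the AI tool and once without, then pull out the cases where the two
# recommendations diverged. All field names are assumptions, not a schema.
from dataclasses import dataclass, asdict, fields
from datetime import date
import csv

@dataclass
class ParallelRunRecord:
    decision_date: date
    decision_id: str           # your internal reference for the credit or trade
    ai_recommendation: str     # e.g. "approve", "decline", "buy", "hold"
    human_recommendation: str  # the analyst's call, made without the tool
    final_decision: str
    divergence_note: str       # the analyst's explanation of any difference

def divergences(records):
    """Return only the cases where the human and the tool disagreed."""
    return [r for r in records if r.ai_recommendation != r.human_recommendation]

def export_csv(records, path):
    """Write the full log to CSV so it can sit alongside the audit trail."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=[fl.name for fl in fields(ParallelRunRecord)])
        writer.writeheader()
        for r in records:
            writer.writerow(asdict(r))
```

Over six months, the divergence rate and the notes attached to each divergence are the evidence of a judgement premium, or of its absence.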
Build Model Failure into Your Risk Framework
Your risk management protocols were written for human traders and portfolio managers. They did not anticipate correlated model failure across your institution and your competitors. When Copilot processes earnings data the same way at your bank and three others, you all make similar bets on the same stocks. Regulators now scrutinise concentration risk in AI recommendations, but most organisations have not updated their stress tests to model simultaneous AI tool failure. Your fiduciary duty requires you to know what happens when the model breaks.
- Add a scenario to your quarterly risk review: what if this AI tool gave the same output to every large player in your market? What is your exposure then?
- Require your model governance team to document the failure mode of each tool you use. ChatGPT hallucinates on numerical data. Palantir can miss edge cases in sparse datasets. Write these down.
- Maintain a shadow book for high-stakes decisions. Let the AI tool recommend a portfolio. Build an alternative portfolio using only human judgement. Track both, and when they diverge sharply, investigate why; a sketch of that tracking follows this list.
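Tracking the two books can be as simple as comparing their return series period by period. The sketch below assumes monthly returns and an illustrative divergence threshold; both are assumptions to adjust, not recommendations.

```python
# Illustrative only: compare an AI-recommended book against a human-built shadow
# book and flag the periods where they diverge sharply. The 2% threshold is an
# assumption for the example; set it to fit your own risk tolerance.

def tracking_difference(ai_returns, shadow_returns):
    """Period-by-period return difference between the two books."""
    return [a - s for a, s in zip(ai_returns, shadow_returns)]

def flag_divergences(ai_returns, shadow_returns, threshold=0.02):
    """Return (period index, difference) for every period where the books
    diverged by more than the threshold."""
    diffs = tracking_difference(ai_returns, shadow_returns)
    return [(i, d) for i, d in enumerate(diffs) if abs(d) > threshold]

# Example: monthly returns for each book over one quarter.
ai_book = [0.013, -0.004, 0.031]
shadow_book = [0.011, 0.017, 0.008]
print(flag_divergences(ai_book, shadow_book))  # flags months 1 and 2 (0-indexed)
```

The point is not the arithmetic but the discipline: every flagged period gets a written explanation of why the human book and the AI book disagreed.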
Make Your Regulators' Explainability Job Easier
Your compliance officer needs to explain to the regulator why you made a credit decision or a trade. If you cannot point to human reasoning, you cannot satisfy the requirement. AI tools generate outputs, not explanations. ChatGPT can summarise a model's decision, but that summary is itself an AI tool making a claim about what the model did. You need a human analyst who can articulate the logic in plain language that a regulator will accept. This person must not be the same person who built the model.
- For every significant credit or investment decision above a threshold you set, require a written memo from an analyst who did not touch the AI tool. That memo is your regulatory evidence of human judgement.
- When you use Bloomberg AI or Aladdin, export the raw outputs and store them separately from any summaries. Your audit trail should show the tool output, then the human interpretation, then the final decision: three separate artefacts, as in the sketch after this list.
- Train your junior analysts to write explanations that work backwards from the decision. 'We approved this credit because debt-to-EBITDA was 3.2x, the sector was stable, and management had five years of relevant experience.' That structure satisfies a regulator. A tool summary does not.
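The three-artefact structure does not depend on any particular system. A minimal sketch, assuming records are written as plain JSON files; the field names are illustrative assumptions:

```python
# Illustrative sketch: one audit record per decision, keeping the raw tool
# output, the human interpretation, and the final decision as three distinct
# artefacts rather than one blended summary. Field names are assumptions.
import json
from datetime import datetime, timezone

def write_audit_record(decision_id, tool_name, raw_tool_output,
                       human_interpretation, analyst, final_decision, path):
    record = {
        "decision_id": decision_id,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "artefact_1_tool_output": {                  # exactly what the tool produced, unedited
            "tool": tool_name,
            "output": raw_tool_output,
        },
        "artefact_2_human_interpretation": {         # the analyst's reading, in their own words
            "analyst": analyst,
            "interpretation": human_interpretation,
        },
        "artefact_3_final_decision": final_decision,  # what the institution actually did
    }
    with open(path, "w") as f:
        json.dump(record, f, indent=2)
    return record
```

A regulator reviewing such a record sees what the tool said, what the human made of it, and what was decided, in that order and as separate items.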
Protect the Analyst Who Disagrees with the Consensus
The analyst who caught the mortgage-backed securities crisis in 2007 was the one who questioned the consensus model. Today, when a Palantir dashboard or Microsoft Copilot output shows that a portfolio position is aligned with market consensus, you face pressure to accept it as correct. Institutional incentives now run against dissent. An analyst who argues against the AI tool output looks inefficient. You must create a structural reason for that analyst to speak up, and you must record that they did. Without it, you will move in lockstep with your competitors into the same trades and the same errors.
- Establish a formalised dissent role. One senior analyst per portfolio is tasked with arguing against the consensus recommendation. Their job is not to win but to document why the AI tool's output might be wrong. Record these dissents in your investment committee minutes; a minimal dissent-log sketch follows this list.
- After a trade or decision goes wrong, review it against what the AI tool recommended. If an analyst objected to the tool's recommendation in writing and was overruled, that dissent becomes valuable evidence in post-mortems and regulatory inquiries.
- Pay and promote analysts who develop independent views, even when those views contradict the tool. Make it clear that disagreeing with a model is not a career liability.
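A dissent log needs very little structure to be useful in a post-mortem. The sketch below is one possible shape; every field name and status value is an assumption, not a standard.

```python
# Illustrative sketch of a dissent log entry and a simple post-mortem query.
# Field names and status values are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class DissentEntry:
    meeting_date: str        # investment committee meeting, e.g. "2025-03-12"
    portfolio: str
    tool_recommendation: str
    dissenting_analyst: str
    dissent_rationale: str   # why the tool's output might be wrong, in plain language
    committee_outcome: str   # "dissent upheld" or "dissent overruled"

def overruled_dissents(log, portfolio):
    """Entries where the committee went with the tool despite a written objection."""
    return [e for e in log
            if e.portfolio == portfolio and e.committee_outcome == "dissent overruled"]
```

When a decision goes wrong, the overruled dissents for that portfolio are the first entries to reread.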
Preserve the Skills You Will Need When the Tool Fails
Your junior analysts now spend their time prompting Copilot and reviewing Aladdin outputs. They are not learning to build a financial model from scratch, to challenge a management team on cash flow assumptions, or to spot accounting anomalies in a prospectus. In five years, when a new tool fails and your industry must revert to manual processes for two quarters, you will have no one who remembers how to do the work. Your fiduciary duty includes maintaining the capability to operate without the tool. That capability lives in your people, not in your systems.
- Rotate your best junior analysts through a project each year where they cannot use AI tools. They build a model by hand. They conduct due diligence on a credit without ChatGPT summaries. This is expensive and feels inefficient. It is not inefficient; it is how the capability survives.
- Document the manual version of your critical processes. How do you assess a company's credit quality without Aladdin? How do you screen equities without Bloomberg AI? Write it down so new hires can learn it; a sketch of what that documentation might contain follows this list.
- When you hire analysts, look for people who ask why. They question the AI output because they understand what the model sees and what it does not. That curiosity is the judgement you need when the tool fails.
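The written manual process can be as plain as the ratios an analyst would compute by hand. The thresholds below are illustrative assumptions for the example, not credit policy.

```python
# Illustrative only: the kind of hand-calculable checks a written manual process
# might list for assessing credit quality without a tool. The thresholds are
# assumptions for the example, not recommended credit policy.

def debt_to_ebitda(total_debt, ebitda):
    return total_debt / ebitda

def interest_coverage(ebit, interest_expense):
    return ebit / interest_expense

def manual_credit_flags(total_debt, ebitda, ebit, interest_expense):
    """Plain-language flags an analyst can compute and defend without a tool."""
    flags = []
    leverage = debt_to_ebitda(total_debt, ebitda)
    coverage = interest_coverage(ebit, interest_expense)
    if leverage > 4.0:
        flags.append(f"High leverage: debt-to-EBITDA of {leverage:.1f}x")
    if coverage < 2.0:
        flags.append(f"Thin interest coverage: {coverage:.1f}x")
    return flags

# Example: figures in millions.
print(manual_credit_flags(total_debt=640, ebitda=200, ebit=150, interest_expense=90))
```

The value is not the arithmetic but that a new hire can read, compute, and defend these checks without any tool in front of them.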
Key principles
1. Independent analysis that contradicts consensus is your most valuable defence against systemic failure.
2. Explainability for regulators means a human analyst can articulate the reasoning in plain language, not a tool summarising itself.
3. When every institution uses the same AI model, you fail together, so your risk framework must stress-test correlated model failure across your sector.
4. The judgement that matters most is the ability to recognise what the tool cannot see, which requires analysts who still know how to work without it.
5. Your fiduciary duty requires you to maintain the human capability to operate core processes even after the tool has become routine.
Key reminders
- After every significant decision, ask one analyst to work backwards and explain the logic to a regulator. If they cannot, your decision is not defensible.
- Run a quarterly stress test where you assume your primary AI tool has given the same output to three competing institutions. What is your correlated loss? A sketch of that calculation follows these reminders.
- Create a dissent log in your investment committee. Record who disagreed with the tool's recommendation and why. When decisions go wrong, that log becomes evidence that you maintained independent judgement.
- For junior analysts, preserve one project per year where they solve the problem without AI. This is training. It is also a backup capability your organisation will need.
- Keep raw model outputs separate from human interpretations in your audit trail. Show the tool said X, you interpreted it as Y, you decided Z. Three distinct steps make regulatory review straightforward.
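Putting a rough number on that correlated loss requires only deliberately crude assumptions: identical positions at each institution and a single stressed move. Every figure in the sketch below is illustrative.

```python
# Illustrative stress sketch: if the same AI recommendation produced the same
# position at your institution and at competitors, what does one stressed move
# cost each book, and what is the combined loss across the aligned books?
# All figures and the size of the stressed move are assumptions for the example.

def correlated_loss(own_position, competitor_positions, stressed_move):
    """Positions are exposures in currency terms; stressed_move is the assumed
    adverse price change, e.g. -0.15 for a 15% fall."""
    own_loss = own_position * stressed_move
    combined_loss = (own_position + sum(competitor_positions)) * stressed_move
    return own_loss, combined_loss

# Example: a 400m exposure mirrored at three competitors, under a 15% fall.
own, combined = correlated_loss(400e6, [350e6, 500e6, 420e6], -0.15)
print(f"Own loss: {own / 1e6:.0f}m; combined loss across aligned books: {combined / 1e6:.0f}m")
```

The combined figure is what the regulator's concentration-risk question is really about: not your loss in isolation, but the loss across every book that followed the same recommendation.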