For the Finance Sector

Protect Your Judgement: AI in Finance Without Losing Your Edge

Your regulators demand you explain every credit decision, every trade rationale, every risk assessment. Your AI tools cannot explain themselves. Meanwhile, efficiency mandates push you toward automated consensus outputs that look identical across your entire sector. When everyone uses the same model, everyone fails the same way.

These are suggestions. Your situation will differ. Use what is useful.

Know What Your AI Tool Cannot See

Bloomberg AI and Aladdin excel at pattern recognition across structured data. They will miss the sector-wide exposure that emerges from conversations with your counterparties, from reading earnings call transcripts for tone shifts, from noticing which fund managers have quietly reduced positions. Your models train on historical data and cannot weight unprecedented market conditions. A credit analyst who reads 20 prospectuses per week will spot a legal phrase that signals hidden leverage. A Palantir dashboard will not.

Build Model Failure into Your Risk Framework

Your risk management protocols were written for human traders and portfolio managers. They did not anticipate correlated model failure across your institution and your competitors. When Copilot processes earnings data the same way at your bank and three others, you all make similar bets on the same stocks. Regulators now scrutinise concentration risk in AI recommendations, but most organisations have not updated their stress tests to model simultaneous AI tool failure. Your fiduciary duty requires you to know what happens when the model breaks.
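
The arithmetic behind correlated failure is stark, and a back-of-the-envelope simulation makes it concrete. The sketch below, in Python, compares the chance that four institutions all misread the same quarter when each runs its own model against the chance when all four run one shared model. The 5% per-quarter failure rate and the four-institution count are illustrative assumptions, not data.

```python
# Back-of-the-envelope sketch: shared models concentrate risk.
# The 5% failure rate and four institutions are illustrative assumptions.
import random

random.seed(7)
TRIALS = 100_000
BANKS = 4
P_MODEL_FAILS = 0.05  # hypothetical chance a model misreads a quarter

def all_fail_independent() -> bool:
    # Each institution runs its own independently built model:
    # all four must misread the quarter separately.
    return all(random.random() < P_MODEL_FAILS for _ in range(BANKS))

def all_fail_shared() -> bool:
    # All four institutions run the same vendor model:
    # one bad read takes everyone down together.
    return random.random() < P_MODEL_FAILS

independent = sum(all_fail_independent() for _ in range(TRIALS)) / TRIALS
shared = sum(all_fail_shared() for _ in range(TRIALS)) / TRIALS

print(f"P(all four fail), independent models: {independent:.6f}")  # ~0.000006
print(f"P(all four fail), shared model:       {shared:.6f}")       # ~0.05
```

Under these assumptions, independent models fail together roughly six times in a million quarters; a shared model fails everyone together one quarter in twenty, about eight thousand times as often. That gap is what a stress test for simultaneous AI tool failure has to capture.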

Make Your Regulators' Explainability Job Easier

Your compliance officer needs to explain to the regulator why you made a credit decision or a trade. If you cannot point to human reasoning, you cannot satisfy the requirement. AI tools generate outputs, not explanations. ChatGPT can summarise a model's decision, but that summary is itself the output of an AI tool: one model making a claim about what another did. You need a human analyst who can articulate the logic in plain language that a regulator will accept. This person must not be the same person who built the model.

Protect the Analyst Who Disagrees with the Consensus

The analysts who saw the mortgage-backed securities collapse coming in 2007 were the ones who questioned the consensus models. Today, when a Palantir dashboard or Microsoft Copilot output shows that a portfolio position is aligned with market consensus, you face pressure to accept it as correct. Institutional incentives now run against dissent: an analyst who argues against the AI tool output looks inefficient. You must create a structural reason for that analyst to speak up, and you must record that they did. Without that record, you will move in lockstep with your competitors into the same trades and the same errors.
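
What "record that they did" can look like in practice: below is a minimal sketch of a dissent register in Python. The field names and workflow are hypothetical illustrations, not a regulatory standard; the point is that the analyst's contrary reasoning is captured in plain language, timestamped, and reviewable later.

```python
# Minimal sketch of a dissent register. Field names and workflow are
# illustrative assumptions, not a regulatory or industry standard.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DissentRecord:
    analyst: str
    position: str          # the trade or portfolio position in question
    model_view: str        # what the AI tool recommended
    analyst_view: str      # the contrary human reasoning, in plain language
    outcome: str = "open"  # set to "upheld" or "overruled" on review
    logged_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

register: list[DissentRecord] = []

# Hypothetical entry: the analyst goes on record against the consensus view.
register.append(DissentRecord(
    analyst="J. Analyst",
    position="EU auto-sector credit basket",
    model_view="Hold: spreads in line with sector consensus",
    analyst_view="Covenant language in two new issues suggests hidden leverage",
))
```

A register like this gives the dissenting analyst a durable, low-cost way to go on record, which is the structural incentive described above, and it gives you an audit trail when the regulator asks who disagreed and why.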

Preserve the Skills You Will Need When the Tool Fails

Your junior analysts now spend their time prompting Copilot and reviewing Aladdin outputs. They are not learning to build a financial model from scratch, to challenge a management team on cash flow assumptions, or to spot accounting anomalies in a prospectus. In five years, when a new tool fails and your industry must revert to manual processes for two quarters, you will have no one who remembers how to do the work. Your fiduciary duty includes maintaining the capability to operate without the tool. That capability lives in your people, not in your systems.

Key principles

  1. Independent analysis that contradicts consensus is your most valuable defence against systemic failure.
  2. Explainability for regulators means a human analyst can articulate the reasoning in plain language, not a tool summarising itself.
  3. When every institution uses the same AI model, you fail together, so your risk framework must stress-test correlated model failure across your sector.
  4. The judgement that matters most is the ability to recognise what the tool cannot see, which requires analysts who still know how to work without it.
  5. Your fiduciary duty requires you to maintain human capability to operate the core processes even after the tool has become routine.
