For the Finance Sector

40 Questions Finance Should Ask Before Trusting AI

Your regulators require you to explain every material decision, yet your AI models cannot explain why they chose what they chose. Your fiduciary duty demands independent judgement, yet your team is trained on the same Bloomberg AI outputs as your competitors. These 40 questions help you keep human judgement in control when the pressure to defer to AI is high.

These are suggestions. Use the ones that fit your situation.


Model Dependency and Systemic Risk

1 How many other institutions in our sector use the same Aladdin model configuration that we do for portfolio construction?
2 What happens to our risk models if the underlying training data becomes unrepresentative, as it did during the 2008 crisis or COVID shock?
3 Can we identify which specific Bloomberg AI recommendations our competitors acted on in the last quarter?
4 If Palantir's data integration fails silently, how many decisions downstream would be made on corrupted inputs before we notice?
5 What sector-wide positions would unwind simultaneously if all AI models recognised the same risk signal at the same time?
6 Are we monitoring whether our credit scoring AI and our competitors' credit scoring AIs are rejecting the same loan applicants for the same reasons?
7 If our Microsoft Copilot investment analysis tool goes offline, what percentage of our analysts cannot complete their weekly reports without it?
8 How would our risk committee identify a failure mode that affects all instances of a widely adopted model simultaneously?
9 Do we have an analyst or team member whose job is explicitly to produce recommendations that contradict the AI consensus?
10 What manual processes would need to restart immediately if all our AI tools failed at once?

Regulatory Compliance and Explainability

11 Can we write a paragraph explaining to a regulator why ChatGPT recommended this specific investment, without using the phrase 'the model learned patterns'?
12 Does our AI output include the confidence level required by our regulator, or only the recommendation?
13 How do we prove that a Palantir-flagged transaction was not rejected based on protected characteristics when the model itself cannot articulate why it flagged it?
14 If a regulator demands we show our work for a portfolio decision, can we produce an audit trail that does not simply say 'the AI said so'?
15 What is our documented process for overriding an AI recommendation, and how often do we actually do this compared to how often we should?
16 Does our risk appetite statement explicitly address which decisions AI can make alone and which require human sign-off?
17 How do we comply with MiFID II suitability rules when the AI recommendation came from a model trained on data outside the EU?
18 Can our compliance team explain to auditors why we chose this particular AI tool over another, beyond cost or convenience?
19 If we relied on an AI output that turned out to be wrong, does our insurance cover it, or does the use of AI void our professional indemnity?
20 What internal governance approval was required before deploying this AI tool, and who remains accountable if it fails?

Risk Management and Model Failure

21 Does our market risk framework have limits built in for AI recommendations that diverge sharply from human analyst consensus?
22 What happens to our Value at Risk calculation if the AI model that feeds into it starts producing outlier predictions?
23 Have we stress tested our portfolio against scenarios where all AI-driven sell signals trigger simultaneously?
24 If Bloomberg AI recommends an overweight position in a sector we have already flagged as concentrated, how does that conflict get resolved?
25 When was the last time we tested whether an AI tool still works correctly with data it was never trained on?
26 Can we identify specific decisions we made based on ChatGPT analysis that we would reverse if we knew the model had been fine-tuned on publicly available market commentary?
27 What is our procedure if a Palantir data pipeline produces silently corrupted outputs, and how quickly would we catch it?
28 How do we detect model drift in our credit risk AI, and how frequently do we run that detection?
29 Which of our key risk metrics rely on AI outputs, and what manual checks validate those outputs?
30 If our insurance pricing AI was trained on historical claims from a period of abnormally low losses, how do we adjust for that bias now?

Analyst Judgement and Decision Quality

31 How many of our portfolio decisions this quarter were made because Aladdin recommended them, versus because our analysts independently reached the same conclusion?
32 Does our investment process include a mandatory step where an analyst must explain why they disagree with the AI recommendation before approving it?
33 When was the last time an analyst on our desk spotted a risk that the AI models missed, and what was it?
34 Are we still hiring analysts who think differently from the consensus, or have we optimised for candidates who work well with AI tools?
35 If a junior analyst questions an AI recommendation, what is our process for escalating that concern above the algorithm?
36 How do we ensure that our credit committee is not simply ratifying AI decisions rather than genuinely evaluating the loan application?
37 What percentage of our analysts have been trained on how to identify when an AI output is plausible but wrong?
38 Do we track whether our Copilot-assisted research reports reach different conclusions than our pre-AI reports on the same topics?
39 How many of our traders still maintain independent market views, or have they converged toward the consensus generated by shared AI tools?
40 When we hire a new analyst, do we first teach them how to think independently, or how to use the AI tools?
