For Insurance

40 Questions Insurers Should Ask Before Trusting AI Underwriting and Claims Decisions

Your actuarial judgement took years to build. An AI model can replicate it in seconds, but cannot explain why it rejected a case you would have approved. These questions help you keep human discretion at the centre of underwriting, claims assessment, and risk decisions.

These are suggestions. Use the ones that fit your situation.

Questions to Ask About Underwriting AI Models

1 Can the SAS or Guidewire model tell me which specific case attributes triggered a decline, or only a confidence score?
2 If the model recommends a 40 percent higher premium for a medical history flag, what historical cases drove that decision?
3 Does the model weight recent underwriting decisions more heavily than older ones, and if so, does that mean it learns my recent biases as truth?
4 When the model sees a postcode, occupation, or age group, am I relying on patterns that correlate with protected characteristics?
5 How does the model handle cases that do not fit its training data, and what does it do instead of refusing to score them?
6 If I override the model's recommendation ten times in a month, will the next version train itself on my overrides as if they were correct?
7 What happens when two applicants have identical attributes but the model scores them differently due to processing order or timing?
8 Does the AI show me its second-choice option or alternative reasoning for a case, so I can compare its logic against my own instinct?
9 When the model recommends further enquiries before a decision, are those enquiries ones I would have asked, or ones that serve the model's confidence threshold?
10 If a regulator challenges an underwriting decision made by your team on the model's advice, can your organisation explain it without the vendor's help?

Questions to Ask About Claims and Fraud Detection AI

11 When Watson or Azure flags a claim as potentially fraudulent, what percentage of flagged claims are actually fraudulent versus false positives?
12 Does the fraud model treat all claim types equally, or does it have higher sensitivity for certain claim amounts or categories?
13 If the model recommends immediate denial of a claim, what is your process for reviewing that recommendation before telling the customer no?
14 Are claimants in certain postcode areas, age groups, or occupations flagged more often, and if so, is that because they commit more fraud or because they were overrepresented in your training data?
15 When the AI recommends further investigation, does it tell you what you should be looking for, or just that something feels wrong?
16 If a claims assessor disagrees with the AI's fraud score, what happens to that disagreement, and does it improve the model?
17 Does the claims AI account for the fact that honest claimants from certain backgrounds may have more difficulty providing documentation, and therefore appear suspicious to the model?
18 When settlement recommendations come from Guidewire AI, are they based on case law and policy wording, or on what similar claims have settled for historically?
19 Can you explain to a customer why their claim was denied or delayed based on an AI recommendation without sounding like you are hiding behind automation?
20 If your claims AI has learned from cases settled under old policies or outdated practices, how do you prevent it from recommending settlements outside your current risk appetite?

Questions to Ask About Risk Assessment and Customer Data

21 When the AI model uses customer credit score, financial behaviour, or prior claims to assess future risk, is it measuring risk or measuring poverty?
22 Does your organisation have permission under FCA and data protection law to feed the specific attributes the AI model uses into its decision-making?
23 If the model recommends different premiums for customers in the same risk category, can you justify that difference to a customer service team member?
24 When the AI recommends non-renewal of a customer, have you verified that the reason is not circular logic based on the premium they were charged last year?
25 Does the AI model have access to external data sources like social media or purchasing history, and if so, has legal reviewed whether that crosses into unfair discrimination?
26 If the model recommends pricing that reflects statistical risk but contradicts your organisation's underwriting appetite, who decides which one to follow?
27 Can you run a fairness audit on your AI's decisions to compare acceptance rates, premium levels, and claim handling across demographic groups?
28 When the model recommends a higher premium for a customer based on patterns in their data, would you make that same recommendation if you were sitting across a desk from them?
29 If an underwriter or claims assessor consistently overrides the AI in the same direction, is that a sign they know something the model misses, or a sign the model is learning their blind spots?
30 Does your organisation track whether customers who received an AI-driven decision were treated fairly compared to customers who received human decisions made in the same period?
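The fairness audit in question 27 does not require specialist tooling to get started. A minimal sketch in Python of comparing approval rates across groups and computing a disparate impact ratio; the record structure and field names ("group", "approved") are illustrative assumptions, not any particular platform's schema:

```python
from collections import defaultdict

def approval_rates(records):
    """Approval rate per demographic group.

    `records` is a list of dicts with hypothetical keys
    'group' and 'approved' (bool); substitute whatever
    decision log your organisation actually keeps.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for r in records:
        totals[r["group"]] += 1
        if r["approved"]:
            approvals[r["group"]] += 1
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of lowest to highest group approval rate.

    The 'four-fifths' rule of thumb treats ratios below
    0.8 as worth investigating further.
    """
    return min(rates.values()) / max(rates.values())

records = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]
rates = approval_rates(records)
ratio = disparate_impact(rates)  # ~0.5 here: below 0.8, investigate
```

The same comparison applies to average premium levels and fraud-flag rates; a low ratio is a prompt for investigation, not proof of discrimination on its own.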

Questions to Ask About Governance, Compliance, and Your Organisation's Liability

31 Who in your organisation is accountable when an AI underwriting or claims decision is wrong, and does that person understand how the decision was made?
32 If a customer complains about an underwriting or claims decision and refers it to the FCA, can your organisation produce evidence that a human reviewed the AI recommendation?
33 Does your AI model governance policy require that high-value decisions, sensitive customer groups, or unusual cases receive human review before an AI recommendation is acted on?
34 When the vendor releases a new version of the AI model, how do you test whether it makes more biased decisions than the version it replaces?
35 Do your underwriters and claims assessors receive training on when to trust the AI and when to override it, or does the organisation culture treat the model as the source of truth?
36 If a regulator asks you to audit AI decisions from the past two years, can you access the inputs, the model version, and the reasoning for every decision?
37 Does your organisation have a process for identifying and correcting systemic bias in the AI before it scales harm across thousands of customers?
38 When you deploy a new AI tool, does your organisation measure whether underwriting consistency improved or whether underwriters simply learned to defer to the model?
39 If the AI model was trained on historical decisions made by your organisation, and your organisation has a history of discriminatory practice, is the model now codifying and scaling that history?
40 Does your organisation's insurance liability cover errors made when following AI recommendations, or are such errors treated as your organisation's responsibility alone?
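Questions 32, 33, and 36 all come down to whether each AI-assisted decision leaves an auditable trail. A sketch of the minimum such a record might capture, in Python with illustrative field names (these are assumptions, not any vendor's schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable record per AI-assisted decision.

    The point is what gets captured, not how it is stored:
    the inputs the model saw, the model version, the
    recommendation, and the human who reviewed it.
    """
    case_id: str
    model_name: str
    model_version: str
    inputs: dict                # attributes the model actually used
    recommendation: str         # e.g. "decline", "refer", "approve"
    confidence: float
    human_reviewer: str         # who reviewed before action was taken
    final_decision: str
    override_reason: str = ""   # required whenever final != recommendation
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    case_id="C-1042",
    model_name="underwriting-risk",
    model_version="2.3.1",
    inputs={"occupation": "electrician", "sum_assured": 250000},
    recommendation="refer",
    confidence=0.71,
    human_reviewer="j.smith",
    final_decision="approve",
    override_reason="medical evidence reviewed; flag not applicable",
)
```

Storing the model version alongside the inputs is what makes question 34's before-and-after bias comparison possible; without it, you cannot say which version made which decision.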
