
Protecting Underwriting Judgement While Using AI in Insurance

Your best underwriters built their judgement through thousands of case decisions. When Guidewire AI or SAS AI automates those decisions, that institutional knowledge disappears into a black box. The risk is not just regulatory exposure when you cannot explain why a claim was denied; it is the loss of the human discretion that once caught decisions that were actuarially correct but ethically wrong.

These are suggestions. Your situation will differ. Use what is useful.


Keep Underwriting Judgement in the Loop

AI tools like SAS AI and Azure AI can score risk quickly, but underwriters who have spent years learning to spot context that models miss should still see and approve high-value or ambiguous cases. Your senior underwriters have pattern recognition that goes beyond what training data shows. When you remove them from the decision, you lose their ability to catch the edge cases where the model fails or produces an outcome that is actuarially defensible but cruel.
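
One way to make that rule operational is a hard review gate in front of the model's recommendation. Below is a minimal sketch in Python; the thresholds, field names, and the needs_human_review function are illustrative assumptions, not anything SAS or Azure exposes, so substitute your own underwriting authority limits.

```python
from dataclasses import dataclass

# Illustrative cutoffs; replace with your underwriting authority limits.
HIGH_VALUE_THRESHOLD = 1_000_000  # assumed sum-insured cutoff
CONFIDENCE_FLOOR = 0.85           # assumed model-confidence cutoff

@dataclass
class Case:
    case_id: str
    sum_insured: float
    model_score: float       # risk score returned by the AI tool
    model_confidence: float  # how certain the model claims to be

def needs_human_review(case: Case) -> bool:
    """Route high-value or ambiguous cases to a senior underwriter."""
    return (case.sum_insured >= HIGH_VALUE_THRESHOLD
            or case.model_confidence < CONFIDENCE_FLOOR)
```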

Audit for Bias That Models Hide

When you move underwriting to Guidewire AI or IBM Watson, bias that was once visible in individual underwriter decisions becomes systemic and scaled. A model trained on historical underwriting data will replicate historical unfairness at machine speed. You need to test whether your AI tool denies coverage to certain groups at higher rates than others, even if the model does not use protected characteristics as inputs.
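
A first-pass audit does not need a vendor tool. Below is a minimal sketch using the four-fifths rule of thumb, assuming a pandas DataFrame of past decisions with group and approved columns; the 0.8 threshold is a common convention, not a universal regulatory standard.

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame) -> pd.Series:
    """Approval rate of each group relative to the highest group's rate."""
    rates = decisions.groupby("group")["approved"].mean()
    return rates / rates.max()

def flag_bias(decisions: pd.DataFrame, threshold: float = 0.8) -> pd.Series:
    """Groups whose relative approval rate falls below the threshold."""
    ratios = disparate_impact(decisions)
    return ratios[ratios < threshold]
```

Run this over live decisions, not a test set, because the bias you are hunting emerges from the actual book of business.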

Demand Explainability Before Implementation

Regulators now expect you to explain underwriting and claims decisions. Models from Watson or Azure AI that use deep learning or ensemble methods often cannot tell you why they said no. Before you deploy any AI tool for underwriting or claims triage, test whether you can produce a plain-language reason for each decision that would satisfy a regulator or a customer complaint. If you cannot, you have a compliance problem on day one.
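
One way to run that test before go-live is to treat explainability as a hard contract: every decision must map to a sentence a customer could read. A minimal sketch, assuming the tool emits a reason_code field; the codes and wording here are invented placeholders.

```python
# Invented placeholder codes; your tool's reason taxonomy will differ.
REASON_CODES = {
    "LOSS_HISTORY": "Denied because of three or more claims in five years.",
    "OCCUPANCY": "Denied because the property is currently vacant.",
}

def plain_language_reason(decision: dict) -> str:
    """Map a model reason code to a sentence a customer could read."""
    code = decision.get("reason_code")
    if code not in REASON_CODES:
        raise ValueError(f"No explainable reason for decision {decision.get('id')}")
    return REASON_CODES[code]

def unexplainable(decisions: list[dict]) -> list[str]:
    """Collect every decision that cannot produce a plain-language reason."""
    failures = []
    for d in decisions:
        try:
            plain_language_reason(d)
        except ValueError:
            failures.append(d.get("id"))
    return failures
```

If unexplainable() returns anything on a sample of real decisions, the compliance gap exists before launch, not after the first complaint.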

Protect Claims Discretion in Fast-Moving Cases

Claims assessment tools powered by Guidewire AI or ChatGPT can triage thousands of claims and suggest approvals at speed, but claims adjusters have always used discretion to find fairness in ambiguous situations. A catastrophe claim with missing documentation, a dispute over coverage, or a case with partial liability all require human judgement. Processing speed is not the measure of good claims handling if you are denying fair claims because the model did not know how to handle complexity.
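
A minimal sketch of that screen, run before any auto-approval; the signal names are assumptions standing in for whatever complexity flags your claims system carries.

```python
# Illustrative complexity signals; map these to your claims system's fields.
COMPLEXITY_SIGNALS = (
    "missing_documentation",
    "coverage_disputed",
    "partial_liability",
    "catastrophe_event",
)

def requires_adjuster(claim: dict) -> bool:
    """Hold a claim for human review if any complexity signal is set."""
    return any(claim.get(signal, False) for signal in COMPLEXITY_SIGNALS)
```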

Build Feedback Loops Between AI and Actuaries

Your actuarial team has built loss models on years of claims data and underwriting outcomes. When you deploy AI for underwriting or fraud detection, the tool changes underwriting and claims behaviour, and that changes the data feeding your loss models. If your actuarial team cannot see the decisions the AI is making, they cannot track whether experience is aligning with projections. A growing gap between model predictions and actual claims is hidden risk.
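
The check itself can be simple. A minimal sketch of a monthly experience-versus-projection comparison, assuming you can pull actual and expected losses for the same book; the ten percent tolerance is an illustrative assumption your actuaries would set themselves.

```python
def experience_drift(actual_losses: float, expected_losses: float) -> float:
    """Relative gap between actual claims experience and the projection."""
    return (actual_losses - expected_losses) / expected_losses

def flag_drift(actual: float, expected: float, tolerance: float = 0.10) -> bool:
    """True when experience has moved outside the agreed tolerance band."""
    return abs(experience_drift(actual, expected)) > tolerance
```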

Key principles

  1. Underwriters and claims adjusters should approve high-stakes or ambiguous decisions even when AI recommends an outcome.
  2. Fairness audits must run monthly and compare approval rates across demographic groups to catch systemic bias that individual decisions hide.
  3. Every AI underwriting and claims decision must come with a plain-language explanation that would satisfy a regulator or customer.
  4. Actuarial teams must see the decisions AI is making so they can verify that experience matches loss projections and catch hidden risk.
  5. When AI speeds up claims or underwriting, protect the cases that need human discretion instead of moving them faster.
