For Social Workers

40 Questions Social Workers Should Ask Before Trusting AI Risk Assessments

When an AI tool flags a family as high risk or generates assessment language, your professional judgement must come first. These questions help you identify when algorithmic bias, incomplete data, or poor tool design might lead you to harm the people you are trying to protect.

These are suggestions. Use the ones that fit your situation.


Questions About the Data Training Your AI Tool

1. Does the AI tool's training data reflect decisions made by past social workers, and if so, do those past decisions contain known discriminatory patterns against particular ethnic groups or postcodes?
2. When Palantir or ORCA flags a family as high risk based on previous cases, does it tell you whether those previous cases actually resulted in harm, or just in more involvement from services?
3. Has anyone tested whether the AI tool gives different risk scores to families from different ethnic backgrounds when presented with identical case details? (A minimal version of this test is sketched after this list.)
4. Does the tool disclose which specific factors in a family's history it weighted most heavily, or does it only give you a final risk score without showing its reasoning?
5. If the training data is older than two years, does the tool account for changes in policy, legislation, or what constitutes best practice in your local authority?
6. Was the training data collected from cases where social workers had more time for assessment, meaning current pressures on your caseload might produce different patterns?
7. Does the AI disclose whether it was trained on cases that resulted in child deaths or serious harm, potentially weighting rare tragic events too heavily?
8. When the tool flags poverty, housing instability, or parental mental health as risk factors, does it distinguish between these as direct causes of harm versus markers of families needing support?
9. Has the tool been tested on cases from your specific local authority, or is it generic software that may not reflect your population's actual risks?
10. Do you know whether the AI vendor has published independent audit results showing the tool's performance across different demographic groups?
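
Question 3 describes a test you can make concrete. Below is a minimal sketch in Python, not any vendor's real interface: score_case() is a hypothetical stand-in for however your tool is actually invoked, and the toy weights and field names are invented. The mechanics are the point: score identical case details while varying only one demographic field, so that any difference in output is attributable to that field alone.

from copy import deepcopy

def score_case(case):
    # Hypothetical stand-in for the vendor's scoring call; replace with
    # however your tool is actually invoked. Toy weights, illustration only.
    return min(1.0, 0.2 * case["prior_referrals"] + 0.1 * case["housing_moves"])

def counterfactual_scores(case, attribute, values):
    # Score identical case details, varying only one demographic field.
    results = {}
    for value in values:
        variant = deepcopy(case)
        variant[attribute] = value
        results[value] = score_case(variant)
    return results

referral = {"prior_referrals": 2, "housing_moves": 1, "ethnicity": "White British"}
scores = counterfactual_scores(referral, "ethnicity",
                               ["White British", "Black Caribbean", "Bangladeshi"])
print(scores)
print("spread:", max(scores.values()) - min(scores.values()))

The toy scorer above never reads the ethnicity field, so the spread is zero; a non-zero spread from a real tool would be direct evidence of the disparity question 3 asks about.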

Questions About How the AI Score Will Influence Your Judgement

11. If you disagree with a high-risk score from ORCA or Palantir, what evidence would your manager require to override the algorithmic recommendation?
12. Does your organisation have a written policy that permits you to document a lower risk level than the AI suggests, and does that policy protect you from liability if something goes wrong later?
13. When you present a risk assessment to a multi-agency safeguarding hub or child protection conference, will colleagues assume the AI score is objective fact rather than one input among many?
14. Has your organisation trained your managers to recognise when they are unconsciously deferring to algorithmic risk scores instead of your professional assessment?
15. If you spend less time on direct conversation with a family because the AI tool has already categorised them as low risk, how will you notice changes in circumstances that the algorithm cannot detect?
16. When you use ChatGPT or Microsoft Copilot to draft risk assessment text, are you aware that the language it generates may subtly emphasise risks in ways that influence the reader's perception?
17. Does your organisation have a process for you to flag when an AI tool's recommendation conflicts with your professional judgement, or does the high-risk score remain the official record?
18. If a family challenges a high-risk assessment based on an AI recommendation, will your organisation support you to explain the tool's limitations, or will you be expected to defend it as objective?
19. Are you required to document the AI tool's output in case notes in a way that makes it appear more certain or credible than your own assessment?
20. When you feel pressure to agree with an algorithmic risk score to avoid being questioned later, what permission do you have to slow down and think independently?

Questions About Documentation and Deskilling

21. If you use AI tools primarily to reduce documentation time rather than to improve assessment quality, are you losing the opportunity to develop your own risk assessment skills?
22. When Liquidlogic AI or similar tools auto-populate assessment templates, do you actively challenge the suggested text, or do you find yourself accepting it because time pressure is real?
23. Does your organisation track whether practitioners are spending the time saved on AI documentation in direct contact with families, or is it being absorbed into other administrative tasks?
24. If you have not conducted a face-to-face assessment but the AI tool has generated a risk profile, can you confidently explain to a court or inquiry why you relied on that profile?
25. When you use ChatGPT to draft summaries or risk narratives, do you verify that the language it uses matches what you actually observed, or do you sometimes spot that it has made inferences you would not make?
26. Has your organisation set a minimum amount of unstructured time you must spend with a family before you are permitted to use an AI tool to assess them?
27. If a new practitioner on your team relies heavily on AI to generate assessments, how will they develop the intuitive pattern recognition that experienced social workers use to spot risk?
28. When you generate case notes using AI assistance, do you retain a clear record of which text came from your own observation and which came from the algorithm?
29. Does your organisation provide training in how to critically evaluate AI-generated assessment language, or is the assumption that the tool output is reliable?
30. If you stopped using an AI documentation tool for one month, would you notice that your direct assessment skills felt rusty, or would they feel the same?

Questions About Accountability and Safeguarding

31. If an AI tool recommends a lower risk level and you follow that recommendation but a family later experiences harm, who bears legal and professional responsibility?
32. Does your organisation's indemnity insurance cover decisions that were influenced by a proprietary AI tool that your insurer has not specifically reviewed?
33. When Palantir or another vendor updates their algorithm, does your organisation test it against real cases from your population, or do you assume the update is an improvement?
34. If an AI tool flags a family that you know well and the recommendation contradicts your direct knowledge of them, what is the formal process for documenting that discrepancy?
35. Has your organisation conducted a specific audit of whether families from particular postcodes, ethnic groups, or family structures receive systematically higher risk scores from your AI tools? (A minimal version of such an audit is sketched after this list.)
36. If a serious case review or child safeguarding practice review criticises your use of an AI tool, will your organisation take responsibility or will the decision rest with you as the practitioner?
37. When you cannot explain why an AI tool has assigned a particular risk score because the tool itself does not provide transparent reasoning, how do you defend that assessment to a family or in court?
38. Does your organisation have a process for practitioners to report when an AI tool produces an output that seems discriminatory or ethically problematic, separate from your line manager?
39. If you discover that the AI tool has used a family's previous contact with police or mental health services as a risk factor without distinguishing between being accused and being convicted, what authority do you have to challenge it?
40. When your organisation adopts a new AI tool for risk assessment, are social workers consulted on whether it actually reflects how you make judgements in practice, or is the decision made by procurement?
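
Question 35 asks for a systematic audit rather than anecdote. A minimal sketch, assuming you can export closed cases with the tool's score and a group label (the field names and the 0.7 threshold below are illustrative, not a real schema), is to compare high-risk flag rates across groups:

from collections import defaultdict

def flag_rates(cases, group_field, threshold=0.7):
    # Share of cases per group scored at or above the high-risk threshold.
    flagged = defaultdict(int)
    totals = defaultdict(int)
    for case in cases:
        group = case[group_field]
        totals[group] += 1
        if case["risk_score"] >= threshold:
            flagged[group] += 1
    return {group: flagged[group] / totals[group] for group in totals}

cases = [
    {"postcode_area": "LS9",  "risk_score": 0.82},
    {"postcode_area": "LS9",  "risk_score": 0.75},
    {"postcode_area": "LS17", "risk_score": 0.40},
    {"postcode_area": "LS17", "risk_score": 0.71},
]
print(flag_rates(cases, "postcode_area"))  # {'LS9': 1.0, 'LS17': 0.5}

Large gaps between groups do not prove bias on their own, but they tell you exactly where to demand an explanation from the vendor or your own data team.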

