40 Questions Social Workers Should Ask Before Trusting AI Risk Assessments
When an AI tool flags a family as high risk or generates assessment language, your professional judgment must come first. These questions help you identify when algorithmic bias, incomplete data, or poor tool design might lead you to harm the people you are trying to protect.
These are suggestions. Use the ones that fit your situation.
Questions About Training Data and Bias
1. Does the AI tool's training data reflect decisions made by past social workers, and if so, do those past decisions contain known discriminatory patterns against particular ethnic groups or postcodes?
2. When Palantir or ORCA flags a family as high risk based on previous cases, does it tell you whether those previous cases actually resulted in harm, or just in more involvement from services?
3. Has anyone tested whether the AI tool gives different risk scores to families from different ethnic backgrounds when presented with identical case details?
4. Does the tool disclose which specific factors in a family's history it weighted most heavily, or does it only give you a final risk score without showing its reasoning?
5. If the training data is older than two years, does the tool account for changes in policy, legislation, or what constitutes best practice in your local authority?
6. Was the training data collected from cases where social workers had more time for assessment, meaning current pressures on your caseload might produce different patterns?
7. Does the AI disclose whether it was trained on cases that resulted in child deaths or serious harm, potentially weighting rare tragic events too heavily?
8. When the tool flags poverty, housing instability, or parental mental health as risk factors, does it distinguish between these as direct causes of harm versus markers of families needing support?
9. Has the tool been tested on cases from your specific local authority, or is it generic software that may not reflect your population's actual risks?
10. Do you know whether the AI vendor has published independent audit results showing the tool's performance across different demographic groups?
Questions About How the AI Score Will Influence Your Judgment
11. If you disagree with a high risk score from ORCA or Palantir, what evidence would your manager require to override the algorithmic recommendation?
12. Does your organisation have a written policy that permits you to document a lower risk level than the AI suggests, and does that policy protect you from liability if something goes wrong later?
13. When you present a risk assessment to a multi-agency safeguarding hub or child protection conference, will colleagues assume the AI score is objective fact rather than one input among many?
14. Has your organisation trained your managers to recognise when they are unconsciously deferring to algorithmic risk scores instead of your professional assessment?
15. If you spend less time on direct conversation with a family because the AI tool has already categorised them as low risk, how will you notice changes in circumstances that the algorithm cannot detect?
16. When you use ChatGPT or Microsoft Copilot to draft risk assessment text, are you aware that the language it generates may subtly emphasise risks in ways that influence the reader's perception?
17. Does your organisation have a process for you to flag when an AI tool's recommendation conflicts with your professional judgement, or does the high risk score remain the official record?
18. If a family challenges a high risk assessment based on an AI recommendation, will your organisation support you to explain the tool's limitations, or will you be expected to defend it as objective?
19. Are you required to document the AI tool's output in case notes in a way that makes it appear more certain or credible than your own assessment?
20. When you feel pressure to agree with an algorithmic risk score to avoid being questioned later, what permission do you have to slow down and think independently?
Questions About Documentation and Deskilling
21. If you use AI tools primarily to reduce documentation time rather than to improve assessment quality, are you losing the opportunity to develop your own risk assessment skills?
22. When Liquidlogic AI or similar tools auto-populate assessment templates, do you actively challenge the suggested text, or do you find yourself accepting it because time pressure is real?
23. Does your organisation track whether practitioners are spending the time saved on AI documentation in direct contact with families, or is it being absorbed into other administrative tasks?
24. If you have not conducted a face-to-face assessment but the AI tool has generated a risk profile, can you confidently explain to a court or inquiry why you relied on that profile?
25. When you use ChatGPT to draft summaries or risk narratives, do you verify that the language it uses matches what you actually observed, or do you sometimes spot that it has made inferences you would not make?
26. Has your organisation set a minimum amount of unstructured time you must spend with a family before you are permitted to use an AI tool to assess them?
27. If a new practitioner on your team relies heavily on AI to generate assessments, how will they develop the intuitive pattern recognition that experienced social workers use to spot risk?
28. When you generate case notes using AI assistance, do you retain a clear record of which text came from your own observation and which came from the algorithm?
29. Does your organisation provide training in how to critically evaluate AI-generated assessment language, or is the assumption that the tool output is reliable?
30. If you stopped using an AI documentation tool for one month, would you notice that your direct assessment skills felt rusty, or would they feel the same?
Questions About Accountability and Safeguarding
31. If an AI tool recommends a lower risk level and you follow that recommendation but a family later experiences harm, who bears legal and professional responsibility?
32. Does your organisation's indemnity insurance cover decisions that were influenced by a proprietary AI tool that your insurer has not specifically reviewed?
33. When Palantir or another vendor updates their algorithm, does your organisation test it against real cases from your population, or do you assume the update is an improvement?
34. If an AI tool flags a family that you know well and the recommendation contradicts your direct knowledge of them, what is the formal process for documenting that discrepancy?
35. Has your organisation conducted a specific audit of whether families from particular postcodes, ethnic groups, or family structures receive systematically higher risk scores from your AI tools?
36. If a serious case review or child safeguarding practice review criticises your use of an AI tool, will your organisation take responsibility, or will the decision rest with you as the practitioner?
37. When you cannot explain why an AI tool has assigned a particular risk score because the tool itself does not provide transparent reasoning, how do you defend that assessment to a family or in court?
38. Does your organisation have a process for practitioners to report when an AI tool produces an output that seems discriminatory or ethically problematic, separate from your line manager?
39. If you discover that the AI tool has used a family's previous contact with police or mental health services as a risk factor without distinguishing between being accused and being convicted, what authority do you have to challenge it?
40. When your organisation adopts a new AI tool for risk assessment, are social workers consulted on whether it actually reflects how you make judgements in practice, or is the decision made by procurement?
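An audit of the kind question 35 describes can begin with a simple comparison of average risk scores across groups in your own case data. The sketch below is illustrative only: the field names ("postcode_area", "risk_score") and the records are made up, and a real audit would need statistical expertise and information-governance approval before touching live case data.

```python
from statistics import mean

def mean_score_by(cases, key):
    """Group cases by a demographic field and average the AI risk scores.

    `cases` is a list of dicts; `key` and "risk_score" are hypothetical
    field names standing in for whatever your case system exports.
    """
    groups = {}
    for case in cases:
        groups.setdefault(case[key], []).append(case["risk_score"])
    return {group: round(mean(scores), 2) for group, scores in groups.items()}

# Hypothetical, made-up records for illustration only:
cases = [
    {"postcode_area": "X1", "risk_score": 0.71},
    {"postcode_area": "X1", "risk_score": 0.65},
    {"postcode_area": "X2", "risk_score": 0.40},
    {"postcode_area": "X2", "risk_score": 0.44},
]
print(mean_score_by(cases, "postcode_area"))
```

A large and persistent gap between groups does not prove the tool is biased, but it is exactly the kind of pattern that justifies demanding the independent audit results mentioned in question 10.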
How to Use These Questions
Before you use an AI risk score, ask yourself what specific information about this family, gathered directly by you, would make you disagree with it. If you cannot answer, the score may be driving your thinking rather than informing it.
Insist on seeing the AI tool's reasoning, not just its output. If a vendor refuses to explain why a family scored high risk, treat that refusal as a red flag about the tool's reliability.
When you feel time pressure to accept an AI assessment, pause and ask: would I reach this conclusion if I had unlimited time to speak with this family? If the answer is no, trust your instinct.
Keep a record of cases where the AI tool's recommendation differed from your professional judgement and note what happened next. Use this data in supervision to show whether the tool is reliable for your population.
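The record suggested above can be as simple as a row appended to a spreadsheet each time the tool and your judgement diverge. The sketch below is one hypothetical format, not an endorsed one; the file name, columns, and case reference are all invented, and your organisation's data-protection rules govern what you may actually record about real cases.

```python
import csv
from datetime import date

def log_discrepancy(path, case_ref, ai_recommendation, my_judgement, outcome=""):
    """Append one row: the date, which case, what the tool recommended,
    what the practitioner concluded, and (added later) what happened next."""
    with open(path, "a", newline="") as f:
        csv.writer(f).writerow(
            [date.today().isoformat(), case_ref, ai_recommendation, my_judgement, outcome]
        )

# Hypothetical entry for illustration only:
log_discrepancy("discrepancies.csv", "CASE-0001", "high risk", "low risk")
```

Reviewed in supervision over months, even a record this simple shows whether the tool's recommendations hold up for your population.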
Never document an AI output as if it is objective fact. Always note that it is a tool output and state clearly which assessment conclusions came from your own observation and professional reasoning.