For Social Workers

The Most Common AI Mistakes Social Workers Make

Social workers often defer to AI risk scores because they appear objective and are already in the case record. This creates a dangerous gap between what the algorithm says and what your professional judgement tells you about the actual person in front of you.

These are observations, not criticism. Recognising the pattern is the first step.


Risk Assessment Errors

When Palantir or ORCA generates a high-risk flag based on pattern-matching to historical cases, social workers often treat it as validation of their concerns rather than asking what patterns the algorithm actually found. The algorithm may be matching families to historical injustices, not current danger.

The fix

Write down what specific current behaviours or circumstances worry you before looking at the AI score, then compare the two separately instead of using one to confirm the other.

Palantir and similar tools learn from years of documented cases. If those cases overrepresented certain communities in higher-risk categories due to policing patterns or surveillance, the algorithm will repeat that bias. You then inherit the discrimination without knowing it happened.

The fix

When a risk score surprises you or feels disconnected from what you are seeing, ask your manager whether that family's demographic group is overrepresented in your tool's training data.

AI tools like ORCA Social Care AI are trained to identify risk signals in data. They do not weight protective factors like extended family support, community connection, or a parent's demonstrated resilience the same way a social worker would. Your assessment becomes risk-heavy by default.

The fix

Always document protective factors and strengths separately from the risk assessment, and give them equal weight when you write your conclusion.

Liquidlogic and similar systems generate risk scores based on the data entered into them. If nobody has recorded recent positive changes, court outcomes, or completed interventions, the algorithm scores an outdated version of the case. You then make decisions about people who have actually moved forward.

The fix

Before relying on any AI-generated risk assessment, verify when the most recent data was entered and whether anything important has happened since.

Once an algorithmic risk assessment is in the case record, there is psychological pressure not to contradict it. Managers, courts, and other agencies see it as evidence. This pressure silences professional judgement even when the algorithm is clearly wrong for this particular family.

The fix

If your professional judgement differs from the score, document your reasoning in full and mark it as a formal challenge to the algorithmic assessment in the system.

Documentation and Deskilling Errors

Microsoft Copilot and ChatGPT can draft risk assessment language quickly, which is attractive when caseloads are high and documentation burden is crushing. But AI-generated text often sounds like it applies to anyone and erases the specifics that actually matter for this child or adult. You stop developing your own assessment skills.

The fix

Write your assessment by hand first or in point form. Use Copilot only to help with spelling, structure, or turning your notes into formal language. Never let the tool generate the risk reasoning.

When AI tools promise to reduce paperwork, organisations sometimes reduce the time allocated for seeing service users. The documentation gets faster but shallower, and you have less information to feed into any assessment, algorithmic or not. Social work becomes administration with brief check-ins.

The fix

Protect time with the person first. Use AI only for writing up what you already know from that contact, not as a substitute for having the contact.

ChatGPT and Copilot can generate plausible-sounding case notes that contain factual errors, mischaracterised behaviour, or made-up details. If you copy them into the official record without reading them carefully, false information becomes part of the legal file and influences future decisions.

The fix

Read every sentence of any AI-drafted text aloud and compare it directly to your own notes before saving it into Liquidlogic or your official system.

ORCA and similar tools often include assessment templates and prompted questions that pull information from existing data. Over time, social workers rely on what the template asks instead of developing their own sense of what questions each family needs. Your judgement atrophies.

The fix

For each new case, write three questions you want answered that are not in the template, and ask them before you fill out any automated form.

Accountability and Professional Judgement Errors

Palantir, ORCA, and similar tools often do not show you which specific case features drove a particular risk score. Social workers then cannot explain the recommendation to families, courts, or even to themselves. You become an intermediary for a decision you do not understand.

The fix

Ask your IT team or vendor for an explanation of which variables most influenced the score, and refuse to use the tool in decision-making until you can explain it in plain language.

When an AI tool generates a risk summary or recommendation, there is pressure to accept it and sign it as your professional opinion. But your signature means you stand behind it. If the algorithm is wrong and you have not checked, you share the liability.

The fix

Never sign an assessment until you have separately documented your own analysis, and keep that analysis distinct from what the AI tool produced.

You have professional training in reading behaviour, spotting signs of abuse or neglect, and understanding family dynamics. When an AI score contradicts your gut feeling, you often assume the algorithm is right. But pattern-matching by machine is not the same as contextual human judgement.

The fix

When you feel certain about a risk that the algorithm rates lower, name the specific reason for your concern in your notes and explain why you think the algorithm missed it.

Palantir and similar vendors often treat their algorithms as proprietary and resist questions about bias or accuracy. Social workers accept this as inevitable rather than pushing back. This leaves discriminatory patterns unexamined and uncorrected in the tool you use every day.

The fix

Join or start a conversation with other social workers in your organisation about what it would take to audit the algorithm for bias, and push your management to require that transparency from the vendor.

When AI tools reduce documentation time, there is organisational pressure to adopt them widely. But using Copilot to draft assessments faster or feeding case data into Palantir without clear consent from service users treats people as inputs to an efficient system, not as partners in their own care.

The fix

Ask service users whether they consent to their information being processed by AI tools, explain what the tool will do with it, and document their answer.
