For HR and People Management
HR teams often trust AI hiring and performance tools because they appear objective, then discover they have automated the exact biases they meant to eliminate. The real problem is that when you hand judgement to an algorithm, you also hand away your ability to spot when that algorithm is wrong.
These are observations, not criticism. Recognising the pattern is the first step.
Video analysis tools claim to measure traits like communication style and engagement, but they are trained on historical data that reflects your organisation's past hiring patterns. You end up screening out candidates who think differently or come from different backgrounds, just faster and at greater scale.
The fix
Audit what your video screening tool actually scored your last three cohorts of successful hires on, then check whether those traits predict job performance or just match who you hired before.
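If your screening tool can export per-candidate scores, a rough version of this audit takes only a few lines of pandas. This is a sketch, not a recipe: the file name, the trait columns, and the `performance_rating` field are assumptions standing in for whatever your tool and review cycle actually produce.

```python
import pandas as pd

# Hypothetical export: one row per hire from the last three cohorts, with the
# tool's trait scores and the performance rating each person later earned.
hires = pd.read_csv("screening_scores_last_three_cohorts.csv")

trait_cols = ["communication_score", "engagement_score", "overall_score"]

# A trait that genuinely predicts job performance should correlate with the
# later rating; one that merely mirrors past hiring decisions will not.
for col in trait_cols:
    corr = hires[col].corr(hires["performance_rating"])
    print(f"{col}: correlation with later performance = {corr:.2f}")
```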
LinkedIn's tool shows you the background and skills of people already in similar roles at other companies. This locks you into hiring people identical to your current workforce, which reinforces existing skill gaps and limits how your team can evolve.
The fix
Define the capabilities you actually need for the role's future, not the profile of people who succeeded in the past version of that role.
Tools like Eightfold excel at matching resume keywords to job descriptions, but they miss candidates who can do the job despite having learned their skills in non-standard ways. You reject people who could excel because their background does not match the algorithm's pattern.
The fix
When Eightfold flags a candidate as below threshold, have a hiring manager spend five minutes reading their actual experience before rejecting them.
Workday AI recommends candidates based on historical sourcing and hiring patterns. If your organisation has ever hired fewer women or fewer people from underrepresented groups for a role, the system will suggest fewer of them, appearing efficient when it is actually replicating old choices.
The fix
Pull a monthly report from Workday showing demographics of candidates the AI flagged as top matches, and compare it to the full candidate pool to see what is being hidden.
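That comparison can be scripted against the monthly export. A minimal sketch, assuming hypothetical column names (`ai_top_match` as a boolean flag, `gender` as one demographic field); the same pattern works for any attribute you can export.

```python
import pandas as pd

# Hypothetical export: every candidate this month, with demographics and a
# flag for whether the AI surfaced them as a top match.
pool = pd.read_csv("candidates_this_month.csv")
flagged = pool[pool["ai_top_match"]]

# Share of each group in the full pool versus the AI-flagged slice.
comparison = pd.DataFrame({
    "full_pool": pool["gender"].value_counts(normalize=True),
    "ai_flagged": flagged["gender"].value_counts(normalize=True),
})
print(comparison.round(2))
# Gaps between the two columns show who the tool is quietly screening out.
```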
When a tool gives a candidate a score out of 100, that score looks scientific. In reality it is a prediction based on patterns in your historical hiring data. If your hiring has ever been unfair, the algorithm learned that unfairness and now applies it consistently.
The fix
Require your recruitment team to review the reasoning behind any score below 60 or above 90 before a hiring decision is made, because those are the decisions where the algorithm is most confident and therefore most likely to be wrong in ways that matter.
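Building that review queue is a one-filter job if scores can be exported. A sketch, assuming a hypothetical `candidate_scores.csv` with a `score` column out of 100.

```python
import pandas as pd

scores = pd.read_csv("candidate_scores.csv")

# Route the algorithm's most confident calls to a human before any decision.
needs_review = scores[(scores["score"] < 60) | (scores["score"] > 90)]
needs_review.to_csv("manual_review_queue.csv", index=False)

print(f"{len(needs_review)} of {len(scores)} candidates queued for human review")
```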
BambooHR can track metrics like task completion speed, email response time, or meeting attendance. These are easy to measure but may have nothing to do with whether someone is actually doing their job well. You end up managing to the metric instead of managing performance.
The fix
Before using any metric to rate someone's performance, ask whether an excellent employee in this role would necessarily score high on that metric, or whether it just happens to be measurable.
When a performance system like Workday shows you patterns in who gets rated highly and who does not, you assume those patterns reflect real differences in performance. What you are actually seeing is a record of past subjective judgements, now made visible and therefore appearing objective.
The fix
When Workday shows you a performance distribution, ask your managers what specific work they observed that led to each rating, not what the data suggested.
Tools promise to identify flight risk by analysing communication patterns, meeting attendance, or other workplace behaviours. These correlate with turnover in aggregated studies but miss the context that actually determines whether a person stays: whether they feel valued, whether they have the opportunity to grow, whether they trust their manager.
The fix
When an employee is flagged as high flight risk, have their manager schedule a conversation to ask directly what they need, rather than letting the alert replace that conversation.
AI can spot who completes tasks slowly or produces fewer outputs. It cannot see who is solving the hardest problems, mentoring others, or preventing future crises. When you use it to rank employees, you demote the people doing invisible work that matters most.
The fix
Before performance conversations based on algorithmic rankings, ask the person's manager and peers what outcomes they actually depend on this person for.
Performance systems can measure current skill levels easily. They struggle to understand what skills matter for the next role, or for the organisation's future direction. You end up developing people to be better at what they already do rather than preparing them for what comes next.
The fix
When an AI-driven development plan is suggested, check it against the actual skills required for the next role the person might move into, and adjust priorities if necessary.
Dashboards showing turnover by department, performance by team, or engagement by group look definitive. They reflect one way of measuring and categorising data, not objective reality. When you make strategy decisions based on them without questioning the measurement, you solve the wrong problem.
The fix
For any workforce metric driving a major decision, ask the analytics team to show you the three different ways they could have measured the same thing and how the results would differ.
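To make the point concrete, here is one metric, turnover, computed three defensible ways from the same hypothetical HRIS export; each definition can point a decision in a different direction.

```python
import pandas as pd

# Hypothetical export: everyone employed at any point this year, with flags
# for whether and why they left.
staff = pd.read_csv("headcount_and_leavers.csv")
leavers = staff[staff["left_this_year"]]

# Three reasonable definitions of "turnover" that give different answers.
all_leavers = len(leavers) / len(staff)
voluntary = (leavers["reason"] == "voluntary").sum() / len(staff)
regretted = leavers["regretted"].sum() / len(staff)

print(f"All leavers:       {all_leavers:.1%}")
print(f"Voluntary only:    {voluntary:.1%}")
print(f"Regretted leavers: {regretted:.1%}")
```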
Succession tools often identify high potential based on current performance, speed of advancement, or similarity to successful leaders. They miss people who could grow into leadership through mentoring, whose potential has not yet been tested, or who would be excellent in different kinds of leadership roles.
The fix
Run your AI succession plan past your most trusted senior leaders and ask them who they think is missing, then investigate why the algorithm did not identify those people.
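The comparison itself is trivial to script once you have both lists; what matters is the investigation that follows. A sketch, assuming two hypothetical exports keyed on `employee_id`.

```python
import pandas as pd

# Hypothetical lists: the algorithm's succession picks and the names your
# senior leaders put forward independently.
algo_picks = set(pd.read_csv("ai_succession_list.csv")["employee_id"])
leader_picks = set(pd.read_csv("leader_nominations.csv")["employee_id"])

# People your leaders trust whom the algorithm never surfaced.
missed = leader_picks - algo_picks
print(f"Nominated by leaders but missed by the algorithm: {sorted(missed)}")
```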
When you have data showing that remote workers complete tasks faster or that certain teams have lower engagement, it is tempting to use that insight to make policy. You skip the step of actually understanding what is happening and why, and implement solutions that feel wrong to the people affected.
The fix
When workforce data suggests a problem, spend time with people in affected teams before deciding what to change.
Many tools measure communication frequency, meeting attendance, or survey responses as proxies for experience. These tell you something, but they miss most of what actually shapes how people experience working in your organisation: psychological safety, whether they feel heard, whether they can do their best work, whether they trust their manager.
The fix
Supplement any employee experience metric with regular conversations where you ask people directly what they need from the organisation that they are not getting.