For CHROs and People Leaders

The Most Common AI Mistakes Chief Human Resources Officers Make

CHROs often treat algorithmic recommendations as objective truth rather than as tools requiring human review. This outsources the judgement that defines good HR work and leaves organisations vulnerable to systematic bias in hiring, promotion, and development decisions.

These are observations, not criticism. Recognising the pattern is the first step.

Hiring and Talent Acquisition Mistakes

AI screening tools flag candidates by pattern matching to historical high performers, but your organisation's best recent hires often deviate from those patterns. You lose visibility into why someone was rejected and cannot challenge the algorithm when it screens out people you would have hired.

The fix

Require hiring managers to review video or notes from every candidate the tool marks as below threshold before final rejection.

LinkedIn Talent Insights shows you where competitors are hiring and what skills are trending, but it cannot tell you what skills your organisation actually needs or where hidden talent sits. You end up chasing the same candidate pools as everyone else instead of building differentiated talent sources.

The fix

Use LinkedIn Talent Insights to identify gaps in your current hiring, then pair it with conversations with hiring managers about what hidden competencies they value but cannot articulate.

Workday AI's ranking decides how much to value years of experience versus certifications versus cultural fit markers, but you never see these weightings or whether they align with what your organisation actually prioritises. A poorly calibrated weighting shapes your entire hiring pipeline.

The fix

Before using Workday AI's ranking for any hiring round, ask your system administrator to show you exactly how the tool weights each criterion and manually adjust until the top five ranked candidates match your hiring manager's intuition.
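
Workday AI will not expose its model in a script, but the underlying point is easy to demonstrate. In this deliberately simplified sketch, the candidate names, attributes, and both weight vectors are invented for illustration; the same four candidates produce two different shortlists depending purely on how the criteria are weighted:

```python
# Toy example, not Workday AI's actual model: the same candidate pool,
# ranked under two different criterion weightings.
candidates = {
    "Amara": {"experience": 12, "certifications": 1, "culture_fit": 6},
    "Ben":   {"experience": 3,  "certifications": 4, "culture_fit": 9},
    "Chen":  {"experience": 7,  "certifications": 2, "culture_fit": 8},
    "Dana":  {"experience": 2,  "certifications": 5, "culture_fit": 7},
}

def rank(weights):
    """Return candidate names ordered by weighted score, best first."""
    scores = {name: sum(weights[k] * v for k, v in attrs.items())
              for name, attrs in candidates.items()}
    return sorted(scores, key=scores.get, reverse=True)

# A default calibration that leans heavily on tenure...
print(rank({"experience": 0.7, "certifications": 0.2, "culture_fit": 0.1}))
# ['Amara', 'Chen', 'Ben', 'Dana']

# ...versus one that values current skills and fit.
print(rank({"experience": 0.2, "certifications": 0.3, "culture_fit": 0.5}))
# ['Ben', 'Chen', 'Amara', 'Dana']
```

If a reweighting exercise reorders a shortlist this dramatically on four candidates, it is doing the same thing silently across your entire pipeline.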

Tools report that your pipeline is more diverse by gender or ethnicity, but algorithms can be biased in ways diversity reports do not catch, such as screening out people with career gaps, neurodivergent communication styles, or non-traditional educational backgrounds. You may have reduced one type of bias while introducing another.

The fix

Conduct an annual audit of rejected candidates by looking at actual rejection reasons (not just diversity counts) and flag any patterns that correlate with protected characteristics or disadvantaged groups.
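
A concrete starting point for that audit is the four-fifths (80%) rule from US adverse-impact guidance: flag any group whose selection rate falls below 80% of the most-selected group's rate. The sketch below assumes a hypothetical ATS export with one row per candidate and columns named group and passed_screen; adapt the names to your own data.

```python
import pandas as pd

# Hypothetical export: one row per candidate, with the demographic
# grouping and whether they passed the AI screen (0/1).
df = pd.read_csv("screening_outcomes_2024.csv")

# Selection rate per group: share of applicants who passed the screen.
rates = df.groupby("group")["passed_screen"].mean()

benchmark = rates.max()  # rate of the most-selected group
for group, rate in rates.items():
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} [{flag}]")
```

Passing this check does not prove the tool is fair; the qualitative review of actual rejection reasons (career gaps, neurodivergent communication styles, non-traditional backgrounds) still matters. But failing it is an unambiguous signal.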

Recruiters begin to treat tool suggestions as non-negotiable rather than starting points for their own judgement. Over time, they stop asking why someone is recommended and stop trusting their own assessment of candidate potential.

The fix

Include in recruiter performance reviews a requirement to document at least one candidate per quarter that they advanced despite the tool recommending rejection, and require them to explain their reasoning to their manager.

Performance Management and Development Mistakes

Workday AI identifies employees with declining productivity metrics or low engagement scores, but it cannot see that someone is caring for a dying parent, recovering from burnout, or working in a team with a toxic manager. You may act on algorithmic signals that a single human conversation would have explained away.

The fix

When Workday AI flags an employee as at risk, make the manager's conversation with that employee your first data point, not your second one, and require the manager to document what they learned before any HR intervention.

You start tracking metrics like time in applications, emails sent, and calls logged because these feed Workday AI's analytics. But your best people may spend less time in the system precisely because they are efficient, and collaborative work becomes invisible because it happens in Slack or meetings.

The fix

For each metric Workday AI uses to assess performance, ask your top performers whether that metric correlates with their own sense of doing good work, and remove or reweight any that do not.
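
If you want more than anecdote, the same question can be asked of the data. A minimal sketch, assuming a hypothetical export where each top performer rated the quality of their own work each month alongside the metrics tracked for them (all column names and thresholds here are invented):

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical export: one row per top performer per month.
df = pd.read_csv("top_performer_metrics.csv")

tracked_metrics = ["time_in_app_hours", "emails_sent", "calls_logged"]
for metric in tracked_metrics:
    # Rank correlation between the tracked metric and self-rated quality.
    rho, p = spearmanr(df[metric], df["self_rated_quality"])
    # The 0.3 cut-off is arbitrary; set your own evidence bar.
    verdict = "keep" if rho > 0.3 and p < 0.05 else "reweight or drop"
    print(f"{metric}: rho={rho:.2f}, p={p:.3f} -> {verdict}")
```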

You prompt ChatGPT with an employee's Workday data and it generates a balanced feedback statement that sounds professional. But the feedback becomes generic and loses the specific observations that help people understand what they did well or where to improve. Employees sense the feedback is not from their manager and discount it.

The fix

Use ChatGPT only to structure feedback you have already written or to help managers organise their thoughts before a conversation, never to generate the feedback itself.

You invest in ChatGPT training and prompt engineering courses, believing you are future-proofing your employees. But you are not protecting the analytical thinking, written communication, or creative problem solving that AI is replacing. Employees become dependent on the tool.

The fix

For every AI skills programme, pair it with a programme teaching the underlying skills that AI automates (writing, analysis, decision making without data), and measure whether employees can still perform these skills without the tool.

You have configured Eightfold AI or similar tools to identify high-potential employees and succession candidates, but these tools cannot assess whether someone has the character, judgement, or vision to lead. Promotion decisions become about pattern matching to past high performers rather than betting on people who could grow into something new.

The fix

After Eightfold AI generates a high-potential list, ask your senior leaders to independently nominate three people they would bet on who do not appear on the list, and investigate why the tool missed them.

Organisational Strategy and Capability Mistakes

You see that Workday AI can handle certain HR transactions automatically, so you eliminate the roles that handled them. But you leave yourself with no one who understands context when the algorithm fails or recommends something harmful. Your organisation becomes fragile.

The fix

Before eliminating any HR function that an AI tool can handle, identify what human judgement that function required and ensure someone else in your team now owns responsibility for catching algorithmic mistakes.

You implement HireVue or Workday AI and expect administrators to run them according to vendor documentation. These people have no training in bias detection, no authority to question recommendations, and no relationship with hiring managers or department heads. Problems compound before anyone notices.

The fix

Assign one senior HR leader direct responsibility for auditing every AI tool you use monthly, with explicit authority to pause the tool if they identify systematic bias or poor recommendations.

You celebrate that Workday AI reduced time to hire from 45 days to 25 days, or that algorithmic screening reduced recruiter workload by 30%. But you have not measured whether the people you hired perform better, whether your retention improved, or whether your leadership bench strengthened. You may be moving faster in the wrong direction.

The fix

For each AI tool, establish a leading indicator (candidate quality, manager satisfaction with hires, diversity of backgrounds) and a lagging indicator (six-month performance rating, two-year retention, promotion rate), and review both quarterly.
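
Even a lightweight scorecard makes the pairing explicit. A sketch of the quarterly review, with placeholder tools, indicators, values, and targets throughout:

```python
# Placeholder data; substitute your own tools, indicators, and targets.
# Each tool pairs one leading and one lagging indicator.
scorecard = {
    "Workday AI screening": {
        "leading": ("manager satisfaction with hires (1-5)", 4.2, 4.0),
        "lagging": ("two-year retention rate", 0.76, 0.80),
    },
    "HireVue video screen": {
        "leading": ("diversity of candidate backgrounds (index)", 0.64, 0.60),
        "lagging": ("six-month performance rating (1-5)", 3.4, 3.5),
    },
}

for tool, indicators in scorecard.items():
    for kind, (name, actual, target) in indicators.items():
        status = "on track" if actual >= target else "investigate"
        print(f"{tool} | {kind} | {name}: {actual} vs target {target} -> {status}")
```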

HireVue and Eightfold AI publish studies showing their tools are bias tested, but bias testing happens on population data, not your data. Your specific implementation may encode the biases present in your hiring history or your organisation's current team composition.

The fix

Request your vendors' bias testing methodology and data, and conduct your own post-implementation audit using your actual hiring outcomes from the first six months the tool is live.
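
For the post-implementation audit itself, a first-pass statistical check is whether screen outcomes differed by group more than chance would explain; a chi-square test of independence is a standard tool for this. The counts below are placeholders for your own first six months of data:

```python
from scipy.stats import chi2_contingency

# Rows: demographic groups; columns: passed screen, rejected by screen.
# Placeholder counts; use your own first six months of outcomes.
observed = [
    [118, 382],  # group A: 500 applicants, 23.6% passed
    [ 64, 436],  # group B: 500 applicants, 12.8% passed
]

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.1f}, p = {p:.4g}")
if p < 0.05:
    print("Pass rates differ by group beyond chance: escalate to a full audit.")
```

A significant result does not tell you why the gap exists, only that the vendor's population-level bias testing did not rule it out for your data.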

You implement HireVue or Eightfold AI for screening because the tool reduces workload and the business case is clear. But you have not involved legal, who should understand employment law risks around adverse impact, documentation, and candidate appeal rights.

The fix

Before any algorithmic hiring tool goes live, require written confirmation from general counsel or your employment law advisor that your implementation meets legal standards and documents how candidates can appeal or request human review.
