For Recruiters and Talent Acquisition

The Most Common AI Mistakes Recruiters Make

Recruiters are outsourcing critical judgement to AI systems that can scale bias faster than any human screener ever could. The result is pipelines that look diverse on paper but eliminate unconventional candidates before you see them.

These are observations, not criticism. Recognising the pattern is the first step.


Screening and Assessment Mistakes

LinkedIn Recruiter's AI learns from your past hires, then replicates their patterns. If your last five senior hires came from Fortune 500 companies, the algorithm stops surfacing candidates from scaling startups or smaller firms, even when they have stronger relevant skills. You get a pipeline that looks safe but shrinks your talent pool.

The fix

Run parallel searches monthly, one through LinkedIn Recruiter's AI and one with a manual Boolean string, to catch who the algorithm excludes, then adjust your sourcing strategy based on what you find.
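
As an illustration, the manual half of that pairing for a senior product role might look like the string below. The titles and keywords are placeholder assumptions, not a template; swap in the language of the role you are actually filling.

("product manager" OR "product lead" OR "head of product") AND (marketplace OR payments OR onboarding)

Anyone this string surfaces who never appears in the AI results tells you something about what the algorithm has learned to exclude.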

Video screening tools flag candidates based on speech patterns, eye contact, and facial expressions coded as confidence or engagement. A candidate with a stutter, autism spectrum traits, or a different cultural communication style gets downranked automatically. You never see them because the AI rejected them before human review.

The fix

Request the raw scoring data from your video screening tool and randomly audit 20 rejected candidates per month to spot patterns in what gets flagged.

You feed ChatGPT a job description and ask for assessment criteria. The model produces questions that sound neutral but reward the communication style, confidence level, and background assumptions baked into your prompt. Candidates from different industries or education backgrounds answer those questions differently and score lower, not because they are less capable but because the criteria were written for one profile.

The fix

After ChatGPT generates screening questions, have a colleague from a different hiring team rewrite them blind, then compare the two sets and keep only questions that produce the same right answer regardless of phrasing.

Eightfold looks for keyword matches between candidate profiles and job descriptions. A product manager who managed community features gets ranked below someone with the exact job title elsewhere, even though the first candidate has fresher thinking. The algorithm optimises for pattern matching, not potential.

The fix

For any role filled in the last six months, compare Eightfold's top three ranked candidates to your actual hire and work backwards to understand what skills it weighted differently than you did.

Greenhouse surfaces recommendations based on historical data about which candidates you have advanced. If those candidates shared certain educational credentials or career progression patterns, the AI amplifies that bias. Candidates outside the pattern stay buried in your pipeline.

The fix

Each week, pull the bottom 10 per cent of candidates in Greenhouse's ranking and spend ten minutes reading their profiles, tracking what made them invisible to the algorithm.

Job Description and Sourcing Mistakes

You prompt ChatGPT to make the description SEO-friendly for job boards. It fills the text with buzzwords like synergy, best-of-breed, and thought leadership. Candidates who can actually do the work think it is a corporate job that requires political navigation. You attract keyword-matchers and repel genuine problem solvers.

The fix

After ChatGPT drafts the job description, remove any phrase that does not appear in your actual day-to-day conversation with the person currently doing the role.

You filter for women in tech, or graduates from specific universities, or people with titles that match your role perfectly. These filters feel like diversity work. But they often just reproduce the same homogeneous pool in a different format. A woman who moved into tech from operations is invisible to you. A self-taught engineer never appears.

The fix

For your next two open roles, do not use any algorithmic filters on LinkedIn. Instead, source manually using Boolean strings that find people by skills and problems they have solved, then filter by gender and background after you have the full candidate set.
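
A skills-first string deliberately omits titles so career changers and self-taught candidates can surface. The problems named below are illustrative assumptions for a platform engineering role; replace them with the problems your role actually involves.

(built OR scaled OR migrated) AND ("payments infrastructure" OR "fraud detection" OR "event pipeline")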

ChatGPT suggests the obvious places: job boards, LinkedIn, university career fairs, and direct competitor referrals. Those channels produce candidates who look like everyone else because they are designed for obvious candidates. The people who would transform your organisation are not on those channels.

The fix

Before asking ChatGPT for sourcing ideas, list three candidates you hired in the past two years who surprised you or came from unexpected places, then tell the model where each one actually came from.

You search for people with exact job titles or keywords around an emerging technology. The search returns very few results because the talent pool has not consolidated around standard titles yet. You conclude the talent is unavailable. Actually, the people who know the skill work under different titles or in adjacent industries.

The fix

For any role around emerging technology, spend time in relevant online communities or forums yourself first. See what terms people actually use to describe what they do. Use those terms in LinkedIn Recruiter alongside the obvious ones.
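
For example, if the role sits around large language models, pairing the obvious titles with the community's own vocabulary might produce a string like this. The terms are illustrative and date quickly, so refresh them from the forums you visited rather than reusing this list.

("machine learning engineer" OR "AI engineer") OR (LLM OR "retrieval augmented generation" OR "fine-tuning" OR evals)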

You ask staff to refer candidates, then share a job description that is full of jargon and generic responsibility lists. Your team thinks the role is for someone exactly like the last person you hired. Their referrals come from their own networks, which are usually similar to their own backgrounds. Referrals become a diversity bottleneck.

The fix

Before asking for referrals, rewrite the ChatGPT job description in language your own team would use to explain what the role actually involves day-to-day, then send that version to staff.

Candidate Experience and Judgement Mistakes

Candidates get routed through automated replies, chatbots, and AI-mediated assessments before any recruiter engagement. Those with legitimate questions about the role, visa sponsorship, or working arrangements get ignored because their message does not match a template. You lose good candidates before you ever meet them, and your employer brand suffers.

The fix

Route all initial candidate messages to a human recruiter first. Use AI to flag urgent messages or sort by urgency, but do not let AI respond directly.

A HireVue screening marks a candidate down for reserved responses or low energy on video. In a technical role, that candidate might be thoughtful and precise. In a client-facing role, reserved does not mean bad at building relationships. The AI cannot distinguish between communication styles. It just flags variance from whatever pattern it learned.

The fix

For any role where communication matters, conduct at least one live conversation with finalists before deciding, regardless of their video screening score.

You tell yourself you used objective screening tools, so the candidates who made it to your final interview panel must be the best ones. In reality they passed through multiple AI systems that each encoded its own assumptions. By the time they reach you, you have a narrowed, homogeneous group. Your choice feels objective, but the bias happened earlier.

The fix

After you hire someone, audit the full screening path they took through your AI tools. Note which screening steps they almost failed. Then audit a rejected candidate from the same role backwards and see where the paths diverged.

You used to read a CV and spot the person who learned quickly or had solved a hard problem in an unexpected way. Now you rarely see a CV until AI has already assessed it. You have outsourced the skill of recognising potential on paper. When you see finalists, you do not trust your own reading anymore because you assume the algorithm already filtered for the real potential.

The fix

Each month, block two hours to read CVs from candidates rejected by your screening tools. Pick five at random and ask yourself what you would have done if you had seen them first.

Eightfold tells you a candidate scored low on adaptability. HireVue marks another as lacking confidence. You treat these outputs as measurements, like a thermometer reading. They are not. They are the system's interpretation of behaviour. The interpretation can be wrong, and it usually reflects the training data more than the candidate's actual capability.

The fix

Before using any AI assessment result to make a decision, write down what behaviour or evidence the system claims to have measured, then decide if you would reach the same conclusion from that evidence alone.
