40 Questions Recruiters Should Ask Before Trusting AI Screening Decisions
AI screening tools can embed hiring bias before a human ever reads a CV. When you rely on algorithmic sourcing, you risk missing the unconventional candidates who often become your best performers. The questions you ask before acting on AI outputs determine whether your hiring gets smarter or just faster.
These are suggestions. Use the ones that fit your situation.
Questions about candidate screening and assessment
1. When HireVue or similar tools flag a candidate as low-fit, what specific behaviours or answers triggered that score?
2. Has the AI tool been tested on candidates hired by your organisation in the past five years to see if it would have rejected your best performers?
3. Does the screening tool treat career gaps the same way across all demographic groups, or does it penalise certain groups more heavily for the same employment history?
4. When assessing soft skills like leadership or communication, is the tool measuring actual capability or familiarity with specific speech patterns and vocabulary?
5. What happens to candidates who answer interview questions in non-native English or with different cultural communication norms? Are they downscored?
6. If a candidate was rejected by the AI tool, can you pull their full application to manually review what the algorithm missed?
7. Does the screening tool weight recent experience and current job title more heavily than demonstrated ability, even when that ability was built in a different kind of role?
8. Are candidates from non-traditional backgrounds (bootcamp graduates, career-switchers, self-taught specialists) being filtered out before reaching human review?
9. When the tool scores candidate potential, does it use the same criteria for internal promotions as for external hires, or does familiarity bias change the assessment?
10. What percentage of candidates rejected by AI screening in your last hiring cycle would have passed an initial human phone screen?
Questions about AI-generated job descriptions and sourcing
11. When ChatGPT or your platform wrote the job description, did it copy language from similar roles that may have attracted the wrong candidates before?
12. Does the AI-written job description emphasise years of experience and specific tool names, or does it describe the actual problems the role solves?
13. If you used Eightfold or LinkedIn Recruiter's AI to find candidates similar to previous successful hires, could that simply replicate the same demographic profile?
14. When the sourcing algorithm returns candidates, are they ranked by algorithmic fit or by your organisation's actual hiring success patterns?
15. Does the AI-written job description inadvertently exclude candidates from underrepresented groups by using language that signals they would be a poor cultural fit?
16. Has anyone reviewed the job description without AI involvement to spot jargon or requirements that are nice-to-have rather than essential?
17. When AI sources candidates, is it only showing you people who match your current team composition, or is it also finding people who fill skills gaps?
18. If a candidate's background looks unconventional on paper, does the AI tool still allow human recruiters to see them, or are they buried in the results?
19. Does the AI sourcing tool weight location, university, or previous employer as strongly as actual capability for the role?
20. When the AI suggests you reach out to passive candidates, does it have visibility into your actual hiring outcomes to know whether those suggestions work?
Questions about bias and systemic impact
21. Has your organisation tested the AI screening tool specifically to see whether it rejects candidates with protected characteristics at different rates?
22. When the tool learns from past hiring data, is it learning from your best decisions or from your historical biases?
23. If women, candidates of colour, or candidates with disabilities were underrepresented in your past hires, will the AI tool now systematically reject more of them?
24. Does your AI vendor publish transparency reports showing rejection rates by demographic group, or do they keep this data hidden?
25. When you reject a candidate based on AI feedback, are you documenting why? Could you defend that decision to a lawyer if challenged?
26. Has the AI tool been trained on hiring data from your industry, your organisation specifically, or just general internet data that may not reflect your actual talent needs?
27. If the AI tool recommends certain candidates and rejects others, who decided what makes a good fit? Did hiring managers, HR, and diverse team members all agree?
28. Are you using AI screening to make final rejection decisions, or only to rank candidates for human review?
29. When the algorithm suggests candidate profiles similar to past hires, have you checked whether it is simply cloning your existing team?
30. Does your organisation have a process to catch and correct AI screening errors before they systematically reject entire candidate groups?
Questions about your recruiting instinct and candidate experience
31. When you read a CV that looks unconventional, can you still interview that candidate, or does the AI tool prevent them from reaching you?
32. Have you noticed gaps between what the AI tool rates as high-potential and which candidates actually succeed once hired?
33. If a candidate has a skill that is not listed on their CV but came up in conversation, can you add them back into consideration, or has the AI already rejected them?
34. When candidates receive automated rejection emails from AI tools before a human reads their application, how is that affecting your employer brand?
35. Are you spending less time actually talking to candidates and more time interpreting AI scores and dashboards?
36. If you feel a candidate has genuine potential but the AI tool ranked them low, do you have permission to override the score and interview them anyway?
37. Have you tracked how many of your best recent hires would have been rejected by the AI tools you now use?
38. When the AI tool raises a red flag about a candidate's background, does it explain what the flag is, or do you just see a low score?
39. Are candidates able to understand why they were rejected, or does the AI screening happen in a black box that even you cannot explain?
40. Has anyone measured whether AI screening tools are making your hiring faster, or just making it feel faster while filtering out valuable candidates?
How to use these questions
Before implementing any AI screening tool, run it against your organisation's past five years of successful hires. If it would have rejected more than 5 per cent of them, question whether it is learning from your actual success patterns or from systemic bias.
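A minimal sketch of that back-test, assuming a hypothetical `score_candidate` function that stands in for whatever scoring API your vendor exposes (real tools will differ):

```python
def backtest_rejection_rate(past_hires, score_candidate, cutoff):
    """Return the fraction of known-good past hires the tool would reject."""
    rejected = sum(1 for hire in past_hires if score_candidate(hire) < cutoff)
    return rejected / len(past_hires)

# Illustrative data: three proven hires with stand-in tool scores.
past_hires = [
    {"name": "A", "score": 0.9},
    {"name": "B", "score": 0.4},
    {"name": "C", "score": 0.8},
]

rate = backtest_rejection_rate(past_hires, lambda h: h["score"], cutoff=0.5)
if rate > 0.05:  # the 5 per cent threshold suggested above
    print(f"Warning: tool would have rejected {rate:.0%} of proven hires")
```

The point is not the code itself but the discipline: the tool must be scored against people you already know succeeded, not against its own training data.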
Always keep the option for human override. If a recruiter's instinct conflicts with an AI score, interview the candidate. Track how those interviews go. You are teaching yourself when to trust AI and when to trust your judgement.
Request demographic breakdowns of rejection rates from your AI vendor. If they will not provide this data, that is a warning sign that the tool may not have been tested for bias.
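One common way to read such a breakdown is the "four-fifths" rule of thumb from US adverse-impact analysis: a group whose selection rate falls below 80 per cent of the highest group's rate warrants investigation. A sketch with illustrative numbers (group names and counts are hypothetical):

```python
def selection_rates(outcomes):
    """outcomes maps group -> (passed_screen, total_applicants)."""
    return {g: passed / total for g, (passed, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate is below `threshold` times the best rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Illustrative vendor data: 50/100 of group_a pass screening, 30/100 of group_b.
outcomes = {"group_a": (50, 100), "group_b": (30, 100)}
flags = adverse_impact_flags(outcomes)
# group_b's rate (0.30) is 60% of group_a's (0.50), below the 80% threshold.
```

Falling below the threshold is a flag to investigate, not legal proof of bias; but a vendor who cannot supply the inputs for this arithmetic cannot claim the tool has been tested.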
When writing job descriptions, write them yourself first without AI help. Then use AI to check if the language is clear and jargon-free. Reverse the order, and you risk copying biased language from similar roles.
Set a rule: AI tools rank and filter. Humans decide. Never let an algorithm make a final rejection decision without a recruiter reviewing the candidate first.