By Steve Raju


Cognitive Sovereignty Checklist for HR and People Management

About 20 minutes to read. Last reviewed March 2026.

When AI tools make the first pass at candidates or shape performance ratings, your team stops seeing the full picture of who people are. You inherit the biases baked into training data while losing the ability to spot talent that doesn't match a statistical profile. The organisations that keep human judgement in control are the ones that still know why they hired someone.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Protecting Hiring Judgement Against Algorithmic Screening

Audit what your AI hiring tool actually rejected and why (beginner)
Pull a sample of candidates your system flagged as unqualified in the last month. Read their CVs yourself. Note when the tool rejected someone you would have interviewed. This shows you what patterns your algorithm favours that you might not agree with.
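If your applicant tracking system can export rejection data, a short script makes this audit repeatable. Here is a minimal sketch in Python, assuming a hypothetical CSV export named ai_rejections_last_month.csv with candidate_id, ai_score, and rejection_reason columns; your own system's export format will differ.

```python
import pandas as pd

# Hypothetical ATS export of last month's AI-screened rejections.
rejections = pd.read_csv("ai_rejections_last_month.csv")

# Sample at random so the audit is not cherry-picked; cap at the pool size.
sample = rejections.sample(n=min(20, len(rejections)), random_state=42)

# Drop the AI score so the reviewer reads each CV blind.
sample.drop(columns=["ai_score"]).to_csv("manual_review_queue.csv", index=False)
print(f"{len(sample)} candidates queued for blind human review")
```

Sampling at random, rather than hand-picking borderline cases, is what makes the audit honest: you see what the algorithm rejects on an ordinary day, not what you went looking for.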
Set clear rules for which hiring decisions the algorithm can make alone (beginner)
Decide upfront that screening for basic qualifications (degree, years in role, required licence) can be automated, but ranking, fit assessment, and final selection must involve a human recruiter or hiring manager. Write this down in your hiring policy so it stays true when time pressure hits.
Train recruiters to distrust high-confidence AI scoring (intermediate)
When HireVue or Eightfold scores a candidate with high certainty, whether high or low, that confidence signal comes from pattern-matching, not understanding. Teach your team that an AI score of 92 per cent does not mean the person is objectively stronger. It means they closely match the training data.
Require hiring managers to meet candidates the system ranked low (intermediate)
Pick one candidate per hiring round that your AI tool ranked below your usual interview threshold. Meet them anyway. Track whether you ever hire them and what they go on to contribute. This keeps your hiring managers honest about what the algorithm might be filtering out.
Document candidate sources by outcome, not just by tool (intermediate)
Track which candidates came from your AI-filtered pool versus which came from referrals, university partnerships, or direct applications. Compare hiring rates, retention at 18 months, and promotion rates by source. If your AI sourcing produces worse long-term outcomes, you have proof to change your process.
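If your HRIS can export candidates with a source label and outcome flags, the comparison is a few lines of analysis. A minimal sketch in Python; the file candidates_by_source.csv and its columns (source, hired, retained_18m, promoted) are hypothetical placeholders for whatever your own system exports.

```python
import pandas as pd

# Hypothetical export: one row per candidate, with source and outcome flags.
candidates = pd.read_csv("candidates_by_source.csv")

# Hiring rate across all candidates, by source.
print(candidates.groupby("source")["hired"].mean().round(2))

# Retention and promotion rates among those actually hired.
hires = candidates[candidates["hired"] == 1]
print(hires.groupby("source").agg(
    hires=("hired", "size"),
    retained_18m=("retained_18m", "mean"),
    promoted=("promoted", "mean"),
).round(2))
```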
Create a hiring review meeting before offers go out (beginner)
Before you make an offer, have one person who has not yet formed an opinion read the full candidate file fresh. They flag any candidate where the AI score disagrees with the interview notes, so the disagreement is caught before bias becomes a hire.
Review your AI tool's training data and test results for demographic bias (advanced)
Ask your vendor (LinkedIn, HireVue, Workday) to share which groups their system was tested on and what their accuracy rates are by gender, ethnicity, and age group. If they refuse, you have your answer about whether they know they have a problem.

Restoring Context to Performance Management

Keep performance ratings a human decision, not an algorithmic output (beginner)
If BambooHR or Workday AI suggests a rating based on metrics, treat it as one input only. The rating itself must come from a manager who knows the person, their project constraints, team changes, and what they actually delivered. Do not publish algorithmic ratings as final scores.
Identify the data your system ignores about each person (beginner)
Write down what your performance AI measures (tasks completed, hours logged, emails sent, projects closed). Then list what it cannot see (mentoring, knowledge sharing, how they handled a crisis, their growth from the year before). Share this list with managers so they remember the work that matters but is invisible to the algorithm.
Set a rule that managers must write why they rated someone, not just enter a score (beginner)
Make written justification mandatory in your performance system. If a manager writes only a number with no explanation, the review goes back for revision. This forces the person doing the rating to think about their own bias and not rely on what the algorithm suggested.
Review performance ratings for algorithmic drift within teams (intermediate)
Pull performance data by department and manager. If one team's ratings cluster differently from others, or if ratings for similar work vary wildly, investigate whether managers are over-relying on AI scores or letting the system override their judgement. Call it out directly.
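One way to spot drift without reading every review is to compare each manager's rating distribution to the company baseline. A minimal sketch in Python, assuming a hypothetical performance_ratings.csv with department, manager, and a 1-to-5 rating column; the thresholds are starting points to tune, not standards.

```python
import pandas as pd

# Hypothetical export: one row per rating.
ratings = pd.read_csv("performance_ratings.csv")

baseline_mean = ratings["rating"].mean()
baseline_std = ratings["rating"].std()

by_manager = ratings.groupby(["department", "manager"])["rating"].agg(
    ["mean", "std", "count"]
)

# Flag managers whose average drifts more than one company-wide standard
# deviation from baseline, or whose ratings barely vary at all (a sign the
# scores may be copied straight from the system's suggestion).
flagged = by_manager[
    (by_manager["mean"].sub(baseline_mean).abs() > baseline_std)
    | (by_manager["std"] < 0.25)
]
print(flagged)
```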
Create a formal challenge process for algorithmic performance flags (intermediate)
If an employee is flagged as low-performing by your system but their manager disagrees, set up a structured review with HR and the manager. Document what the algorithm saw and why the human assessment differs. Make space for that disagreement to matter.
Compare individual performance ratings to team outcomes (advanced)
If your AI rates someone as low-performing but their team ships projects on time, or their department has high retention, that gap tells you the algorithm is measuring the wrong thing. Use this evidence to push back on algorithmic performance assessments.
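To find these gaps systematically, join individual ratings to team-level delivery data. A minimal sketch in Python; both files and all column names (ratings.csv, team_outcomes.csv, on_time_delivery) are hypothetical, standing in for whatever your own systems export.

```python
import pandas as pd

ratings = pd.read_csv("ratings.csv")        # employee_id, team, rating (1-5)
teams = pd.read_csv("team_outcomes.csv")    # team, on_time_delivery (0-1)

merged = ratings.merge(teams, on="team")

# People the algorithm rates poorly on teams that demonstrably deliver.
gaps = merged[(merged["rating"] <= 2) & (merged["on_time_delivery"] >= 0.9)]
print(gaps[["employee_id", "team", "rating", "on_time_delivery"]])
```

Every row in that output is a case where the individual score and the team's results disagree, which is exactly the evidence you need when pushing back on an algorithmic assessment.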

Keeping Human Relationships and Trust Real

Audit which conversations in your organisation are now AI-mediated (beginner)
Map where AI tools now handle communication that used to be human. Chatbots for benefits questions, automated feedback, templated offer letters, algorithm-decided shift schedules. Pick one category and return it to human contact for a pilot period. Measure whether trust improves.
Require managers to deliver difficult feedback in person, not through a system message (beginner)
When someone is put on a performance improvement plan or told they did not get a promotion, a manager must have that conversation face to face or over a video call. If the conversation is mediated through a platform message or an automated summary, the relationship breaks and the employee is left with no one who can explain why.
Create a human escalation path for employees who distrust an algorithmic decision (intermediate)
If an employee believes their hiring rejection, performance rating, or shift assignment was unfair, they should be able to request a human review quickly and without penalty. Make this path simple and advertise it. Use these requests as signals that your algorithm is not trusted.
Have managers reintroduce themselves to direct reports quarterly (beginner)
Schedule a 30-minute conversation where a manager and employee discuss what the manager is learning about the employee, what the employee needs from the manager, and what the data systems are missing. This keeps the human relationship primary and stops the algorithm from becoming the relationship.
Track when algorithmic decisions drive good people to leave (intermediate)
When someone resigns, ask them in the exit interview whether they felt their work was fairly assessed and whether they trusted the systems used to evaluate them. Flag exits where employees specifically mention algorithms, automation, or feeling like data points. These are warning signs, and a simple keyword scan over exit notes (sketched below) can surface them.
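A minimal sketch of that keyword scan in Python, assuming a hypothetical exit_interviews.csv with an exit_notes free-text column; the keyword list is illustrative and should grow to match the language your own employees actually use.

```python
import pandas as pd

# Hypothetical export of exit-interview notes.
exits = pd.read_csv("exit_interviews.csv")  # employee_id, exit_notes

# Terms worth flagging; extend with phrases from your own interviews.
pattern = "|".join(["algorithm", "automated", "ai score", "data point"])

flagged = exits[exits["exit_notes"].str.lower().str.contains(pattern, na=False)]
print(f"{len(flagged)} of {len(exits)} exits mention algorithmic assessment")
```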
Build a case file of algorithmic mistakes in your organisation (intermediate)
When a hiring algorithm rejected someone who would have been great, or a performance system flagged someone who then won a promotion, document it. Keep these stories visible. They remind your team why human judgement still matters and give you evidence when a vendor claims their system is unbiased.

