By Steve Raju


Cognitive Sovereignty Checklist for Sports Coaches and Trainers

Reading time: about 20 minutes. Last reviewed March 2026.

AI performance tools like Catapult and Hudl give you useful data, but they can push you to train what the algorithm measures rather than what your athletes actually need. Your direct observation of an athlete's psychological state, motivation, and readiness often contradicts what the data dashboard shows. Protecting your coaching judgement means knowing when to trust your eyes and experience over the numbers.

Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.

These are suggestions. Take what fits, leave the rest.


Separate your observation from the data

Record what you see before checking the metrics (beginner)
Watch your athlete's movement quality, decision-making, and effort level during training. Write down your assessment before opening Catapult or Hudl. This keeps your independent judgement from being anchored by the software's numbers.
Name the specific coaching observation that conflicts with the AI metric (beginner)
When Catapult shows high readiness but you observe poor movement mechanics or low motivation, write down exactly what you saw. Being specific prevents you from dismissing your observation as just a feeling.
Ask what the metric is actually measuring (intermediate)
WHOOP tracks heart rate variability, but that is not the same as psychological readiness or confidence. Hudl AI measures movement patterns, but not decision-making quality under pressure. Know the boundary of what each tool measures.
Test your observation against the data by watching the recorded footage (intermediate)
When you disagree with what Second Spectrum says about an athlete's positioning or decision-making, watch the video yourself. The algorithm's interpretation is one perspective, not the truth.
Track your own accuracy over a season (intermediate)
Keep a simple log of decisions you made based on your observation rather than the AI metric. Did those athletes progress? Did they stay healthy? This shows you the real value of your coaching judgement.
Document times when AI metrics led you astray (advanced)
If Catapult recommended increased intensity but you observed the athlete was mentally fatigued and the session went poorly, record it. Recognising patterns of AI error teaches you when to trust yourself.
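For coaches comfortable with a spreadsheet or a few lines of code, the season log suggested above can be kept as a simple CSV file and tallied automatically at the end of the season. This is a minimal illustrative sketch, not part of any Catapult or Hudl product; the column names and outcome categories are assumptions you would adapt to your own programme.

```python
import csv
from collections import Counter

# Each row records: date, athlete, what the AI metric said, what I decided,
# whether I overrode the metric, and how the decision turned out.
FIELDS = ["date", "athlete", "ai_said", "my_call", "overrode", "outcome"]

def log_decision(path, row):
    """Append one coaching decision to the season log (a CSV file)."""
    with open(path, "a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if f.tell() == 0:  # new file: write the header row first
            writer.writeheader()
        writer.writerow(row)

def override_summary(path):
    """Count outcomes for the decisions where I overrode the AI metric."""
    with open(path, newline="") as f:
        rows = [r for r in csv.DictReader(f) if r["overrode"] == "yes"]
    return Counter(r["outcome"] for r in rows)
```

At the end of the season, `override_summary("season_log.csv")` gives a simple tally of how your overrides turned out, which is exactly the evidence the checklist item asks you to gather.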

Protect the coaching relationship from data mediation

Talk to your athlete about their state before mentioning any metrics (beginner)
Ask how they feel, how they slept, what their stress is like. This direct conversation reveals factors that WHOOP or Catapult will never measure. Your relationship with the athlete is where coaching happens.
Design your weekly programme based on your coaching plan, not the AI dashboard (beginner)
Build your training week around what you believe the athlete needs to improve. Then check whether the AI metrics support your plan. Let your expertise lead, not the algorithm's suggestions.
Give feedback to athletes based on what you observed, not what the software says (beginner)
Tell an athlete what you saw in their performance. Do not rely on Hudl AI's automated feedback or Second Spectrum's positioning analysis to replace your coaching voice.
Resist showing athletes every metric dashboard (intermediate)
Too many numbers can make athletes anxious about data they do not understand. Show them only the metrics you have personally verified and can explain.
Hold regular offline conversations about long-term development (intermediate)
Do not let your relationship with the athlete become a conversation about numbers. Talk about their goals, their frustrations, their progress in ways that ChatGPT training plans cannot address.
Identify the coaching skills that cannot be digitised (advanced)
You read an athlete's confidence, recognise when they need pushing versus rest, adjust communication based on personality. These coaching competencies are not measured by any AI tool. Remind yourself of their importance.
Create a decision protocol for when you override the AI recommendation (advanced)
Decide in advance what combination of your observations, athlete feedback, and contextual factors would lead you to reject an AI suggestion. This makes your reasoning transparent and defensible.
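If you want the override protocol above to be explicit rather than informal, it can be written down as a simple rule check. The sketch below is illustrative only: the three signals and the two-of-three threshold are assumptions, not a validated protocol, and you would replace them with the observations and thresholds you agree in advance.

```python
def should_override(observed_mechanics_poor: bool,
                    athlete_reports_fatigue: bool,
                    contextual_red_flag: bool) -> bool:
    """Return True when the pre-agreed threshold for overriding the AI
    recommendation is met: at least two of three independent signals
    (your observation, the athlete's own report, and context such as
    sleep, stress, or selection anxiety) disagree with the metric."""
    signals = [observed_mechanics_poor, athlete_reports_fatigue,
               contextual_red_flag]
    return sum(signals) >= 2
```

Writing the rule down this plainly, whether in code or on paper, is what makes your reasoning transparent and defensible when you depart from the dashboard.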

Maintain control of your training design and athlete wellbeing

Do not let ChatGPT-generated training plans replace your expertise (beginner)
A language model can assemble generic training principles, but it does not know your athlete's injury history, their learning style, or what worked for your last 10 similar players. Use the AI output as raw material, not as your final programme.
Cross-check AI wellbeing recommendations against the athlete's actual behaviour (beginner)
When WHOOP suggests the athlete is overreaching, check their sleep quality, mood, concentration in training, and confidence. One sensor reading is not a diagnosis.
Define which athlete wellbeing factors you will not delegate to the algorithm (beginner)
Mental health, confidence, motivation, sense of belonging to the team. These are too important to let an AI system handle. You must assess them directly.
Require manual verification before using AI data to cut an athlete from a session (intermediate)
If Catapult says an athlete is at injury risk or too fatigued, you must personally assess them before removing them from training. Do not let the algorithm make that coaching decision.
Compare Catapult data across multiple athletes to spot systematic bias (intermediate)
If the system consistently underrates certain athletes' load or performance, it may have a blind spot. Your own comparison and adjustment are necessary.
Set a clear limit on how much training design you will automate (advanced)
Decide what percentage of your weekly programme you will design yourself versus what you will generate with AI help. Keep the high-stakes decisions in your hands.


Prompt Pack

Paste any of these into Claude or ChatGPT to pressure-test your own judgement. They work best when you respond honestly before reading the AI reply.

Test your observation against the data

I have just assessed an athlete and I want to check my independent judgement before I look at any performance data. Ask me questions about what I observed in training today: movement quality, attitude, communication, and any concerns I had. Then I will compare my assessment to the data.

Understand what your AI tool is actually measuring

I use [name the tool] regularly in my coaching. Help me map exactly what it measures, what it infers from those measurements, and, most importantly, what aspects of athlete readiness and performance it cannot capture at all.

Design a protocol for overriding AI recommendations

I want to create a clear protocol for when I will override what the AI performance system recommends. Ask me questions about the situations where I have already done this, what my reasoning was, and what factors I consistently find the algorithm misses for my athletes.

Audit your athlete relationship quality

I want to examine whether my relationships with my athletes have changed since I started using AI performance tools. Ask me questions about how I communicate with athletes about their state, how they talk to me, and whether data is mediating conversations that used to be more direct.

Rebuild your unassisted session planning

I want to design a training session for [athlete or group description] based purely on my coaching knowledge and direct observation, without consulting any AI dashboard first. Ask me questions that help me build the session from my own expertise, then I will check how the AI metrics would have influenced my decisions.


Reading List

Five books that give this topic the depth it deserves. Each one is genuinely worth reading, not just citing.

1. Bounce, by Matthew Syed
The science of deliberate practice and the development of expert performance, a useful frame for understanding what data tools can and cannot tell you about athlete development.

2. Thinking, Fast and Slow, by Daniel Kahneman
The cognitive biases that make coaching judgement unreliable, biases that AI data can reinforce rather than correct if you do not understand how you read information.

3. Range, by David Epstein
Why broad, flexible thinking and adaptive skills matter more than narrow optimisation, with direct implications for how you think about athlete development beyond metrics.

4. Mindset, by Carol Dweck
How a growth orientation in athletes develops through the right kind of challenge and feedback. The part of coaching that no performance algorithm can replicate.

5. Cognitive Sovereignty, by Steve Raju
A framework for protecting independent coaching judgement as AI performance tools become embedded in every level of sport.




Common questions

Should sports coaches trust AI performance data over their own observations?

No. And the question itself reveals the problem with how AI tools are often introduced. Catapult, Hudl, and WHOOP give you one dimension of athlete state, measured in the specific variables the sensors capture. Your direct observation captures psychological readiness, motivation, and contextual factors that no sensor measures. The data should inform your judgement, not replace it.

How accurate are AI sports analytics tools like Catapult and Hudl?

These tools are highly accurate at measuring what they measure: GPS tracking, heart rate variability, movement patterns, video-based positioning. The limitation is not the accuracy; it is the boundary. Catapult tells you physical load. It does not tell you whether an athlete who slept badly, had a difficult week at home, or is anxious about selection is truly ready. That part is yours to read.

Can AI replace coaching judgement?

Not in any meaningful sense. Coaching is fundamentally about relationship, motivation, and the real-time reading of what an athlete needs in a specific moment. AI tools can surface patterns across large datasets and alert you to trends you might have missed. But the decision about whether to push or protect, challenge or support, based on everything you know about this athlete today, is irreducibly human.

What are the risks of data-driven coaching?

Training what the algorithm measures rather than what the athlete needs. Over-relying on readiness scores in ways that undermine athlete self-awareness and autonomy. Creating an environment where athletes focus on metrics rather than their actual physical and mental state. And developing a false sense of certainty that quantified data is more reliable than experienced coaching observation.

How do coaches maintain independent judgement when using AI performance tools?

The most effective habit is to form your own assessment before looking at the data. Watch the warm-up. Talk to the athlete. Write down what you observe. Then check the AI metrics and use the comparison to calibrate both your judgement and your understanding of what the tool does and does not capture.

The Book — Out Now

Cognitive Sovereignty: How To Think For Yourself When AI Thinks For You

Read the first chapter free.
