By Steve Raju
For Therapists and Mental Health Professionals
Cognitive Sovereignty Checklist for Therapists and Counsellors
About 20 minutes
Last reviewed March 2026
AI note-taking tools and client matching systems can pull your attention away from the therapeutic relationship when you need it most. Your clinical judgement develops through attuned presence in the room. When AI patterns replace your listening, you lose the capacity to notice what clients are avoiding or what their body is telling you underneath their words.
Tool names in this checklist are examples. If you use different software, the same principle applies. Check what is relevant to your workflow, mark what is not applicable, and ignore the rest.
These are suggestions. Take what fits, leave the rest.
Protect Your Clinical Observation
Record your own clinical impression before reading the AI summary (beginner)
Write down what you noticed about the client's tone, hesitations, and what remained unsaid before you check what Eleos or Nabla generated. This keeps your own observational skill sharp and gives you a baseline to spot where the AI missed the emotional texture.
Notice when AI summaries feel neat but your gut felt messy (intermediate)
AI tools excel at pattern matching across many sessions. Your clinical instinct picked up on contradictions, shifts in energy, or defensive moves that may not fit a tidy summary. Trust the feeling that something was off even if the AI found a plausible explanation.
Review client matching recommendations only after forming your own view (beginner)
When tools like Heliia suggest which clients might benefit from certain interventions, you have already assessed their readiness based on relationship, pacing, and what they can tolerate. Compare your thinking to the tool's, not the other way around.
Ask yourself what the AI could not have heard (intermediate)
Woebot and similar tools work from pre-built decision trees. They cannot pick up the specific way your client made a joke that signalled shame, or the family history they mentioned in passing three sessions ago. After each session, note one thing only you would have caught.
Test AI interpretations against your relationship history with the client (advanced)
An AI system may categorise a client's behaviour as avoidant based on this session alone. You know whether that pattern is new, or whether today they were protecting something specific you recognised together last month. Your longitudinal knowledge is your edge.
Maintain a written clinical observation log separate from AI notes (intermediate)
Keep a brief private record of your clinical impressions, hypotheses about what is happening below the surface, and your assessment of therapeutic progress. This trains your observation muscle and gives you a record that is purely your thinking.
Decide in advance which clinical judgements you will never delegate to AI (advanced)
Identify the calls that are core to your practice. Assessment of suicide risk? Detecting trauma responses? Gauging readiness for confrontation? State these clearly so you do not unconsciously start to rely on a tool to flag what you should be feeling in the room.
Defend Time for Therapeutic Presence
Measure how much session time you actually spend writing versus relating (beginner)
Before adopting an AI note tool, count how many minutes you spend on documentation in a typical session. If you are typing during client speech, AI will not give you back the presence you have lost. Calculate whether the time saved is worth the cognitive shift.
Set rules for when the note-taking tool is off (beginner)
Decide which moments in a session must be AI-free. The first five minutes of reconnection? The moment a client discloses something difficult? Early trauma processing? Using ChatGPT to draft notes after the session instead of during it protects your face-to-face attention.
Resist the pressure to have perfect notes ready for supervision immediately (intermediate)
AI tools can generate session summaries instantly, but your own reflection takes time. Write notes by hand or type them after the session ends. This gives you space to think and prevents the tool from becoming the primary record of what happened.
Schedule specific time to review AI-generated suggestions outside of working hours (beginner)
If you use tools like Nabla for intervention ideas or Heliia for case conceptualisation, review them in your planning time, not between sessions. This stops you from drifting into tool consultation while you should be preparing mentally for the next client.
Track whether AI tools are reducing your confidence in real-time clinical decisions (advanced)
After a few weeks using a tool, notice whether you are second-guessing your instincts more. If you find yourself thinking 'What would the AI suggest?' during a session, that is a sign the tool has moved from support to interference.
Use AI notes as a draft, not a finished record (intermediate)
Treat Eleos summaries as a first pass you will revise, not as your case notes. Add your own clinical formulation, your reading of the relational pattern, and what you are hypothesising about the work ahead. The notes belong to your thinking, not the algorithm's.
Maintain Clinical Authority Over Your Practice
Identify where you are outsourcing judgement instead of using a tool (advanced)
Ask yourself honestly: are you using ChatGPT to explore your own formulation, or are you using it because you do not trust your own clinical reasoning? Using a tool as a sounding board for your thinking is different from letting it replace your thinking.
Decline client matching recommendations that conflict with the evidence of the relationship (intermediate)
When Heliia or similar systems suggest a client is ready for a particular therapy model or intervention, you may know something the data does not capture. Your assessment of the therapeutic alliance, the client's stability, or their previous experience matters more than pattern-matching across cases.
Establish clear criteria for which clients can benefit from AI-assisted tools like Woebot (intermediate)
Some clients need your presence and cannot engage with a chatbot between sessions. Others may find it helpful, but only after sufficient stability in your relationship. Make this decision clinically, not administratively. Do not assign a tool simply because it saves time.
Document your reasoning when you disagree with an AI suggestion (advanced)
If Nabla flags something or Eleos proposes a clinical direction and you reject it, write down why. Over time, this log shows where the tool struggles with your client population or where your experience is more reliable than the algorithm.
Review whether AI is changing how you listen to clients (advanced)
Notice if you are scanning for symptoms or patterns the tool can recognise instead of listening to the whole person in front of you. If you catch yourself thinking in the AI's categories during a session, that is a sign to step back and reconnect to your own clinical frame.
Teach clients explicitly about any AI use in their care (beginner)
Clients have the right to know if ChatGPT, Eleos, or Woebot is involved in their treatment. Transparency protects the relationship and gives clients agency. Some may opt out. Honour that choice as part of clinical authority.
Five things worth remembering
- The most dangerous moment is when an AI note feels accurate. That is when you stop checking your own clinical sense against it. Build a habit of disagreement.
- Your ability to sit with a client's silence, confusion, or contradiction is irreplaceable. AI tools cannot tolerate ambiguity the way therapeutic presence can. Protect this capacity fiercely.
- If a tool saves you time on documentation but costs you attunement in the room, the trade has failed. Measure the real cost, not just the efficiency gain.
- Keep your supervision separate from your AI tools. Your supervisor knows you and your clients. ChatGPT does not. Use human consultation for the decisions that matter most.
- Clinical judgement is a skill you develop through thousands of hours of practice and reflection. Any tool that distances you from that direct experience is degrading your licence to practise, even if it saves an hour a week.
Prompt Pack
Paste any of these into Claude or ChatGPT to pressure-test your own judgement. They work best when you respond honestly before reading the AI reply.
Test your formulation before AI input
I have just finished a session with a client I am finding complex. Before I look at any AI-assisted formulation tools, ask me questions that help me articulate my own understanding of what is happening for this client, what patterns I am noticing, and where I feel uncertain.
Examine your session notes for clinical thinking
I have a set of session notes. Ask me questions that help me examine whether these notes reflect genuine clinical observation and thinking, or whether they have become a record of what happened without the analysis that makes them useful for future sessions.
Audit your risk assessment process
I have used an AI tool to support a risk assessment for [type of presentation]. Challenge me: what is my own clinical judgement about this client's risk? What does my direct observation of this person tell me that the tool cannot capture?
Rebuild your attunement to client presentation
Describe a clinical scenario without any assessment attached. Ask me to observe and reflect on what I notice: the presenting material, what I sense beneath it, and what questions I would want to explore. Only after I have responded should you offer a clinical perspective.
Challenge your therapy model defaults
I tend to rely on [describe modality] and I have been using AI resources to support my practice. Ask me questions that challenge whether my chosen approach genuinely fits this client, or whether I am fitting the client to my model and to my convenient AI tools.
Reading List
Five books that give this topic the depth it deserves. Each one is genuinely worth reading, not just citing.
1. The Body Keeps the Score, Bessel van der Kolk
A foundational text on what actually happens in the therapeutic relationship, and why the somatic, relational dimensions of healing are irreducibly human.
2. On Becoming a Person, Carl Rogers
The original articulation of presence, empathy, and unconditional positive regard: the conditions that make therapy work and that no AI system replicates.
3. Thinking, Fast and Slow, Daniel Kahneman
The cognitive biases in clinical judgement that AI tools can make worse rather than better; essential reading for any practitioner using algorithmic tools in assessment.
4. Reclaiming Conversation, Sherry Turkle
What the shift toward mediated, text-based communication does to our capacity for genuine human connection, the capacity that therapy depends on.
5. Cognitive Sovereignty, Steve Raju
A framework for protecting independent clinical judgement as AI tools become embedded in assessment, documentation, and practice management.
Questions to ask yourself
Use these before your next AI-assisted decision. Honest answers are more useful than comfortable ones.
- When did I last formulate a complex client presentation entirely from my own clinical observation and thinking?
- Is my documentation a genuine record of my clinical thinking, or has it become a structured summary generated with minimal reflection?
- What do I notice about a client in session that I have never captured in any note or AI-assisted assessment?
- Am I developing as a clinician, or am I maintaining a steady state enabled by AI tools?
- What would my supervisor notice has changed in my clinical thinking over the past year of using AI tools?
Common questions
Can AI replace therapists or counsellors?
AI chatbots can provide psychoeducation, structured CBT exercises, and low-intensity support at scale. But therapy is fundamentally relational. The therapeutic alliance is itself a mechanism of change, not just a delivery vehicle for techniques. The attunement, repair of ruptures, and genuine presence that effective therapy requires are not achievable through AI interaction.
What are the risks of using AI tools in therapy practice?
The main risks are: using AI-generated session notes that misrepresent what actually happened; relying on AI risk assessment tools that have not been validated for your client population; and the erosion of the careful, slow observation that builds clinical skill over time. Any AI tool in clinical practice requires your professional judgement as a filter, not as an afterthought.
Should therapists use AI for clinical documentation?
AI documentation tools can reduce administrative burden significantly. But session notes are also how you process and make sense of a session. The act of writing is part of your clinical thinking. Therapists who delegate note-writing entirely to AI risk losing the reflective practice that sharpens their clinical understanding of each client.
How is AI affecting the mental health field?
AI is expanding access to mental health support at lower cost and lower barrier, particularly for mild-to-moderate presentations and between-session support. For the clinical workforce, it is changing what administrative and assessment tasks look like. The question for practitioners is which tasks AI can legitimately assist with and which require the kind of presence and judgement that only human therapists provide.
What does AI get wrong about mental health?
AI systems trained on text cannot read a client's hesitation, notice a subtle shift in affect, or sense when something important is being avoided rather than addressed. Mental health care depends on moment-to-moment attunement to what is happening in the room, including what is not being said. These are not gaps that better AI will eventually close; they are fundamental to why therapy requires a human.