For Therapists and Mental Health Professionals

30 Practical Ideas for Therapists and Counsellors to Stay Cognitively Sovereign

AI tools promise to free you from paperwork so you can focus on clients. In practice, they risk moving your attention away from the room and into systems that pattern-match instead of listen. Your clinical judgement comes from noticing what clients do not say. You need to protect that work.

These are suggestions. Take what fits, leave the rest.

Protecting Your Clinical Attention During Sessions

Write session notes after the client leaves, not during (beginner)
Resist the temptation to use Eleos or similar tools that record and transcribe in real time. Your presence in the room is the primary tool. Notes can wait thirty minutes.
Keep a handwritten observation pad separate from documentation (beginner)
Use a small notebook during session to track only what you notice about tone, silences, and body language. This stays your private record and never enters AI systems.
Notice when you are checking the AI tool instead of the client's face (intermediate)
Set a specific rule: if you catch yourself looking at a screen, pause the session or ask the client's permission first. Many therapists who use Eleos find they check it more often than they expect.
Ask yourself before each session whether AI documentation serves this client or your workflow (intermediate)
A client in crisis needs your full attention. A routine follow-up with a stable client may be the only session where AI note-taking makes sense. Be deliberate about which sessions get which approach.
Test AI summaries against your own memory of the session (intermediate)
After using Eleos or ChatGPT to draft notes, reread them and notice what the AI included and what it missed. Over time you will see what it cannot capture: the client's hesitation about a decision, the shift in their shoulders, the thing they almost said.
Use AI for administrative notes, not clinical formulation (beginner)
Let ChatGPT help with appointment reminders, billing codes, and session duration. Write your own clinical impressions, hypotheses about what the client needs, and your concerns about risk.
Record one session per month by hand without any digital assist (intermediate)
Write full notes the old way, from memory, immediately after. This keeps your skill at observation sharp and reminds you what you lose when you rely on transcription.
Never use ChatGPT to interpret a client's behaviour or motivation (intermediate)
When you are puzzled about why a client acted as they did, sit with the question or discuss it with a supervisor. Do not ask an AI system to pattern-match their behaviour to typical presentations. That work is clinical judgement.
Keep the video off when using AI-powered client matching tools (beginner)
If you use Heliia or similar systems, review the suggestions on paper or in your head before you look at the client's profile photo or video. Your first impression of the actual person matters more than the algorithmic pairing.
Schedule a monthly review of your own clinical judgement against AI recommendations
Pull three recent cases where AI suggested something (a risk flag, a diagnosis pattern, a matching preference). Compare each suggestion to what you actually found in the room. Track where the AI was wrong.
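
One low-tech way to run this review is to keep the comparisons in a simple spreadsheet and tally them once a month. The sketch below is illustrative only: the file name and columns (date, tool, ai_suggestion, what_i_found, ai_was_right) are assumptions, not a prescribed format.

    # Minimal sketch: tally how often each AI tool's suggestion held up.
    # Assumes a CSV you maintain by hand, e.g. ai_review_log.csv with
    # columns: date, tool, ai_suggestion, what_i_found, ai_was_right (yes/no).
    import csv
    from collections import Counter

    wrong = Counter()
    total = Counter()
    with open("ai_review_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            tool = row["tool"]
            total[tool] += 1
            if row["ai_was_right"].strip().lower() == "no":
                wrong[tool] += 1

    for tool in total:
        print(f"{tool}: wrong in {wrong[tool]} of {total[tool]} cases")

A plain notebook works just as well; the point is that the comparison is recorded somewhere you can revisit, not held in memory.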

Maintaining Your Independent Diagnostic Thinking

Write your initial formulation before checking what Nabla or ChatGPT suggests (beginner)
Form your own hypothesis about what brought this client in and what they might need. Only then look at what the AI system recommends. Notice whether it confirms your thinking or pulls you toward something different.
Question AI diagnostic suggestions that fit too neatly (intermediate)
If Woebot or another tool flags your client as a textbook case of depression or anxiety, pause. Real clients are messier. Ask yourself what the AI might be missing: cultural context, trauma history, grief, practical stress.
Keep a record of times you disagreed with an AI tool and what you found instead (intermediate)
When your clinical instinct runs counter to an AI recommendation, document it. Over months you will see whether AI tools systematically miss certain presentations or populations.
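
If you keep these disagreements in the same kind of spreadsheet sketched in the previous section, a few extra lines will show whether the misses cluster around particular presentations. The "presentation" column is again an assumption; use whatever shorthand you actually record.

    # Minimal sketch: group logged disagreements by presentation to see
    # whether the AI systematically misses certain clients.
    # Assumes the same hand-kept CSV, with an added "presentation" column.
    import csv
    from collections import Counter

    misses = Counter()
    with open("ai_review_log.csv", newline="") as f:
        for row in csv.DictReader(f):
            if row["ai_was_right"].strip().lower() == "no":
                misses[row["presentation"]] += 1

    for presentation, count in misses.most_common():
        print(f"{presentation}: {count} disagreements")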
Do not let AI efficiency push you toward earlier discharge (intermediate)
Algorithmic thinking often favours efficiency. If an AI tool suggests a client could be discharged after eight sessions instead of twelve, that might be true. It might also be what the system predicts for clients who fit a pattern, not for this client. Trust your sense of readiness, not the recommendation.
Use AI tools to surface data, not to replace your judgement about risk (beginner)
AI can flag that a client mentioned suicidal thoughts. You decide whether this is passive ideation, active planning, or a thought they are working through. Your risk assessment cannot be delegated to a pattern-matcher.
Notice when you are starting to think in AI categories rather than your own (advanced)
After weeks of using ChatGPT to structure case notes, your thinking can begin to follow its categories. If you notice yourself reaching for its language or structure during supervision, step back and do your own formulation again.
Challenge AI summaries that reduce a client to a symptom list (intermediate)
Eleos or similar tools will extract symptoms and complaints efficiently. But a client is not their symptoms. After the AI summary, write one sentence about who this person is when they are not struggling.
Discuss difficult AI recommendations with your clinical supervisor before following them (intermediate)
If an AI tool recommends something that does not fit your clinical sense, bring it to supervision. Your supervisor knows you and knows the client. They are the right person to sense-check the algorithm's advice.
Test whether AI suggestions are based on what the client said or how they said it (advanced)
Ask the AI tool to show you which words or phrases led to its conclusion. Often you will find it missed tone, context, and the actual meaning beneath the words.
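
Most tools will not surface this on their own, so you have to ask. If you use ChatGPT through the web interface, a prompt along these lines does the job; if you work through the OpenAI API, the sketch below shows the same check in code. The model name and wording are illustrative choices, not a recommendation.

    # Minimal sketch: ask the model to ground each claim in a direct quote.
    # Assumes the official openai Python package and an OPENAI_API_KEY set
    # in the environment; the model name is an illustrative choice.
    # Per the advice elsewhere in this piece, strip identifying details
    # from anything you send.
    from openai import OpenAI

    client = OpenAI()
    prompt = (
        "Here is your summary of my session notes, followed by the notes "
        "themselves. For every clinical claim in the summary, quote the "
        "exact words from the notes that support it. If no direct quote "
        "supports a claim, say so plainly.\n\nSUMMARY: ...\n\nNOTES: ..."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content)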
Refuse to use an AI tool if you cannot explain why you trust it
Before you integrate Nabla or ChatGPT into your practice, you must understand how it works and why you think it is reliable. If you cannot articulate this to a client or colleague, you should not be using it clinically.

Protecting the Therapeutic Relationship from Mediation

Tell your client explicitly if you are using AI tools in their care (beginner)
Honesty about tools is part of informed consent. When you use Eleos, ChatGPT, or any AI system for anything to do with their case, they deserve to know.
Ask your client whether they are comfortable with AI note-taking before you use it (beginner)
Some clients will object on privacy grounds. Some will feel less able to be open if they know a machine is listening. Respect that preference. Their therapeutic safety comes before your documentation efficiency.
Never use an AI matching system to assign clients to therapists without human review (intermediate)
Heliia and similar tools can surface useful options. But the final decision should involve a senior clinician who knows both the client's needs and the therapist's strengths. Algorithms miss relationship fit.
Protect the detail that only you and your client know (intermediate)
Some of what clients tell you should stay in the room. Not everything needs to be documented or summarised by AI. Ask yourself whether the session details you are feeding to Eleos are necessary for their care.
Use AI tools only for the parts of your work that do not require human judgement (beginner)
Billing, scheduling, generic psychoeducational content: AI can help. Safety assessment, therapeutic planning, relational rupture repair: only you can do that. Do not blur the boundary.
Notice if you are becoming less curious about your client because the AI has summarised them (intermediate)
Once ChatGPT or Eleos has generated a neat description of a client's presentation, you may feel you already know them. Resist that. Curiosity and genuine not-knowing are core to the therapeutic relationship.
Check whether your client is becoming aware they are being processed by algorithms (intermediate)
Some clients will sense that their words are being fed to systems. They may withdraw or become more guarded. Watch for signs that mediation through technology is changing how safe they feel.
Reserve your most attentive presence for clients where the relationship is most fragile (intermediate)
A client with trauma history, a young person testing trust, someone from a group that has been harmed by institutions: these relationships cannot afford mediation through AI. Give them your full human attention.
Do not use Woebot or similar tools as a substitute for the therapeutic relationship (beginner)
These tools can supplement therapy. They cannot replace it. If you are recommending an AI chatbot instead of more session time, ask yourself whether you are solving a genuine clinical need or a capacity problem.
After using an AI tool to draft notes, reread them and restore what is missing
AI will strip away the texture: the client's wit, their self-awareness, the moment of connection. Add these details back in. Your notes should make the person visible to your future self.
