For Therapists and Mental Health Professionals
Therapists: Using AI Without Losing Clinical Judgment
Session notes are eating your time. AI tools promise to fix this. But when you use Eleos or ChatGPT to draft your documentation, you risk missing the moment where a client's tone shifts or where what they do not say becomes the real work. The question is not whether to use AI, but how to use it in ways that keep your judgment sharp and your presence with the client intact.
These are suggestions. Your situation will differ. Use what is useful.
Session notes are not the same as clinical judgment
Eleos and similar tools can capture what was said. They cannot capture what you noticed in the silence, or why you chose not to challenge something in that moment. When you let AI draft your notes, you are outsourcing the act of deciding what matters. Instead, use AI to handle the admin work: transcription, formatting, structuring. Keep the part where you decide what to document and why. Your notes should reflect your clinical thinking, not the algorithm's guess at what might be important.
- Use Eleos to transcribe and organise, then write your own formulation of what happened in the session
- Do not let AI select which moments are significant. You make that choice before the documentation starts
- Check any AI-generated summary against what you actually noticed. If it misses the shift you felt, rewrite it
Client matching is not the same as knowing who needs you
Tools like Heliia and Nabla offer to match clients to therapists using algorithms. The logic sounds efficient. In reality, you know things about your practice that no algorithm captures: your energy for certain presentations, your effectiveness with specific trauma histories, the boundaries you need to keep. An algorithm sees patterns in data. You see the actual person. When you outsource matching to AI, you lose the chance to say no to cases that would genuinely be better with someone else, or to recognise a client who could benefit from your particular way of working.
- Keep client matching in your own hands. Use AI to manage the logistical data, not the judgment about fit
- Notice when you feel uncertain about a potential match. That uncertainty is often clinical information, not a problem to solve with more data
- If using an AI matching tool, always have a conversation with the client before confirming. Let your direct sense of the person guide the final decision
Therapeutic presence cannot be automated, even when it looks like it can
ChatGPT and Woebot can appear to hold space. They can ask questions that sound curious. But the work you do as a therapist includes reading the room in ways machines cannot: noticing who is struggling to make eye contact, sensing when someone needs you to sit with them before offering anything, recognising the client who says yes but means no. When you begin to rely on AI responses or AI prompts to shape your sessions, you are training yourself to miss these signals. The client senses this shift in your attention, even if they cannot name it.
- Use AI to prepare session plans, not to run them. Read the AI input before the session, then put it aside
- When you feel tempted to check an AI tool during a session for what to say next, stop. That impulse means you are doubting your own clinical eye
- After sessions where you relied on AI prompts, notice what you did not see. Use that as data about when to trust yourself instead
Pattern matching is not the same as understanding context
One of the biggest risks with AI in therapy is that it pattern-matches at scale. Your client presents with anxiety and depression, so the algorithm suggests the standard protocol. But you know this particular person developed anxiety after a betrayal, not from neuroticism, and that context changes everything about how you work. When you start letting AI tools like Nabla or ChatGPT suggest formulations or interventions based on symptom clustering alone, you are accepting a version of the client that flattens their story. Your job is to hold the context that makes them unique.
- When an AI tool suggests a diagnosis, formulation, or intervention, ask yourself what context it is missing about this specific person
- Document the contextual factors that change your clinical approach. This makes your judgment visible and challengeable, not hidden
- Treat AI suggestions as hypotheses to test against what you know, not as findings to confirm
Documentation is professional thinking, not administrative work
You document sessions not to satisfy insurance companies or create a legal record, though those things matter. You document so that you can think clearly about your work and refine your judgment over time. When you hand that thinking over to an AI tool, you stop developing as a clinician. Each time you decide what to write, you are making a micro-decision about what the work actually is. That accumulated series of decisions is how you become more skilled. Outsource the typing, not the thinking.
- Review your own notes from three months ago. If you cannot remember why you wrote what you wrote, your documentation is too automated
- Keep a brief reflective note after difficult sessions, written by you. This keeps your clinical learning active
- If you use an AI tool for admin, spend the time you save doing your own clinical reading, not seeing more clients
Key principles
1. Use AI to eliminate administrative friction, not to replace the moments where your judgment actually happens.
2. Your clinical eye reads what the algorithm cannot: context, contradiction, the weight of what was not said.
3. When you feel doubt about your own sense of a client, that is the moment to trust yourself more, not to reach for AI confirmation.
4. Documentation is where you do your own thinking. Protect that process even when it is slower than AI.
5. The therapeutic relationship depends on a particular kind of attention. Any tool that erodes your capacity to give that attention is a cost, however efficient it appears.
Key reminders
- After using an AI tool to draft notes, rewrite the clinical formulation yourself. Notice what you add or change. That difference is your actual judgment.
- Set a boundary: AI handles formatting, scheduling, and searching old notes. You handle the clinical decisions about what matters and why.
- When Eleos or Nabla offers a suggestion, pause and ask what you would say if the AI were not there. If the answer is different, trust your answer.
- Keep one session per week where you take your own notes on paper. Notice the differences in what you remember and what you see.
- At supervision, bring examples of where an AI tool suggested something different from what you decided. Use that gap as the focus for learning.