For Therapists and Mental Health Professionals

40 Questions Therapists Should Ask Before Trusting AI Session Tools

AI session tools promise to save you time on documentation and matching, but they make decisions about what matters in your client's story. These 40 questions help you spot when an AI output might miss what only your trained attention can catch.

These are suggestions. Use the ones that fit your situation.


Session Notes and Documentation

1 When Eleos generates session notes, does it flag moments where the client said one thing but their body or tone suggested something else?
2 Has the AI tool been tested on notes from your specific client population, or is it trained on generic therapy transcripts?
3 If you use ChatGPT to draft progress notes, how would you know if it had omitted a risk indicator because the language was indirect or metaphorical?
4 Does your documentation tool tell you when it is uncertain about clinical significance, or does it present summaries as equally confident?
5 When you review AI-generated notes, are you spending the time you saved on documentation checking the AI's work instead?
6 Can you easily override or correct the AI's interpretation of what your client said, and does that correction feed back into the system?
7 If the AI summarises a client's trauma history, who verifies that the summary does not minimise or reframe what the client shared?
8 Does the tool document your own clinical observations separately from the client's reported experience, or does it blur that boundary?
9 When Nabla or Eleos suggests a diagnosis code or treatment category, does it show you the reasoning, or just the recommendation?
10 Are you using AI documentation to save time for better presence with clients, or has the time saved disappeared into other administrative work?

Client Matching and Allocation

11 When Heliia or similar tools suggest matching a new client to you, what criteria are they actually using, and do you know whether they weight therapeutic fit the same way you do?
12 Has the tool been shown to match clients in ways that reflect your actual competence, or does it match based on surface factors like presenting problem?
13 If the AI recommends that a client would be better served by a different therapist, does it base this on your capacity or on a claim about therapeutic fit?
14 Do you know whether the matching algorithm accounts for complexity, or whether it treats two anxiety presentations as equivalent?
15 When you decline a client match, does the system learn from your decision, or does it keep suggesting similar cases?
16 If a client's stated need is depression but their presentation suggests trauma, would the AI recommend matching based on the stated need alone?
17 Does the matching tool factor in your lived experience and identity in ways that matter to the therapeutic relationship, or only in ways that satisfy compliance?
18 Do you know whether the AI has been trained to recognise when a client needs a specialist you are not, or whether it optimises for keeping cases internal?
19 If the matching recommendation contradicts your intuitive sense of fit, do you have permission and time to explore why before accepting?
20 When you work in a team, does the matching tool account for your specific relationships with colleagues, or does it treat all therapists as interchangeable?

Clinical Judgement and Risk Assessment

21 When Woebot or ChatGPT generates a response to a client message, how would it know whether offering immediate reassurance is therapeutic or avoidant?
22 Does the AI tool distinguish between a client expressing suicidal thoughts as history and expressing current intent?
23 If a client discloses something that changes your assessment of risk, does the tool update its recommendations, or does it only flag what it was trained to flag?
24 Has the AI been trained on cases where the highest risk was conveyed through understatement or through what was not said?
25 When a therapeutic tool like Nabla suggests an intervention, does it account for the specific stage of your relationship with this client?
26 Does the AI recognise when a client is testing the therapeutic relationship or expressing ambivalence about change, or does it interpret these as simple requests?
27 If you use AI to help with case formulation, how would you catch it if the AI connected patterns that are coincidental rather than clinically significant?
28 Does the tool alert you when a client's presentation falls outside the data it was trained on, or does it extrapolate anyway?
29 When you override an AI recommendation based on clinical judgement, do you document your reasoning in a way that holds you accountable?
30 Has anyone measured whether using these tools has changed the accuracy of your risk assessments compared to your unaided judgement?

Therapeutic Presence and Relationship

31 When you are looking at an AI summary of the last session while the client is in the room, what are you not noticing about them in that moment?
32 Does using Eleos or similar tools change how you listen, making you listen for categories the AI is trained to recognise rather than for what is alive in the room?
33 If a client asks you a question and you check ChatGPT before answering, what have you communicated about who is doing the thinking in this relationship?
34 When you use an AI tool to generate between-session messages, does it reflect your actual voice and relationship with this client?
35 Has the tool's presence in your sessions changed what clients feel safe enough to disclose, even if only slightly?
36 Do you know whether the therapeutic relationships of clients who receive AI-assisted care differ in outcome or depth from those of clients who do not?
37 When you are documenting with an AI tool, are you more focused on what the documentation system needs than on what actually happened in the session?
38 If a client senses that an AI tool influences your recommendations, how does that affect their sense of being known by you personally?
39 Does using these tools allow you to be more present in sessions, or are you split between the tool and the person in front of you?
40 When you introduce an AI tool to a client, are you transparent about what it does and what it does not do, or do you avoid the detail?

How to use these questions
