AI in Therapy: Already in the Room (Whether We Like It or Not)
- AI Therapy Companion Team
- Oct 4
Updated: Oct 18
Picture this: you’re in session, your client is nodding along, and you’re thinking, “Wow, they’re really connecting with this intervention today.” Then they casually drop:
“Oh yeah, ChatGPT told me something similar last night…”
And suddenly, there it is: AI has pulled up a chair in your therapy room.
It doesn’t matter if you invited it or not. AI is already there… sitting quietly in the corner, sometimes helpful, sometimes confusing, occasionally blurting out something that makes your client question whether their therapist has been replaced by a laptop.
The question isn’t if AI is in the therapy space. It’s how our clients are using it, and how much of that use is quietly shaping (or derailing) the work we’re trying to do together.

Why Therapists Can’t Pretend AI Isn’t Happening
Let’s be honest: therapists didn’t go into this profession because we were excited to compete with chatbots. We came for the human connection, the nuance, the subtlety of body language, the therapeutic relationship.
But AI isn’t waiting for our permission. It’s already woven into everyday life. Clients use it to:
- Write emails, make grocery lists, or generate Instagram captions.
- Brainstorm how to tell their boss they’re quitting (without burning bridges).
- Look up their “symptoms” at 2 AM and spiral into a new self-diagnosis.
- Vent to an AI “companion” because it’s available 24/7 and never gets tired.
And yes, sometimes they’re even using AI to “practice” therapy skills… or worse, replace therapy altogether.
If we don’t ask about it, we’re missing a whole piece of their coping system.
The Counter-Productive Side of Client AI Use
Here’s where it gets tricky. AI can be supportive — sure. But it can also quietly sabotage the work:
- Avoidance disguised as productivity. Instead of journaling raw feelings, a client might ask AI to “process” them (and bring you a neat summary instead of the messy truth).
- Reassurance loops. Late-night “Am I okay?” chats with AI can feel comforting… until the client is up until 3 AM chasing answers instead of sleeping.
- Over-reliance. “I didn’t make that decision, ChatGPT did.” Cool, except now therapy has to work through why they outsourced agency to a bot.
- Companionship confusion. Some clients are forming emotional or even intimate bonds with AI companions. It may feel safe, but it risks making real-world vulnerability harder.
- Risky advice. AI isn’t a licensed clinician (no matter what your client says Replika “told them”). Taking medical, legal, or financial advice from an algorithm is a gamble.
Therapy is about building capacity, not outsourcing it. When clients lean too hard on AI, it can short-circuit the very skills we’re trying to strengthen.
Why Being Informed Helps Us (Not Just Them)
I get it — adding “AI use” to your intake list feels like one more thing on an already crowded plate. But here’s why it’s worth it:
- It builds trust. When you bring it up, you show clients you’re not out of touch. You’re curious, not judgmental.
- It catches blind spots. You may discover a client is using AI in ways that completely undermine your work (like bypassing emotional processing or doom-scrolling for AI reassurance at 2 AM).
- It sets boundaries. You can make it clear: therapy isn’t interchangeable with an app, and AI is not a substitute for crisis support.
- It protects your license. Regulations are evolving fast (hello, Illinois’ ban on “AI therapy” marketing). Staying ahead means you’re less likely to get blindsided by legal or ethical questions later.
- It gives you leverage. Instead of AI being the “elephant in the room,” you can reframe it: “Let’s explore how this tool is showing up in your life, and whether it’s supporting or blocking your goals.”
Tools That Make the Conversation Easier
This is exactly why we built the AI Use Snapshot. It’s a quick, optional questionnaire you can hand to clients, covering:
- How often they’re using AI.
- What they’re using it for (productivity, creative work, emotional support, even relationship-style companionship).
- Whether it feels helpful, harmful, or unsettling.
- Whether their AI use is interfering with therapy goals.
It’s short, collaborative, and designed to spark conversation without judgment. And honestly? It saves you from fumbling through “Sooo… do you… talk to chatbots a lot?”
The point isn’t to pathologize client AI use. It’s to understand it — because if AI is part of their coping system, it belongs in the therapy picture.
A Note on Humor (and Sanity)
Sometimes the only way to deal with all this is to laugh.
Therapist: “How have you been handling your Sunday night anxiety?”
Client: “I asked my AI coach to pep-talk me. It sent me a playlist, a breathing script, and a motivational quote from Taylor Swift. Honestly, it was kind of a vibe.”
Therapist (internally): “Great. I’ve just been replaced by a chatbot with a Spotify Premium account.”
Humor aside, these conversations matter. Clients experiment with AI because it feels quick, personal, and always “on.” That doesn’t make it evil, but it does mean therapists need to know where the risks are hiding.
Acceptance Is the First Step
We can’t stop AI from being in the room. Pretending it’s not there won’t make it go away.
What we can do is:
- Accept that AI is part of the new therapeutic landscape.
- Stay informed about how our clients are using it.
- Set guardrails where needed.
- Use curiosity, not fear, to guide the conversation.
AI doesn’t replace us. It highlights how much clients still need the human element: empathy, nuance, accountability, and genuine presence. No algorithm can replicate that.
Final Thought
AI in therapy isn’t the enemy. But ignoring it is.
As therapists, we thrive on being curious about the forces shaping our clients’ lives. Right now, AI is one of those forces — whether it’s quietly helping or quietly harming.
So let’s stop pretending the chatbot isn’t in the chair. Pull it into the light, name it, explore it, and keep therapy human.
Because the future of our work isn’t about replacing the therapist. It’s about making sure AI never replaces the relationship.
About the Author
AI Therapy Companion was co-founded by Anissa Bell, LMFT, a licensed therapist with more than 15 years of clinical experience in anxiety disorders, insomnia, relationship stress, and the ways modern life reshapes mental health. In her work, Anissa began noticing a shift: clients were already bringing AI into the therapy room. Sometimes it helped them cope, other times it became a distraction, and often it carried risks they didn’t fully recognize.
Rather than ignoring this change, Anissa leaned into it. As co-founder of AI Therapy Companion, she designed practical tools that help therapists address AI use with clarity and confidence. The goal is to keep therapy firmly human while recognizing how AI is showing up in our clients’ lives.
✨ Try it yourself: Download the AI Use Snapshot Free Starter Guide, a simple and collaborative way to see how your clients may already be engaging with AI.