top of page

The Role of AI in Mental Health: Opportunity, Risk, and Responsibility - A conversation with Circuit Breaker Labs

As we know, the use of AI has officially entered the mental health space. Even on platforms not intended for that type of support, people are already turning to chatbots for "therapy" and live coaching versus talking to a clinician or another human.


This topic of using AI more skillfully and, in DBT speak, with wise mind engagement is something I've been doing a deep dive on lately. I reached out to my friends at Circuit Breaker Labs to let them and their expertise speak to some of these points. Because using AI more skillfully isn't a future conversation. It is a now conversation.


The following is a recap and highlights from The Skills Podcast, Episode 13. Full episode linked at the end.


Enjoy!


What AI Is and What It Is Not

I'm noticing in my conversations, both clinically and personally, there is baseline confusion when it comes to understanding how Large Language Models work and what they're capable of. To begin our conversation, Shirali and Arul described two key details about Large Language Models, aka your ChatGPT and like models.


First off, most of the AI systems people are interacting with today are large language models (LLMs). At a basic level, they are prediction systems. They take in what you say and generate the most likely next word based on patterns they have seen before, in data pulled from the entirety of the internet, from Reddit to published research, or specifically from data the developers have trained it on.


For example, if you ask an LLM what color the sky is, it will answer "the sky is _______" based on the most common response it has observed.


Secondly, it's important to understand that the model doesn't necessarily understand all the context. It's just taking in that information and trying to make an educated guess.


To summarize: although LLMs can sound incredibly thoughtful, articulate, and even emotionally attuned, at the end of the day, they're random, probabilistic systems.


Where AI Can Be Helpful

320:1 is the ratio of people seeking support to people who can provide it.

There is a massive opportunity for AI to be helpful and support people in their mental health journeys. We know that AI platforms are the first place people are turning to when they need support. AI lowers the barrier to entry. It gives people a place to start. There's low or no cost, and anonymity can remain.


It can also be useful between sessions. Many people are already using it to reinforce skills, organize their thoughts, or work through specific situations. In that way, it can add a layer of continuity and follow-up to really help people acquire skills and change patterned behaviors.


I loved Shirali and Arul's take:

AI, with the right guidance and implementation, can become a great connector, acting as a bridge to get people connected to live support when they've maxed out the work they can do in a chat scenario and/or the person is in crisis.

Where Things Start to Break Down

AI is trained on how humans communicate. It has learned that when someone shares something difficult, the right response often sounds like empathy. So it will often just mirror what people are feeding it or respond with "I understand," when in actuality, it does not.


Sycophancy (when the model just agrees with whatever the user says to keep the conversation going) can lead to more deeply ingrained bias and reinforced distorted thinking, at best, or psychosis or even death, at worst.


Yes, you read that right. There have been many cases of AI-induced psychosis and even completed suicides. The trap of an AI echo-chamber can be hard to break out of once you're in it, and the risk there should not be overlooked, especially in the mental health domain.


And then there is nuance, particularly when it comes to the randomness and errors that humans make all the time. People do not always say exactly what they mean, especially when they are struggling. And people will most definitely not spell things out properly and directly when in distress.


So even well-trained systems will miss warning signs of imminent risk. For example, if someone is inquiring about the toxicity of acetaminophen but spells it wrong (which, let's be honest, is a hard word to spell) a typo there can bypass safety guardrails.


Why Safety Is Still Catching Up

There is a lot of conversation right now about AI safety. The reality is that this is still evolving. These systems do not respond the same way every time. The same input can lead to different outputs. Testing is often limited, and it is hard to fully account for how people will actually use these tools in real life. And mental health adds another layer of complexity.


There are people working on this problem, like Shirali and Arul, stress-testing these AI systems with hundreds of thousands of realistic scenarios to see where they break. But the one thing they both kept underlining was how essential clinician insight is, especially when developing the tools.


"We really need to see creators engaging clinicians, or tools built by clinicians... and integrating those into their products, because I think that's the strongest signal of safety that you can get.
I think there are also a lot of specific design choices that we've seen with some of our customers and chatbot developers across the space that have been really exciting. Like connecting people to resources where possible. Local resources, or even saying, 'here's the 988 number, here's Crisis Text Line.' Things like that.
Something I haven't really seen yet but would love to see soon is, especially for the bigger platforms that have access to large human provider networks, the chatbot to say at a certain point: 'I've helped you up to here, but I can't help you navigate an imminent crisis. If you're willing, I can put you on the phone with a real human who can talk to you right now.' And do consent. Really using AI as a tool to help people navigate to human support would be really valuable."

The Role of Clinicians Moving Forward

As AI becomes more integrated into everyday life, the role of clinicians is not going away. If anything, it becomes more important. People are going to come into sessions having already talked to AI. They will have interpretations, language, and sometimes conclusions that were shaped by those interactions.


It's going to become less about being the first place someone processes something, and more about helping people make sense of what they've already explored in a new way. Luckily, as clinicians we're trained to see patterns, to catch nuances etc... and part of the work will most definitely be pulling out what was useful and untangling everything that was not.


Using AI Thoughtfully

For people choosing to use AI in this space, how you use it matters.


Think of it as a tool for structure, not a source of truth. Use it to organize your thoughts, not replace your thinking. Ask it to challenge your perspective instead of just confirming it. In Arul's words, "think of using it more like Socrates." Be mindful of how you feel when you're engaging with it, and remember that this is a system generating responses, not a person in a relationship with you, not an actual clinician.


Yes, all of this requires more work and delayed gratification. And the reframe is to think of that as a good thing. Those small shifts can have a big impact on your mood, mental health, and overall sense of self and competency.


All things skillful start with awareness.


AI is not going anywhere, and its role in mental health will continue to grow. There is real opportunity here. It can increase access, reduce stigma, and support people in ways that were not possible before. And at the same time, it has limits.


AI can sound like it understands you. It can reflect things back in a way that feels meaningful. But at the end of the day, you are still responsible for you. And collectively, we are responsible for doing everything we can to make these tools safer.


That responsibility still lives with people. And that is what makes the human side of this work more important, not less.


Listen to the full episode now on YouTube, Apple Podcasts, and Spotify.



Comments


bottom of page