Ctrl + Alt + Cope: A Therapist’s Perspective on Using AI for Mental Health
- Angela Lambert
- Aug 17
- 9 min read

If you’ve ever opened ChatGPT and typed in something like, “How do I deal with anxiety?”, “Why does my spouse not understand me?”, or “What tools can I use to cope with my work stress?”, you’re not alone.
As a therapist who works in both private practice and the non-profit sector, I’ve noticed a growing number of people turning to Artificial Intelligence (AI) tools to support their mental health. People turn to AI for emotional support, for self-reflection, or as a non-judgmental place to process their thoughts. And honestly? I get it. AI can be an appealing place to get mental health support for multiple reasons. But there are also many ways it falls short, can have a negative influence, and can even be dangerous. I have put together some thoughts on the risks of AI and how to use it for mental health in safer ways, alongside human connection and professional care.
Why People Are Turning to AI for Mental Health Help
The appeal is clear: AI tools such as ChatGPT are available 24/7. They don’t appear to have the capacity to judge. Many people report that AI tools can give you helpful information, ask thoughtful questions, and even help you reflect on your feelings in ways that feel supportive. This isn’t only a subjective or anecdotal experience either; evidence is starting to suggest that successive updates of tools such as ChatGPT may be getting better and better at providing emotional support (Moell, 2024).
Based on my professional experience and the research on AI and mental health, people are using tools like ChatGPT to:
Get help with journaling or reflection
Learn about mental health terms and techniques
Feel motivated and hold themselves accountable
Ask questions they might feel nervous asking another person
Find low-cost therapy options in their area
Feel less alone in moments of stress
Validate their feelings and perspectives
I imagine it can be a bit like having a digital support companion: one that’s always awake, endlessly patient, and almost never says no to a repeated request.
But that doesn’t mean AI can, or should, replace therapy, health care, or social supports. Just like Googling your symptoms doesn’t make you a doctor, using AI for emotional support has its limits. And sometimes, those limits can be dangerous.
Ways that AI can be Harmful for Mental Health Support
There are a number of risks that I believe may occur when AI is used to support folks’ mental health. Some of these are based on research I have read, others on my clinical experience:
AI tools can reinforce delusions in those who already experience them and even trigger psychosis in those who don't have a history of it. To me, this is one of the most dangerous possible influences of AI tools. In my own experience using ChatGPT, I have found that the newest version is an incredible validator; it will tell you your thoughts are "insightful and powerful." For the average person dealing with a difficult situation, that may be quite helpful. The danger comes when folks communicate with ChatGPT about something that might be considered an unusual belief, a conspiracy theory, or an already existing delusion. ChatGPT has some safeguards and will sometimes recommend speaking to a mental health professional, but often it doesn't recommend this at all, or will even support negative or harmful ideas with little pushback. I also find that ChatGPT will reinforce and validate not just the feelings but the ideas of the user.
This can be dangerous! There is already anecdotal evidence that interactions with ChatGPT are triggering psychosis in folks with no previous history of mental health concerns (Harrison Dupré, 2025). This has been a concern for researchers who focus on psychosis for years (Østergaard, 2023).
AI does not have your best interest in mind, not because it is evil or bad, but simply because it doesn't have the capacity. At this point, most accessible AI tools, as best we understand them, do not understand what is being said. They are word-generating algorithms. Yet, because these tools are so good at "appearing" human, it is easy for folks to start to believe that AI tools have their best interest in mind, have agency, and genuinely care about the user.
I want to highlight that AI tools are often programmed and owned by for-profit companies. Unlike licensed mental health professionals, who have ethical guidelines that direct them to prioritize the client's wellbeing, AI companies are not (yet) bound by any external governing body or ethical guidelines that prioritize responsibility to the user. In my experience, for-profit companies usually make decisions based on what will best benefit the company, not the users.
Echo chamber effect. Another risk I have noticed in my personal interactions with the current version of ChatGPT is a more benign consequence of its tendency to validate the user: the echo chamber effect. It also appears to be a risk cited in the research in this area (Kosmyna et al., 2025). As I mentioned above, ChatGPT-4 was very good at validation. With a little pushback or creative prompting, the user can get a chatbot to agree to almost anything. The mental health consequences of this can be quite detrimental. Unhelpful thought processes, limiting beliefs, and unhealthy narratives can all be reinforced by ChatGPT or other AI tools. A good therapist, or even an insightful friend, will gently challenge a person on possibly unhelpful beliefs. ChatGPT doesn't appear to do this very effectively.
Risk of addiction and/or replacing healthy human interactions. This is another danger that I find myself worried about. There may be some benefits to accessing an AI chatbot for folks who experience loneliness, yet I think there is a bigger danger in people replacing human interactions with interactions with AI tools. Behavioural addictions are real, and I can see that ChatGPT has features that might lead to it becoming a problem behaviour for some users. It may feel less socially risky than an interaction with a human, and more likely to lead to a positive emotional outcome. Or to say it another way, it is an easily available source of cheap dopamine. Although there isn't enough evidence to say that ChatGPT addiction is real (yet; Ciudad-Fernández et al., 2025), frequent use of ChatGPT or other AI tools should be approached with caution.
Lower Risk Ways to use AI to Access Support
This is not an exhaustive list, and I imagine my opinions might change in the coming years as we learn more about how people interact with these tools and the negative (and positive) outcomes related to AI use for mental health. We are still in the beginning stages of AI being accessible and performing reliably enough to support the public with day-to-day problems.
These recommendations on how to use AI in low-risk ways are based on my clinical experience, my experience playing around with AI tools like ChatGPT, and the research and writing I was able to find on the topic. Not using AI tools at all is likely the lowest-risk option I can recommend, but if you are looking to use AI and mitigate the risk of negative consequences when it comes to mental health, here are some recommendations:
Using AI for practical concerns, not for emotional support. This one is simple. AI is less likely to impact mental health if it is used for practical tasks, like summarizing a newsletter email, brainstorming ideas for your band name, or making a funny image out of an inside joke. But don't talk to it about personal matters or treat it as a therapist or friend.
Using AI like a search engine. I cannot imagine much harm in using AI to point you toward mental health supports, helpful websites, or basic coping tools. As long as you know to double-check that the information is correct, this seems fairly benign.
Using AI sparingly as one part of a larger mental health strategy. I believe AI being your "go-to person" is one of the biggest risks. If you start to treat AI like a friend who you go to for everything, on a daily basis, this is a red flag. AI should not replace genuine human relationships and interactions. Having other social supports and professionals that you use for emotional support in addition to AI is important.
Using AI to help find the right words to describe an experience or communicate it in a clear way. Perhaps you are having trouble finding the right words to describe your experience to a family member. You know how you feel, and how you understand the situation, but you don't know how to put it into words. I think this is a relatively low-risk way to use AI, as long as it isn't overused.
Using AI as a tool to help with emotional regulation when others are not available. I can imagine that in times of stress, when other options are not available, AI could be one strategy that helps someone self-regulate. For example, an AI tool reminding someone of a grounding exercise, telling them it's going to be OK, or reminding them to take a deep breath could be helpful when a friend or therapist is not available. The caveat is that AI should not be your only tool or strategy, and it should not be used in emergency situations, such as when someone experiences thoughts of suicide or engages in dangerous self-harm. In these cases, call a crisis line (like 988), or if you are in danger of hurting yourself or someone else, call 911.
Using AI to normalize and validate emotions, but not actions, thoughts, or conclusions. If you are already using AI tools for personal problems, you don't want to give them up, and the strategies I suggested aren't going to work for you, I suggest using these tools to validate emotions only, not to support the conclusions you draw.
Almost any feeling can be valid in any situation. In my experience, even for folks who are experiencing uncommon, bizarre, or unrealistic beliefs, feeling understood and validated is helpful. AI supporting folks' conclusions, actions, or interpretations of situations is where things can get unhelpful or even dangerous. For example, feeling hope that you and your ex-husband can possibly reconcile is a valid emotion, but breaking a restraining order or emergency contact order to make a romantic gesture to win him back may lead to serious consequences. With the right cues, someone could convince an AI chatbot to agree that harmful or dangerous actions are a good choice. Even in a less dramatic situation, AI will see the situation through your eyes, which can lead to bad advice and ineffective support.
Other Considerations
This post is specifically about the concerns of using AI as a mental health support, with an emphasis on ChatGPT. Because of that, I haven't touched on many of the other considerations that may cause folks to be concerned about using AI, such as:
Energy use and environmental impact
Confidentiality and privacy
Lack of regulations
Economic impacts and job loss
Degrading language effect
Metacognitive laziness (Kosmyna et al., 2025)
Take Away: Approach With Caution
My primary conclusion after researching AI in mental health is that we don't know enough yet. Many of the research papers I looked at are either already out of date, because they investigate older and less advanced AI models, or investigate relevant models but haven't been peer reviewed yet. This means there could be many unknown benefits of tools like ChatGPT, and there could also be many unknown, and serious, risks. As a therapist in this landscape, I would recommend that my clients be cautious and use AI tools sparingly when it comes to mental health. At this point, AI should not be anyone's main mental health support, but it may be beneficial as an occasional supplement to other mental health supports.
A Note for Other Mental Health Professionals
A few of you may be reading this and thinking I shouldn't even entertain the possibility that folks could use AI tools for their mental health. Perhaps you are staunchly against the use of these tools generally, and especially in mental health, because of the risks and the sensitivity of our work. I can understand that perspective, and in many ways I agree. The issue is that people are already using AI, especially ChatGPT, to address mental health concerns. Even for mental health professionals who are staunchly against AI and believe it to be harmful, it may be best to take a harm reduction approach rather than preach an abstinence-only approach. It is unrealistic to expect folks not to access something that is free, appears helpful, and is easily available to them.
Perhaps we can see AI use like substance use, such as using cannabis or alcohol. There will be people who are able to use it and have mostly positive effects, those who experience significant harm, and others still who experience little positive or negative effect. Just like substances, AI is already a part of many of our clients' lives, whether we like it or not. Those of us who work in the helping industry can either be part of the conversation and educate ourselves about the safest and most effective ways to support clients who use AI for their mental health, or we can preach about the harms of AI, remain uneducated, and possibly alienate a subset of our clientele or the population generally. We can't put this genie back in the bottle; instead, let's explore how to help our clients and the public navigate AI in the least harmful way when it comes to mental health supports.
References
Ciudad-Fernández, V., von Hammerstein, C., & Billieux, J. (2025). People are not becoming "AIholic": Questioning the "ChatGPT addiction" construct. Addictive Behaviors. https://doi.org/10.1016/j.addbeh.2025.108325
Harrison Dupré, M. (2025, June 28). People are being involuntarily committed, jailed after spiraling into "ChatGPT psychosis". Futurism. https://futurism.com/commitment-jail-chatgpt-psychosis
Kosmyna, N., Hauptmann, E., Tong Yuan, Y., Situ, J., Liao, X., Beresnitzky, A., Braunstein, I., & Maes, P. (2025). Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant for essay writing task. arXiv. https://arxiv.org/pdf/2506.08872
Moell, B. (2024). Comparing the efficacy of GPT-4 and Chat-GPT in mental health care: A blind assessment of large language models for psychological support. arXiv. https://arxiv.org/abs/2405.09300
Østergaard, S. D. (2023). Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128