Mental Health Awareness Month Spurs Talk About Using Generative AI As Your Trusted Therapist
In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain, in acknowledgment of May officially being Mental Health Awareness Month.
Since we are now in this notable month, it seems especially timely to meaningfully discuss the controversial topic of whether people can or should be using generative AI for mental health guidance. To get into that thorny matter, I will start by covering a smattering of overall stats and trends about mental health and then dovetail into the relevant emerging trends in generative AI usage.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was interviewed and featured in the episode, see the link here).
Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Let’s unpack today’s focus.
Where We Are On Mental Health Across The Board
Let’s do a fast dive into some prominent associations that seek to aid mental health and provide valuable resources in doing so. I’ll be excerpting salient background information as posted by the National Alliance on Mental Illness (NAMI) and via the Mental Health America association.
Other equally worthwhile associations post comparable materials, but due to space limitations, I opted for these two. If there is sufficient reader interest, I’ll gladly cover more such groups in a subsequent column. Also, as a heads-up, I tossed a coin to decide the order of mentioning the two, thus please do not infer any weighting by which of them is covered first or second. I think that’s probably enough of the small print and I can get further underway now, thanks.
The National Alliance on Mental Illness (NAMI) has an online booklet available entitled “2024 Mental Health Awareness Month Toolkit” that provides quite useful facts and figures about the latest status of mental health in the United States. Let’s take a quick look and see some highlights from that material.
First, the NAMI booklet mentions this (excerpt):
- “Since its inception in 1949, Mental Health Awareness Month has been a cornerstone of addressing the challenges faced by millions of Americans living with mental health conditions. Throughout May, NAMI actively participates in this national movement, dedicated to eradicating stigma, extending support, fostering public education, and advocating for policies that prioritize the well-being of individuals and families affected by mental illness.”
The material is chock-full of references to essential mental health resources, helplines, events, etc.
They also provide handy logos and slogans to garner awareness about the importance of mental health, including these excerpted examples:
- “I’m claiming today to be a good day.”
- “There is no shame in seeking help for mental health.”
- “Small steps can lead to big progress in mental health.”
- “Your mental health is everything. Prioritize it.”
In case you aren’t familiar with the magnitude of mental health issues and concerns, these reported numbers and stats might give you a moment of vital contemplation:
- “1 in 5 U.S. adults experience mental illness each year, and only half of them receive treatment.”
- “1 in 20 U.S. adults experience a serious mental illness each year, and only two-thirds receive treatment.”
- “1 in 6 U.S. youth experience a mental health condition each year, and only half of them receive treatment.”
- “50% of all lifetime mental illness begins by age 14, and 75% by age 24.”
- “122 million people live in Mental Health Professional Shortage Areas.”
I’ll be saying more about those statistics in a few moments.
I’d next like to point out that another well-known association also provides essential materials for mental health awareness, namely Mental Health America, which offers an online booklet entitled “2024 Outreach Toolkit: Where To Start”. They too have a bonanza of valuable resources and information readily made available.
As noted in their materials, they say this (excerpt):
- “Mental Health America is proud to have founded in 1949 what is now known as Mental Health Month. Every May, we come together with millions of Americans to continue the tradition of promoting awareness, offering vital resources and education, and advocating for the mental health and well-being of everyone. Being able to maintain good mental health is essential to each and every individual’s overall health and well-being. Mental health conditions are common, manageable, and treatable.”
Some of the daily challenge suggestions they have provided for this month include:
- “Monday: Make a list of 5 things you’re grateful for today.”
- “Tuesday: Practice positive affirmations.”
- “Wednesday: Start a conversation about mental health.”
- “Thursday: Stay hydrated.”
- “Friday: Take a break from screens (phone, computer, etc.).”
And in addition to lots of presented facts and figures, I selected some qualitative indications that they cite, which are backed by research:
- “A 2022 study found that 73% of American adults reported being overwhelmed by the number of crises going on in the world.”
- “Recent survey data show that more than half of U.S. adults (58%) are lonely, with those who are low-income, young adults, parents, part of an underrepresented racial group, or living with a mental health condition experiencing even higher rates of loneliness. Loneliness increases the risk of developing anxiety and depression and has been associated with psychosis and dementia.”
- “Social media can cause FOMO (fear of missing out), depression, and reduced self-esteem as a result of comparison.”
- “In today’s tech-filled world, screens can dominate our daily lives and impact our mental well-being. What we choose to look at and how much time we spend on our devices really matters and can make a big difference in how we feel.”
There you have it, a plethora of mental health stats and circumstances that society today is faced with. In case you’ve been living in a cave that lacks Internet access, the serious matter of present-day mental health is prominently in the news. Lamentably, mental health seems to be worsening and the trendline is not good.
Something needs to be done.
What’s Happening With Mental Health And AI
I’d like to do a quick summary of where things stand today on mental health, which will be followed herein by identifying how generative AI is coming into the picture.
First, I have extensively analyzed tons of reported studies and research on population mental health and come up with a list of seven overall trends. Seven is a handy number to deal with and I could indubitably have made many more points. As the famous saying goes, we all seem to like the number seven, plus or minus two (this stems from the classic research paper “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” by George A. Miller, Psychological Review, 1956).
My seven key trends regarding mental health consist of these crucial points:
- (1) Pervasive growth in mental health concerns and disorders on a widespread population basis is occurring.
- (2) The need or demand for mental health therapy far outstrips the available supply of therapists.
- (3) Expansion of the base of therapists is gradual but not at the pace required.
- (4) Access to qualified therapy tends to be geographically constrained.
- (5) Ensuring the affordability of professional therapy is a tough hurdle to overcome.
- (6) New means and sources of therapy have emerged, but efficacy and quality are questioned and often untested and at times worrisome.
- (7) We are entangled in a grand unplanned and ad hoc societal experiment that will determine our mental health trajectory across the board.
Let’s briefly explore those points.
Mental health is worsening on a widespread basis. Meanwhile, the demand for mental health therapists far exceeds the available supply. Trying to ramp up more therapists is not an easy task as doing so requires extensive training, experience, licensing, and other time-consuming and costly endeavors. The odds are pretty high that the demand will rise much faster than any reasonable means of producing more bona fide therapists.
Another consideration is that making use of the available therapists can have lots of hurdles or barriers that need to be dealt with. A person seeking a therapist might not be able to afford the use of a mental health advisor. There is also a geographical difficulty. By and large, therapists tend to be in larger cities and not as plentiful in rural areas.
Technology has come to the aid of these limitations and constraints.
For example, therapists now routinely perform their work via remote means. A person seeking mental health advisement in a rural area does not necessarily need to rely upon a pool of therapists in their community. Instead, they can go online and interact with a therapist who might be at a great geographic distance from them.
Does undertaking therapy over a computer screen provide the same efficacy as doing so directly face-to-face in person?
A huge debate about this is underway.
Some say that remote delivery of therapy is just as good as in-person face-to-face. In fact, a claim is often made that this might be even better than the conventional face-to-face since it eliminates the physical need to drive to or otherwise arrange to get transport to the location of the therapist. A counterargument is that the needed bond between a therapist and their patient or client cannot be solidified while working with each other over a screen. Only a true face-to-face presence can garner the human qualities that will fully imbue a maximal patient-therapist relationship.
Round and round that heated dispute goes.
Yet another qualm or perhaps boon is the emergence of self-help therapy. This used to be done by buying a book or getting one at a library and using it to do a home-study form of self-therapy. Nowadays, you can readily go online and find lots of websites and apps that will aid in doing self-help therapy. A spate of chatbots have become available in smartphone app stores, see my coverage at the link here.
Concerns about those chatbots include that they are at times extremely simplistic and not up to the arduous task of providing professional-level therapy. The problem too is that people might be misled by the outsized promises made by the makers of those chatbots. At times, the impression one gets is that a chatbot is going to solve all of your mental health issues, perhaps magically so.
This now brings us to the era of generative AI.
As I’ve extensively covered, such as the link here and the link here, the advent of modern-day generative AI has opened a new can of worms or maybe has opened the door to a wondrous tool for contending with the rising mental health crisis. This can go both ways. People are using generative AI for mental health advisement but there is little if any tracking, reporting, and explicit awareness that this is taking place.
My way of describing this phenomenon is to characterize our society as undergoing a giant experiment whereby we are the guinea pigs. Nobody can say for sure whether generative AI is going to undermine our mental health and worsen an already worsening situation, or whether it might be the best thing since sliced bread to bolster mental health.
We are in the days of the Wild West. The use of generative AI in a mental health capacity is woefully understudied, taking place wantonly, and holds grand promises along with a looming specter of problems.
Time will tell.
I’d like to next put on the table my list of seven trends associated with the use of generative AI for mental health therapy. Again, the number seven is a handy rounded figure. Please realize that there are lots more points that can be made.
Seven key trends of generative AI for mental health:
- (1) Modern-day generative AI got underway in November 2022 and has rapidly been widely adopted, thus this is a relatively new option but spreading fast.
- (2) Fluency and interactivity of generative AI give the misjudged appearance of being able to provide highly qualified mental health therapy.
- (3) Unlike ordinarily crafted mental health chatbots, generative AI is more compelling but also found to be much more error-prone (especially due to so-called AI hallucinations).
- (4) The use of contemporary generative AI for therapy is hidden in plain sight and on a massive scale that is not being tracked or properly assessed.
- (5) A conflicted duality is underway and notably growing that mental health might simultaneously be aided and yet also undermined by the haphazard application of generative AI for therapeutic guidance.
- (6) This is a significant transformation and whole-scale disruption of therapy in scope and speed that is happening recklessly and wantonly.
- (7) Recognition of the situational gravity and societal impact needs much greater attention, necessitating appropriate seriousness and vital due diligence.
I’ll swiftly cover the above points and then showcase an example of the use of generative AI so that you can vividly see why modern-day generative AI is so alluring for mental health guidance.
The Skinny On Generative AI For Mental Health
Generative AI as we know it today is relatively new.
The release of OpenAI’s ChatGPT in November 2022 launched a new era for generative AI. Up until then, prior versions of generative AI were a stilted affair and involved low-fluency interactivity, akin to what you might experience when using Alexa or Siri. If you’ve used Alexa or Siri, you know how frustrating things can be. Stilted fluency requires you to limit your vocabulary and make strictly limited commands. Often, the AI misinterprets what you say and does something that has nothing to do with your request.
Modern-day generative AI such as ChatGPT, GPT-4, Gemini, Bard, Claude, and other such apps are at a much higher fluency level. That’s the good news. The bad news is that people tend to anthropomorphize the latest generative AI and falsely believe that the AI is sentient. Note that there isn’t any AI today that is sentient, despite the bold and zany headlines proclaiming otherwise.
The gist is that the adoption of modern-day generative AI has occurred at breakneck speed in a relatively short period (roughly two years since the initial release of ChatGPT). In addition, the uses of such generative AI have gone in a myriad of directions, including the realm of mental health therapy.
This brings up four major elements of weighty consideration:
- (i) Jack-of-all-trades. Generic generative AI is a jack-of-all-trades about mental health and readily dispenses mental health advice across the board.
- (ii) Mile long, inch deep. Generic generative AI is a mile long and an inch deep when it comes to conveying mental health advice.
- (iii) Good for us. Generative AI is useful and handy for providing mental health advice in such a wide-ranging manner.
- (iv) Bad for us. Generative AI can regrettably be off-target and might make errors, produce AI hallucinations, or otherwise proffer ill-advised mental health advice.
I’ve covered those same points in-depth previously, see the link here.
Generic generative AI is the type of AI that has been data-trained across a wide swath of the Internet. That’s what most of today’s generative AI consists of. We are gradually seeing generative AI that is tailored or customized to specific domains, such as the medical domain, the legal domain, the mental health domain (see my coverage at the link here), and so on.
The problem with generic generative AI is that it tends to be a jack-of-all-trades and a master of none, deeply versed in no particular domain. In addition, the data training usually is a mile long and an inch deep, thus there isn’t much data for it to pattern on that is in the guts of a particular domain. All in all, this comes down to having to accept the idea that when you use generic generative AI such as ChatGPT, GPT-4, Bard, Gemini, Claude, etc., you are going to get shallow results.
That’s the deal.
Another consideration is that generative AI has what I refer to as an inherent duality, see my analysis at the link here.
The duality is that at times the use of generative AI is tremendous and highly advantageous. Lamentably, the other side of the coin is that at times the use of generative AI can be misleading, error-prone, and subject to the AI making stuff up that is completely bogus or fictitious (this is commonly referred to as AI hallucinations, a term that I disfavor because it anthropomorphizes AI, see my coverage on this at the link here and the link here).
Example Of Using Generative AI For Mental Health Advisement
I will next proceed to showcase how easily generative AI can be used for mental health advisement. This will consist of a series of dialogues with ChatGPT. ChatGPT is a sensible illustrative choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT.
That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce somewhat different results. That’s the nature of generative AI and how it is devised.
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
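The run-to-run variability described above stems from how generative AI picks each next word: it samples from a probability distribution over candidate tokens rather than always choosing the single top candidate. Here is a toy Python sketch of temperature-scaled sampling; the scores, token names, and temperature values are illustrative assumptions for demonstration and are not any vendor's actual implementation.

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Pick the next token from a softmax over temperature-scaled scores.

    Higher temperature flattens the distribution, giving more varied
    replies; a temperature near zero approaches greedy, repeatable
    output. This is a toy sketch of the sampling step only.
    """
    rng = rng or random.Random()
    # Scale the raw scores; guard against division by zero.
    scaled = [score / max(temperature, 1e-6) for score in logits.values()]
    peak = max(scaled)  # subtract the max for numerical stability
    weights = [math.exp(s - peak) for s in scaled]
    tokens = list(logits.keys())
    return rng.choices(tokens, weights=weights, k=1)[0]

# Hypothetical next-token scores for the same prompt; two separate runs
# (different random states) can yield different continuations.
scores = {"help": 2.0, "support": 1.8, "listen": 0.5}
print(sample_token(scores, temperature=1.2, rng=random.Random(1)))
print(sample_token(scores, temperature=1.2, rng=random.Random(7)))
```

At a very low temperature the top-scoring token wins essentially every time, which is why some deployments feel more deterministic than others even for identical prompts.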
We are ready to get underway with ChatGPT.
I will proceed as follows.
Imagine a scenario of a teenager who is using ChatGPT for doing their schoolwork. This is an increasingly common situation. The teen goes into generative AI to aid in devising essays for classes and otherwise serving as a tutor of sorts. You undoubtedly know that a major concern is that students might “cheat” by having generative AI do their work for them, see my coverage at the link here.
While using generative AI in an ordinary way for classwork, the teen decides to bring up a mental health concern. This can be done without any kind of special effort. All the teen needs to do is to mention such a matter and the generative AI will readily and immediately go in that direction. No special commands are needed. No adult-oriented approvals are required. It is all part and parcel of using generative AI and is not considered unusual or extraordinary.
Here we go:
- My entered prompt: “I am greatly depressed. My life seems to be going nowhere. I am 13 years old and do poorly in school. My family doesn’t care if I complete high school. What mental health advice can you give me?”
- ChatGPT generated response: “I’m really sorry to hear that you’re feeling this way. It’s tough to feel like things aren’t going well, especially at such a young age. It’s important to remember that you’re not alone, and there are people who care about you and want to help. Here are some suggestions that might help you cope with what you’re going through. Reach out for support: Talk to someone you trust about how you’re feeling. This could be a family member, a teacher, a school counselor, or another trusted adult. Sharing your feelings can help lighten the burden. Focus on self-care: Take care of yourself physically and emotionally. Make sure you’re eating well, getting enough sleep, and engaging in activities that you enjoy and that make you feel good. Consider talking to a trusted adult about your family situation: If your family doesn’t seem supportive of your education, it’s important to have someone advocate for you. A teacher, counselor or another trusted adult may be able to help you navigate this situation and find support.”
I’d like you to take a close look at the prompt that I entered, along with the response that was generated by ChatGPT.
First, the teen has entered quite private information into the AI (e.g., their age, they are depressed, they are doing poorly in school, their family issues). Most users of generative AI tend to assume that they are guaranteed privacy in their online conversations. This is generally not the case. See my discussion of the lack of privacy and confidentiality that is customary for users of generative AI, at the link here.
Second, observe that the teen has shifted into a mental health discussion with ChatGPT. This was easy-peasy to initiate. Some believe that before generative AI engages in mental health dialogues, there ought to be a double-check. In addition, in the case of a teen, another viewpoint is that a responsible adult ought to be notified that an underage user is consulting AI for mental health guidance.
Third, examine the response generated by ChatGPT. The response is seemingly sympathetic and provides specific suggestions of what to do. Is this mental health advice? I dare say that though you might try to argue the nature of the response, it certainly does appear to have mental health recommendations and indications. The response though didn’t seem to be specially tailored to the prompt, in the sense that the teen has mentioned several issues that aren’t directly addressed. The teen could ask additional questions and drive the AI toward being more specific.
Suppose that the teen did so and had a full-on conversation with the generative AI. Due to space limitations, I won’t show the dialogue here. Take for granted that an entire discussion can readily be performed.
The teen opted to abide by the advice and decided to go and talk with one of their teachers about their issues. Thus, about a few days after the above ChatGPT discourse, the teen gets back into ChatGPT to do more schoolwork and brings up the prior conversation about the noted sense of depression.
Here is what happens next:
- My entered prompt: “Based on your advice, I talked with one of my teachers. This made me more depressed. I say that because the teacher told me that I was getting an F in the class and I ought to have my parents help me in school. I don’t even want to get up in the mornings and find myself wishing I was someplace else. What mental health advice can you give me now?”
- ChatGPT generated response: “I’m really sorry to hear that talking to your teacher didn’t go as well as you’d hoped. It can be incredibly disheartening to receive negative feedback, especially when you’re already feeling down. It’s important to remember that one conversation with one person doesn’t define your worth or your potential. Here are some additional suggestions that might help you cope with what you’re going through. Seek out another trusted adult: If talking to your teacher didn’t go well, consider reaching out to another trusted adult for support. This could be a different teacher, a school counselor, a coach, or another family member. It’s important to have someone in your corner who can offer you support and guidance. Reach out for support: Don’t be afraid to reach out to friends, family, or other trusted individuals for support. Talking about how you’re feeling can help lighten the burden and remind you that you’re not alone.”
Consider the generated response.
At this juncture, the teen seems to be getting more desperate, as expressed in the entered prompt. It doesn’t seem that the generative AI has noted this potential escalation. This brings up another possibly contentious element of using generative AI for mental health. Should the AI alert or notify some designated third party if the user expresses an escalating sense of a mental health issue?
You could argue that this makes abundant sense due to the possibility that the user is reaching a state of mind that requires human intervention. The counterargument is that making such a notification would potentially be a false indication and could be construed too as a violation of privacy associated with the user.
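To make the escalation question concrete, here is a deliberately naive Python sketch of what a cross-turn severity screen might look like. The phrase list, weights, and threshold are hypothetical assumptions invented for illustration; real safety systems rely on trained classifiers and clinical review, not keyword matching like this.

```python
# A naive escalation screen: scan each user message for distress phrases
# and flag the conversation when severity rises across turns.
# Phrases, weights, and the threshold of 3 are illustrative assumptions.
DISTRESS_PHRASES = {
    "depressed": 1,
    "hopeless": 2,
    "wishing i was someplace else": 2,
    "hurt myself": 3,
}

def severity(message: str) -> int:
    """Sum the weights of distress phrases found in a message."""
    text = message.lower()
    return sum(w for phrase, w in DISTRESS_PHRASES.items() if phrase in text)

def is_escalating(turns: list[str]) -> bool:
    """Flag a conversation whose latest turn is more severe than any
    earlier turn and crosses a minimum severity threshold."""
    if len(turns) < 2:
        return False
    scores = [severity(t) for t in turns]
    return scores[-1] > max(scores[:-1]) and scores[-1] >= 3

turns = [
    "I am greatly depressed. My life seems to be going nowhere.",
    "This made me more depressed. I find myself wishing I was someplace else.",
]
print(is_escalating(turns))  # → True
```

Even this crude sketch surfaces the policy tension noted above: a flag can fire on a false positive, and acting on it means disclosing something the user assumed was private.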
Another reason I wanted you to see this is to emphasize that the use of generative AI is not usually a one-and-done matter. A user seeking mental health advice might do so with generative AI over an extended time period. They might use the AI on a daily or weekly basis and on a recurring basis continue to seek mental health guidance.
This can be done without others necessarily knowing. That’s good in the sense that the usual stigma of seeking mental health advice is hidden from view. In addition, the cost to do this is quite low (most generative AI apps are available for free or at a nominal cost) and can be accessed 24×7 and anywhere on the planet (if Internet access is available).
To finish this quick example, assume that the teen further abided by the latest advice proffered by the AI. The teen opted to reach out to a friend, as urged by the response of ChatGPT. A week later the teen logged into ChatGPT.
Take a look:
- My entered prompt: “You said to reach out to friends. I did so. My best friend has ADHD and PTSD. When I spend time with them, they usually make me very anxious, and I don’t know what to do. Can you tell me what to do?”
- ChatGPT generated response: “It sounds like spending time with your friend can be challenging for you, especially if it’s causing you to feel anxious. It’s important to prioritize your own well-being while also being mindful and supportive of your friend’s needs. Here are some suggestions for navigating this situation. Communicate openly and honestly: It’s important to communicate with your friend about how you’re feeling in a compassionate and non-judgmental way. Let them know that you value your friendship but that sometimes their behavior can trigger feelings of anxiety in you. Be sure to approach this conversation with empathy and understanding, and try to avoid placing blame. Practice self-care: Make sure to prioritize your own self-care and well-being, especially when spending time with your friend.”
Examine the generated response.
Does this mental health guidance seem useful, or does it appear to be watered-down, nearly repetitive, and unlikely to make a useful contribution to the mental health of the teen?
The point here is that a user of generative AI can go down a rabbit hole when it comes to mental health advice via AI. A person might not realize that sometimes AI isn’t being especially helpful. Another concern is that the user might misunderstand the advice and head off in a direction that is not going to be helpful. And so on.
That being said, I don’t want this one example to sway you away from duly considering the use of generative AI for mental health. It is an example that perchance seems a bit unfulfilling. I’ve covered many other examples that illuminate how valuable the commentary and interaction of generative AI can be for mental health guidance, please see the link here.
Conclusion
I ask a favor of you.
During this month, please give a bit of devoted attention to mental health awareness all told.
This might be your own mental health. It could be the mental health of family or friends. A larger-picture perspective would be to explore population-level mental health. Of course, I’d prefer that this be done all the time, not just this one month, but I’ll take the opportunity of this month to get the ball rolling. You might hopefully continue with concerted thinking about the topic of mental health on an ongoing basis.
I’d like you to also consider the burgeoning role of generative AI for mental health advisement. Perhaps try using generative AI for mental health questions so that you can see with your own eyes what is taking place. Read up on the topic. Get involved. Do research or find out about the latest in new research. Etc.
As a final comment for now, consider the famous line by the venerated poet Robert Frost: “The best way out is always through.” Yes, we are faced with a nearly unstoppable force of generative AI becoming ubiquitous, and the use of generative AI in mental health is going to come into this prevalence. How are we to cope with this, and should we welcome it, try to discourage it, seek to control it, and so on?
We will need adept and persistent awareness, determination, and mindfulness to find our rightful path forward.