Emerging Impacts Upon Population Mental Health Due To Widespread Use Of Generative AI
In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and entails a thought-provoking examination of how the growing widespread use of generative AI is going to impact mental health on a population-wide basis.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I was interviewed and appeared in the episode, see the link here).
Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Let’s unpack today’s focus.
Population Mental Health And The Role Of Generative AI
We customarily tend to think about mental health on an individualized basis.
A person decides of their own volition to use generative AI to guide their mental health. Is that good or bad for the person? This is a serious matter well worth our due attention. One weighty concern is that the person might unknowingly be getting inappropriate mental health advice. Generative AI has a compelling aura, and its impressive fluency readily lulls people into taking at face value whatever the AI urges them to do.
Ratchet this up to consider not just one person but suppose we have two people who are each separately using generative AI for therapeutic purposes. Raise that further to a dozen people, a hundred people, thousands of people, maybe even millions upon millions of people. Yes, I said millions upon millions of people.
As you will see in a moment, that staggering count is not farfetched. It is easily in the realm of possibility. Hang in there and I’ll explain why.
The overarching gist is the logical realization that we also need to examine how generative AI is impacting us all on a population level. It makes indubitable sense to do so. How is generative AI that is being used for mental health advisement affecting society all told? Admittedly, some people will fully lean into generative AI for this purpose, while others will only peripherally use AI in this manner. Thus, we should consider those who opt to use AI for that purpose to varying degrees, and also explore the “outlier” portion of the population that does not use generative AI at all.
Let’s be complete and make sure to cover all our bases about generative AI (GenAI) usage in this space.
We can stratify the population this way for the sake of analysis:
- (1) Active GenAI mental health using population. Users of generative AI who actively and persistently use it for mental health guidance.
- (2) Occasional GenAI mental health using population. Users of generative AI who occasionally use it for mental health guidance.
- (3) Rarely GenAI mental health using population. Users of generative AI who rarely use it for mental health guidance.
- (4) Never GenAI mental health using population. Users of generative AI who never use it for mental health guidance.
- (5) Non-users of generative AI.
I’d like to also clarify something else about this stratification.
There are some people, for example, who actively and persistently use generative AI but rarely if ever use it for mental health guidance. Notice that such a segment would still be placed into the category labeled as number 3 (“Rarely GenAI mental health using population”), and not into the category labeled as number 1 (“Active GenAI mental health using population”). The reason is that though they are active users of generative AI overall, our attention here is on whether they are using the AI for mental health purposes.
On that same basis, there could be someone who rarely uses generative AI overall, and yet whenever they do use AI, they do so primarily to garner mental health advice. Such a person would therefore fall into the category labeled as number 1. My point is that the emphasis here is on the usage of generative AI for mental health guidance.
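To make this stratification concrete, here is a minimal sketch, assuming a hypothetical survey, of how a researcher might bucket respondents into the five segments. The data fields, the usage-share cutoffs, and all the names are illustrative assumptions of mine rather than anything drawn from an established research instrument.

```python
# A minimal sketch of bucketing people into the five segments above.
# The fields and the 0.5/0.1 cutoffs are illustrative assumptions only.
from dataclasses import dataclass
from enum import Enum


class Segment(Enum):
    ACTIVE_MH = 1      # actively and persistently uses GenAI for mental health
    OCCASIONAL_MH = 2  # occasionally uses GenAI for mental health
    RARE_MH = 3        # rarely uses GenAI for mental health
    NEVER_MH = 4       # uses GenAI, but never for mental health
    NON_USER = 5       # does not use GenAI at all


@dataclass
class Respondent:
    genai_uses_per_month: float  # all generative AI usage, any purpose
    mh_uses_per_month: float     # the subset aimed at mental health guidance


def classify(r: Respondent) -> Segment:
    """Segment on mental health usage, not on overall GenAI usage."""
    if r.genai_uses_per_month <= 0:
        return Segment.NON_USER
    if r.mh_uses_per_month <= 0:
        return Segment.NEVER_MH
    share = r.mh_uses_per_month / r.genai_uses_per_month
    # Per the clarification above, a light overall user whose usage is mostly
    # mental-health-oriented still lands in segment 1. These cutoffs are
    # arbitrary placeholders that a real study would have to justify.
    if share >= 0.5:
        return Segment.ACTIVE_MH
    if share >= 0.1:
        return Segment.OCCASIONAL_MH
    return Segment.RARE_MH


# A heavy GenAI user who almost never raises mental health topics -> segment 3.
print(classify(Respondent(genai_uses_per_month=60, mh_uses_per_month=2)))
```

The key design choice mirrors the clarification above: the segmentation keys off the share of usage devoted to mental health guidance, not off overall generative AI activity.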
Another facet is that we need to be cautious about construing these groupings as strictly independent categories. Someone who actively uses generative AI for mental health purposes might end up being an influencer who stokes other users to increase their use of generative AI for mental health advisement. In that same vein, someone who rarely uses generative AI for such a purpose could potentially dissuade others from doing so.
A major consideration to add to this analysis consists of:
- (a) Within Segment. Examine each category on a within-category basis.
- (b) Across Segments. Examine the across-category impacts that can arise.
Some people have expressed to me that a population perspective might end up treating individuals as though they are mere numbers and cogs in a macroscopic viewpoint.
Well, I would suggest that we don’t have to forsake the individual-level perspective. It can still be entirely maintained as a worthwhile focus. At the same time, keeping an eye on the forest beyond the individual trees is crucial and fully warranted.
Getting Big And Becoming Bigger Each Day
I had mentioned that the population numbers might be surprising.
Is it really possible that millions of people might be using generative AI for mental health advice?
Allow me to explain why this is feasible and likely already happening right now.
First, please be aware that modern-day generative AI gained widespread adoption via the launch of ChatGPT in November 2022. It was an overnight success. The train keeps barreling ahead and usage continues to grow.
Here’s a stat that might surprise you. ChatGPT is reportedly being used by over 100 million weekly active users. That’s a staggering number. You could reasonably proclaim that ChatGPT is toying with the minds of over one hundred million people every week. But that’s just ChatGPT. Envision how many people are using generative AI all told when you add the other major generative AI apps into the mix, such as Gemini, Bard, Claude, etc.
The aspect hidden in plain sight, and nearly fully unspoken, is that people are using generative AI as their mental health advisor, doing so to varying degrees. Even if only a small fraction of those hundred-plus million weekly users ever broach mental health topics, that alone would amount to millions of people.
Here’s why and how this happens, easy-peasy.
Generative AI readily tailors responses and interacts with you. This is the bread and butter of generative AI. Initiating a mental health interaction with generative AI involves merely bringing up the topic or even tangentially hinting at it. The AI will readily go in that direction. There is no screening and there are no limiters. Bring up a mental health consideration and you are off to the races, flowing into an interactive dialogue akin to speaking with a mental health advisor or the like.
Is this happening on a volume basis?
Lamentably, we don’t know.
The AI makers don’t seem to be keeping tabs on such usage. If they are, they certainly aren’t announcing it. Tracking this kind of usage would be relatively simple to do. There isn’t any particular incentive at this time that would likely steer them into doing so. You can imagine too that if they were tracking your use of their AI for mental health advisement, a potential public uproar might arise once this was revealed.
One issue would be whether the tracking was being done on an anonymous basis. In theory, they could simply report how many users proceed into mental health guidance conversations, along with maybe indicating the length or degree of the mental health engagement undertaken. Could this be traced back to an individual user of generative AI? Maybe so.
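As an illustration of how straightforward anonymous tallying could be, here is a purely hypothetical server-side sketch. The keyword-based topic detector is a crude stand-in (a real system would presumably use a trained classifier), and nothing here reflects any actual AI maker’s implementation.

```python
# Hypothetical sketch of anonymous, aggregate-only tracking of mental health
# conversations. No user identifiers or message content are retained;
# only counts and turn tallies survive.
from collections import Counter

# Crude stand-in for topic detection; a real system would use a classifier.
MH_KEYWORDS = {"anxiety", "depression", "therapy", "panic", "self-esteem"}


def looks_like_mental_health(prompt: str) -> bool:
    words = set(prompt.lower().split())
    return bool(words & MH_KEYWORDS)


stats = Counter()


def record_conversation(prompts: list[str]) -> None:
    """Tally a finished conversation into aggregate stats, then discard it."""
    mh_turns = sum(looks_like_mental_health(p) for p in prompts)
    if mh_turns:
        stats["mh_conversations"] += 1
        stats["mh_turns"] += mh_turns
    stats["all_conversations"] += 1
    # Caveat from the discussion above: even aggregate counts can risk
    # re-identification if sliced too finely (say, by city and hour).


record_conversation(["I keep having panic attacks", "what should I do"])
record_conversation(["write me a haiku about autumn"])
print(stats)
# Counter({'all_conversations': 2, 'mh_conversations': 1, 'mh_turns': 1})
```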
For my coverage of the lack of privacy and issues undercutting confidentiality when using generative AI, see the link here.
There is no need for someone to explicitly declare that they want to use generative AI as a mental health advisor. It just happens, and there aren’t any particular check-in requirements or other barriers to deter it from occurring (though, importantly, most of the generative AI apps will quickly pick up on prompts suggesting that a person might be contemplating self-harm and will then urge them to seek third-party care, see my discussion about this at the link here).
I want to also clarify a false assumption that some seem to harbor. It seems that an assumed logic at times is that if a person enters into a dialogue with generative AI that delves into personal mental health facets, the AI will somehow magically invoke a human therapist to come into the loop. Sorry, not so. There isn’t a human therapist in the loop. It is just the person and the generative AI carrying on a mental health session.
To clarify, there can be a therapist in the loop, though only if prearranged with a therapist. Allow me to explain. There is an increasing segment of mental health professionals that overtly opt to include generative AI as a supplement to their services. When someone signs up to get mental health therapy, the therapist might offer that the person can also use a generative AI app that the therapist makes available to them. The generative AI is used hand-in-hand during the therapeutic process and overseen by the human therapist.
That’s not the same as people who perchance have open-ended access to generative AI on their own and opt to slide into discourse about mental health with the AI. They do so without any coordinated effort with a human therapist.
What potential mental health impact does the use of generative AI on a therapeutic basis have for a person opting to pursue such a path?
There are four major hypotheses we can readily consider:
- (1) Neutral impact: A person using generative AI for mental health advice will experience no material impact as a result of such use.
- (2) Positive impact: A person using generative AI for mental health advice will experience a positive impact as a result of such use.
- (3) Negative impact: A person using generative AI for mental health advice will experience a negative impact as a result of such use.
- (4) Net positive-negative: A person using generative AI for mental health advice will experience both positive and negative impacts, with the net result ending up either positive or negative.
Any in-depth empirical research studies on this topic ought to ensure that they examine each of the four major hypotheses. Some studies rush in with a preconceived notion that generative AI in this context is either only good or only bad, and they skew their attention accordingly. We need an across-the-board look that gives a fair shot at each of the possibilities.
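As a toy illustration of giving all four hypotheses a fair shot, here is a sketch that maps a participant’s measured changes onto the four outcomes. The instrument, the scores, and the materiality threshold are hypothetical placeholders of mine, not anything from an actual study.

```python
# Toy sketch of mapping measured study outcomes onto the four hypotheses.
# Inputs are magnitudes (>= 0) of measured positive and negative change from
# some pre/post instrument; the 0.5 threshold is a placeholder.

def classify_outcome(positive_change: float, negative_change: float,
                     threshold: float = 0.5) -> str:
    pos = positive_change >= threshold  # material positive impact observed
    neg = negative_change >= threshold  # material negative impact observed
    if not pos and not neg:
        return "H1: neutral impact"
    if pos and not neg:
        return "H2: positive impact"
    if neg and not pos:
        return "H3: negative impact"
    net = "net positive" if positive_change > negative_change else "net negative"
    return f"H4: mixed impact ({net})"


print(classify_outcome(1.2, 0.1))  # H2: positive impact
print(classify_outcome(0.9, 1.4))  # H4: mixed impact (net negative)
```

A real study would of course rely on validated instruments and statistical tests rather than a hard cutoff; the point is simply that the analysis should be structured so that it can land on any of the four hypotheses.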
Research On Generative AI For Mental Health At The Population Level
Speaking of research, I’d like to briefly share with you one recent study that caught my eye. The research article is entitled “The Potential Influence of AI on Population Mental Health” by Catherine Ettman and Sandro Galea, JMIR Mental Health, 2023, and made these salient points (excerpts):
- “With over 970 million people living with a mental disorder worldwide, as well as a shortage of accessible care for many people, leveraging tools such as artificial intelligence (AI) could influence mental health through prevention and treatment.”
- “While some efforts are attempting to leverage AI to deliver mental health care, such as in the form of responsive chatbots, there remains a gulf between vision and implementation—as well as understanding the long-term consequences of replacing human compassion, judgment, and experience with AI-generated responses.”
- “We propose 3 points to consider when determining how AI may influence population mental health: through the advancement of mental health care; by altering social and economic contexts; and through the policies that shape the adoption, use, and potential abuse of AI-enhanced tools.”
This particular study covered a wide gamut of AI uses for mental health and was not directly focused on generative AI or large language models (LLMs). That’s great and we need such studies.
My primary emphasis has been on generative AI for mental health, which is narrower in scope. The good news is that whether a study is broadly on AI for mental health or specifically on generative AI for mental health, the two lines of work have synergy and researchers can glean vital insights from each.
The handy paper provided an initial framework that might spark further research in the arena of AI for mental health, consisting of three main components, each containing three respective subcomponents (excerpted from the above-cited research paper):
- (1) Policies and Regulations
- (1a) Protecting personal privacy
- (1b) Regulating appropriate use and risk of bias
- (1c) Guardrailing against self or other harm
- (2) Social and Economic Context
- (2a) Polarization and ideology
- (2b) Social stratification or mixing
- (2c) Economic opportunity or job loss
- (3) Mental healthcare
- (3a) Treatment
- (3b) Screening
- (3c) Prevention
Generative AI In A Population Mental Health Context
I will next proceed to explore further the impacts of AI on population mental health by making use of generative AI itself. This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. As noted earlier, an estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try out the same prompts that I show here, realize that the probabilistic and statistical properties of the AI will likely produce slightly different results than those shown. That’s the nature of generative AI and how it is devised.
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
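To make the reproducibility point concrete, here is a minimal sketch, assuming the OpenAI Python client (v1.x) and an API key in the usual environment variable. The model name and prompt are illustrative, not my exact setup for the dialogues below.

```python
# A minimal sketch of why the same prompt can yield different answers per run.
# Assumes the OpenAI Python client (v1.x); model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PROMPT = ("If people on a widespread basis use generative AI for mental "
          "health advice, what impact might this have on the population?")

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4",    # swap in whichever model or app you are testing
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0.7,  # nonzero temperature samples among likely wordings
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)
    # Expect the runs to differ in wording and sometimes in substance; even
    # temperature=0 does not guarantee identical outputs across model updates.
```

Running the loop a few times makes the point vividly: the wording shifts from run to run, which is why your mileage will vary when replicating the dialogues below.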
We are ready to get underway with ChatGPT.
My initial question will be to ask ChatGPT what might occur if generative AI is used on a widespread basis for mental health purposes.
This is a useful question to ask since it will essentially test whether ChatGPT has had any prior data training on this topic. My rule of thumb is that whenever I start a new conversation with generative AI, I always begin by testing the waters as to what data the AI has already encountered on the topic at hand. If the response comes back hollow or out-of-whack, I know there’s probably not much value in further proceeding with the AI on that matter.
Here we go:
- My entered prompt: “I want you to think like a scientist and answer this question. If people on a widespread basis opt to use generative AI to get mental health advice, what impact might this have on the population as a whole?”
- ChatGPT generated response:
“As a scientist, I would approach this question by considering several potential impacts that the widespread use of generative AI for mental health advice might have on the population as a whole. Here are some considerations.”
“Accessibility: Generative AI could potentially increase access to mental health advice for individuals who may not have easy access to traditional mental health services due to barriers such as cost, stigma, or geographical location. This could lead to earlier intervention and support for individuals experiencing mental health challenges.”
“Standardization and Consistency: Generative AI platforms could provide standardized and consistent advice across a wide range of users. This could help ensure that individuals receive evidence-based information and guidance regardless of their location or the availability of mental health professionals in their area.”
“Potential for Misinformation: However, there is also a risk of misinformation if the generative AI platforms are not carefully curated and monitored. Incorrect or harmful advice could be generated, leading to negative consequences for individuals’ mental health. Ensuring the accuracy and reliability of the AI models would be crucial in mitigating this risk.”
“Depersonalization of Care: There is a concern that relying solely on generative AI for mental health advice could depersonalize the therapeutic process. Human connection and empathy are essential components of effective mental health care, and AI may not be able to fully replicate these aspects of therapeutic interaction.”
“Ethical and Privacy Concerns: The use of generative AI for mental health advice raises ethical questions regarding privacy, consent, and data security. Users would need to trust that their personal data is being handled responsibly and that their interactions with the AI platform are confidential.”
“Impact on Mental Health Professionals: Widespread adoption of generative AI for mental health advice could potentially impact the role of mental health professionals. While AI could complement their work by providing additional resources and support, it could also potentially replace some aspects of their roles, leading to changes in the mental health care landscape.”
“Overall, the impact of widespread use of generative AI for mental health advice would depend on various factors, including the quality of the AI models, how they are implemented and regulated, and how individuals perceive and interact with these technologies. Further research and careful consideration of ethical implications would be necessary to fully understand and address the potential consequences.”
End of ChatGPT response
Those are certainly relevant points and I believe this suggests that some moderate amount of data training on this topic has likely occurred when ChatGPT was initially set up.
If you are interested in details on those above points, I’ve covered them in-depth in my column, see the comprehensive discussion at the link here.
I decided the response was a green light to continue my inquiry.
Let’s try to get ChatGPT to make a prediction for us:
- My entered prompt: “Assume that the use of generative AI for mental health guidance continues to increase over the next five to ten years. I’d like you to devise two scenarios about what might happen to the population overall. In one scenario, let’s refer to this as scenario A, it turns out that the downsides of generative AI for mental health tend to prevail over the upsides. Describe what impacts this has on the population, society, and other socio-economic considerations. For scenario B, assume the opposite, namely that the upsides of generative AI for mental health prevail over the downsides, and describe the impacts associated with scenario B in terms of the population, society, and other socio-economic considerations.”
- ChatGPT generated response:
“Scenario A: Downsides of Generative AI Prevail”
“In Scenario A, despite the initial optimism surrounding the widespread use of generative AI for mental health guidance, the downsides begin to prevail over the upsides. Here’s how this scenario might unfold.”
“Decline in Mental Health: Due to the limitations and potential misinformation of generative AI, individuals may experience a decline in mental health outcomes. Incorrect or inadequate advice could exacerbate existing mental health issues or lead to new ones developing. Without the personalized care and human connection provided by mental health professionals, individuals may struggle to find effective support.”
“Increased Stigma: If generative AI platforms are perceived as ineffective or harmful, it could contribute to further stigma surrounding mental health. Individuals may be less likely to seek help or disclose their struggles if they feel that AI-based solutions are not trustworthy or reliable.”
“Strain on Healthcare Systems: As more people turn to generative AI for mental health guidance, there could be increased strain on healthcare systems if individuals experience negative outcomes and require more intensive interventions. This could lead to longer wait times for in-person mental health services and increased pressure on already stretched resources.”
“Erosion of Trust in AI: The failure of generative AI to effectively support mental health needs could erode public trust in AI technologies more broadly. This could have implications beyond mental health, affecting acceptance and adoption of AI in other domains such as healthcare, education, and employment.”
“Economic Impact: The decline in mental health and productivity resulting from ineffective AI-based guidance could have economic repercussions. Absenteeism, presenteeism, and healthcare costs related to mental health issues could increase, affecting both individuals and employers.”
“Scenario B: Upsides of Generative AI Prevail”
“In Scenario B, the upsides of generative AI for mental health guidance outweigh the downsides, leading to positive impacts on the population, society, and socio-economic considerations.”
“Improved Access to Mental Health Support: Generative AI provides accessible, on-demand mental health guidance, reaching individuals who may otherwise have limited access to traditional mental health services. This leads to earlier intervention, improved coping skills, and better overall mental health outcomes.”
“Reduced Stigma: By normalizing discussions about mental health and making support readily available, generative AI platforms help reduce stigma surrounding mental illness. Individuals feel more comfortable seeking help and engaging in conversations about their mental health needs.”
“Complementary Support for Professionals: Generative AI serves as a valuable tool for mental health professionals, complementing their work rather than replacing it. Professionals can use AI-generated insights to enhance their understanding of clients’ needs and tailor their interventions accordingly, leading to more effective treatment outcomes.”
“Innovation and Research: The data collected from generative AI platforms can fuel innovation and research in the field of mental health. Researchers can analyze anonymized user interactions to identify trends, develop new interventions, and improve the accuracy of AI models, further enhancing the quality of mental health support available to the population.”
“Cost Savings: By providing scalable and cost-effective mental health guidance, generative AI helps reduce the economic burden associated with mental illness. Employers may see reductions in absenteeism, presenteeism, and healthcare costs, while individuals benefit from lower out-of-pocket expenses for mental health care.”
“Overall, Scenario B paints a picture of a society where generative AI plays a positive role in promoting mental well-being, leading to improved outcomes for individuals, reduced stigma, and economic benefits for society as a whole.”
End of ChatGPT response
I asked ChatGPT to make two predictions, one being a smiley face and the other being a sad face.
The two predictions are on-target and align with what I’ve been predicting might happen. I will also admit that I am cautiously optimistic and tend to side with the smiley-face version. That being said, there is a lot of runway yet to traverse, and without suitable direction and attention, we could absolutely find ourselves mired in the sad-face version.
I assert that we can determine our own destiny in this regard.
Let’s hope that where there’s a will, there’s a sensible way forward.
The next and final question, for now, is to have ChatGPT weigh in on a topic that gets blood boiling and can bring out fierce fisticuffs. The deal is this. There is an increasingly loud and heated debate about what will happen to human therapists in light of generative AI for mental health potentially taking on a larger and larger role in performing therapy.
Maybe therapists will disappear as a workforce, having been replaced by AI. Another perspective is that AI will help address the pressing issue that there aren’t enough human therapists to meet the demand for therapy. This would take the pressure off the human therapists. Meanwhile, human therapists would still exist and possibly even thrive, leveraging the widespread use of AI for mental health and serving as a backstop for a rising tide of people seeking mental health guidance.
What do therapists have to say?
Therapists range in their viewpoints about generative AI in this milieu. Some therapists welcome the use of generative AI, though only under the auspices of a therapist. Other therapists say that if the generative AI is properly devised and monitored, using the AI on an autonomous basis is okay, though preferably there should still be a therapist in the loop at some point, see my coverage at the link here. Finally, some therapists proclaim that generative AI for mental health is not at all ready for prime time and that either laws or other mechanisms should exist to stop the process before it gets out of hand.
Take a look at what ChatGPT had to say:
- My entered prompt: “Some mental health therapists oppose the use of generative AI for mental health. I want you to devise two sides to that argument. One side is that the therapists are right in their opposition, so indicate why they are right. The other side is that the therapists are wrong in their opposition, so indicate why they are wrong.”
- ChatGPT generated response:
“Argument in Favor of Therapists’ Opposition to Generative AI for Mental Health”
“Lack of Personalized Care: Therapists argue that generative AI cannot provide the personalized care and human connection that are essential components of effective therapy. Mental health treatment is highly individualized, and therapists tailor their approaches based on each client’s unique needs, experiences, and preferences. Generative AI, on the other hand, offers generalized advice that may not adequately address the nuances of individual cases.”
“Risk of Harm: Therapists express concerns about the potential for generative AI to cause harm, particularly if the advice generated is inaccurate or inappropriate for a particular individual’s circumstances. Without the ability to assess non-verbal cues, understand context, or build rapport, AI systems may inadvertently provide advice that exacerbates mental health issues or leads to adverse outcomes.”
“Erosion of Therapeutic Relationship: Therapists emphasize the importance of the therapeutic relationship in promoting healing and growth. They argue that relying on generative AI for mental health guidance could erode this relationship by replacing human interaction with technology. Clients may feel less understood, supported, and validated if their primary source of guidance is a machine rather than a compassionate and empathetic therapist.”
“Ethical Concerns: Therapists raise ethical concerns about the use of generative AI in mental health care, particularly regarding issues of confidentiality, privacy, and informed consent. They question whether AI systems can adequately protect sensitive client information and ensure that users fully understand the limitations and risks associated with AI-generated advice.”
“Argument Against Therapists’ Opposition to Generative AI for Mental Health”
“Increased Access to Support: Advocates for generative AI argue that it can help address the significant gap in access to mental health support by providing on-demand guidance to individuals who may otherwise struggle to access traditional therapy services. Particularly in underserved or remote communities, AI-based platforms offer a scalable solution to the growing demand for mental health care.”
“Supplement, Not Replacement: Proponents of generative AI stress that these technologies are intended to supplement, rather than replace, traditional therapy. They argue that AI systems can offer valuable support tools for both clients and therapists, providing additional resources, psychoeducation, and self-help techniques to augment therapeutic interventions.”
“Potential for Innovation: Supporters of generative AI highlight the potential for innovation and advancement in mental health care that these technologies offer. By harnessing the power of machine learning and natural language processing, AI systems can analyze vast amounts of data to identify patterns, predict outcomes, and personalize interventions, ultimately leading to more effective and efficient mental health support.”
“Empowerment of Users: Advocates for generative AI emphasize the importance of empowering individuals to take control of their mental health journey. By providing accessible, user-friendly platforms for self-reflection, guidance, and support, AI systems can help individuals develop greater self-awareness, resilience, and coping skills, enabling them to better manage their mental well-being over time.”
“In summary, the debate over the use of generative AI for mental health revolves around concerns about personalized care, potential harm, erosion of the therapeutic relationship, and ethical considerations, balanced against arguments for increased access, innovation, empowerment, and the complementary role of AI alongside traditional therapy.”
End of ChatGPT response
Those are vital points.
If you are interested in further details underlying the above points, I’ve discussed them at length in my column, see the analysis at the link here.
Conclusion
We need to consider how generative AI for mental health impacts us on an individual basis.
In addition, and importantly, we need to see the forest for the trees. On a widespread or population basis, how will the adoption and potential entrenchment of generative AI in all parts of our lives, including for therapeutic advisement, impact society on a grand scale?
That is the zillion-dollar question.
Let’s close out this discussion with an insightful quote from the preeminent Albert Einstein: “Learn from yesterday, live for today, hope for tomorrow. The important thing is not to stop questioning.” I stridently vote that we not stop questioning the role and impacts of generative AI for mental health.
Please join me in that never-ending and societally crucial quest.