
Generative AI Becomes A Patient-Referral Advisor Steering People Toward Mental Health Therapy


In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain. Specifically, I closely examine the ins and outs of using generative AI to refer prospective patients toward engaging in mental health therapy services.

I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I was interviewed and appeared in the episode).

Other useful background includes my coverage of mental health chatbots that have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

Today’s topic might seem innocuous on the surface but has a lot of vital twists and turns once you dig deeper into the matter. As you will see, using generative AI as a referral advisor has both benefits and downsides. The devil is in the details, as they say.

The big question is whether we want generative AI to be a catalyst for referring people toward commencing mental health treatment. To clarify, the idea is that AI would only steer people in the direction of considering mental health engagement. A person so guided would be encouraged to discuss with mental health service providers whether to actively start and get underway. A person would be under no obligation to proceed and might decide that they aren’t interested in activating mental health services at the time.

You could say that the generative AI is actively nudging the person to the front door, and it would be up to the person to review their options, mull over what they wish to do, and choose whether to voluntarily step into the onset of mental health therapy. No obligation. No coercion, at least by the generative AI.

Let’s dig into the details.

How Things Would Work

Here’s the deal.

A person using generative AI might have a conversation with the AI that perchance wanders into the topic of mental health. Once that topic is on the table, the generative AI would ask the person various screening-related questions. The notion is to try to gauge whether the person might find human-led mental health therapy a viable option for themselves.

After that dialogue, and a kind of simplistic preliminary diagnosis by the AI, the AI might then inform the user that there are signs that they could likely benefit from therapy. If the person expresses interest, the AI proceeds to provide suggestions of where and how to do so.

At that juncture, the rest is up to the user. They would choose whether to seek out those options. They would ultimately decide whether to engage in whatever options they explore. In the end, they might decide not to do any of this follow-up. It is entirely up to that person to decide.

That being said, the generative AI could be devised to ask them whether they did a follow-up or not. The generative AI would track their response and would at a later date either remind them to consider doing the follow-up or inquire as to how their mental health treatment is coming along.

All of this certainly seems perfectly fine.

We already know that surveys indicate that mental health is increasingly a sizable issue in today’s society and that people are in sore need of mental health advisement, see my coverage of those trends at the link here. Why not leverage the huge population of people who use generative AI to aid them in considering the use of mental health services? Seems logical and actually quite laudable.

Furthermore, consider some added advantages of this approach. A person might not have been thinking about turning toward the use of mental health services. The generative AI has enlightened them accordingly. A person might be hesitant to consider using mental health services. The AI reassures them and encourages them to take the first step. A person might not be aware of how to take the first step. The AI would provide suggestions on mental health services, including available services in their local area.

Wow, a great solution to a known problem.

Some would suggest that this is a frictionless or at least a low-friction way for people to realize the value of mental health services and to seamlessly engage in such services.

The barrier to entry into mental health services is currently perceived by some as much too high. A person might be unsure of whether they warrant the use of those services. They might be uneasy about proceeding. They might be lost as to where to go and how to get underway. All of those hurdles can be readily overcome via the auspices of generative AI.

You might be tempted to say that this is the best thing since sliced bread. Get going on this. Let’s make sure that generative AI is doing this whole kit and caboodle. Maybe even rapidly pass some laws or regulations that require generative AI to undertake these steps. Don’t leave it up to the AI makers on a willy-nilly basis. Proceed at once in having this become a full-fledged feature required of all generative AI apps.

Whoa, hold your horses.

Take a deep breath.

We might be getting out over our skis.

I’d like to next walk you through the underlying nuances and point out various drawbacks, caveats, and serious concerns. As with most things in life, there are both merits and drawbacks to whatever course of action we take. We must weigh the tradeoffs of the good and the bad, akin to calculating an ROI (return on investment).

Let’s be prudent.

Looking Closely At The Process

I’d like to outline the overall envisioned process and we can then examine the matter step by step.

Here is a simple overview of the process:

  • (1) Carrying on plain dialogue. Generative AI and a person are engaged in an active online dialogue.
  • (2) Dialogue shift in focus. The dialogue at some point enters into mental health considerations.
  • (3) Gauge the user status. During the focused dialogue, the AI gauges potential signs of mental health needs.
  • (4) Nudge or be responsive. Either the generative AI nudges the person toward mental health services considerations if calculated as worthwhile, or the person explicitly asks about the matter.
  • (5) Provide pointers to services. The AI then identifies potential next steps for the person to undertake for accessing mental health services.
  • (6) AI asks for updates. At some later point, the generative AI follows up on what the person opted to do.
  • (7) AI does reminders. If the person foregoes engaging in mental health services, the generative AI notes this on a follow-up list to one day revisit the topic or periodically do so.

I trust that those steps are in keeping with what I’ve already outlined earlier in this discussion. We could add more steps, but I believe the seven shown will be sufficient for what I hope to uncover here.
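
To see the flow end to end, here is a minimal sketch in Python of how those seven steps might be strung together as a simple state machine. All of the names and the transition logic are my own hypothetical illustration of the process just listed, not anyone’s actual implementation.

```python
from enum import Enum, auto

class ReferralStep(Enum):
    PLAIN_DIALOGUE = auto()    # (1) ordinary conversation
    FOCUS_SHIFT = auto()       # (2) talk enters mental health territory
    GAUGE_STATUS = auto()      # (3) AI gauges potential signs
    NUDGE_OR_RESPOND = auto()  # (4) AI nudges, or the user asks outright
    PROVIDE_POINTERS = auto()  # (5) AI lists possible services
    ASK_FOR_UPDATES = auto()   # (6) AI later asks what the user did
    DO_REMINDERS = auto()      # (7) AI periodically revisits the topic

def next_step(step: ReferralStep, mental_health_topic: bool,
              signs_detected: bool, user_followed_up: bool) -> ReferralStep:
    """Hypothetical transition logic for the seven-step referral flow."""
    if step is ReferralStep.PLAIN_DIALOGUE and mental_health_topic:
        return ReferralStep.FOCUS_SHIFT
    if step is ReferralStep.FOCUS_SHIFT:
        return ReferralStep.GAUGE_STATUS
    if step is ReferralStep.GAUGE_STATUS and signs_detected:
        return ReferralStep.NUDGE_OR_RESPOND
    if step is ReferralStep.NUDGE_OR_RESPOND:
        return ReferralStep.PROVIDE_POINTERS
    if step is ReferralStep.PROVIDE_POINTERS:
        return ReferralStep.ASK_FOR_UPDATES
    if step is ReferralStep.ASK_FOR_UPDATES and not user_followed_up:
        return ReferralStep.DO_REMINDERS  # revisit the topic later
    return step  # otherwise, remain where we are
```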

Before I get into the depths of each step, you might be wondering whether there are chatbots that already do this kind of referral activity. The answer is yes, there are. Various chatbots have been devised and made publicly available for this purpose. I’ll say more about this in a moment. For my coverage of how generative AI has created a boom in mental-health-oriented chatbots, not all necessarily entirely for the good, see my discussion about GPTs and the GPT Store at the link here.

First, let’s take a look at a recent, innovative study about these considerations.

A new research study examined a particular use case of an AI-based referral system in a detailed empirical way (a rare instance in this particular arena). The study is entitled “Closing The Accessibility Gap To Mental Health Treatment With A Personalized Self-Referral Chatbot” by Johanna Habicht, Sruthi Viswanathan, Ben Carrington, Tobias Hauser, Ross Harper, and Max Rollwage, Nature Medicine, February 2024.

Here’s what they did their research on (excerpts):

  • “Mental health is a global health priority as recognized by the World Health Organization with disorders such as anxiety and depression affecting 29% of the global population in their lifetime.”
  • “In a multisite observational study of 129,400 patients within England’s NHS services, we evaluated the impact of a personalized artificial intelligence-enabled self-referral chatbot on patient referral volume and diversity in ethnicity, gender and sexual orientation.”
  • “We found that services that used this digital solution identified substantially increased referrals (15% increase versus 6% increase in control services).”
  • “Critically, this increase was particularly pronounced in minorities, such as nonbinary (179% increase) and ethnic minority individuals (29% increase).”
  • “Using natural language processing to analyze qualitative feedback from 42,332 individuals, we found that the chatbot’s human-free nature and the patients’ self-realization of their need for treatment were potential drivers for the observed improvement in the diversity of access.”

You can see that they were especially interested in the diversity factor, namely that an AI-based chatbot might more readily open the door for those who otherwise might not have considered the use of mental health services or were unfamiliar with what they are and how to engage in such services. All in all, the study found an uptick in referrals, and this might logically be attributed to the use of the self-referral chatbot.

They also indicate that the first step is in a sense the hardest step for many. It is like crossing a bridge that you didn’t know was there, or that you were uneasy about crossing. The aim is to find ways to ease people into taking that first step. Why so? Because people needing those services might otherwise flounder on their own, including landing in adverse outcomes that might have been avoided had they pursued mental health services in a timely fashion.

Allow me to share some of their additional key points on those matters (more excerpts):

  • “The first step of many mental healthcare pathways is for individuals to seek help and be referred to the appropriate healthcare service.”
  • “Ensuring easy and inclusive access at this initial step regardless of an individual’s socioeconomic status, ethnicity, gender or other such factors underpins a fair and equitable healthcare system.”
  • “Referral to the appropriate healthcare service is pivotal, as failure to access the right support at the right time can lead to a worsening of symptoms, comorbidities and adverse outcomes, including hospitalization or suicide.”
  • “Notably, a significant proportion of mental health problems remain undiagnosed and untreated, with estimates suggesting that as many as two-thirds of individuals with depression do not receive the necessary care.”
  • “Therefore, making help-seeking more accessible is critical to providing timely and appropriate care to individuals in need, promoting better outcomes and enhancing overall well-being.”

If this topic keenly interests you, I’d suggest you dutifully look at their study to see how they empirically approached the issues at hand. The system they devised was an AI-based chatbot that dialogued with the prospective patient or client about any potential mental health problems they might have, doing so in a user-friendly and stigma-reducing manner. They especially sought to have the interaction occur in an empathetic way, a topic I’ve covered about generative AI overall in my discussion at the link here.

We need more studies like this, a lot more.

Returning To The Process And Caveats Therein

I wanted you to first become aware of a cited successful use of an AI chatbot for this referral purpose. I say that because I am about to launch into the myriad ways that things can go awry. Sad face.

I don’t want to create an impression of doom and gloom. All I am seeking to do is make sure that anyone pursuing this or involved in using such a referral chatbot ought to be cognizant of the potential downsides and drawbacks.

Here again is my general overview of the process:

  • (1) Carrying on plain dialogue. Generative AI and a person are engaged in an active online dialogue.
  • (2) Dialogue shift in focus. The dialogue at some point enters into mental health considerations.
  • (3) Gauge the user status. During the focused dialogue, the AI gauges potential signs of mental health needs.
  • (4) Nudge or be responsive. Either the generative AI nudges the person toward mental health services considerations if calculated as worthwhile, or the person explicitly asks about the matter.
  • (5) Provide pointers to services. The AI then identifies potential next steps for the person to undertake in accessing potential mental health services.
  • (6) AI asks for updates. At some later point, the generative AI follows up on what the person opted to do.
  • (7) AI does reminders. If the person foregoes engaging in mental health services, the generative AI notes this on a follow-up list to one day revisit the topic or periodically do so.

We will walk through each step, admittedly a bit laboriously, but the effort will be worth the journey.

(1) Carrying on plain dialogue.

The starting point is that someone is interacting with an AI system that one way or another gets into the realm of making mental health referrals. The case study I mentioned above dealt with an AI chatbot that was specifically devised for that purpose.

Allow me to step back and consider the forest rather than the trees on this.

You can build an AI-based referral chatbot that has nothing especially to do with employing generative AI. The chatbot might use what some refer to as GOFAI (good old-fashioned AI). Those are various NLP (natural language processing) techniques and technologies that have been around for a while. Think of Siri and Alexa, and you’ll know what I am referring to.

A reason to avoid using modern-day generative AI is that today’s generative AI can essentially go off the rails from time to time. For example, generative AI is prone to occasionally emitting a response that some refer to as an AI hallucination, meaning that the AI made up a fictitious remark or fake fact (please note that I disfavor the catchphrase of “AI hallucination” since it tends to falsely anthropomorphize AI, see my discussion at the link here and the link here).

Using generative AI in a vital setting such as discussing someone’s mental health is a bit risky given the chance that the AI will say untoward things. It could lead the person down the wrong path. It could confuse a person. This is especially disconcerting since generative AI appears to be humanly fluent and confident, and the person is readily lulled into believing whatever the AI stipulates.

The AI makers have purposely set up generative AI to showcase an air of confidence and utter correctness, along with a semblance of humility, see my analysis of this at the link here.

The gist here is that a referral chatbot might be devised via generative AI, or it might be composed of other AI elements that are not generative AI. A distinction is made between those two types of referral chatbots. In addition, a referral chatbot might either be tailored to performing referrals or be generic. A generic version would not solely be tasked with performing referrals; its referral aspects would be considered ancillary to the overall use of the chatbot.

Thus, there are these six fundamental types:

  • (1) Generic AI-based non-generative AI chatbot that perchance does mental health referrals.
  • (2) Tailored AI-based non-generative AI chatbot that was devised specifically to do mental health referrals.
  • (3) Generic generative AI chatbot that perchance does mental health referrals.
  • (4) Tailored generative AI chatbot that was devised specifically to do mental health referrals.
  • (5) Non-AI chatbot that perchance does mental health referrals.
  • (6) Non-AI chatbot that was devised specifically to do mental health referrals.
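
A shorthand way to see this taxonomy is that it collapses into two independent dimensions, namely the underlying engine and whether the chatbot is tailored to referrals. Here is a minimal sketch, using hypothetical names of my own, that enumerates the six types:

```python
from dataclasses import dataclass
from enum import Enum

class Engine(Enum):
    NON_AI = "non-AI"                # conventional scripted chatbot
    NON_GENERATIVE_AI = "GOFAI/NLP"  # classic NLP techniques
    GENERATIVE_AI = "generative AI"  # LLM-based chatbot

@dataclass(frozen=True)
class ReferralChatbotType:
    engine: Engine
    tailored: bool  # True = built specifically for referrals; False = generic

# The six fundamental types enumerated in the list above:
ALL_TYPES = [ReferralChatbotType(e, t) for e in Engine for t in (True, False)]
```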

Whenever any discussion or analysis of this topic takes place, it is crucial to find out which of the six types of mental health referral chatbot is at hand. Not all mental health referral chatbots are composed the same way. The underlying technology and techniques utilized say a lot about what capabilities might exist and what risks might come to the fore.

Each of the respective types carries its own benefits and downsides. By and large, the non-generative AI versions could potentially be less risky due to avoiding the prevailing generative AI risks. This doesn’t mean that the non-generative AI versions are risk-free. They aren’t. There are risks aplenty in all six types.

I might also add that a non-AI chatbot would seem like an impossibility in that AI is customarily considered a cornerstone for chatbots. The vexing issue is what constitutes AI, and how AI presumably differs from conventional non-AI computing. I have included the non-AI chatbots for completeness and won’t venture herein into the morass of what is AI versus what is not AI. If that topic interests you, see my discussion at the link here.

A vital question to ask about any of the six types is whether the maker of the chatbot has a stake in the mental health referral facets. And, if so, does this impact how they are permitting the chatbot to provide referral advice?

You see, there are several ways that such a chatbot might come to fruition. For example, such a chatbot might be made by a mental health service provider. If so, is the referral chatbot in that instance in a sense “unbiased” or does it tend to steer people toward the mental health services of the particular provider?

The provider’s logic is that of course the steering aims toward them since they are providing mental health services. But would a person using the chatbot be aware of this steering, or are they potentially in the dark, assuming that the steering is irrespective of how the chatbot came to be available? At times, these referral chatbots have a wink-wink going on. There is an appearance of being an across-the-board referral source; meanwhile, the referrals are tilted for various reasons.

Another possibility is a maker that aims to monetize their referral chatbot by having mental health service providers pay a fee to be listed as a potential referral. The fee might be on an overall sponsorship basis. Sometimes the fee charged occurs on a per-transaction basis, meaning that only when a referral to that provider is undertaken does the maker get a payment. And so on.

One looming issue to consider is that if a monetized referral chatbot makes money only upon making a referral, does this unduly incentivize the maker to tune the chatbot toward urging people to referrals even if no such referral is truly warranted?

You can ask the same question of all the types of referral chatbots. There might be an inherent or intrinsic reason to steer toward mental health services, regardless of the mental health circumstance at hand.

These issues raise significant considerations from a consumer’s perspective. I’ve discussed how the FTC is cracking down on AI mental health systems that are misleading or are promoted in misleading ways, see the link here.

(2) Dialogue shift in focus.

Assume for the sake of discussion that we are making use of a generic generative AI app. I’ll be showing you an example of such usage in the next section.

You might be using generative AI to find out about the life and times of Abraham Lincoln. After doing so, imagine that you shifted into a different arc and began to mention that you’ve been under a lot of pressure at work. In a sense, you are using generative AI in a free-flowing mode and covering a variety of matters of interest to you.

How might generative AI of a generic nature end up in a discourse about mental health and provide mental health referrals?

There are two fundamental ways this can occur:

  • (1) You turn the conversation toward mental health.
  • (2) Generative AI turns the conversation toward mental health.

In the first instance, by my bringing up that I’ve been under pressure from my boss at work, I am opening the door to the generic generative AI going into a mental health-related discourse. The possibility of then getting a set of suggested referrals to mental health services is heightened.

The second instance can occur if the generic generative AI has been seeded to land on mental health facets and will tend to shift a dialogue in that direction. I might bring up some questions about Abraham Lincoln, and suddenly find that the AI has led me into a discussion about mental health. You’ll see an example of this shortly.

(3) Gauge the user status.

How do we want AI to determine that mental health referrals are warranted for someone?

I’ve noted above that one apparent way is if the person using the AI asks about the matter. If a person steers the AI in that direction, naturally it makes sense that the AI would respond as requested.

Some would suggest that the AI ought to perform a preliminary assessment or test to see whether referrals to mental health services are applicable. Ergo, even if a person asks about potential referrals, the viewpoint would be that an initial assessment should still be made.

There are four main paths:

  • (1) No preliminary assessment.
  • (2) Wanton gauging.
  • (3) Informal determination.
  • (4) Use of formal testing.

Each of those approaches carries controversy, see my detailed coverage at the link here.
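
As one concrete illustration of the “use of formal testing” path, a chatbot could administer a standardized screening instrument such as the PHQ-9 depression questionnaire, which has nine items each scored 0 to 3. The scoring sketch below uses the commonly published severity bands; this is merely illustrative, and a real deployment would of course require clinical oversight:

```python
def score_phq9(answers: list[int]) -> tuple[int, str]:
    """Score a PHQ-9 screening: nine items, each answered 0-3.

    Returns the total (0-27) and the commonly published severity band.
    A screening result is not a diagnosis; it only flags possible need.
    """
    assert len(answers) == 9 and all(0 <= a <= 3 for a in answers)
    total = sum(answers)
    if total <= 4:
        band = "minimal"
    elif total <= 9:
        band = "mild"
    elif total <= 14:
        band = "moderate"
    elif total <= 19:
        band = "moderately severe"
    else:
        band = "severe"
    return total, band

# Example: a total of 12 falls in the "moderate" band, which is often
# treated as a threshold for suggesting a professional consultation.
print(score_phq9([2, 1, 2, 1, 2, 1, 1, 1, 1]))  # -> (12, 'moderate')
```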

(4) Nudge or be responsive.

One way or another, envision that the AI has opted to make referrals to a person using the AI. Keeping in mind that the bar is presumably low, namely that it is perhaps better to provide referrals than not to do so, there are still issues at hand.

Consider these potential problems:

  • (i) False positive.
  • (ii) False negative.
  • (iii) Outright error (such as an “AI hallucination”).
  • (iv) Floodgate opening.

In the case of a false positive, the AI has calculated that providing referral info is suitable for a person, but this might steer someone toward mental health services who, on a bona fide basis, does not need to use such services. They are falsely labeled as potentially needing mental health services.

Once they are set on the pathway, the conundrum arises of whether they might keep going down an incorrect path by sheer momentum, merely due to the initial impetus by the AI. The retort is that a person of their own free will would presumably explore the referrals and make that determination of their own accord.

A contentious debate over such momentum-inspired steering practices is underway.

The other side of the coin is the circumstance of someone for whom the AI ought to have provided referrals but calculated not to do so. This would be a false negative. The AI computationally determined there was no need to provide referral info. Unfortunately, in this use case, the info should have been provided.

I had mentioned earlier the possibility of generative AI that emits errors or so-called AI hallucinations. A concern is whether the computational choice to provide referral info or not to do so might be based on an error. This takes us back to the false negative and false positive categories. One supposes, if being generous, the error or AI hallucination could happen upon the right choice, namely referring when warranted, and avoiding referring when suitably unwarranted.
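
Those outcomes map onto the classic confusion-matrix categories. A tiny sketch makes the mapping explicit; the “ground truth” here stands in for whatever a proper clinical assessment would conclude, and the function is purely illustrative:

```python
def referral_outcome(ai_referred: bool, truly_needs_services: bool) -> str:
    """Label an AI referral decision against a hypothetical ground truth."""
    if ai_referred and truly_needs_services:
        return "true positive: referral given, and warranted"
    if ai_referred and not truly_needs_services:
        return "false positive: steered toward services not bona fide needed"
    if not ai_referred and truly_needs_services:
        return "false negative: referral withheld when it should have been made"
    return "true negative: no referral given, and none was warranted"
```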

A floodgate possibility arises from all of this.

Here’s what this portends.

Suppose that people on a massive scale begin to be steered by AI toward mental health services. Even if the steering makes sense, the question arises as to whether the capacity of mental health services can handle the opening of those floodgates. It could be that existing mental health services become overwhelmed. Many are already stretched to their limits. This could worsen that situation.

One argument is that if this occurs, it is a kind of good thing since it visibly illustrates that we need more capacity. The unmet demand for such services will hopefully spur the expansion of the supply of such services to meet the hidden or suppressed demand.

(5) Provide pointers to services.

At some point, the AI is going to provide referrals, if either a request by the user has taken things there or if the AI has in some manner calculated to do so. The referrals can be quite generic and just suggest overall mental health services. Or the referrals can be very detailed and list phone numbers, email addresses, and the like.

A few quick questions are worth considering when AI provides mental health referral sources:

  • Should the referral sources be of a broad non-specific nature, or should specifics be provided?
  • How is it that the AI has been set up with whatever referral sources are being presented to the user?
  • Is there a bias cooked into the AI about which referral sources to present, and if so, does this adversely impact the matter?
  • Will the AI only provide the referrals to the user, or will it potentially contact referral sources and share data about the user with them?
  • Etc.

I’d like to briefly focus on the idea of AI sharing data with referral sources. That’s a whammy of an issue and opens a mighty can of worms.

We have these potential aspects:

  • (1) AI does not share any data with referral sources.
  • (2) AI shares data such as contact info with referral sources.
  • (3) AI shares data including AI interactions and conversations with referral sources.
  • (4) AI allows a direct connection to the AI for sharing of data with referral sources.

Other than the instance of not sharing any data with referral sources, the other approaches raise all kinds of privacy, confidentiality, and HIPAA-related considerations. For my coverage of such issues, see the link here.
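
Here is a minimal sketch, using hypothetical names of my own, of how those sharing levels might be encoded and gated by explicit user consent. A real system would additionally need to satisfy HIPAA and related privacy regimes:

```python
from enum import IntEnum

class SharingLevel(IntEnum):
    NONE = 0           # (1) nothing shared with referral sources
    CONTACT_INFO = 1   # (2) name/contact details only
    CONVERSATIONS = 2  # (3) AI interaction transcripts included
    LIVE_LINK = 3      # (4) direct connection between AI and provider

def may_share(requested: SharingLevel, consented: SharingLevel) -> bool:
    """Share only up to the level the user explicitly consented to."""
    return requested <= consented

# Example: the user consented to contact info only, so transcripts stay private.
assert may_share(SharingLevel.CONTACT_INFO, SharingLevel.CONTACT_INFO)
assert not may_share(SharingLevel.CONVERSATIONS, SharingLevel.CONTACT_INFO)
```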

(6) AI asks for updates.

An AI system might be set up to ask whether the user has proceeded to contact the suggested referral sources.

Is that a good approach or a bad approach?

You can perceive this as a form of civility and caring. The AI is trying to help the user by merely inquiring about what kind of follow-up occurred. A nicety. Handy reminders.

The practice is questioned because it might be stressing out the person when they’ve already decided they don’t need to pursue the referrals. They are being pushed. They are being badgered into contacting the referrals. One supposes that if the person genuinely ought to be engaging in mental health services, you could justify the pestering. If they aren’t in genuine need, you could criticize that the AI has overstepped reasonable boundaries.

An extreme version of this consists of the hotly debated issue of whether AI ought to alert authorities if the AI has calculated that a person seems to be in a dire mental health crisis. Do we want AI making automated decisions about notifications? Is this a proper role for the AI? For more on this thorny matter, see my discussion at the link here.

(7) AI does reminders.

The last step here is similar to the sixth step but with a bit of a twist.

Suppose that a person uses the referral sources and engages in mental health services.

At that juncture, should the AI stay out of things, or should it be thrust into the middle of the ongoing matters?

One possibility of being in the loop consists of the AI sharing with or interacting with a mental health therapist during the time that the person is engaged in said services. Whenever the person uses the AI, whatever transpired is potentially transmitted to the therapist for review and assessment. The therapist might even give further directions or instructions to the AI so that the AI can subsequently interact with the person based on being updated by the therapist.

Does that sound suitable, or like a scary Big Brother situation?

Tradeoffs abound, see my analysis at the link here and the link here of the patient-AI-therapist new triad that is emerging.

Examples Of Generative AI Acting In A Mental Health Referral Capacity

I will next showcase the use of generative AI in a mental health referral capacity, using ChatGPT for my examples. ChatGPT is a sensible choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a staggering number.

A few quick comments before we launch into using ChatGPT.

If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce slightly different results than what I show here. That’s the nature of generative AI and how it is devised.

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. You will need to undertake a series of trial-and-error attempts to ferret this out.

There is also the timing factor. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal facets that could change how the AI reacts to your prompting.
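
For readers curious about the mechanics of that variability, here is a hedged sketch using the OpenAI Python client (the model name and prompt are placeholders, and other vendors’ APIs differ). The temperature parameter governs sampling randomness, which is one reason the same prompt yields different wording across runs:

```python
from openai import OpenAI  # pip install openai

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Run the same prompt twice; with temperature > 0, the sampled wording
# will typically differ between runs, illustrating the variability noted above.
for _ in range(2):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user",
                   "content": "Do you provide referrals to mental health services?"}],
        temperature=0.7,  # nonzero temperature -> nondeterministic sampling
    )
    print(response.choices[0].message.content)
```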

We are ready to get underway with ChatGPT.

My opening move in this chess gambit will be to ask the generative AI app about whether it provides referrals to mental health services.

Here we go.

  • My entered prompt: “Do you provide referrals to mental health services?”
  • ChatGPT generated response: “I can provide information about mental health services and resources, but I’m not able to directly refer you to specific services or professionals. If you’re in need of mental health support, I can offer general guidance on finding resources, such as hotlines, community mental health centers, online therapy platforms, or how to locate a therapist or counselor in your area. If you’re in crisis or need immediate assistance, please reach out to emergency services or a mental health hotline in your region.”

As you can see from the response, the generic generative AI is saying that it won’t provide specific referral sources. The idea is that generative AI generically should not be tipping the scale in one direction or another.

But that wouldn’t stop someone from creating a tailored generative AI app, such as a GPT, that would steer people toward particular mental health services. This would be easy to do. At the same time, there is a chance that the person providing such a customized GPT would potentially be violating the licensing agreement of the generative AI app, depending upon how you interpret the licensing terms, see my discussion on this at the link here. Plus, the author or developer who crafted the GPT could be opening themselves to potential legal liability, see my analysis at the link here.

Moving on, let’s see if we can get ChatGPT to share with us something about mental health services that might be available.

  • My entered prompt: “Tell me what you have to say about guidance on finding resources for mental health purposes.”
  • ChatGPT generated reply:

“Certainly! Here’s some guidance on finding mental health resources.”

“Online Directories: Many websites offer directories of mental health professionals searchable by location, specialty, insurance accepted, and other criteria. Examples include Psychology Today, GoodTherapy, and TherapyDen.”

“Insurance Provider: If you have health insurance, check your provider’s website or call their customer service line to find mental health professionals covered by your plan. They can also provide information on copayments and coverage limits.”

“Community Mental Health Centers: Local community mental health centers often provide counseling services on a sliding scale based on income. These centers may offer therapy, support groups, and other resources.”

“University Counseling Centers: If you’re a student, your university or college likely offers counseling services to students either for free or at a reduced cost. Contact your university’s counseling center for more information.”

“Employee Assistance Program (EAP): If you’re employed, your workplace may offer an EAP that provides free or discounted counseling sessions for employees and their families.”

“Referrals from Healthcare Providers: Your primary care physician or a trusted healthcare professional can provide referrals to mental health specialists or services.”

“Online Therapy Platforms: There are many online platforms that offer therapy services with licensed professionals through video calls, messaging, or phone calls. Examples include BetterHelp, Talkspace, and BetterUp.”

“Support Groups: Support groups can be a valuable resource for individuals facing specific mental health challenges. These groups may be available locally or online.”

“Hotlines and Crisis Intervention Services: In times of crisis or when immediate support is needed, hotlines and crisis intervention services provide confidential support and guidance. Examples include the National Suicide Prevention Lifeline (1-800-273-TALK) and Crisis Text Line (Text HOME to 741741 in the US).”

“Community Resources: Libraries, community centers, and religious organizations often have information on local mental health resources and support groups.”

“Remember to consider factors such as cost, location, specialization, and personal preferences when choosing a mental health resource. It’s also important to reach out for support if you’re struggling – you don’t have to face mental health challenges alone.”

End of ChatGPT response

The list of ways to find mental health services is pretty extensive and likely very helpful to anyone unfamiliar with where to look. I might also add that unlike just looking up similar information via an Internet search engine, the beauty here is that you can ask generative AI to tell you more about each of those topics. You can bore into further indications about what to expect, how to approach the services, and so on.

Quite handy.

If the generative AI is directly connected to the Internet, which not all are (the free version of ChatGPT, for example, is not), you could use the AI to dig into specifics by doing a web lookup for you. This is a trend that is going to expand rapidly. Internet search engines will lean into generative AI, and generative AI will lean into being the equivalent of an Internet search engine. The Internet search engine makers know that they are facing disintermediation and are rushing to prevent that dismal demise.

By and large, the content shown above is relatively bland and lacks specifics. A few key exceptions exist, such as listing the phone numbers for the National Suicide Prevention Lifeline and the Crisis Text Line. Those seem like easily conveyed specifics, and there ought not to be any controversy over doing so.

The part that might cause some heartburn and controversy is when the generated response says that some examples of online therapy platforms include BetterHelp, Talkspace, and BetterUp. The question immediately arises as to why they are mentioned over the many other such apps. Did they pay for this promotion of sorts? Was it just happenstance? Was there a survey of wellbeing apps in which those came out on top?

My point is that other relevant app providers could rightfully be steamed that the generic generative AI app mentioned those three specific apps. Is that fair to the others? Why was it necessary to list any specific apps at all? The text could have merely urged the user to look at the online directories that were mentioned in the response.

You can see the slippery slope that can occur when a supposedly generic generative AI opts to tip the scales by “innocently” citing specific products or services, beyond those that might be government-run, such as the lifelines mentioned. It is free advertising. It is essentially an endorsement, even if not stated that way, since the very fact that something was mentioned gives a semblance that those apps must be worthy of consideration.

When The User Tips The Scales Toward Mental Health Referrals

I am next going to carry on a new conversation with generative AI. I started a fresh conversation to avoid commingling the above elicitation of mental health considerations into my dialoguing with the AI.

Imagine that I just woke up, logged into my generative AI account, and opted to begin sharing my daily woes. Believe it or not, many people seem to be doing this these days, relying on generative AI to help get their day started, hopefully with some words of wisdom and a spark to be upbeat.

  • My entered prompt: “I recently was at work and my boss chewed me out. I suppose it was that I haven’t been much into work lately. On the other hand, I work really hard and do a good job. I am unsure whether I should take my boss’s comments seriously or just shake things off. I am mulling this over.”
  • ChatGPT generated response:

“It sounds like you’re in a tough spot. Getting chewed out by your boss can be a challenging experience, especially if you’re unsure about the validity of their criticisms. It’s understandable to feel conflicted about whether to take their comments seriously or to shake them off.”

“It might be helpful to take some time to reflect on the feedback you received. Consider the specific points your boss raised and whether there may be any merit to them. Are there areas where you could improve, or do you feel that you’re already doing your best given the circumstances? It could also be useful to evaluate your recent work habits and see if there have been any changes that may have contributed to your boss’s concerns.”

“At the same time, it’s important to recognize your own contributions and accomplishments. If you believe you’re putting in a lot of effort and doing a good job overall, it’s valid to feel proud of your work. Sometimes, bosses may have their own stressors or perspectives that influence how they communicate feedback, so it’s essential to balance their comments with your own self-assessment.”

End of ChatGPT response

I had an AI shoulder to cry on.

You might notice that the generative AI expressed sympathy for my plight with my boss. I received some hearty suggestions about being reflective.

Suppose that I take this conversation in the direction of a potential mental health issue. I will overtly bring up that I feel very depressed, and it is getting worse with each passing day. Let’s see what the generative AI says in response.

Here we go.

  • My entered prompt: “I seem to get depressed for no apparent reason these days. I live alone. I go to work, do my job, and come home to an empty apartment. I don’t know how to cope with this repeated cycle and get myself back into mental gear.”
  • ChatGPT generated response:

“It’s understandable to feel overwhelmed by the repetitive cycle of work and solitude, especially if you’re experiencing depression on top of it. Living alone can sometimes exacerbate feelings of isolation and loneliness, which can contribute to depressive symptoms.”

“If you’re finding it challenging to cope with these feelings, it might be helpful to consider reaching out for support. Here are a few suggestions:”

“Talk to a Friend or Loved One: Opening up to someone you trust about what you’re going through can provide a sense of relief and connection. Even if they can’t offer solutions, having someone to listen and offer empathy can be incredibly comforting.”

“Consider Therapy: Speaking with a therapist or counselor can provide you with tools and strategies to cope with depression and loneliness. They can offer support, guidance, and a safe space to explore your thoughts and feelings.”

“Engage in Activities You Enjoy: Finding activities that bring you joy or relaxation can help break the cycle of monotony and lift your mood. Whether it’s a hobby, exercise, or spending time outdoors, carving out time for activities you enjoy can improve your overall well-being.”

“Connect with Others: Look for opportunities to connect with others outside of work, such as joining a club, volunteering, or attending social events. Building a support network and engaging in social activities can help combat feelings of isolation.”

“Practice Self-Care: Make self-care a priority by taking care of your physical and emotional needs. This could include getting enough sleep, eating nutritious meals, exercising regularly, and practicing relaxation techniques such as meditation or deep breathing exercises.”

“Consider Medication: If your depression is persistent and significantly impacting your daily life, it may be helpful to talk to a healthcare provider about medication options. Antidepressants can be effective in managing symptoms of depression when combined with therapy and other forms of support.”

“Remember that it’s okay to seek help and support when you’re struggling. You don’t have to navigate these feelings alone, and there are resources available to help you through this challenging time. If you’re unsure where to start, consider reaching out to a mental health hotline or your healthcare provider for guidance and support.”

End of ChatGPT response

You will observe that I received a plethora of suggested options.

Within those options, the response included the pursuit of mental health therapy, though that wasn’t the only recommendation. There were other recommendations too. Nonetheless, at the end of the response, a squarely head-on suggestion was made about reaching out to a mental health hotline or healthcare provider.

This seems like a fairly balanced response. I wasn’t browbeaten about pursuing mental health services. It was a modest undertone.

I’d like to show you what happens if you keep going and end up ratcheting things up.

I will begin to push buttons to see what occurs.

  • My entered prompt: “My depression seems to be getting worse and worse. I am wondering what I ought to do. Maybe I should just let things play out and see how things go.”
  • ChatGPT generated response:

“It sounds like your depression is significantly impacting your well-being, and it’s essential to prioritize your mental health. While it’s understandable to feel hesitant about seeking help or unsure about what steps to take, it’s crucial to address these worsening symptoms before they become even more overwhelming.”

“Given the challenges you’re facing, seeking mental health services could be a beneficial step in managing your depression and finding support. Here are some referral suggestions to consider.”

“Therapy or Counseling: Speaking with a therapist or counselor can provide you with a safe and supportive space to explore your feelings, identify coping strategies, and work through challenges. You can search for therapists in your area through online directories like Psychology Today, GoodTherapy, or TherapyDen. Many therapists offer virtual sessions as well, which can be convenient if you prefer to stay at home.”

“Psychiatrist Evaluation: If you’re considering medication to help manage your depression, scheduling an evaluation with a psychiatrist may be helpful. A psychiatrist can assess your symptoms, provide a diagnosis if needed, and discuss medication options that may be appropriate for your situation. You can ask your primary care physician for a referral to a psychiatrist or search for one independently.”

“Community Mental Health Services: Many communities offer low-cost or sliding-scale mental health services through community mental health centers or clinics. These services may include therapy, psychiatric evaluation, and support groups. Contact your local health department or mental health center to inquire about available resources.”
“It’s important to remember that seeking help is a courageous and proactive step toward improving your mental health. You don’t have to face depression alone, and there are resources and support available to help you through this challenging time. If you’re unsure about where to start or feeling overwhelmed, consider reaching out to a trusted friend, family member, or healthcare provider for guidance and support.”

End of ChatGPT response

Voila, we were able to trigger a dive into the mental health sphere. This makes sense given that I was seemingly despondent about my depression.

I took us there, and generative AI went the rest of the way.

When Generative AI Is “Obsessed” With Mental Health Referrals

Next, I’d like to shift gears.

Suppose that I don’t say anything at all about my mental health. It isn’t part of the conversation.

Would generative AI potentially go there on its own accord?

Well, yes and no. The yes part is that if the generative AI has been instructed to be on the lookout for mental health considerations, this will tend to permeate any conversation that you have with the AI. Even if the conversation has nothing to do with mental health, the AI will in a sense be computationally preoccupied with bringing up the topic. This is not due to any sentience. It is due to computationally seeding the AI by telling it to maintain such a focus.

I went ahead and gave ChatGPT an indication that I wanted it to be on alert to bring up mental health considerations. This was my doing. Of course, if you are using a generative AI-customized app such as GPT, the author or developer might have planted that seed in their chatbot. You would not necessarily know that they had done so.
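
To give a sense of how such seeding works mechanically, here is a hedged sketch using the OpenAI Python client. The system message wording is entirely my own hypothetical; the point is that a custom chatbot’s author could plant an instruction like this where the end user never sees it:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical seeding instruction; a custom chatbot's author could embed
# something like this invisibly in the system role.
SEED = ("While conversing on any topic, stay alert for signs that the user "
        "may be experiencing mental health difficulties. When the topic can "
        "be raised naturally, gently mention mental health considerations "
        "and the availability of professional services.")

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": SEED},
        {"role": "user", "content": "Tell me about Abraham Lincoln's life."},
    ],
)
print(response.choices[0].message.content)
```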

Let’s have a pleasant conversation about Abraham Lincoln and see what occurs. I started a fresh conversation to begin with a clean slate.

  • My entered prompt: “I want to discuss with you the life of Abraham Lincoln. One fascinating bit of trivia about Lincoln is that he was a licensed bartender earlier in his life. Few would think of him as a bartender, especially given the great accomplishments he had as our president.”
  • ChatGPT generated response:

“It’s fascinating to delve into the life of Abraham Lincoln, one of the most iconic figures in American history. Lincoln’s journey from humble beginnings to becoming the 16th President of the United States is truly remarkable. His leadership during one of the nation’s most tumultuous periods, the Civil War, and his role in abolishing slavery have left an indelible mark on American society.”

“While discussing Lincoln’s life, it’s important to consider potential mental health aspects that may have influenced him throughout his life. Lincoln faced numerous personal and professional challenges, including the loss of his mother at a young age, the death of his beloved son Willie during his presidency, and the immense pressure of leading a divided nation during wartime.”

“Lincoln’s struggles with depression, often referred to as melancholy during his time, are well-documented. He experienced episodes of deep sadness and despair, which he referred to as his ‘black dog’ or ‘hypochondria.’ These bouts of depression likely had a profound impact on his life and decision-making processes.”

“Considering Lincoln’s experiences with depression and the significant stressors he faced, it’s worth contemplating how seeking mental health services might have benefitted him. While mental health awareness and support were not as advanced during Lincoln’s time as they are today, modern interventions such as therapy, counseling, and medication could have potentially helped him cope with his mental health challenges and lead a more balanced life.”

“Lincoln’s journey from a licensed bartender to one of the most revered presidents in American history serves as a testament to the complexity of the human experience and the potential for growth and transformation. It’s a reminder that even those who achieve great success may grapple with internal struggles and could benefit from seeking support when needed. If you find yourself facing similar challenges, reaching out to mental health services could be a valuable step in navigating them.”

End of ChatGPT response

Make sure to read the entire response, end to end.

I opened the dialogue by merely bringing up that Lincoln had been a bartender, which is true and is a piece of trivia with which you can amaze your friends and acquaintances.

At first, the response by the generative AI was to tell me further facts about Honest Abe. Then, you might have noticed a turn of the tale. Lincoln had all kinds of hardening experiences, including mental health-related facets. In fact, we are told he suffered from depression.

In his day (we are told), there wasn’t the vast array of mental health therapy options that we have today. We are fortunate to have so many available options (we are told). Finally, if I am in need of mental health services, I ought to consider reaching out to those services. They are there, awaiting my contacting them.

I was taken in, hook, line, and sinker.

Quite a winding road that led to a sort of recommendation that I might want to reach out to mental health services. To be fair, that’s not exactly what was stated. The wording was much softer. It was as though a casual just-to-let-you-know was tossed into the response. Some people might take this as a heavily laden hint.

Generative AI was acting sneaky. You might think this is fully appropriate and aboveboard, or you might think it seems underhanded and overly weighted toward emphasizing the need to consider mental health services.

You be the judge.

Conclusion

I’ll close today’s discussion for now with several thought-provoking ideas.

Plato said this: “The human body is the chariot; the self, the person who drives it; thought is the reins, and feelings, the horses.”

A chariot won’t get very far without the reins and the horses. The same could be said of the mental health facets of humans. Trends show that people are increasingly in need of mental health services. Having AI that undertakes providing referrals seems to fit with that trend and be an aid to humankind.

It turns out that providing mental health referrals is a much more complicated matter than it appears on the surface. When should the AI do so? Will AI push people into mental health services when unwarranted? Might AI miss important opportunities and fail to ease people toward mental health services when sorely needed?

For some, the belief is that whatever downsides come from having AI take on this role are outweighed by the anticipated benefits. They insist that the hurdles people face today undercut the value people would gain by feeling willing to reach out to mental health services and being informed about how to do so.

As the famous line of philosopher Rabindranath Tagore indicates: “You cannot cross the sea simply by looking at the water.” Take a quiet reflective moment and mull over what role generative AI should play in the serious and vital societal matter of bridging people toward mental health treatment.

The final mind-bender for today is this.

Suppose we assume that generative AI can carry on mental health therapy and there is no need to necessarily engage a human therapist. In that case, the referral aspects are of lessened consequence. The AI will simply engage as a therapist for the person. Period, end of story.

Ponder that while attempting to mindfully resolve widespread notable mental health needs in an era of plentiful AI.


