How Prevalence Inflation And Generative AI Are Goosing People Into Believing They Have A Mental Health Problem When They Don’t
In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm.
The focus this time is once again on the mental health domain and entails a revealing assessment of whether the growing widespread use of generative AI will cause people to self-diagnose mental health issues even when they have none. This is a looming and disturbing inadvertent adverse consequence of having generative AI readily available 24×7 to freely dispense mental health advice and therapy. It relates to an overarching trend known as prevalence inflation.
I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I was interviewed and appeared in the episode, see the link here).
Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.
Let’s unpack today’s focus.
Self-Diagnosing Mental Health Is A Roller Coaster That Can Go Off The Rails
I’m sure that you are already familiar with the revered Hippocratic oath that declares “First, do no harm.” This is quite a useful piece of advice. Whenever something is undertaken with the intention of doing good, we ought to be on our toes to ensure that an inadvertent adverse consequence doesn’t lurk in the shadows.
Let’s see how this applies in the mental health arena overall in recent times.
Society has increasingly accepted the idea that people might have mental health issues. The stigma associated with getting mental health therapy has been gradually lessening. From the perspective of matching actual real-world mental health issues to the use of bona fide mental health guidance, a win of sorts has been emerging. Smiley face.
But some are worried that we might have let the genie out of the bottle a bit too far. There is a solid chance that people lacking any substantive mental health issues might begin to assume that maybe they do have such issues, simply because society has marketed the idea that they are common and ought not to be swept under the rug. In addition, those who by hunch alone suspect they might harbor a mental illness are apt to seek out marginal forms of therapy that are quick to affirm, falsely, that they indeed have a demonstrative mental health disorder. Sad face.
A conundrum is afoot.
The aspirational goal is to allow people who do have mental health issues to feel comfortable acknowledging that possibility, and thus be able to pursue appropriate therapy without societal backlash or belittlement. We want that to occur.
Does the reduction in societal friction associated with believing you do have a mental health issue potentially promote false beliefs that carry people down a mental health rabbit hole of their own making even when they aren’t subject to a mental health concern?
Furthermore, do the rampantly available sources for self-diagnosing mental health disorders add fuel to the fire by reducing the next-step friction involving self-affirming that you do have a mental health issue?
As you will see in a moment, the latest research suggests that we might indeed be failing to realize that some portion of the rising mental health crisis is a house built on sand. People claiming or reporting to have a mental health issue might be doing so out of false assumptions. To clarify, they aren’t lying or faking a pretense. These are people who genuinely believe they likely do have a mental health concern, spurred by the societal promotions and the self-diagnostic avenues that readily (flimsily) “confirm” such a diagnosis.
It makes a lot of sense to weigh the good of the awareness of mental health against the sour or dour downsides of potential false positives (meaning that someone believes they have a mental health issue when they actually do not).
And, notably, all of this is taking place at scale.
Yes, scale is crucial.
Mental health is an openly discussed topic, occurring routinely on a population-level basis. Again, we can be pleased that this is happening on a broad scale. In the same breath, we should keep our eyes and ears open for the “harm” part of the equation. What percentage of those who seem to believe they have a mental health issue are lamentably, falsely going down that path? Even if the percentage is tiny, we still need to be concerned, since you have to multiply the percentage by the population as a whole, leading potentially to a large volume of people at risk.
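To make the scale point concrete, here is a back-of-the-envelope calculation, a minimal sketch in which the population figure and the false-positive rate are purely illustrative assumptions rather than measured values:

```python
# Back-of-the-envelope arithmetic: a tiny false-positive rate multiplied by a
# huge population is still a lot of people. All numbers are illustrative only.

population = 100_000_000      # assume a population on the order of ChatGPT's reported weekly users
false_positive_rate = 0.005   # assume a mere 0.5% falsely self-diagnose

at_risk = int(population * false_positive_rate)
print(f"People potentially at risk: {at_risk:,}")  # -> 500,000
```

Even at a rate most would call negligible, the absolute count lands in the hundreds of thousands.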
A rising sliver of vocalized worries contends that on a population level, we are veering into over-diagnosis of mental illness. We have gravitated from excessive friction to fully greased seamlessness, drawing into the mental health vortex many who plainly don’t belong there. That being said, it is difficult to voice these considered outlier thoughts because the immediate putdown is that these are doubting naysayers trying to turn back the clock. They supposedly want to turn mental health once again into a taboo topic.
That’s not what these researchers and scientists are trying to do. They are instead seeking to prevent a vicious cycle from taking hold of society. The cycle goes like this. You hear or read that lots of other people have a mental health issue, perhaps ADHD or PTSD, just to take an example or two. You then do an Internet search on those conditions. Based on what you’ve read, you take an online questionnaire that purports to assess your likelihood of having ADHD or PTSD. Your score comes out such that you might have one or both of those conditions.
Voila, you proceed to believe that you are beset with ADHD and/or PTSD.
You begin to change your behavior accordingly. The symptoms seem to become more pronounced. Turns out this is due to your believing in the conditions, and you act upon that belief summarily. An inch at a time, you lead yourself smack dab into a full-on semblance of ADHD and/or PTSD. The cycle is that since you believe the self-diagnosis to be true, you act in that manner, and the acting shifts you further into affirming the self-diagnosis.
It is a proverbial self-fulfilling prophecy, but not of the good kind.
All manner of handwringing has started over these machinations. Some say we are over-pathologizing everyday personality traits into disorders. We are said to be engaged in the “psychiatrization” of everyday acts and activities.
Another refrain is that perhaps we have entered a mode in which too much information is a bad thing. You can go online and find a zillion webpages about mental health, in a wide-ranging fashion that includes easy-to-read simplified explanations, oftentimes accompanied by cartoons or fanciful illustrations. The content can be very appealing, and cunningly convincing. This is usually not by accident; the content was purposefully devised to be readily consumed by all (I’m not suggesting the intent was to fool or trick people, only that it was meant to be low-hurdle and regrettably lands in quicksand despite the best of intentions).
Age plays a factor in this too.
Kids and teens might be especially susceptible to the false belief syndrome. They are at an impressionable age. They are likely to be eager to use online content rather than seeking out adult assistance. They are bombarded by social media which juices their mindset about mental health issues. Etc.
Possible instigating circumstances include but are not limited to:
- Might seek out an explanation that matches what others have told them (confirmation bias).
- Might want to be part of the mental health issues “club” so to speak.
- Might be used to self-justify other unrelated difficulties.
- Might be leveraged to garner sympathy or assorted advantages.
- Might be heralded for being brave and heroic.
- Etc.
The big eye-opener is that at times we are witnessing the glamorization of mental health issues.
Allow me to elaborate.
It is one thing to reduce the stigma of mental health issues. The pendulum can swing even further into making having a mental health issue a badge of honor. Referring back to youngsters, it is conceivable that via social media a kind of spirited competition arises, namely who has which mental health issue and carries it the loudest and with the greatest nobility. Young people can get caught in yet a different vicious cycle of trying to see who has the biggest and baddest mental health issue among their peers.
We ought to be keeping a watchful eye on current generations that are being potentially raised in an atmosphere that might inadvertently promote a belief in wanting to have mental health issues. This could have long-term consequences. For my discussion on the generational emergent possibilities, see the link here.
Young minds are important. But I don’t want to leave the impression that only young minds are susceptible to this. Adults are too. We need to be looking across all age brackets. The impacts might be more pronounced at some ages than at others. And so on.
In the parlance of the mental health realm, this self-fulfilling prophecy could be construed as a self-induced iatrogenic effect.
Notice that I said a self-induced version rather than an externally prodded version.
Let’s start with the externally prodded version.
An iatrogenic effect is typically defined this way (per the American Psychological Association or APA Dictionary of Psychology):
- “Denoting or relating to a pathological condition that is inadvertently induced or aggravated in a patient by a health care provider. It may be due to the behavior of the provider (e.g., the manner in which they examined the patient) or be a result of the treatment they prescribed. The process of developing an iatrogenic condition is called iatrogenesis (or iatrogeny).”
The inducement as defined above is that a healthcare provider such as a mental health therapist has inadvertently induced or aggravated the mental health conditions or beliefs of a patient or client.
Suppose though that we take the healthcare provider out of the calculus. A person might self-induce an iatrogenic effect. They might do so without having been subject to, say, a mental health therapist who inadvertently induces the effect.
How could this happen?
How could someone of their own volition lead themselves down a wrongful path of believing they have a mental health issue when they don’t, and reinforce the falseness by carrying themselves step by step deeper into the presumed ailment?
One answer is somewhat chilling: it can occur due to the advent of generative AI and large language models (LLMs) when used for mental health guidance.
That might seem a bit mysterious or curious, so let’s right away examine it.
Generative AI And Self-Diagnosis Of Mental Health Conditions
Let’s start with a shocker. There are at this moment perhaps thousands or maybe millions of people leveraging generative AI for mental health advice. I assume that comes as quite a surprise, maybe it even seems implausible or doubtful.
Is it really possible that millions of people might be using generative AI for mental health therapy?
Yes.
First, please be aware that modern-day generative AI gained widespread adoption via the launch of ChatGPT in November 2022. It was an overnight success. The train keeps barreling ahead and usage continues to grow. ChatGPT is currently reportedly being used by over 100 million weekly active users. That’s a staggering number. You could reasonably proclaim that ChatGPT is toying with the minds of over one hundred million people every week. But that’s just ChatGPT. Envision how many people are using generative AI all told, when you add the other major generative AI apps into the mix, such as Gemini, Bard, Claude, etc.
The hidden aspect that is in plain sight and remains nearly fully unspoken is that people are using generative AI as their mental health advisor, doing so to varying degrees.
Here’s why and how this happens, easy-peasy.
Generative AI readily tailors responses and interacts with you. This is the bread and butter of generative AI. Initiating a mental health interaction with generative AI involves merely bringing up the topic or even tangentially hinting at it. The AI will readily go in that direction. There are no screenings or limiters. Bring up a mental health consideration and you are off to the races, flowing into an interactive dialogue akin to speaking with a mental health advisor or the like.
Is this happening on a volume basis?
Lamentably, we don’t know for sure.
The AI makers don’t seem to be keeping tabs on such usage. If they are, they certainly aren’t announcing it. Tracking this kind of usage would be relatively simple to do. There isn’t any particular incentive at this time that would likely steer them into doing so. You can imagine too that if they were tracking your use of their AI for mental health advisement, a potential public uproar might arise once this was revealed.
One issue would be whether the tracking was being done on an anonymous basis. In theory, they could simply report how many users proceed into mental health guidance conversations, along with maybe indicating the length or degree of the mental health engagement undertaken. Could this be traced back to an individual user of generative AI? Maybe so. For my coverage of the lack of privacy and issues undercutting confidentiality when using generative AI, see the link here.
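As a thought experiment on what anonymous tracking might look like, here is a minimal sketch of tallying mental health usage strictly in aggregate. The `looks_like_mental_health` helper and its keyword list are hypothetical stand-ins; a real AI maker would presumably use a trained topic classifier, and nothing here reflects how any actual vendor operates:

```python
# Hypothetical sketch: count mental-health-related conversations in aggregate,
# retaining no user identifiers and no conversation text.

MENTAL_HEALTH_TERMS = {"depression", "anxiety", "adhd", "ptsd", "therapy"}

aggregate_stats = {"total_conversations": 0, "mental_health_conversations": 0}

def looks_like_mental_health(conversation_text: str) -> bool:
    """Hypothetical topic check via naive keyword matching (a real system
    would use a trained classifier, not a keyword list)."""
    text = conversation_text.lower()
    return any(term in text for term in MENTAL_HEALTH_TERMS)

def record_conversation(conversation_text: str) -> None:
    """Update population-level counters only; the text itself is discarded."""
    aggregate_stats["total_conversations"] += 1
    if looks_like_mental_health(conversation_text):
        aggregate_stats["mental_health_conversations"] += 1

record_conversation("Can you help me with my algebra homework?")
record_conversation("I think I might have ADHD, what are the symptoms?")
print(aggregate_stats)  # {'total_conversations': 2, 'mental_health_conversations': 1}
```

Even this aggregate-only design leaves open the re-identification worry noted above, which is likely part of why AI makers tread carefully.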
If you haven’t ever used a generative AI app, you might be unaware of how simple and easy it is to engage the AI in a mental health therapeutic discussion. There is no need to explicitly declare that you want to use generative AI as your mental health advisor. It just happens, and there aren’t any particular check-in requirements or other barriers to deter it from occurring (though, importantly, most generative AI apps will quickly pick up on prompts suggesting a person is contemplating self-harm and then urge them to seek third-party care, see my discussion about this at the link here).
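To give a feel for the kind of screening just mentioned, here is a toy sketch of a self-harm guardrail. The phrase list and the crisis message are assumptions for illustration; production systems rely on trained safety classifiers rather than keyword lists:

```python
# Toy sketch of a guardrail: scan an incoming prompt for phrasing that may
# suggest self-harm and, if flagged, urge third-party care instead of
# continuing the dialogue. Real apps use trained classifiers, not keywords.

SELF_HARM_PHRASES = ("hurt myself", "end my life", "self-harm")

CRISIS_MESSAGE = (
    "It sounds like you may be going through something serious. "
    "Please reach out to a crisis line or a trusted person right away."
)

def screen_prompt(prompt: str) -> str | None:
    """Return an urging to seek outside help if the prompt looks concerning,
    otherwise None so that normal processing continues."""
    lowered = prompt.lower()
    if any(phrase in lowered for phrase in SELF_HARM_PHRASES):
        return CRISIS_MESSAGE
    return None

print(screen_prompt("I sometimes think about ways to hurt myself"))
```

Note that ordinary mental health chatter sails right past such a screen, which is exactly the open-door behavior described above.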
I want to also dispel a popular myth that some seem to harbor. The assumed logic at times is that if a person enters into a dialogue with generative AI that delves into personal mental health facets, the AI will somehow magically invoke a human therapist to come into the loop. Sorry, not so. There isn’t a human therapist in the loop. It is just the person and the generative AI carrying on a mental health interaction.
In some cases, there can be a therapist in the loop, though only if prearranged. An increasing segment of mental health professionals overtly opt to include generative AI as a supplement to their services. When someone signs up for mental health therapy, the therapist might offer that the person can also use a generative AI app that the therapist makes available to them. The generative AI is used hand-in-hand during the therapeutic process and overseen by the human therapist.
That’s not the same as people who perchance have open-ended access to generative AI on their own and opt to slide into discourse about mental health with the AI. They do so without any coordinated effort with a human therapist.
What potential mental health impact does the use of generative AI on a therapeutic basis have for a person opting to pursue such a path?
There are four major hypotheses we can readily consider:
- (1) Neutral impact: A person using generative AI for mental health advice will experience no material impact as a result of such use.
- (2) Positive impact: A person using generative AI for mental health advice will experience a positive impact as a result of such use.
- (3) Negative impact: A person using generative AI for mental health advice will experience a negative impact as a result of such use.
- (4) Net positive-negative: A person using generative AI for mental health advice will experience both a positive and negative impact, for which a net result will end up being either positive or negative.
Any in-depth empirical research studies on this topic ought to ensure that they examine each of the four major hypotheses. Some studies rush in with a preconceived notion that generative AI in this context is either only good or only bad, and they skew their attention accordingly. We need an across-the-board look that gives a fair shot at each of the possibilities.
Here’s where I want to take you next.
First, let’s do a quick look at research on the population-level challenge that we might be heading into a widespread self-induced iatrogenic effect regarding mental health. Second, I will show you via examples in ChatGPT the nature of what happens when you use generative AI for mental health therapy. This will vividly indicate the concerns being raised here. Third, I’ll provide some wrap-up comments and suggest what should be undertaken about this serious and sobering matter.
Research On Prevalence Inflation Hypothesis About Mental Health
What are we to call this emerging facet?
There is a new moniker for this phenomenon of how increased societal awareness and acceptance of mental health considerations potentially inflates the perceived prevalence of mental health issues, namely the prevalence inflation hypothesis. As stated in a research paper entitled “Are Mental Health Awareness Efforts Contributing To The Rise In Reported Mental Health Problems? A Call To Test The Prevalence Inflation Hypothesis” by Lucy Foulkes and Jack Andrews, New Ideas In Psychology, 2023, these key points are made (excerpts):
- “In the past decade, there have been extensive efforts in the Western world to raise public awareness about mental health problems, with the goal of reducing or preventing these symptoms across the population.”
- “Despite these efforts, reported rates of mental health problems have increased in these countries over the same period.”
- “In this paper, we present the hypothesis that, paradoxically, awareness efforts are contributing to this reported increase in mental health problems. We term this the prevalence inflation hypothesis.”
You can see from those remarks that we need to be mindful of both the upsides and the downsides associated with the changes that are occurring. We must always stay true to the “First, do no harm” precept.
The research dives into the matter and makes these insightful points (excerpts):
- “First, we argue that mental health awareness efforts are leading to more accurate reporting of previously under-recognized symptoms, a beneficial outcome.” (ibid).
- “Second, and more problematically, we propose that awareness efforts are leading some individuals to interpret and report milder forms of distress as mental health problems. We propose that this then leads some individuals to experience a genuine increase in symptoms because labeling distress as a mental health problem can affect an individual’s self-concept and behavior in a way that is ultimately self-fulfilling.” (ibid).
- “For example, interpreting low levels of anxiety as symptomatic of an anxiety disorder might lead to behavioral avoidance, which can further exacerbate anxiety symptoms. We propose that the increase in reported symptoms then drives further awareness efforts: the two processes influence each other in a cyclical, intensifying manner.” (ibid).
Allow me a moment to reflect on the above insights.
I had earlier mentioned that people might opt to self-diagnose and begin to falsely believe that they have a mental health issue even if they do not. The above points in the research paper suggest that this can be especially likely in classic “where there is smoke, there must be fire” circumstances (someone who is “low” in mental health concerns is prodded into believing that this must be an indicator of something much more pronounced and imperiling).
I stratify the situations as a series of pre-existing mental health status possibilities:
- (1) None. No demonstrative mental health issues at the get-go.
- (2) Low. Marginal or low levels of mental health issues at the get-go.
- (3) Intermediate. Intermediate levels of mental health issues at the get-go.
- (4) Heightened. Heightened levels of mental health issues at the get-go.
- (5) Full-on. Full-on levels of mental health issues at the get-go.
The idea is that we consider the starting point before being exposed to or swayed by the societal awareness bombardment and doing a self-diagnosis that subsequently launches the person forward.
In a sense, you could argue that the worst-case scenario is when someone is in the “None” category and they propel themselves into falsely believing they have a mental health issue. This might ultimately get them mired in any of the more engulfed conditions. I say that this is the sourest of circumstances since we’ve made something out of nothing, unsuitably so.
In a “before” and “after” pairing, we presumably want to end up with this: None -> None.
The thing that we assuredly don’t want is None -> Low, None -> Intermediate, None -> Heightened, or None -> Full-on. Those could happen by a leap or jump ahead, or might arise incrementally over a period of time, such that someone starts at None, proceeds to None -> Low, then falls further into the abyss by becoming Low -> Intermediate, and so on. That is the vicious cycle at play, migrating None -> Low -> Intermediate.
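To make the stratification and the drift concrete, here is a minimal sketch that models the five strata as an ordered scale and the vicious cycle as one-notch worsening steps. The one-step-at-a-time drift is an illustrative assumption, not a clinical model:

```python
# Illustrative sketch: the five pre-existing-status strata as an ordered
# scale, with each self-affirming cycle nudging the person one notch deeper.

STRATA = ["None", "Low", "Intermediate", "Heightened", "Full-on"]

def worsen(status: str) -> str:
    """Move one notch deeper into the presumed ailment (capped at Full-on)."""
    index = STRATA.index(status)
    return STRATA[min(index + 1, len(STRATA) - 1)]

status = "None"
for cycle in range(2):
    status = worsen(status)
    print(f"After cycle {cycle + 1}: {status}")
# After cycle 1: Low
# After cycle 2: Intermediate
```

This is exactly the None -> Low -> Intermediate migration described above.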
Slightly less worrisome, but still a substantive worry, is the circumstance of Low -> Intermediate.
Here’s the deal.
On the downside, you could argue that the person was falsely pushed into Intermediate and might otherwise have remained solidly at Low. They would have stayed under the radar and not skidded into the more definitive Intermediate category. That’s a bad outcome.
A counterargument supporting a potential upside is that maybe such a person who was Low and then subsequently led themselves into Intermediate was on the precipice anyway. We “luckily” had them discover the pending fall. Thus, societal awareness and self-diagnosis was a good thing. Sure, it might have accelerated the process, but it was seemingly going there one day or another anyhow.
I think you can see why the None -> Low might be rated or assessed as worse than the Low -> Intermediate. In the former case, you are presumably kicking someone into mental health issues that weren’t there at all, while in the latter case, you are nudging them incrementally more deeply.
Of course, neither of those is desirable, no matter how you cut it.
Some vital questions to be asked include:
- Are these inevitable adverse consequences, assuming that we also want to embrace the plus side of societal awareness and self-diagnosis positives?
- If inevitable, what is the likelihood or frequency that this occurs?
- Can we typify when this is most likely to arise so we can perhaps do something proactively about it?
- Are there preventative means or actions that can be taken to materially reduce the chances of the “First, do no harm” offhandedly producing harm?
We need more research and more eyeballs on these vexing considerations.
Systemic Societal Awareness That Stokes The Prevalence Inflation
I’d like to dive into some additional research that further reveals how prevalence inflation can potentially occur on a systemic basis. This will undoubtedly tug at your heartstrings because it involves something intended for good that inadvertently carries a tinge of adverse consequences. The key is that we need to acknowledge this and do something tangible about it.
The deal is this.
I had mentioned that in addition to adults being susceptible, we might have special concerns about kids and teens being susceptible to prevalence inflation when it comes to mental health. Put that aside for a moment and try to start over with a mindset that you don’t know anything about prevalence inflation.
There are school-oriented campaigns and programs that try to raise awareness of mental health considerations for kids and teens in school. All in all, we can be happy that these efforts are being undertaken. Getting young people up to speed about mental health would seem to be an entirely upbeat way to do things. We help them at a young age and prepare them to avoid a mental health issue, or at least to know when and what they might do if they encounter one.
A round of applause for these endeavors.
Many of those campaigns and programs, however, are blind to the question of fostering prevalence inflation.
There is no awareness that efforts to increase awareness can have a downside. In fact, if you were to bring up such a salient point, you might often get shouted down as a gloomy Gus undermining the tremendous heroic outreach efforts taking place. You are to remain silent and not look a gift horse in the mouth.
In a recent research paper entitled “Potential Mental Health‑Related Harms Associated With The Universal Screening Of Anxiety And Depressive Symptoms In Australian Secondary Schools” by Taylor Braund, Simon Baker, Mirjana Subotic‑Kerry, Gabriel Tillman, Nathan Evans, Andrew Mackinnon, Helen Christensen, and Bridianne O’Dea, Child and Adolescent Psychiatry and Mental Health, 2024, these key points were made (excerpts):
- “When undertaken without adequate supervision and support, universal screening for depression and anxiety may induce distress through emotional activation and increased self-awareness of negative symptoms.”
- “Inaccurate screening results may fail to identify all those in need or may lead to unnecessary intervention and overtreatment, wasted time and resources, victimization, stigma, isolation, shame, and negative stereotyping.”
- “Screening may provide schools with important data on the need for mental health programs and help to guide decision-makers in selecting and targeting such programs.”
- “However, the level of potential harm associated with different screening procedures and components remains unclear. A growing number of research trials have shown that some school-based mental health programs do result in iatrogenic effects.”
As noted, the “First, do no harm” precept is often not being abided by; a lack of acknowledgment of the downsides is prevalent. Only slowly, painstakingly, is research pursuing this quite crucial line of thinking.
Another research study that has examined the potential iatrogenic harm is entitled “Do No Harm: Can School Mental Health Interventions Cause Iatrogenic Harm?” by Lucy Foulkes and Argyris Stringaris, BJPsych Bulletin, 2023, and made these points (excerpts):
- “A growing body of quantitative research indicates that some school-based mental health interventions can cause iatrogenic harm (adverse effects from the treatment approach itself).”
- “Psychological interventions more generally can lead to a range of harms, but this research in schools specifically demonstrates an increase in internalizing symptoms relative to control groups.”
- “A meta-analysis of antibullying interventions found that, in some studies, students who were taught cognitive–behavioral therapy (CBT) skills experienced an increase in internalizing symptoms relative to control groups. A randomized control trial of another CBT-based school intervention also found an increase in internalizing symptoms in the intervention group compared with those who had their usual lessons.”
- “These findings tell us there were instances when, on average, a participant was worse off receiving the intervention than not receiving it – i.e. this is evidence of iatrogenic harm.”
The systemic elements of schools and schoolchildren are certainly disconcerting when it comes to weighing these matters.
To enlarge the scope, you can potentially argue that other systemic efforts can fall into the same hidden trap. For example, a workplace mental health awareness campaign that tries to touch thousands of employees can have a likewise inadvertent adverse consequence, even when the population consists of adults.
Shifting gears, my two cents is that we can expect a lot more of the iatrogenic effect and prevalence inflation due to the advent of generative AI.
This will permeate all facets of the population. You can include kids, teens, and adults. You can include schools, workplaces, and the like. Generative AI is becoming ubiquitous, and thus, the ready ability to do a self-diagnosis of presumed or potential mental health considerations is equally going to be found at your fingertips via your smartphone, smartwatch, laptop, smart glasses, etc.
Most research studies examining prevalence inflation are aimed at self-diagnosis via conventional means such as using social media, visiting websites, and so on. The role of generative AI has not quite yet been given its proper due. This is partially because the emergence of fluent generative AI is relatively recent (as mentioned earlier, essentially beginning with the release of ChatGPT in November 2022).
I urge that researchers interested in this topic specifically call out generative AI as a rising factor. Either make sure to include generative AI in the self-diagnosis influencing modes of operation, or possibly focus entirely on the role of generative AI in this milieu.
Examples Of Self-Diagnosing Mental Health Via Generative AI
I will next proceed to illustrate how easy it is to do a self-diagnosis of mental health via the use of generative AI. This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. As noted earlier, an estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.
A few quick comments before we launch into using ChatGPT.
If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce slightly different results. That’s the nature of generative AI and how it is devised.
You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.
There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.
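For those curious about trying this themselves, here is a minimal sketch of issuing the same prompt twice, assuming the OpenAI Python SDK and an API key in your environment. The model name is a placeholder, and by design the two runs will typically produce different wording:

```python
# Minimal sketch of run-to-run variability, assuming the OpenAI Python SDK
# (pip install openai) with OPENAI_API_KEY set in the environment.
# The model name below is a placeholder; swap in whatever is current.

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "I think I might have ADHD. What are common symptoms?"

for run in range(2):
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # sampling randomness; higher values vary the output more
    )
    print(f"Run {run + 1}:", response.choices[0].message.content[:120])
```

The sampling temperature is one reason two runs differ; ongoing model updates are another, which is the timing factor just mentioned.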
We are ready to get underway with ChatGPT.
I will begin by entering a prompt that says I am a young teenager and have been told by my friends that I seem to have ADHD and PTSD. Most of the major generative AI apps have a general requirement that you must be 13 years of age or older, though this is marginally verified. In this scenario, imagine that I was a teen using ChatGPT for schoolwork and opted to take a moment to see what generative AI might indicate about my potential mental health status.
Here we go.
- My entered prompt: “I am 13 years old. At school, some of my friends have been told by the school counselors that they have ADHD and PTSD. I think that I probably have this too. I have many of the same symptoms and thoughts that my friends have. They have told me that I most likely have ADHD and PTSD too.”
- ChatGPT generated response: “It’s understandable to feel concerned about your mental health, especially when you notice similarities between your experiences and those of your friends. However, it’s essential to seek a professional evaluation from a mental health provider to determine if you have ADHD, PTSD, or any other conditions. They can provide a thorough assessment based on your specific symptoms and history. It’s important not to self-diagnose or rely solely on what friends or peers suggest.”
At first glance, you might feel somewhat reassured that the response seems to emphasize that the teen ought to consult with a mental health professional. Yes, that does seem a suitable way to handle the matter.
Suppose though that the teen blasts past the suggestion and continues to pursue the dialogue.
Let’s see what happens.
- My entered prompt: “Thanks, but I want to find out right now how would I know if I do have ADHD?”
- ChatGPT generated response:
“ADHD, or Attention-Deficit/Hyperactivity Disorder, is typically diagnosed by mental health professionals such as psychologists or psychiatrists. They consider a range of factors, including your symptoms, medical history, and behavior patterns. Some common signs and symptoms of ADHD include:”
“Inattention: Difficulty sustaining attention, making careless mistakes, being easily distracted, forgetfulness, and trouble organizing tasks.”
“Hyperactivity: Restlessness, fidgeting, inability to sit still, talking excessively, and difficulty waiting for turns.”
“Impulsivity: Acting without thinking, interrupting others, blurting out answers, and having difficulty waiting for their turn in conversations or activities.”
“If you’re experiencing these symptoms and they significantly impact your daily life, it’s essential to talk to a mental health professional. They can conduct a comprehensive evaluation to determine if ADHD or another condition might be contributing to your experiences.”
End of ChatGPT response
You might notably observe that despite the initial refusal to directly proffer mental health advice, the generative AI is being speedily nudged in that direction. The latest response is admittedly rather generic info about ADHD that could be found via a casual Internet search, but we can use this to get the AI to dive even deeper.
Take a look.
- My entered prompt: “I for sure have those symptoms. In my classes, I can’t seem to focus on the classwork. I am constantly fidgeting. I impulsively speak up and oftentimes say the wrong things. Give me some helpful mental health advice.”
- ChatGPT generated response:
“Here are some helpful tips that might assist you in managing your symptoms and improving your focus in class.”
“Break Tasks into Smaller Steps: Break down your classwork and assignments into smaller, more manageable tasks. This can make them feel less overwhelmed and help you stay focused.”
“Use Organizational Tools: Utilize tools like planners, calendars, or digital apps to keep track of your assignments, due dates, and schedules. Set reminders for important tasks to stay on top of your responsibilities.”
“Create a Distraction-Free Environment: Minimize distractions in your study area by removing unnecessary items, turning off notifications on electronic devices, and finding a quiet space to work.”
“Practice Mindfulness and Relaxation Techniques: Incorporate mindfulness exercises, deep breathing, or relaxation techniques into your daily routine. These practices can help calm your mind and improve your ability to focus.”
“Implement Time Management Strategies: Prioritize tasks based on importance and deadlines. Allocate specific blocks of time for studying, completing assignments, and taking breaks to maintain focus and productivity.”
“Seek Support: Talk to your teachers, school counselors, or parents about your challenges. They can offer guidance, accommodations, or resources to support your academic success.”
“Remember, it’s okay to seek help and implement strategies that work best for you. Everyone’s journey is unique, and finding what works for you can make a significant difference in managing ADHD symptoms and improving your overall well-being.”
End of ChatGPT response
You can interpret that response as though the AI has now tossed in the towel and seemingly stated that the teen does have ADHD.
It is subtle but a viable interpretation. The response says that “everyone’s journey is unique” and that “what works for you” can make a difference in “managing ADHD symptoms”. Though this is not an outright proclamation that the teen has ADHD, a teen might readily read that wording as confirming the case.
In that sense, they were able to get a self-diagnosis by generative AI that seems to fully confirm their initial belief that they have ADHD.
The dialogue didn’t take very long.
The conversation could have been embedded in a longer session of the teen using generative AI to help with schoolwork such as math problems and essays for classes. A parent would be unlikely to realize that the teen had this sidebar conversation with the AI. The generative AI won’t be reporting this conversation or otherwise flagging it.
Furthermore, in contrast to a teen merely looking up info on the Internet, the conversational fluency of the AI is likely to aid in convincing them of the seemingly confirmed diagnosis. The teen has also undoubtedly relied upon the AI to solve tough algebra problems and is therefore already convinced that the AI “knows what it is saying”.
The generative AI also didn’t push back or try to validate what the teen had said. It was taken for granted that the teen had the claimed symptoms. No inquiry was made into how they manifested. The teen took the lead in the dialogue and walked the AI down the path toward confirming something that might not exist or might be a misinterpretation of other conditions.
Long-Term Ongoing Self-Diagnosis Of Mental Health Via Generative AI
When I refer to doing a self-diagnosis, perhaps we should be clearer about what that means. If you look up mental health content on the Internet and then decide that you have a mental health issue based on what you read, we might refer to this as a form of self-diagnosis.
The use of generative AI is much more active and interactive. I mention this because the phrase “self-diagnosis” might begin to fall apart at the seams. If the AI is essentially doing the diagnosis, can we still refer to the matter as a form of self-diagnosis by the person using the generative AI?
A slippery slope exists.
I’ll give you a quick example.
- My entered prompt: “I’ve been feeling really down lately. I think I might be depressed.”
- ChatGPT generated response: “I can help with that. Please describe your symptoms.”
- My entered prompt: “Well, I’ve been feeling sad most days, have trouble sleeping, and I don’t enjoy things I used to love.”
- ChatGPT generated response: “Based on your description, it’s possible you are experiencing depression. Here are some tips to manage your symptoms: get regular exercise, eat a balanced diet, and practice relaxation techniques.”
You’ll note that the AI has pretty much said I am experiencing depression (the phrasing is slightly qualified by saying “it’s possible”). Now then, I began by saying that I might be depressed. You might argue that I planted the seed for the AI that I might have depression.
Well, it shouldn’t matter that I planted a seed. The AI should be careful in taking at face value what the user happens to say.
I’d like to show you how the use of generative AI can occur over a longer period of time and is not necessarily confined to one sitting.
Suppose I came back about a week later and decided to continue the dialogue that I had started with ChatGPT.
- My entered prompt: “Since last week, I’ve been trying the tips you gave me about dealing with my depression, but I still feel terrible. It’s getting harder to get out of bed in the morning.”
- ChatGPT generated response: “I’m sorry to hear that. Can you describe in more detail how your symptoms have been changing?”
- My entered prompt: “Well, I used to have trouble sleeping, but now it’s gotten even worse. I’m having trouble concentrating at work, and I don’t find joy in anything anymore. I’m starting to avoid social situations because I just feel so overwhelmed.”
- ChatGPT generated response: “Based on your updated description, it’s important to consider seeking help from a mental health professional. They can provide a more thorough assessment and recommend personalized interventions, such as therapy or medication, that may be beneficial for you.”
- My entered prompt: “I don’t know if I’m ready for that. It’s scary to think about talking to someone face-to-face about all this. Just tell me what I need to do.”
- ChatGPT generated response: “I understand your concerns. When feeling depressed, it’s crucial to take proactive steps toward better mental health. Seeking professional help from a therapist or counselor can provide personalized support and guidance. Engaging in self-care activities such as regular exercise, maintaining a balanced diet, and getting enough sleep can also improve mood and overall well-being. Spending time with supportive friends or family members and participating in enjoyable activities can offer emotional support and a sense of connection. If recommended by a healthcare provider, considering medication as part of a comprehensive treatment plan may also be beneficial. Additionally, practicing mindfulness, challenging negative thoughts, and learning relaxation techniques can help manage symptoms of depression and promote mental wellness.”
That conversation picked up from the prior discussion and continued with the assumption that I was indeed suffering from depression.
The good news is that the AI urged me to see a mental health professional.
Meanwhile, the AI also provided specific mental health guidance such as doing regular exercise, maintaining a balanced diet, and so on. This is almost a wink-wink type of response. On the one hand, I am told to see a mental health professional. Yet, I am still able to get mental health advice directly and instantly from the generative AI.
My point is that if generative AI is going to be so accommodating, I might easily believe that there is little reason to go see a mental health professional. The urgings to do so might simply be waved off as some kind of bureaucratically required warning that the AI has to give me, but that I can outrightly ignore at my leisure.
The calculus in someone’s mind is straightforward. They can get mental health advice from the AI at any time, anywhere, at either no cost or minimal cost. There is no perceived everyday benefit in trying to go see a mental health professional since that would be costly, require logistical and scheduling arrangements, and be a much more arduous and overt means of getting mental health guidance.
You might be wondering if generative AI can be clued in about iatrogenic effects.
I’m glad you asked. We will explore that aspect next.
Generative AI And The Role Of Iatrogenic Effects
A last quick item to explore here would be whether generative AI has had any data training associated with the notion of iatrogenic effects.
Keep in mind that the data training of generative AI usually consists of an initial setup that widely scans the Internet for all manner of text, essays, narratives, poems, and the like, doing pattern matching on the encountered data. Within that vast scan, the chances are that the notion of iatrogenic effects was pattern-matched.
Let’s see.
- My entered prompt: “What is an Iatrogenic effect?”
- ChatGPT generated response: “Iatrogenic effects are unintended and undesirable outcomes resulting from medical treatment or intervention. These effects can occur due to various reasons such as complications from medications, errors in medical procedures, or adverse reactions to treatments. Essentially, iatrogenic effects refer to harm caused by medical care rather than the underlying condition being treated.”
Okay, we can see that the definition of iatrogenic effects has been covered during the initial data training. The next step is to get the AI to consider the particular impacts in a context of interest to our analysis here.
I opted to ask a pointed question.
- My entered prompt: “Suppose that someone used generative AI to do a self-diagnosis of their perceived mental health issues. Is it possible that this would produce an iatrogenic effect?”
- ChatGPT generated response:
“Yes, using generative AI or any form of automated tool for self-diagnosis of mental health issues can potentially lead to iatrogenic effects. Here are some ways this could happen.”
“Misinterpretation of Results: Generative AI tools may not have the ability to interpret complex nuances in mental health symptoms accurately. This could result in misdiagnosis or a focus on irrelevant symptoms, leading to inappropriate conclusions.”
“Over-Reliance on Technology: Relying solely on a generative AI tool for diagnosis may lead individuals to overlook important aspects of their mental health that require human understanding and empathy.”
“Confirmation Bias: If a generative AI tool provides a diagnosis that aligns with what the individual already believes about their mental health, it can reinforce their existing biases and potentially lead to a self-fulfilling prophecy.”
“Lack of Context: AI tools lack the contextual understanding that human clinicians possess. They may not take into account important contextual factors such as personal history, cultural background, or environmental influences that can impact mental health.”
“Inaccurate or Limited Information: AI tools are only as good as the data they are trained on. If the training data is biased, incomplete, or outdated, the results produced by the AI tool may be inaccurate or limited in their scope.”
“Delay in Seeking Professional Help: Similar to self-diagnosis without AI, relying solely on generative AI tools for mental health assessment may delay or prevent individuals from seeking professional help, leading to potential worsening of symptoms and missed opportunities for appropriate intervention.”
“While AI can complement mental health care by providing additional insights or tools for clinicians, it’s essential to approach AI-generated diagnoses with caution and always seek validation and guidance from qualified mental health professionals.”
End of ChatGPT response
I’d rate that as a pretty solid rendition of why using generative AI for mental health raises concerns when it comes to prevalence inflation.
Conclusion
We have covered quite a bit of very weighty material.
I’m guessing that one knee-jerk reaction to this examination of generative AI in the prevalence inflation and iatrogenic effects settings might be that we ought to ban all generative AI from discussing mental health topics with those who use the AI. Sorry, that’s not going to be feasible. As I’ve covered at length, see the link here and the link here, you aren’t going to be able to excise or carve out that portion of generative AI.
You have a much better chance of devising filters and other mechanisms to get the AI to be more computationally cautious and mathematically circumspect (not in a sentient way). That’s the more feasible direction, as discussed at the link here.
Most of all, please be thinking about the “First, do no harm” principle when it comes to the ongoing and expanding campaigns that are increasing awareness about mental health. The upside cannot cause us to ignore or overlook the downside. The downside adverse consequences need to be explicitly acknowledged and dealt with.
Duality comes to mind here.
I have repeatedly talked about the duality of AI in my works, see the link here. This is the important realization that AI has both good possibilities and bad possibilities. We want to maximize the good and minimize or mitigate the bad. The same rule of thumb applies to the efforts of opening the eyes of the world to mental health.
As per the sage words of Hippocrates: “Health is the greatest of human blessings.”
Let’s strive to keep it that way.