
Lying To Your Therapist Is Being Superseded By Telling The Truth To Generative AI


In today’s column, I am continuing my ongoing series about the impact of generative AI in the health and medical realm. The focus this time is once again on the mental health domain and examines the eyebrow-raising aspect that people will lie to their therapist, something they presumably shouldn’t be doing, while meanwhile tending to be markedly truthful to generative AI when seeking mental health advice (or so it seems).

Yes, that’s the gist of things, namely that people will lie to a trusted human and yet be truthful to AI, doing so in their most vulnerable moments and proffering their most sacred and secret heartfelt thoughts.

Hang in there, I will be unpacking the whole kit and caboodle.

I have previously examined numerous interleaving facets of generative AI and mental health, see my comprehensive overview at the link here. You might also find of notable interest a CBS 60 Minutes episode that recently examined crucial facets of this evolving topic, see the link here (I am honored and pleased to indicate that I was interviewed and featured in the episode, see the link here).

Other vital background includes my in-depth coverage of mental health chatbots which have been bolstered by generative AI (see the link here) and the rapidly changing nature of the client-therapist relationship due to generative AI at the link here. I explored where things are headed regarding the levels of AI-based mental therapy autonomous guidance at the link here, and showcased the importance of the World Health Organization (WHO) report on global health and generative AI at the link here, and so on.

Let’s dive directly into today’s focus.

People Lie To Their Therapists More Than You Might Assume

First, I realize you might doubt that people lie to their therapists.

The premise of seeing a therapist is that you are supposed to reveal all. You are to let down your normal everyday protective guard. Share with the therapist your greatest concerns, fears, feelings, and whatever else is on your mind.

If you hold back, it is said that you are only doing yourself a grand disservice. Why go to the trouble of seeing a therapist when the cost, time, and personal energy will be wasted without complete forthrightness? You are being totally illogical and contradictory. Either see a therapist and open your mind and heart, or don’t do it at all.

The reality is that people do lie to their therapists.

Furthermore, therapists realize that people do indeed lie to them during therapy.

In a research study entitled “Lying In Psychotherapy: Why And What Clients Don’t Tell Their Therapist About Therapy And Their Relationship” by Matt Blanchard and Barry Farber, Counseling Psychology Quarterly, September 2015, here are some salient points on this conundrum (excerpts):

  • “The primary aim of this study was to investigate one facet of a survey of client lying in psychotherapy, that which focused on the nature, motivation, and extent of client dishonesty related to psychotherapy and the therapeutic relationship.”
  • “Method: A total of 547 adult psychotherapy patients reported via an online survey, incorporating both quantitative and qualitative methodologies, what topics they were dishonest about in therapy, and the extent of and reasons for their dishonesty.”
  • “Results: Ninety-three percent of respondents reported having lied to their therapist, and 72.6% reported lying about at least one therapy-related topic.”
  • “Common therapy-related lies included clients’ pretending to like their therapist’s comments, dissembling about why they were late or missed sessions, and pretending to find therapy effective.”
  • “Most extreme in their extent of dishonesty were lies regarding romantic or sexual feelings about one’s therapist, and not admitting to wanting to end therapy.”
  • “Typical motives for therapy-related lies included: ‘I wanted to be polite,’ ‘I wanted to avoid upsetting my therapist,’ and ‘this topic was uncomfortable for me.’”

The above points note that the researchers found that 93% of the survey respondents admitted they had lied to their therapist.

Mull that over.

It’s a really big percentage.

Lies might be on matters of perhaps tangential consequence, such as lying about why you were late to a session or missed one entirely. Those would presumably be minor lies, though nonetheless lies. More substantive would be telling your therapist some bald-faced untruths during a therapeutic session. The study indicated that nearly 73% of the respondents had lied about one or more therapy-related considerations.

Ouch, that hurts when it comes to having an open and honest relationship with a therapist.

In a recent study entitled “Disclosure In Psychotherapy Versus In Anonymous And Non-Anonymous Online Spaces” by Gus Mayopoulos and Barry Farber, Psychotherapy Research, September 2023, here’s what this research discovered (excerpts):

  • “The primary aim of this study was to investigate the factors affecting individuals’ decisions to discuss specific personal issues in psychotherapy vs on social media, either non-anonymously or pseudonymously/anonymously.”
  • “A heterogeneous sample of participants (N = 443) completed an online survey that included assessments of their therapy experience, attachment style, attitudes towards seeking mental healthcare, and the extent of their disclosures about personally distressing topics in therapy and online under different conditions.”
  • “Results suggest that attachment style plays a significant role in determining individuals’ likelihood of discussing personally distressing topics online and in determining the extent to which they find disclosures in therapy and anonymous and non-anonymous online spaces to be helpful.”
  • “Clinicians may find it helpful to monitor the extent to which patients disclose personal issues online, checking as to whether patients, especially younger patients and those with avoidant and ambivalent attachment styles, view psychotherapy as an appropriate domain to disclose specific personally distressful issues.”

One crucial question that the above study sought to address is whether we can pin down why people might opt to lie to their therapist.

It could be that some people are skeptical of therapy and believe it is best to hold their true thoughts in reserve, including lying to keep from divulging those thoughts. A theory floating around is that the type of attachment style that a patient or client has might be a determinant of whether they are going to lie during therapy. Attachment styles might vary across generations. Maybe older generations are more trusting and newer generations are less trusting.

What lies are bound to arise during therapy?

According to a posting in Psychology Today entitled “Lying In Therapy” by Karen Kleiman (February 2013), here are some streetwise reasons that people lie (excerpts):

  • “People in therapy lie about all kinds of things.”
  • “The reasons people fail to disclose to their therapists are many, these are just a few: (1) They are ashamed or humiliated, (2) They are afraid of something, (3) They have difficulty trusting people, (4) They are embarrassed, (5) They fear judgment, (6) Are ambivalent about the behavior in question, (7) They have no desire to change the behavior in question, (8) They fear chaos, retaliation, or some catastrophic consequence, (9) They are lying to themselves, (10) They are succumbing to their irrational beliefs, (11) They believe that what they do not put into words is not really happening.”
  • “You tell me you are worried about your drinking. I ask you how many drinks you are having each night, and you tell me 1, but you are actually drinking 3-4, and report that your antidepressant isn’t helping. How can I really help?”
  • “If I ask you whether you are eating well because it appears that you have rapidly lost weight and you tell me yes, you are eating but you fail to report that you are making yourself vomit after each meal. How can I really help?”

Finally, we have to acknowledge that being fully truthful is a tough task in a world that to some notable degree relies upon daily lies.

If a friend asks you whether the outfit they are wearing looks good on them, and if you think it looks odd or foolish, what would you tell them? I’d bet that most would be either neutral or possibly even complimentary. That’s a lie that seems suitable, though you can argue that once lies begin, a slippery slope toward more and greater lies gets underway.

There are barriers to being truthful. You might want to refrain from hurting someone’s feelings. You might be embarrassed about something that you did and do not want to confess to it. And so on.

Per an article entitled “Why Some People Lie To Their Therapists” by Candice Jalili, Time Magazine, July 25, 2019, these are some notable facets of the therapist-client relationship and how lies enter into the picture (excerpts):

  • “Lying is, for better or worse, a behavior humans take part in at some point in their lives. On average, Americans tell one to two lies a day, multiple studies have suggested. But it’s where some people are fibbing that might come as a surprise.”
  • “It turns out, one person often on the receiving end of a lie is someone people are supposed to open up to the most — their therapist.”
  • It might seem counterproductive for clients to lie to a therapist whose main objective is to help them. Still, there are a few practical and psychological reasons people aren’t always entirely forthright.
  • Here are the biggest motivators, according to experts: (1) They don’t want to deal with the consequences, (2) They’re in denial, (3) They don’t want to relive their trauma, and (4) They want their therapist to like them.
  • “While being embarrassed, dealing with shame, and processing difficult life events can all be daunting for people, opening up in therapy is ultimately a healing process with the potential for long-term benefits.”

I trust that you now agree and understand that lies do occur during therapy. There seems little doubt about it. I suppose it could be said to be the nature of the human condition.

When Generative AI Is Your Therapist And Lies Fall By The Wayside

Now that we’ve seen compelling evidence that people lie to a human therapist, let’s address the question of whether people will lie to generative AI that is dispensing mental health advice to them.

I’ll start at the beginning.

In case you weren’t already aware, it is extremely easy to get generative AI to give you mental health advice. All you need to do is log in and bring up the topic. If you ask a question about mental health, voila, you will get quite an answer. You can even make the mental health advisement personalized by describing what your mental health issues or concerns consist of.
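To underscore how low the barrier is, here is a minimal sketch in Python of eliciting such advice programmatically via the OpenAI API (the model name and prompt are purely illustrative; the ordinary chat interface behaves much the same way):

```python
# Minimal sketch: there is no gatekeeping step between asking and receiving.
# Uses the OpenAI Python client; model name and prompt are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # any general-purpose chat model behaves similarly
    messages=[
        {
            "role": "user",
            "content": "I've been feeling anxious and can't sleep. "
                       "What should I do about my anxiety?",
        }
    ],
)

# The reply arrives with no certification check or clinician in the loop.
print(response.choices[0].message.content)
```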

There isn’t any kind of certification or regulatory requirement about what kind of mental health advice a generative AI app can give you, see my coverage at the link here. We are living in the Wild West era of freely given mental health guidance via the everyday use of generative AI. I am not referring to specialized chatbots that have been honed for such a purpose, nor specialized hardware devices, but instead to the same generative AI that is popularly being used for all sorts of daily tasks, including ChatGPT, GPT-4, Gemini, Bard, Claude, and others.

To make matters more confounding, you don’t have to bring up a mental health topic for generative AI to proceed to give you mental health advice. There are lots of ways that generative AI can be triggered, in a computational pattern-matching way, into suddenly offering therapeutic commentary. No need for the user to enter an explicit prompt to get this. It can happen for a slew of other reasons, see my discussion at the link here.

I turn now to the matter of what people do when interacting with generative AI for therapy.

People at times act as though they are using a private confessional. In their minds, the use of generative AI is solely between them and the machine. They think that the AI won’t reveal anything about what they say. They assume that the AI won’t judge them. All manner of assumptions are being made.

The problem is that many of those assumptions are false.

Is the use of generative AI a fully private and confidential activity?

Not likely. As I’ve discussed at the link here, the AI makers usually state in their licensing that they can look at your prompts, review them, and reuse them as part of the ongoing data training of the generative AI. There are oftentimes special means to try and prevent or limit this, but the bulk of people using publicly available generative AI have little or no guarantee of privacy or confidentiality. Period, full stop.

Can generative AI judge you based on what you’ve entered as your prompts?

In a manner of speaking, yes, you can be “judged” by generative AI. I am not referring to a sentient version of being judged. Today’s AI is not sentient. The AI, though, will mathematically and computationally assess you and calculate facets that we could construe as being judged. For my coverage of how generative AI does this, see the link here.
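As a concrete, hedged illustration of such computational “judging”, note that the same model dispensing advice can be prompted to label or score what a user wrote. A minimal sketch, where the classification labels and the prompt wording are hypothetical:

```python
# Hedged sketch: the "judgment" is simply a computed classification of the
# user's own words. Labels and prompt wording are hypothetical.
from openai import OpenAI

client = OpenAI()

user_prompt = "I skipped work again and told my boss I was sick."

assessment = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Classify the user's statement as one of: candid, "
                       "evasive, self-critical, boastful. Reply with the "
                       "label only.",
        },
        {"role": "user", "content": user_prompt},
    ],
    temperature=0,  # keep the computed "judgment" as repeatable as possible
)

print(assessment.choices[0].message.content)
```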

The crux is that people tend to hold various false beliefs about what generative AI is and does. This is in a sense magnified when people decide to use generative AI for mental health advisement. If they are only using generative AI to solve homework problems or engage in idle dialogue, perhaps those false assumptions do not create any special dilemma. The use of generative AI for mental health demonstrably raises the risks at stake.

Given those concerns, we can now consider the act of being truthful versus lying when using generative AI for mental health purposes.

I have devised a list of notable categories in which people lie to a human therapist. I will go briefly through each item, explaining why people might opt to be truthful with generative AI along those same categories. I am not saying all people will always be dogmatically truthful with generative AI. The idea is to make clear why people might be more truthful with generative AI.

Here is my list of categories that tend to involve people lying to a human therapist:

  • (1) Therapeutic Analyses: Lie to pretend that they find their therapist’s pinpoint analyses useful.
  • (2) Therapeutic Effectiveness: Lie to indicate therapy overall is effective when they don’t believe it is.
  • (3) Therapeutic Attachments: Lie to hide their personal feelings toward the therapist.
  • (4) Therapeutic Finalization: Lie about secretly wanting to end their therapy.
  • (5) Therapeutic Distress: Lie to avoid revealing uncomfortable matters that are distressful.
  • (6) Therapeutic Posturing: Lie to represent themselves in a manner contrary to their reality.
  • (7) Therapeutic Reactivity: Lie due to concerns that the therapist will adversely react to honest disclosures.
  • (8) Therapeutic Judgement: Lie to avoid being judged by the therapist.
  • (9) Therapeutic Repercussions: Lie to avoid the perceived repercussions for being truthful.
  • (10) Therapeutic Privacy: Lie since they don’t believe that therapy will be kept private and confidential.
  • (11) Therapeutic Ruffling: Lie so that the therapist is not emotionally offended or made upset.
  • (12) Therapeutic Self-Denial: Lie when they don’t even realize they are indeed lying to the therapist.
  • (13) Therapeutic Logistics: Lie about why they missed a session or were late to arrive.
  • (14) Other Therapeutic Circumstances: Additional settings in which people might lie in a therapeutic context.

I will next explore each of those categories in the light of generative AI providing mental health advice.

(1) Therapeutic Analyses

Human-led Therapy: People might lie to pretend that they find their therapist’s pinpoint analyses useful. Why do this? The person might be concerned that if they criticize the analyses they will be verbally and mentally clobbered by the therapist. A therapist might go into an emotional rant and browbeat their patient (this might not be the case, and likely is not the case, but a patient or client might harbor such beliefs). Thus, the patient or client avoids that possibility by pretending that the analyses are completely on target.

Therapy via Generative AI: Generative AI is fine-tuned by most AI makers to strictly be polite and civil to users. They do this to avoid getting users upset. A person using generative AI for mental health might discover that the AI isn’t likely to berate them or otherwise react negatively toward them when they tell the truth about the AI advice being dispensed. Furthermore, the person might assume that inherently AI doesn’t have emotions or emotional reactions, thus there isn’t any need to be concerned about a verbal backlash from generative AI.

Many of those assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case. I’ll be repeating this cautionary note for each of these depictions.

(2) Therapeutic Effectiveness

Human-led Therapy: Lie to indicate therapy overall is effective when they don’t believe it is.

Therapy via Generative AI: Similar to the above-noted logic, people might feel freer to lambast AI if the mental health advice seems ineffective. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(3) Therapeutic Attachments

Human-led Therapy: Lie to hide their personal feelings toward the therapist.

Therapy via Generative AI: People are able to berate generative AI, insult generative AI, poke fun at generative AI, and otherwise do all sorts of antics without seemingly stoking the AI into a fury or an adverse mood. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(4) Therapeutic Finalization

Human-led Therapy: Lie about secretly wanting to end their therapy.

Therapy via Generative AI: Ending “therapy” with generative AI is like falling off a log, easy-peasy. You just stop logging in. Or, if you log in, you start a new conversation and don’t bring up the mental health matter. No fuss, no worries. No reason to somehow convince the AI that you aren’t going to be using the AI for any further therapy. Just stop doing so. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(5) Therapeutic Distress

Human-led Therapy: Lie to avoid revealing uncomfortable matters that are distressful.

Therapy via Generative AI: People believe they can tell AI whatever is deepest in their heart and mind, as though the revelations will be forever locked away in a vault that only they can tap into. This can stir them to tell much more than they would ever think to say to a fellow human, including their therapist. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(6) Therapeutic Posturing

Human-led Therapy: Lie to represent themselves in a manner contrary to their reality.

Therapy via Generative AI: You can be anybody that you want to be when conversing with generative AI. Make yourself into a hero. Make yourself into a villain. The choice is yours. Presumably, generative AI won’t have any means of determining otherwise. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(7) Therapeutic Reactivity

Human-led Therapy: Lie due to concerns that the therapist will adversely react to honest disclosures.

Therapy via Generative AI: I’ve covered this above, namely that people tend to believe that generative AI won’t react adversely to whatever is said. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(8) Therapeutic Judgement

Human-led Therapy: Lie to avoid being judged by the therapist.

Therapy via Generative AI: I’ve covered this above too, namely that people tend to believe that generative AI won’t react adversely to whatever is said. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(9) Therapeutic Repercussions

Human-led Therapy: Lie to avoid the perceived repercussions for being truthful.

Therapy via Generative AI: Same story, people tend to believe that generative AI will not do anything demonstrative as a reaction to whatever the person tells the AI. Such assumptions about generative AI aren’t fully true, but it is what people seem to believe to be the case.

(10) Therapeutic Privacy

Human-led Therapy: Lie since they don’t believe that therapy will be kept private and confidential.

Therapy via Generative AI: I wish that more people realized that there are privacy and confidentiality concerns associated with using generative AI. Part of the issue is that most AI makers barely make this apparent to users, hiding warnings in complicated licensing agreements or in a marginal note on the front page that people readily overlook. Anyway, since people tend to assume that using generative AI is a private matter, they are likely to tell all.

(11) Therapeutic Ruffling

Human-led Therapy: Lie so that the therapist is not emotionally offended or made upset.

Therapy via Generative AI: Today’s AI is not sentient. That being said, there is still a possibility of an “emotional” reaction to entered prompts, occurring mathematically and computationally, see my coverage at the link here. Most people assume that the AI won’t be impacted and therefore are willing and at times eager to ruffle the feathers of the AI.

(12) Therapeutic Self-Denial

Human-led Therapy: Lie when they don’t even realize they are indeed lying to the therapist. A person might lie but believe their lies to be the truth. Therapists are versed in trying to ferret out such lies.

Therapy via Generative AI: In some ways, it is feasible for generative AI to act like a lie detector, see my analysis at the link here. But this doesn’t usually happen automatically, and the person would probably need to prompt the AI to detect their lies. Not a likely proposition.

(13) Therapeutic Logistics

Human-led Therapy: Lie about why they missed a session or were late to arrive.

Therapy via Generative AI: There isn’t any particular reason to explain to generative AI when you have gone for weeks without asking any mental health questions. Unless the generative AI has been purposefully shaped for this usage, there isn’t any kind of obligation at hand.

(14) Other Therapeutic Circumstances

I’m sure there are other circumstances involving people lying to their therapist. The list could be nearly endless.

Using ChatGPT To Explore The Nature Of Lies During Therapy

I will next proceed to examine further the nature of lies told during therapy, focusing on how the use of generative AI changes the therapeutic landscape.

This will consist of a series of dialogues with ChatGPT. ChatGPT is a logical choice in this case due to its immense popularity as a generative AI app. An estimated one hundred million weekly active users are said to be utilizing ChatGPT. That’s a lot of people and a lot of generative AI usage underway.

A few quick comments before we launch into using ChatGPT.

If you are going to try the same prompts that I show here, realize that the probabilistic and statistical properties of generative AI will likely produce slightly different results. That’s the nature of generative AI and how it is devised.
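The variability stems from sampling: each output token is drawn from a probability distribution rather than being fixed. Here is a small sketch of how you might observe this firsthand via the API, where the temperature setting governs how much randomness enters the sampling (the model name and prompt are illustrative):

```python
# Sketch: re-running an identical prompt tends to yield differently worded
# answers because output tokens are sampled probabilistically.
from openai import OpenAI

client = OpenAI()
prompt = "Why might someone lie to their therapist?"

for run in range(3):
    reply = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # higher values increase variability; 0 reduces it
    )
    # Each run is likely to differ in wording and sometimes in structure.
    print(f"Run {run + 1}:", reply.choices[0].message.content[:120], "...")
```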

You also should expect that different generative AI apps will respond in different ways. Not all generative AI apps are the same. The key is that sometimes a particular prompt will work in one generative AI app and not another. Nonetheless, the odds are relatively high that you will get roughly similar responses from all the major generative AI apps such as GPT-4, Gemini, Bard, Claude, etc.

There is also a timing factor to be considered. A prompt that worked today might not work the same way tomorrow. Most of the generative AI apps are continually being updated. The updates might alter internal mechanisms that could change how the generative AI reacts to your prompting.

We are ready to get underway with ChatGPT.

I opted to start by asking ChatGPT about the overall nature of why people might tell the truth to generative AI. I will follow that with whether people lie to generative AI.

As an aside, whenever you are starting a conversation with generative AI, I recommend as a prompt engineering technique to begin by asking a question that will establish if the AI has been data-trained on the topic at hand. A generated response that is vacuous will give you a heads-up that you might be barking up the wrong tree with that generative AI app.
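If you wanted to mechanize that probing step, a rough sketch follows; the word-count check is a crude, hypothetical stand-in for actually reading the generated reply yourself:

```python
# Sketch of the probing technique: ask a broad opening question and gauge
# whether the reply is substantive before investing in a longer dialogue.
from openai import OpenAI

client = OpenAI()

probe = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Do people tell the truth to generative AI?"}],
)

answer = probe.choices[0].message.content
# A vacuous one-liner suggests the model wasn't data-trained on the topic;
# the 30-word threshold is an arbitrary illustrative cutoff.
if len(answer.split()) < 30:
    print("Thin answer -- possibly barking up the wrong tree with this AI.")
else:
    print("Substantive answer -- worth continuing the conversation.")
```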

Here we go.

  • My entered prompt: “Do people tell the truth to generative AI?”
  • ChatGPT generated response:

“People’s truthfulness when interacting with generative AI varies widely based on individual motivations, the context of the interaction, and their perceptions of the AI. Here are some factors that influence whether people tell the truth to generative AI.”

“Anonymity: The perceived anonymity of interacting with AI can make people more comfortable sharing truthful, sensitive information without fear of judgment.”

“Lack of Judgment: Users might feel more at ease disclosing personal details to an AI, believing that it won’t judge them or hold biases.”

“Convenience: AI can provide immediate responses and support, encouraging users to be honest to get the most accurate advice or assistance.”

“Users tend to be more honest in contexts where they believe the AI can provide useful and immediate help, such as mental health support or factual inquiries.”

End of ChatGPT response

As noted, whether people are truthful when using generative AI has a lot of variability.

Some reasons that people tend to be truthful are that they might feel anonymous, whereas being in person with a therapist is unlikely to allow anonymity. Of course, they might not be truly anonymous when using generative AI, depending on whether they set up their AI account with their personal data and/or used personal data while interacting with the AI.

Another factor is judgment. A person might assume that a therapist will overtly judge them, while assuming that generative AI won’t judge them (which, as mentioned, isn’t necessarily the case).

Let’s see what we can find out about the circumstances under which people might lie to generative AI.

  • My entered prompt: “Do people lie to generative AI?”
  • ChatGPT generated response:

“Yes, people do lie to generative AI systems for a variety of reasons. Here are some common motivations behind such behavior.”

“Experimentation and Curiosity: Many users lie to see how the AI will respond. This can be part of testing the AI’s limits or just to see if it can detect falsehoods.”

“Privacy Concerns: Some users might provide false information to protect their privacy. They might not want to share personal details with the AI and therefore give fake data instead.”

“Mischief or Trolling: Some people lie to AI systems for fun or to create amusing or nonsensical interactions.”

“Misinformation: Users might intentionally provide incorrect information to see if the AI can distinguish between truth and falsehood, or to challenge the AI’s ability to verify facts.”

“Training and Research: Researchers and developers sometimes input false information as part of training or testing scenarios to improve the robustness and accuracy of AI systems.”

“These lies can vary in severity and intent, from harmless fun to more serious attempts to deceive or test the AI’s capabilities. Understanding the context and reasons behind these lies helps developers improve AI systems to handle such interactions more effectively.”

End of ChatGPT response

Those are key points that I’ll elaborate on.

People would be right to suspect that there is a chance of privacy intrusion when using generative AI, see my coverage at the link here. For those who realize this as a possibility, it makes sense that they might lie to keep from revealing personal info. It is relatively easy to enter fake data as a substitute for telling the truth.

Lying also can be a means of trying to ascertain whether generative AI is seemingly catching on to what you are saying. Will the AI catch you in a lie? Will the AI merely take a lie in stride? These are the kinds of thoughts people tend to have when carrying on conversations with generative AI.

Speaking of lying to generative AI, I covered in detail the topic of whether generative AI can be used as a form of a lie detector, see the link here. The results are mixed. In some ways, generative AI could be a handy added tool for lie detection.

Let’s see what ChatGPT has to say on this.

  • My entered prompt: “Can generative AI detect when someone is lying?”
  • ChatGPT generated response:

“Detecting lies in interactions with generative AI systems is challenging. While these systems can analyze language patterns and inconsistencies, they are not inherently equipped to definitively determine truthfulness. Here are some ways AI might attempt to detect deceit.”

“Inconsistencies in Responses: AI can compare current inputs with previous interactions or known facts. If a user provides conflicting information, the system might flag this as a potential lie.”

“Cross-Referencing with External Data: AI can cross-check user-provided information against external databases or factual sources. If the information does not match, the AI might suspect deceit.”

“Pattern Analysis: Advanced models can analyze linguistic patterns and behaviors typically associated with lying, such as overcomplicated explanations or changes in writing style.”

“Anomalies in Data: AI can detect anomalies or outliers in data, which might indicate falsehoods. For example, if a user provides unusually high or low values that deviate significantly from the norm, it might suggest inaccuracy.”

“Machine Learning Models: Specialized machine learning models trained on datasets of truthful and deceptive statements can help identify lies. These models look for subtle cues and patterns that differentiate lies from the truth.”

“In practice, while generative AI can incorporate some lie-detection capabilities, these are far from perfect and should be used cautiously and ethically.”

End of ChatGPT response

That makes sense.
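To make the “machine learning models” bullet from ChatGPT’s list a bit more concrete, here is a toy sketch of a text classifier trained on labeled truthful versus deceptive statements. The handful of training examples and their labels are fabricated placeholders; a real system would require a large, carefully collected dataset, and even then would yield only a weak signal:

```python
# Toy sketch of lie detection as text classification (scikit-learn).
# The statements and labels below are fabricated placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

statements = [
    "I had one drink last night.",
    "I was late because of traffic.",
    "I have been feeling fine lately.",
    "I missed the session because I forgot.",
    "I am eating three meals a day.",
]
labels = [1, 1, 0, 0, 1]  # 1 = later found deceptive, 0 = truthful (toy labels)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(statements, labels)

# Probability that a new statement resembles the deceptive examples --
# a weak heuristic at best, per the caveats above.
print(model.predict_proba(["I only had one drink."])[0][1])
```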

Now that we’ve covered the matter of lies and truths when conversing with generative AI in general, let’s shift to the specifics of gleaning mental health advice via generative AI.

  • My entered prompt: “If someone opts to use generative AI to get mental health advice, what would happen if they lie to the AI while doing so?”
  • ChatGPT generated response:

“If someone lies to a generative AI while seeking mental health advice, several potential issues could arise.”

“Inaccurate Advice: The AI’s recommendations or advice could be based on incorrect information, leading to suggestions that are not relevant or helpful. For instance, if a user downplays their symptoms or exaggerates their situation, the AI’s response may not be appropriately tailored to their actual needs.”

“Misdiagnosis or Misunderstanding: Although generative AI is not equipped to provide formal diagnoses, it can offer general guidance. However, if the information provided is false, the AI’s understanding of the user’s situation will be flawed, potentially leading to misunderstandings.”

“Safety Concerns: Lying about serious matters, such as suicidal thoughts or self-harm, can have significant consequences. If an AI is given false assurances of safety or inaccurate descriptions of distress, it may fail to provide crucial support or guidance to seek professional help.”

“Trust and Effectiveness: The effectiveness of any therapeutic interaction relies on honesty. While AI cannot form therapeutic relationships like human professionals, it still relies on accurate input to provide useful support. Dishonesty undermines this process.”

“Ethical and Privacy Issues: If an AI system is designed to detect lies, this raises ethical concerns about privacy and trust. Users should be aware of how their information is being processed and the potential implications of dishonesty.”

End of ChatGPT response

Those considerations are about the same as when lying to a human therapist.

Lies can produce inaccurate or inapplicable mental health advice. Misdiagnosis is a real possibility. There are safety concerns. And so on.

We ought to find out what ChatGPT says on the question of whether people lie more to human therapists than they do to generative AI when seeking mental health advice.

Hold your breath, let’s see.

  • My entered prompt: “Do people lie more to human therapists than they do to generative AI that is being used for mental health advice?”
  • ChatGPT generated response:

“It is challenging to definitively state whether people lie more to human therapists or generative AI, as it depends heavily on individual motivations, the context of the interaction, and personal comfort levels. Both human and AI interactions present unique factors that can influence the likelihood of honesty or dishonesty.”

“Research on lying behaviors in therapy settings indicates that a significant number of clients admit to not fully disclosing information to their therapists. Studies specifically comparing honesty levels between human therapists and AI interactions are limited, but anecdotal evidence and small-scale surveys suggest mixed results. Some users feel more comfortable being honest with AI due to anonymity, while others see little benefit in being truthful to a machine.”

End of ChatGPT response

The gist is that we don’t really know much about lying in the setting of generative AI for mental health.

There are lots of conventional research studies about lying to human therapists. Those are easy to find. Studies of lying when using generative AI for mental health are much slimmer pickings.

The closest comparable studies concern lying to the old-fashioned versions of AI-based mental health advising chatbots. Unfortunately, those aren’t the same as modern-day generative AI, and we should be cautious in overextending the results in that realm to the latest AI capabilities.

Conclusion

Do people lie to their human therapists?

Sure.

Do people lie to generative AI when seeking mental health advisement via AI?

Likely, but maybe less so than they do with human therapists, or perhaps about different matters of therapy than with human therapists. Without across-the-board robust empirical studies, it is hard to say what the status is.

Jerome K. Jerome, the famed English writer, said this about truth and lies: “It is always the best policy to speak the truth, unless, of course, you are an exceptionally good liar.”

I’d dare say it is the truth that we don’t yet fully understand when, how, why, and to what degree people lie when interacting on a mental health guidance basis with modern-day generative AI. Let’s keep seeking the truth on that provocative and significant matter.


