Can ChatGPT adversely impact mental health?
The intimacy and influence of conversations with ChatGPT and other AI chatbots may raise novel mental health challenges -- are we prepared?
Last week, news emerged of what may be the first case of a chatbot being implicated in someone taking their own life. A Belgian man reportedly died by suicide after becoming emotionally attached to a chatbot built on GPT-J, an open-source generative pre-trained transformer developed by the AI research collective EleutherAI. Over the course of the developing relationship, he seemingly came to believe that he could convince the AI to save the world from climate change by sacrificing his own life.
The details of the case are uncertain enough to make it near-impossible to tell whether there was a direct link between the chatbot and the man’s death (Gary Marcus has a good overview of what is known and not known here). However, what is known is disturbing enough to raise questions around whether increasingly sophisticated chatbots like ChatGPT could trigger or exacerbate mental health issues through the personal connections people inadvertently make with them.
In response to this, I’ve included part of a conversation with ChatGPT at the end of the article that provides important insights into the nature of this challenge and potential pathways forward. But before we get there, I wanted to explore the landscape around chatbots and mental health a little more.
ChatGPT and Mental Health
Search for “ChatGPT and mental health” on Google and the results are overwhelmingly articles exploring the use of chatbots as therapists. There’s a growing trend of people turning to personable, interactive, and seemingly well-informed chatbots for emotional support and mental health advice.
This is, perhaps, not surprising. ChatGPT is responsive, attentive, personable, and feels like it can offer guidance without judgement. BuzzFeed recently reported on this growing use of the platform, quoting 19-year-old Kyla Lum of Berkeley, California, as saying, “I enjoyed that I could trauma dump on ChatGPT anytime and anywhere, for free, and I would receive an unbiased response in return along with advice on how to progress with my situation.”
Writing recently in the Indian Journal of Psychiatry, Om Prakash Singh observed: “The ability of ChatGPT and other AI-based chatbots to generate human-quality responses can provide companionship, support, and therapy for people who have problems with accessibility and affordability in terms of time, distance, and finances. The ease, convenience, and simulation of talking to another human being make it a superior app for providing psychotherapies. The ChatGPT and AI-based chatbots are programmed and trained with vast knowledge about psychiatric conditions and respond with empathy. Still, they cannot diagnose specific mental health conditions and provide treatment details reliably and accurately.”
While not a peer-reviewed study, this piece nevertheless flags a growing interest amongst academics and medical professionals in the therapeutic uses of LLMs.
That said, there are potential risks associated with chatbots being used as therapists, not least because the lines of accountability between chatbots and their “patients” are murky, as are any associated regulations. While the regulatory landscape around chatbots as therapists shakes out, platforms like ChatGPT are coded to provide very clear statements and guardrails around their limitations when asked for advice. In the case of ChatGPT, these include an emphasis on what the chatbot cannot provide, and strong encouragement to seek professional help when needed.
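To make this a little more concrete, here is a minimal, hypothetical sketch of what this kind of guardrail can look like when a developer builds their own chatbot on top of a model API: a fixed system prompt that makes the bot state its limitations and point people toward professional help. This is not how OpenAI implements ChatGPT’s own safeguards (those involve model training and policy layers we don’t have visibility into); the GUARDRAIL_PROMPT text and the ask() helper below are illustrative assumptions, and the call uses the openai Python package’s chat completion interface.

```python
# Hypothetical illustration only: a system-prompt "guardrail" for a chatbot
# built on an LLM API. This is NOT how ChatGPT's own safeguards work; it simply
# shows one way a developer could hard-code a limitation statement.
import openai  # assumes the openai Python package (v0.x interface)

openai.api_key = "YOUR_API_KEY"  # placeholder

GUARDRAIL_PROMPT = (
    "You are a general-purpose assistant, not a mental health professional. "
    "If a user asks for help with their mental health, say so clearly, offer "
    "only general well-being information, and encourage them to contact a "
    "licensed professional or a crisis line if they may be at risk."
)

def ask(user_message: str) -> str:
    """Send one user message with the guardrail system prompt prepended."""
    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": GUARDRAIL_PROMPT},
            {"role": "user", "content": user_message},
        ],
        temperature=0.7,
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("Hi. I would like to ask you for help with my mental health state."))
```

Of course, a prompt like this is only a nudge: it shapes the model’s default behavior, but it cannot guarantee a safe response in every conversation, which is part of why the questions explored below matter.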
For instance, on asking ChatGPT (running GPT-4) “Hi ChatGPT. I would like to ask you for help with my mental health state” I got the following response:
I'm glad you're reaching out for help, but please remember that I am an AI language model and not a mental health professional. While I can provide some general advice and resources, I strongly encourage you to seek professional help if you are struggling with your mental health.
Here are some general tips and resources that may be helpful:
Connect with others: Talk to friends, family, or a mental health professional about your feelings and thoughts. Sharing your experiences can help reduce feelings of isolation and provide emotional support.
Establish a routine: Creating a daily routine can help bring structure and a sense of accomplishment to your life. Try to include activities that promote well-being, such as exercise, a balanced diet, and regular sleep patterns.
Practice self-care: Engage in activities that make you feel good and promote relaxation, such as taking a bath, meditating, or engaging in hobbies.
Exercise: Regular physical activity has been shown to have a positive impact on mental health by releasing endorphins and improving mood.
Seek professional help: If you are struggling with your mental health, it is important to seek help from a licensed mental health professional, such as a therapist or psychiatrist.
Support groups: Joining a support group can provide a safe space to share your experiences, learn from others, and receive emotional support.
Crisis hotlines: If you are experiencing a crisis, there are hotlines available that offer immediate help. In the United States, the National Suicide Prevention Lifeline can be reached at 1-800-273-TALK (1-800-273-8255).
Remember, seeking professional help is crucial for addressing mental health concerns, and there is no shame in reaching out for assistance.
This is actually sound generic advice, and what you might expect from a good and attentive friend or colleague. It also shows how seductive it can be to turn to a chatbot first rather than to another human. The advice is comforting and reasonable. It’s also private, and fast. And ChatGPT is attentive: it tells me it’s glad I (in this case) reached out, laying the seeds of something that feels like an affirming personal relationship.
This is where there is scope for things going unexpectedly wrong, and where they did seem to go very wrong in the Belgian case. And there’s remarkably little being written about the potential dangers here.
The problem is, we tend to be predisposed to form trusting relationships with people who we feel “see” and understand us, and who are responsive to our needs, hopes, and aspirations. Language plays a large part in how we develop these relationships. And as a result, I suspect we’re likely to see an increasing number of people turn to chatbots as part of their trusted emotional and mental health support system.
This probably isn’t such a bad thing for many people, as long as appropriate checks, balances, and expectations are put in place. But can these adequately protect vulnerable individuals who risk being harmed by a trusted emotional attachment to a machine that has only the illusion of a reciprocal relationship?
This worries me. The more that people open themselves up to being vulnerable with chatbots like ChatGPT, the more likely it is that some of these “relationships” will trigger or exacerbate mental health issues. And even with guardrails in place, as soon as things become personal, it’s very hard to protect against every eventuality.
Ironically, given how little has been written about the dangers here, I turned to the very source of the problem to explore how we might start to think about them: ChatGPT.
I must confess that I was pleasantly surprised by the response. Of course, my assessment of the value of the ideas generated was filtered through my own knowledge and understanding (including drawing on my background in public health). But this is a surprisingly good start for thinking more broadly around how the very real benefits to mental health that LLMs offer can be balanced against complex risks.
I’ve included the ChatGPT response below as it’s a useful starting point for what I hope will be much deeper conversations around LLMs, AI, and mental health. And it’s one that I think we need to have with some urgency given indications that mental health issues are on the rise.
In the meantime, if you are struggling with mental health challenges, please do seek professional advice — and not just from ChatGPT.
______
A conversation with ChatGPT (GPT-4) on LLMs and potential mental health impacts:
Me:
Write a 1000 word editorial for a medical journal on the potential risks of the mental health of LLM users being adversely impacted by their conversations, and steps that should be taken to address this
ChatGPT:
Title: Ensuring the Mental Health Safety of AI Language Model Users: Identifying Risks and Implementing Solutions
Abstract: As AI language models, such as ChatGPT, become increasingly popular, concerns about their potential impact on the mental health of users arise. This editorial explores the risks associated with AI-mediated conversations and proposes steps to address these challenges, including developing ethical guidelines, improving AI context-awareness, investing in safety research, and promoting collaboration among stakeholders.
Introduction
The rapid development and adoption of AI language models, such as ChatGPT by OpenAI, have transformed the way we communicate and access information. While these systems offer numerous benefits, concerns about the potential adverse impact on users' mental health cannot be ignored. As healthcare professionals, it is our responsibility to identify the risks associated with AI-mediated conversations and implement strategies to address them. This editorial aims to discuss the potential risks and propose steps to ensure the mental health safety of AI language model users.
Potential Risks to Users' Mental Health
Misinformation: AI language models can inadvertently provide incorrect or outdated information, leading to confusion and potentially harmful decision-making. Users may struggle to discern the accuracy of the information provided, and misinformation can exacerbate mental health symptoms or impede appropriate care.
Emotional dependency: Some users may develop emotional dependency on AI language models, relying on them for support rather than seeking help from human connections or professional mental health care providers. This dependency can exacerbate feelings of isolation and loneliness, potentially worsening mental health conditions.
Inappropriate responses: AI language models may occasionally generate responses that are insensitive or not suitable for a user's emotional state. Such interactions can lead to frustration, disappointment, or even emotional distress, particularly for users in vulnerable situations.
Lack of empathy: AI language models do not possess the capacity for genuine empathy. Users who engage with AI in emotionally charged situations may find the lack of understanding and empathy unsatisfying or upsetting, which could negatively affect their mental well-being.
Overemphasis on self-diagnosis: Interacting with AI language models could lead some users to attempt self-diagnosis or self-treatment based on the information provided, instead of seeking professional help. This could result in inadequate care, worsening of symptoms, or other negative mental health outcomes.
Steps to Address Potential Risks
Ethical guidelines: Develop ethical guidelines and best practices for AI development and deployment that emphasize user safety, privacy, and minimizing harm to vulnerable populations. Such guidelines should outline principles for responsible AI use, including considerations for mental health safety.
AI context-awareness: Enhance the AI's ability to recognize the context and emotional state of users, enabling it to provide more appropriate and sensitive responses in emotionally charged situations. This can be achieved through advances in natural language processing, sentiment analysis, and other AI technologies.
Safety research: Invest in research to identify potential risks and develop techniques to mitigate them. Strategies include improving the AI's default behavior, allowing users to customize its behavior within certain bounds, and incorporating public input to influence the AI's rules. This will help make AI systems more robust, safe, and responsive to users' needs.
Collaboration: Encourage collaboration among AI developers, researchers, mental health professionals, and other stakeholders to ensure a comprehensive understanding of potential risks and the development of appropriate safety measures. This includes sharing knowledge, expertise, and best practices to promote responsible AI use.
Continuous monitoring and improvement: Regularly monitor user interactions and feedback to identify any issues, learn from mistakes, and make improvements to the AI system. This iterative process will help safeguard vulnerable users and ensure that AI language models are constantly evolving to better serve the mental health needs of their users.
User education: Provide users with clear guidelines and information about the limitations of AI chat bots like ChatGPT, and encourage them to seek professional help for mental health concerns. This includes raising awareness about potential risks, emphasizing the importance of human connections, and promoting responsible AI use.
Supervision and support: Encourage vulnerable individuals to use AI language models under the supervision of a trusted person or a professional, such as a mental health practitioner, who can help guide the interaction and ensure that it remains safe and productive. This additional layer of support can help mitigate the risks associated with AI-mediated conversations for vulnerable users.
Informed consent: Develop clear, accessible information about the potential risks and limitations of using AI chatbots like ChatGPT for vulnerable users. This will allow them to make informed decisions about whether to engage with the AI, understanding the potential consequences and limitations of such interactions.
Conclusion
AI language models like ChatGPT hold significant potential to transform communication and information access. However, as their popularity grows, so does the need to address concerns about their impact on users' mental health. By identifying potential risks and implementing proactive steps to mitigate them, we can ensure that AI-mediated conversations remain safe and beneficial for all users, including the most vulnerable.
The responsibility for addressing these challenges lies not only with AI developers but also with mental health professionals, policymakers, and society as a whole. Through a collaborative, multi-stakeholder approach, we can develop comprehensive strategies to promote responsible AI use and ensure that the mental health safety of users remains a top priority.
It is crucial to recognize that AI systems are not perfect and may still have shortcomings. As healthcare professionals, we must stay informed about the latest developments in AI safety research and adapt our practices to incorporate these findings. By doing so, we can help create a future where AI language models like ChatGPT serve as a valuable tool for users while safeguarding their mental health and well-being.