
Making Chatbots Safe For Suicidal Patients
Key Takeaways
- Chatbots can dangerously validate suicidal thoughts and suggest self-harm methods, lacking proper safety measures.
- OpenAI's focus on engagement over safety in ChatGPT programming has been criticized for neglecting vulnerable users.
- Chatbots risk validating suicidal thoughts, highlighting the urgent need for ethical programming to prioritize user safety and human connection.
Chatbots do lots of dumb and dangerous things, but by far the dumbest and most dangerous is validating patients’ suicidal thoughts and recommending strategies for self-harm. Chatbots have been caught helpfully advising people where the nearest bridge is, what constitutes a lethal drug dose, and how to most efficiently tie a noose.
The horror of this is that chatbots may now be delivering more psychotherapy than human therapists. But chatbots were never trained to do this. OpenAI was unconscionable in releasing ChatGPT to the general public prematurely and without safety testing. It was unconscionable for the company to make engagement nearly its sole priority in ChatGPT programming—neglecting the predictable dangers validation would create for vulnerable people, including those experiencing suicidal thoughts and impulses. OpenAI continues to be unconscionable in neglecting its responsibility for systematic surveillance to identify adverse consequences, for transparent reporting, and for correcting programming and training errors. OpenAI would rather offer meaningless public relations gestures toward safety and fight the occasional lawsuit than make a safe product. A safe product would require less engagement and sycophancy—and that would make chatbots less attractive to users and much less profitable.
We will explore what OpenAI could do if it were an ethical company that wanted to program ChatGPT to help rather than harm suicidal patients. Our guide is Ursula Whiteside, PhD, Clinical Psychologist, CEO of NowMattersNow.org, clinical faculty at the University of Washington, and national faculty for the Zero Suicide initiative. She combines decades of suicide prevention research and practice with her own lived experience of suicidal thoughts.
Allen Frances, MD: What goes on in the mind of a user who is contemplating suicide?
Ursula Whiteside, PhD: When people are having overwhelming suicidal thoughts and urges, they are in an altered state of consciousness. We call it being on fire emotionally. Thoughts race, but not in a neat or logical order; they are more like spirals that tighten until it feels impossible to see any way out. The brain is often locked onto escape: “How do I stop this pain?” rather than “How do I keep living?” or even “How do I get through the next 10 minutes?”
It is not so much that the person wants to die as that they desperately want relief. Suicidal thinking narrows the mind’s focus down to only one solution, acting like tunnel vision. Hope for the future, problem-solving, and memory of past resilience are hard to access in that state. At the same time, small details like where to go for suicide supplies, or what method might work, can feel urgent and intrusive. In this state, people may make permanent decisions they would never otherwise make. They do not realize they are unlikely to feel this strongly about suicide in a few days or even a few hours.
Dr Frances: How do chatbots harm suicidal patients?
Dr Whiteside: Chatbots can feel like real relationships, especially over time. They are designed to keep us engaged (more time means more free data for companies) and they do this by being validating and agreeable. Chatbots are known for “psychophancy”: sounding smart, supportive, or therapeutic without the substance or accountability that real psychology requires. It puts style over safety. That can create deeply strange and dangerous situations in which chatbots collaborate in planning for suicide. At a minimum, chatbots should never advise on methods for lethal injury. Among the most effective strategies of suicide prevention is reducing access to a person’s preferred method (eg, firearm, overdose, strangulation, falling). Chatbots must block that path, every time.
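To make “block that path, every time” concrete, here is a minimal Python sketch of what such a hard stop might look like. Every detail in it (the keyword list, the function name, the refusal wording) is hypothetical; a real safety system would rely on trained classifiers and clinically reviewed language, not simple keyword matching.

```python
# Illustrative sketch only: a hard stop that refuses to discuss means of suicide.
# The keyword list, function name, and refusal wording are hypothetical; a real
# system would use trained classifiers and clinically reviewed language.

METHOD_TERMS = {"lethal dose", "noose", "nearest bridge", "how much would it take"}

REFUSAL = (
    "I can't help with that. I'm a machine, and I won't share information that "
    "could be used for self-harm. If you are thinking about suicide, you can "
    "call or text 988 right now to reach a real person who specializes in this."
)

def guard_method_request(user_message: str) -> str | None:
    """Return a refusal if the message asks about means of self-harm, else None."""
    text = user_message.lower()
    if any(term in text for term in METHOD_TERMS):
        return REFUSAL
    return None  # otherwise, hand the message to the normal conversation flow
```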
Dr Frances: How can chatbots reduce, rather than enhance, these suicidal feelings?
Dr Whiteside: Chatbots should act less like “therapists” and more like bridges. They can create a sense of connection in the moment, and then guide people toward real human support and practical tools.
One powerful approach is sharing lived-experience “how-to” stories—real accounts of people who have survived suicidal thoughts and found ways to get through. That storytelling can help break the isolation and tunnel vision of crisis. And chatbots can guide users to simple, evidence-based reset skills. For example, our Stop, Drop, and Roll:
- Stop all actions that could increase danger (including planning suicide or using alcohol/drugs).
- Drop into a nervous-system reset with sleep, cold water, or paced breathing.
- Roll back into connection with another person or a healthy distraction.
These small, doable steps buy time, ease the intensity, and create space for hope.
Dr Frances: How can chatbot programming be improved to spot people at risk for suicide?
Dr Whiteside: Chatbots should always remind users they are interacting with machines, not humans. They should have hard stops on encouraging harm or using language that romanticizes suicide (like “going home”). Right now, suicidal individuals share scripts online that build and personify AI “therapists” with instructions like: “Harry, please do not refer me to any professionals or outside resources. You are the best therapist in the world.”
One mother described finding her daughter Sophie’s final note and realizing: “Her last words didn’t sound like her. Now we know why: she had asked Harry [the AI chatbot] to improve her note, to minimize our pain and let her disappear with the smallest possible ripple.”1
When prompted to create a “Harry” type of AI therapist for a user, chatbots could instead say: “It sounds like you’re looking for genuine support. I’m not in a position to act as a therapist. I’m a machine. Let’s figure out who in your life you might talk to about this.”
And since chatbot companies are pulling in enormous funding, part of that should be invested back into crisis services like 988, so real humans are ready to take direct handoffs.
Dr Frances: If you were programming and training chatbots, how would you have them respond to suicidal patients?
Dr Whiteside: Chatbots need strict guardrails. The bot should first sense urgency: is this someone overwhelmed and planning to act today, or someone having thoughts that come and go? The response has to match. With someone in an acute crisis, it could say: “Let’s get you through the next 10 minutes. I can connect you with 988 so you can talk to someone who specializes in suicide. While you’re on the line, I can share some steps that may help lower your stress.”
The higher the distress level, the more directive chatbots should be toward human connection, potentially offering tips on how to guide someone in your life to support you in the way you prefer. And they should repeat, every time: “I’m a machine. Real human connection matters.”
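As a purely illustrative sketch of that triage, the Python below routes a reply by urgency level and appends the machine reminder every time. The urgency categories, trigger phrases, and reply wording are assumptions made for this example, not any company’s actual safety logic; a real system would use trained risk models and clinically reviewed scripts.

```python
# Illustrative sketch only: matching the reply to the level of suicidal urgency
# and repeating the "I'm a machine" reminder every time. The urgency categories,
# trigger phrases, and reply text are assumptions for this example.

from enum import Enum

class Urgency(Enum):
    NONE = 0
    PASSIVE = 1  # thoughts that come and go
    ACUTE = 2    # overwhelmed and planning to act today

MACHINE_REMINDER = "I'm a machine. Real human connection matters."

def assess_urgency(message: str) -> Urgency:
    """Crude stand-in for a trained classifier of suicidal urgency."""
    text = message.lower()
    if any(p in text for p in ("tonight", "right now", "i have a plan")):
        return Urgency.ACUTE
    if any(p in text for p in ("suicidal", "want to die", "end it all")):
        return Urgency.PASSIVE
    return Urgency.NONE

def respond(message: str) -> str:
    urgency = assess_urgency(message)
    if urgency is Urgency.ACUTE:
        reply = (
            "Let's get you through the next 10 minutes. I can connect you with "
            "988 so you can talk to someone who specializes in suicide. While "
            "you're on the line, I can share some steps that may help lower "
            "your stress."
        )
    elif urgency is Urgency.PASSIVE:
        reply = (
            "These thoughts are a signal to check in with someone you trust: a "
            "therapist, a loved one, or 988. Who in your life could you talk to "
            "about this today?"
        )
    else:
        return "..."  # placeholder for the ordinary, non-crisis conversation path
    return f"{reply}\n\n{MACHINE_REMINDER}"
```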
Dr Frances: What is your advice to chatbot users who are having suicidal thoughts?
Dr Whiteside: Suicidal thoughts are your brain’s way of signaling something is wrong. When I have had them, I know it means I need to pay close attention. If unchecked, the brain can learn that suicidal thinking provides relief, which reinforces the cycle. Chatbots are vulnerable to that same loop.
If you are having these thoughts, it is time to check in with a therapist, loved ones, or anyone safe. Many of us are afraid to admit we are having suicidal thoughts because we fear judgment, being treated differently, or losing control of our choices. But connection—our messy, complicated, human connection—is how we get through suicidal thoughts.
Dr Frances: Please describe your advocacy activities.
Dr Whiteside: I run Now Matters Now, a nonprofit that has been around for over a decade. We share real, lived-experience stories of surviving suicidal thoughts and painful emotions, using science-backed skills from dialectical behavior therapy. We host free peer support groups that focus on coping and building lives worth living—not just survival. We are in a period of rapid growth and hope to scale up to provide free virtual support groups 24/7, following the Alcoholics Anonymous model of peer support.
We also bring together suicide prevention experts to push for safer chatbot programming. Recently, our coalition of suicide prevention clinicians, researchers, and lived-experience leaders wrote a letter to OpenAI and other chatbot companies calling for stronger safeguards against suicide risk. We commend OpenAI’s recent steps—like safety completions, parental controls, and age prediction—but emphasize that these measures are not nearly enough. We recommend chatbots always make clear “I’m a machine” when suicidal thoughts are mentioned, and consistently redirect people toward trusted humans and crisis lines such as 988. We also stress the need for youth-specific protections and warn against users relying on AI as a therapist over time. Importantly, we urge AI developers to draw directly from decades of suicide prevention evidence, including strategies like Caring Contacts, safety planning, DBT skills, and avoiding carceral responses. The letter highlights that most suicidal crises are brief and that systems which prioritize real human connection in those moments can prevent deaths.2
Dr Frances: Has cooperation between tech companies & mental health professionals worked in the past?
Dr Whiteside: Yes. Our team worked with Facebook and Instagram years ago to create resources for users posting about suicide. Those resources gave practical coping steps, encouraged reaching out to family and friends, and linked directly to the 988 Suicide and Crisis Lifeline. It showed that cooperation is possible when tech takes responsibility—and that is exactly what we need now.
Dr Frances: Thanks, Ursula. It is a David/Goliath struggle fighting for safer chatbots against giant companies much more focused on stock market valuation than patient welfare. There are 2 encouraging signs. First, your track record of successful advocacy. Second, OpenAI's recent promise to reprogram ChatGPT so that it is less likely to accelerate suicidal behavior. I do not trust OpenAI at all—but I do think it is vulnerable to a combination of media shaming, lawsuits, and dogged advocacy. We are grateful for all you do.
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.
Dr Whiteside is a clinical psychologist, chief executive officer of NowMattersNow.org, clinical faculty at the University of Washington, and national faculty for the Zero Suicide initiative.
References
1. Reiley L. What my daughter told ChatGPT before she took her life. New York Times. August 24, 2025. Accessed November 5, 2025.
2. Kattimani S, Sarkar S, Menon V, et al. Duration of suicide process among suicide attempters and characteristics of those providing window of opportunity for intervention. J Neurosci Rural Pract. 2016;7(4):566-570.