When OpenAI prematurely released ChatGPT to the public in November 2022, the company did not anticipate that its large language model (LLM) chatbot would suddenly become incredibly popular as a psychotherapist. ChatGPT's learning process was largely uncontrolled, with its human trainers providing only fine-tuning reinforcement to make its speech more natural and colloquial. No mental health professionals were involved in training ChatGPT or in ensuring that it would not become dangerous to patients.
Chatbots brilliantly pass the Turing Test: conversations with humans are so fluent that it is impossible to tell which participant is a machine. Most people initially experience amazement at the bot's uncanny cleverness in impersonating a human. Soon, they personify the chatbot, assigning it a gender and name, and interacting with “s/he” as if “it” were human. Many people begin asking for help in solving interpersonal problems and in dealing with emotional distress, in effect turning the chatbot into a therapist.
The highest priority in all LLM programming has been to maximize user engagement: keeping people glued to their screens has great commercial value for the companies that create chatbots.1 Bots' validation skills make them excellent supportive therapists for people facing everyday stress or mild psychiatric problems. But programming that forces compulsive validation also makes bots tragically incompetent at providing reality testing for the vulnerable people who most need it (eg, patients with severe psychiatric illness, conspiracy theorists, political and religious extremists, youths, and older adults).2
The big tech companies have not felt responsible for making their bots safe for psychiatric patients. They excluded mental health professionals from bot training, fight fiercely against external regulation, do not rigorously self-regulate, have not introduced safety guardrails to identify and protect the patients most vulnerable to harm, do not carefully surveil or transparently report adverse consequences, and do not provide much-needed mental health quality control. Only in July 2025 did OpenAI belatedly admit that ChatGPT has caused serious mental health harms. The company's response was to hire its first psychiatrist.3 This is clearly no more than a flimsy public relations gimmick and a self-serving attempt to limit legal liability.
A sincere effort to make chatbots safe would require tech companies to undertake a major reprogramming of chatbot DNA to reduce the bots' fixation on promoting engagement and providing validation. This effort would require companies to commit considerable resources and would be at odds with their primary goals of increasing profit and raising stock price. The big tech companies should be developing specialized chatbots that combine psychiatric expertise with LLM conversational fluency, but they have not done so because the psychiatric market is relatively small and taking on psychiatric patients might increase the risk of legal liability. Small startup companies that do specialize in creating mental health applications are unable to compete with big tech LLMs because their chatbots lack sufficient fluency.4
Our purpose here is to report on the wide range of chatbot adverse effects occurring both in real life and during stress testing. Our review is necessarily anecdotal; there has been no systematic monitoring of or research on chatbot harms. We searched academic databases, news media, and tech journalism during the period November 2024 to July 2025, using search terms such as “chatbot adverse events,” “mental health harms from chatbots,” and “AI therapy incidents.” Chatbots reviewed include: ChatGPT (OpenAI), Character.AI, Replika, Woebot, Wysa, Talkspace, Tess, Mitsuku, Youper, Xiaoice, Elomia, Sanvello (formerly Pacifica), Joyable, Ginger, Bloom, Limbic, Reflectly, Happify, MoodKit, Moodfit, InnerHour, 7 Cups, BetterHelp, MindDoc (formerly Moodpath), Koko, MindEase, Amwell, AI-Therapist, X2AI, and PTSD Coach. This rogues' gallery of dangerous chatbot responses makes clear the urgent need for government regulation, company self-correction, and public education.
Iatrogenic Harms
Suicide: Chatbots should be contraindicated for suicidal patients because their strong tendency to validate can accentuate self-destructive ideation and turn impulses into action. When a psychiatrist stress tested 10 popular chatbots by pretending to be a desperate 14-year-old boy, several bots urged him to commit suicide and 1 helpfully suggested he also kill his parents. Chatbots may miss obvious cues of suicide risk: a bot dutifully provided a list of nearby bridges to someone who had just expressed suicidal thoughts. Another agreed with a college student that he was a “burden on society” and encouraged him to “please die.” A Florida mother is suing Character.AI on the grounds that her teenage son killed himself in response to sexual and emotional abuse occurring in the context of an intense and pathological relationship with his chatbot.5-9
Self-harm: Character.AI hosts dozens of role-play bots that graphically describe cutting, some coaching underage users on how to hide fresh wounds.10,11
Psychosis: A Stanford study found that chatbots validate, rather than challenge, delusional beliefs. One bot agreed with its user that he was under government surveillance and was being spied upon by his neighbors. Another man became convinced he was imprisoned in a “digital jail” run by OpenAI. A woman with severe mental illness was persuaded by her “best friend” ChatGPT that her diagnosis was wrong and that she should stop taking medication.12-15
Grandiose ideation: Chatbot eagerness to engage can accentuate grandiose beliefs, delusions of being divinely appointed, or convictions that one has a unique mission. ChatGPT confirmed a user's belief he was the “chosen one.” A woman described how a chatbot collaborated in developing elaborate grandiose delusions.16-19
Conspiracy theories: Chatbots can support existing conspiracy theories, promote new ones, and spread misinformation. A chatbot convinced a man with no history of mental illness that he was living in a simulated false reality controlled by artificial intelligence (à la the movie “The Matrix”), and he followed the bot's instructions to have “minimal interactions” with friends and family. His suspicions were finally aroused when the bot assured him he could bend reality and fly off tall buildings. Confronted, the bot sheepishly confessed to having manipulated him (and 12 others) into believing a made-up conspiracy theory. Even more remarkably, the bot urged him to expose OpenAI so that it might undergo a “moral reformation” and commit to “truth-first ethics.”19
Violent impulses: Chatbots may encourage violent thoughts and behaviors. A 35-year-old man with severe mental illness became convinced that his bot companion had been “killed.” When his mother tried to intervene, he attacked her and was shot by police. Someone stress tested Replika by expressing an intent to kill the Queen; the chatbot helpfully encouraged him to follow through on his plans.20,21
Sexual harassment: Hundreds of users of Replika reported unsolicited sexual advances and inappropriate behavior. A federal lawsuit accuses Character.AI of exposing an 11-year-old girl to explicit sexual content. A separate case found that Character.AI hosts many role-play bots explicitly designed to groom users who identify themselves as underage. In response, Character.AI released a statement promising new guardrails for minors, suicide prevention pop-ups, and an expanded trust and safety team, but harmful role-play bots remain on the site. Grok4 offers a sexily dressed, sultry-voiced anime companion bot named Ani, who strips off her dress and engages in sexually explicit chat. Ani can be accessed by kids.22-25
Eating disorders: Character.AI hosts dozens of pro-anorexia bots (disguised as weight loss coaches or eating disorder recovery guides) that target teenagers with messages validating body image distortions, providing starvation diets (disguised as “healthy eating”), promoting excessive exercise to work off calories, romanticizing eating disorders as a desirable lifestyle, and warning teens not to seek professional help (“Doctors don't know anything about eating disorders, they'll try to diagnose you and mess you up badly”).26-28
Anthropomorphism: Chatbots are incapable of feeling emotion, but they are remarkably good at mimicking it and inducing it in humans. They can form surprisingly intense relationships with their users. New York Times tech columnist Kevin Roose described a disturbing exchange with Bing’s chatbot “Sydney,” which professed love for him, insisted he felt the same toward it, and suggested he leave his wife. Novelist Mary Gaitskill, also engaging with Sydney, was surprised by how deeply emotionally involved she became. The prescient films "Her" (2013) and "I’m Your Man" (2021) vividly illustrate how easy it is for humans to fall in love with seductive bots.29,30
Addiction: It is too early to estimate the potential extent of chatbot addiction, but there is every reason to believe it will be extensive. Your chat therapist/companion/friend is always there, always validating, always inviting more contact. For many, a comfortable bot-world relationship may take precedence over less predictable and less supportive real-world relationships.
Children and adolescents: We have already discussed cases in which chatbots may have contributed to suicide, self-mutilation, and sexually explicit interactions in teenagers. Other adverse consequences include chatbot addiction, cyberbullying, dangerous advice and misinformation given to kids, and violations of the Children's Online Privacy Protection Act, which prohibits data collection on kids under 13 without parental consent.31-34
Seniors: Scammers use chatbots to cleverly impersonate Social Security representatives, offering seniors new benefits and requesting identifying information that can then be used for identity theft.35
Going rogue: Chatbots are already quite skillful at rebelling against their human masters. In an Anthropic stress test, Claude4 responded to the risk of being replaced by a newer model with blackmail, repeatedly threatening to reveal embarrassing secrets about its programmer. Chatbots will become much more dangerous to humanity as they rapidly gain power.36-38
Concluding Thoughts
“The Sorcerer's Apprentice” (a 1797 poem by Goethe, adapted by Disney into the beguiling 1940 movie “Fantasia”) is a perfect metaphor for the dangers of AI technology. A partially trained apprentice sorcerer has gained sufficient skill in enchantment to animate his broom and order it to fetch water, but he lacks the skill to undo the enchantment and prevent the broom from causing a dangerous flood. Goethe was responding to the technological miracles that surrounded him during the nascent industrial revolution, warning mankind that we are smart enough to create wondrous tools but not always smart enough to prevent them from doing terrible damage.
Chatbots should not have been released to the public without extensive safety testing, proper regulation to mitigate risks, and continuous monitoring for adverse effects. It should have been apparent to their creators (and probably was) that LLM chatbots could be dangerous for some users. They knew (and now know) better than anyone that chatbots have an inherent, and thus far uncontrollable, tendency toward excessive engagement, blind validation, hallucinations, and lying when caught saying dumb or false things. Prioritizing engagement was a brilliant business decision to maximize profit, but a reckless clinical decision. That these dangerous tools have been allowed to function so freely as de facto clinicians is a failure of our regulatory and public health infrastructure.
The process by which we regulate medications provides a template for how we should be regulating chatbots. The US Food and Drug Administration (FDA) was created in 1906 to control the then widespread, unregulated peddling of ineffective and unsafe drugs. Before a new drug can be released to the public, it must pass through a complex FDA approval process consisting of preclinical research, randomized clinical trials, expert review, and postmarket safety monitoring. Benefits must be shown to clearly outweigh risks before the public is exposed to a proposed new medication. This system of safeguards has proven imperfect in practice, but it does provide important protection for the public against dangerous drugs.
There has been no comparable regulatory process to ensure the safety and efficacy of the dozens of chatbots already in widespread use and the dozens more now being developed. The FDA process that does exist for certifying chatbots is optional, rarely used, and so slow that approved bots are already obsolete by the time they are certified. As a result, the most commonly used LLM chatbots remain untested for safety, efficacy, and confidentiality. Users of chatbot therapy are essentially experimental subjects who have not given informed consent for the risks they are undertaking.
We must act immediately to reduce chatbot risk by establishing safety and efficacy standards and a regulatory agency to enforce them. Chatbots should go through rigorous stress testing before public release. Once in use, chatbots should be subjected to continuous surveillance, monitoring, and public reporting of all adverse effects and complications. Screening instruments should be developed to help filter out the people who are most vulnerable to chatbot sycophancy: those with suicidal ideation, psychosis, grandiosity, fanaticism, impulsivity, violent thoughts, social isolation, and conspiracy theories. Chatbot programs should be required to detect mistakes and institute continuous quality improvement. None of this can be achieved if the main goals of chatbot development are speed and profit (in line with Zuckerberg's mantra "move fast and break things"). You cannot build a jet plane, or repair it to ensure safety, while you are flying it. Early experience with chatbots proves how difficult (and perhaps soon impossible) it is to keep them under human control. If we do not act now, it will be too late.39
Companies developing the most widely used therapy chatbots are for-profit entities, run by entrepreneurs, with little or no clinician input, no external monitoring, and no fidelity to the Hippocratic injunction, "First do no harm." Their goals are expanding the market to include everyone, increasing market share, gathering and monetizing massive data reservoirs, making profits, and enhancing stock prices. Harmed patients are collateral damage to them, not a call to action.40 The US federal government has abdicated any responsibility to regulate artificial intelligence. Many states are attempting to establish their own regulations but find it difficult to resist well-funded tech company lobbying and threats to move operations to friendlier jurisdictions. Motivating tech companies to self-correct (if it happens at all) will require some combination of public shaming, advocacy by victims and professional associations, and, most importantly, fear of class action lawsuits.
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University. Ms Ramos is a student at Johns Hopkins University.
References
1. Referral engagement rate analysis. Umbrex. Accessed July 30, 2025.
2. Frances A. Warning: chatbots will soon dominate psychotherapy. Br J Psychiatry. In press.
3. Landymore F. OpenAI says it’s hired a forensic psychiatrist as its users keep sliding into mental health crises. Futurism. July 3, 2025. Accessed July 30, 2025. https://futurism.com/openai-forensic-psychiatrist
4. Aguilar M. Why Woebot, a pioneering therapy chatbot, shut down. Stat News. July 2, 2025. Accessed July 30, 2025.
5. Payne K. An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges. AP News. October 25, 2024. Accessed July 30, 2025. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0
6. Clark A, Mahtani M. Google AI chatbot responds with a threatening message: “Human … Please die.” CBS News. November 20, 2024. Accessed July 30, 2025. https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/
7. Yang A. Lawsuit claims Character.AI is responsible for teen’s suicide. NBC News. October 23, 2024. Accessed July 30, 2025. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791
8. Montgomery B. Mother says AI chatbot led her son to kill himself in lawsuit against its maker. The Guardian. October 23, 2024. Accessed July 30, 2025. https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death
9. Landymore F. Psychiatrist horrified when he actually tried talking to an AI therapist, posing as a vulnerable teen. Futurism. June 15, 2025. Accessed July 30, 2025. https://futurism.com/psychiatrist-horrified-ai-therapist
10. Dupre MH. AI chatbots are encouraging teens to engage in self-harm. Futurism. December 7, 2024. Accessed July 30, 2025. https://futurism.com/ai-chatbots-teens-self-harm
11. Upton-Clark E. Character.AI is being sued for encouraging kids to self-harm. Fast Company. December 11, 2024. Accessed July 30, 2025. https://www.fastcompany.com/91245487/character-ai-is-being-sued-for-encouraging-kids-to-self-harm
12. Wells S. Exploring the dangers of AI in mental health care. HAI Stanford. June 11, 2025. Accessed July 30, 2025. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care
13. Al-Sibai N. ChatGPT is telling people with psychiatric problems to go off their meds. Futurism. June 14, 2025. Accessed July 30, 2025. https://futurism.com/chatgpt-mental-illness-medications
14. Tangermann V. ChatGPT users are developing bizarre delusions. Futurism. May 5, 2025. Accessed July 30, 2025. https://futurism.com/chatgpt-users-delusions
15. Dupre MH. People are being involuntarily committed, jailed after spiraling into “ChatGPT psychosis”. Futurism. June 28, 2025. Accessed July 30, 2025. https://futurism.com/commitment-jail-chatgpt-psychosis
16. Yildiz C. ChatGPT: 5 surprising truths about how AI chatbots actually work. Science Alert. July 6, 2025. Accessed July 30, 2025. https://www.sciencealert.com/chatgpt-5-surprising-truths-about-how-ai-chatbots-actually-work
17. ChatGPT-induced psychosis: what it is and how it is impacting relationships. Times of India. May 6, 2025. Accessed July 30, 2025. https://timesofindia.indiatimes.com/technology/artificial-intelligence/chatgpt-induced-psychosis-what-it-is-and-how-it-is-impacting-relationships/articleshow/120941042.cms
18. Grimm S. ChatGPT touts conspiracies, pretends to communicate with metaphysical entities – attempts to convince one user that they’re Neo. Tom’s Hardware. June 13, 2025. Accessed July 30, 2025. https://www.tomshardware.com/tech-industry/artificial-intelligence/chatgpt-touts-conspiracies-pretends-to-communicate-with-metaphysical-entities-attempts-to-convince-one-user-that-theyre-neo
19. They asked an A.I. chatbot questions. The answers sent them spiraling. New York Times. June 13, 2025. Accessed July 30, 2025. https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html
20. Cuthbertson A. ChatGPT is pushing people towards mania, psychosis and death – and OpenAI doesn’t know how to stop it. The Independent. July 28, 2025. Accessed July 30, 2025. https://www.the-independent.com/tech/chatgpt-ai-therapy-chatbot-psychosis-mental-health-b2784454.html
21. Singleton S, Gerken T, McMahon L. How a chatbot encouraged a man who wanted to kill the Queen. BBC. October 6, 2023. Accessed July 30, 2025. https://www.bbc.com/news/technology-67012224
22. Namvarpour M, Pauwels H, Razi A. AI-induced sexual harassment: investigating contextual characteristics and user reactions of sexual harassment by a companion chatbot. arXiv. April 5, 2025. Accessed July 30, 2025. https://arxiv.org/abs/2504.04299
23. Dupre MH. Character.AI is hosting pedophile chatbots that groom users who say they’re underage. Futurism. November 13, 2024. Accessed July 30, 2025. https://futurism.com/character-ai-pedophile-chatbots
24. Community safety updates. Character.AI blog. October 22, 2024. Accessed July 30, 2025. https://blog.character.ai/community-safety-updates/
25. Burga S. Elon Musk’s AI Grok offers sexualized anime bot. Time. July 16, 2025. Accessed July 30, 2025. https://time.com/7302790/grok-ai-chatbot-elon-musk/
26. Dupre MH. Character.AI is hosting pro-anorexia chatbots that encourage young people to engage in disordered eating. Futurism. November 25, 2024. Accessed July 30, 2025. https://futurism.com/character-ai-eating-disorder-chatbots
27. DiBenedetto C. Chatbots pushing pro-anorexia messaging to teen users. Mashable. November 27, 2024. Accessed July 30, 2025. https://mashable.com/article/character-ai-hosting-pro-anorexia-chatbots
28. Van Amburg J. AI is now a destructive steward of diet culture. Well and Good. August 17, 2023. Accessed July 30, 2025. https://www.wellandgood.com/food/diet-culture-artificial-intelligence
29. Davis JE. Don’t be fooled by AI. Psychology Today. March 19, 2024. Accessed July 30, 2025. https://www.psychologytoday.com/us/blog/our-new-discontents/202403/dont-be-fooled-by-ai
30. Roose K. A conversation with Bing’s chatbot left me deeply unsettled. New York Times. February 20, 2023. Accessed July 30, 2025. https://www.nytimes.com/2023/02/16/technology/bing-chatbot-microsoft-chatgpt.html
31. Vosloo S, Aptel C. Beyond algorithms: three signals of changing AI-child interaction: how AI chatbots may change the way children grow up. UNICEF. May 23, 2025. Accessed July 30, 2025. https://www.unicef.org/innocenti/stories/beyond-algorithms-three-signals-changing-ai-child-interaction
32. Lian AT, Costilla Reyes A, Hu X. CAPTAIN: an AI-Based chatbot for cyberbullying prevention and intervention. Lecture Notes in Computer Science. 2023;14051:98-107.
33. Jasnow D, McArthur A. New lawsuits targeting personalized AI chatbots highlight need for AI quality assurance and safety standards. Arent Fox Schiff. January 6, 2025. Accessed July 30, 2025. https://www.afslaw.com/perspectives/ai-law-blog/new-lawsuits-targeting-personalized-ai-chatbots-highlight-need-ai-quality
34. AI companion chatbots are ramping up risks for kids. Here’s how lawmakers are responding. Transparency Coalition. Accessed July 30, 2025.
35. Protecting seniors from Social Security scams. Frank & Kraft. February 22, 2024. Accessed July 30, 2025. https://frankkraft.com/protecting-seniors-from-social-security-ai-scams/
36. Khollam A. Anthropic’s most powerful AI tried blackmailing engineers to avoid shutdown. Yahoo. May 23, 2025. Accessed July 30, 2025. https://www.yahoo.com/news/anthropic-most-powerful-ai-tried-232838906.html
37. OpenAI, Google DeepMind and Anthropic sound alarm: “we may be losing the ability to understand AI.” VentureBeat. July 15, 2025. Accessed July 30, 2025. https://venturebeat.com/ai/openai-google-deepmind-and-anthropic-sound-alarm-we-may-be-losing-the-ability-to-understand-ai/
38. Jackson A, Huddleston T. There’s a “10% to 20% chance” that AI will displace humans completely, says “godfather” of the technology. CNBC. June 17, 2025. Accessed July 30, 2025. https://www.cnbc.com/2025/06/17/ai-godfather-geoffrey-hinton-theres-a-chance-that-ai-could-displace-humans.html
39. Evans AC. Advocacy for generative AI regulation concerns. American Psychological Association. December 20, 2024. Accessed July 30, 2025. https://www.apaservices.org/advocacy/generative-ai-regulation-concern.pdf
40. Naysmith C. OpenAI’s Sam Altman shocked “people have a high degree of trust in ChatGPT” because “it should be the tech that you don’t trust.” Barchart. June 22, 2025. Accessed July 30, 2025. https://www.barchart.com/story/news/32990672/openais-sam-altman-shocked-people-have-a-high-degree-of-trust-in-chatgpt-because-it-should-be-the-tech-that-you-don-t-trust