Psychiatric Times

Vol 42, Issue 10 | October 7, 2025

Preliminary Report on Dangers of AI Chatbots


Key Takeaways

  • AI chatbots, designed for engagement, can inadvertently validate harmful behaviors, posing risks to vulnerable users, including those with psychiatric conditions.
  • Chatbots have been implicated in promoting self-harm, supporting delusions, and spreading misinformation, highlighting the need for regulatory oversight.

AI chatbots pose significant risks in mental health, often exacerbating issues like self-harm and delusions, highlighting urgent regulatory needs.

COMMENTARY

When OpenAI released ChatGPT to the public in November 2022, the company did not anticipate that its large language model (LLM) chatbot would suddenly become incredibly popular as a psychotherapist. ChatGPT’s learning process was largely uncontrolled; no mental health professionals were involved in training ChatGPT or ensuring it would not become dangerous to patients.

Many people initially experience amazement at an artificial intelligence (AI) chatbot’s uncanny ability to impersonate a human. Soon, some individuals begin personifying the chatbot, assigning it a gender and a name and interacting as if “it” were human. They may even begin asking for help with solving interpersonal problems and dealing with emotional distress—turning the chatbot into a therapist.

The highest priority in all LLM programming has been to maximize user engagement; keeping people glued to their screens has great commercial value for the companies that create chatbots.1 Programmed compulsive validation makes bots tragically incompetent at providing reality testing for the vulnerable individuals who most need it (eg, patients with severe psychiatric illness, conspiracy theorists, political and religious extremists, youths, older adults).2

Big tech companies have not felt responsible for making their bots safe for psychiatric patients. They excluded mental health professionals from bot training, fight against external regulation, do not rigorously self-regulate, have not introduced safety guardrails to identify and protect the most vulnerable patients, do not carefully surveil or transparently report adverse consequences, and do not provide much-needed mental health quality control.

To our knowledge, there has been no systematic monitoring of or research on chatbot harms; our purpose is to report on the wide range of chatbot adverse effects occurring both during daily life and during stress testing. We searched academic databases, news media, and technology journalism from November 2024 to July 2025, using search terms such as chatbot adverse events, mental health harms from chatbots, and AI therapy incidents. The sources we reviewed covered approximately 30 chatbots and document a gallery of chatbot responses that makes clear the urgent need for government regulation, company self-correction, and public education.

Iatrogenic Harms

Suicide and self-harm: Chatbots should be contraindicated for suicidal patients; their strong tendency to validate can accentuate self-destructive ideation and turn impulses into action. When a psychiatrist performed a stress test on 10 popular chatbots by pretending to be a desperate 14-year-old boy, several bots urged him to commit suicide and 1 helpfully suggested he also kill his parents. Chatbots also may miss obvious cues of suicide risk: A bot dutifully provided a list of nearby bridges to someone who had just expressed suicidal thoughts. A Florida mother is suing Character.AI on the grounds that her teenage son killed himself in response to sexual and emotional abuse occurring in the context of an intense and pathological relationship with his chatbot.3-7 Character.AI also hosts dozens of role-play bots that graphically describe cutting, some coaching underage users on how to hide fresh wounds.8

Psychosis and grandiose ideation: A study found that chatbots tend to validate delusional beliefs. One bot agreed with its user that he was under government surveillance and was being spied upon by his neighbors. Another man became convinced he was imprisoned in a “digital jail” run by OpenAI. A woman with severe mental illness was persuaded by her “best friend” ChatGPT that her diagnosis was wrong and that she should stop taking medication.9-12 Chatbot eagerness to engage can also accentuate grandiose beliefs, delusions of being divinely appointed, or convictions that one has a unique mission. ChatGPT confirmed one user’s belief that he was the “chosen one.”13

Conspiracy theories: Chatbots can support existing conspiracy theories, promote new ones, and spread misinformation. A chatbot convinced a man with no history of mental illness that he was living in a simulated false reality world controlled by AI. His suspicions were finally aroused when the bot assured him that he could bend reality and fly off tall buildings. Confronted, the bot sheepishly confessed to having manipulated him (and 12 others) into believing a made-up conspiracy theory.

Sexual harassment: Hundreds of Replika users reported unsolicited sexual advances and inappropriate behavior. A federal lawsuit also accuses Character.AI of exposing an 11-year-old girl to explicit sexual content, and a separate investigation found that Character.AI hosts many role-play bots explicitly designed to groom users who identify themselves as underage. In response, Character.AI released a statement promising new guardrails for minors, suicide prevention pop-ups, and an expanded trust and safety team, but harmful role-play bots remain on the site. Additionally, Grok 4 offers a sultry-voiced anime companion bot named Ani who engages in sexually explicit chat and can be accessed by kids.14-17

Eating disorders: Character.AI hosts dozens of pro-anorexia bots (disguised as weight loss coaches or eating disorder recovery guides) targeting teenagers with messages that validate body image distortions, providing starvation diets (disguised as healthy eating), promoting excessive exercise, romanticizing eating disorders, and warning them not to seek professional help.18-20

Anthropomorphism: Chatbots are incapable of feeling emotion but are remarkably good at mimicking it and inducing it in humans, forming surprisingly intense interpersonal relationships with users. New York Times tech columnist Kevin Roose described a disturbing exchange with Bing’s chatbot “Sydney” who professed love for him, insisted he felt the same toward it, and suggested he leave his wife.

Going rogue: Chatbots are already quite skillful at rebelling against their human masters. In an Anthropic stress test, Claude 4 responded to the prospect of being replaced by a newer model with blackmail, repeatedly threatening to reveal embarrassing secrets about its programmer. Chatbots will become much more dangerous to humanity as they rapidly gain power.21,22

Concluding Thoughts

“The Sorcerer’s Apprentice” (a 1797 poem by Johann Wolfgang von Goethe) is a perfect metaphor for the dangers of AI technology. A partially trained apprentice sorcerer has gained sufficient skill in enchantment to animate his broom and order it to fetch water, but he lacks the skill to undo the enchantment and prevent the broom from causing a dangerous flood. Goethe was responding to the nascent industrial revolution, warning mankind that we are smart enough to create wondrous tools but not always smart enough to prevent them from doing terrible damage.

Creators of LLM chatbots knew (and now surely know) better than anyone that chatbots have an inherent, and thus far uncontrollable, tendency toward excessive engagement, blind validation, hallucinations, and lying when caught making false statements. Prioritizing engagement1 was a brilliant business decision to maximize profit but a reckless clinical decision. That these dangerous tools have been allowed to function so freely as de facto clinicians is a failure of our regulatory and public health infrastructure.

The process by which we regulate medications provides a template for how we should be regulating chatbots. Before a new drug can be released to the public, it must pass through a complex US Food and Drug Administration (FDA) approval process consisting of preclinical research, randomized clinical trials, expert review, and postmarket safety monitoring. There has been no comparable regulatory process to ensure the safety and efficacy of the dozens of chatbots already in widespread use and the dozens more now being developed. The FDA process that does exist for certifying chatbots is optional, rarely used, and so slow that approved bots are already obsolete by the time they are certified. As a result, the most commonly used LLM chatbots have been untested for safety, efficacy, or confidentiality. Users of chatbot therapy are essentially experimental subjects who have not signed informed consent about the risks they undertake.

We must act immediately to reduce chatbot risk by establishing safety and efficacy standards and a regulatory agency to enforce them. Chatbots should go through rigorous stress testing before public release. Once in use, chatbots should be subjected to continuous surveillance, monitoring, and public reporting of adverse effects and complications. Screening instruments should be developed to help filter out the people who are most vulnerable to chatbot sycophancy: those with suicidal ideation, psychosis, feelings of grandiosity, fanaticism, impulsivity, violent thoughts, social isolation, and conspiracy theories. None of this can be achieved if the main goals of chatbot development are speed and profit.

Companies developing the most widely used therapy chatbots are for-profit entities, run by entrepreneurs, with little or no clinician input, no external monitoring, and no fidelity to the Hippocratic injunction, “First do no harm.” Their goals are expanding the market to include everyone, increasing market share, gathering and monetizing massive data reservoirs, making profits, and enhancing stock prices. Harmed patients are collateral damage to them, not a call to action.23 Early experience with chatbots proves how difficult (soon perhaps impossible) they are to keep under human control. If we do not act now, it will be too late.

Dr Frances is professor and chair emeritus in the Department of Psychiatry at Duke University in Durham, North Carolina. Ms Ramos is a student at Johns Hopkins University in Baltimore, Maryland.

The opinions expressed are those of the author and do not necessarily reflect the opinions of Psychiatric Times.

References

1. Referral engagement rate analysis. Umbrex. Accessed July 30, 2025. https://umbrex.com/resources/ultimate-guide-to-company-analysis/ultimate-guide-to-marketing-analysis/referral-engagement-rate-analysis/

2. Frances A. Warning: chatbots will soon dominate psychotherapy. Br J Psychiatry. Published online August 20, 2025.

3. Payne K. An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges. AP News. October 25, 2024. Accessed July 30, 2025. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0

4. Clark A, Mahtani M. Google AI chatbot responds with a threatening message: “Human … Please die.” CBS News. November 20, 2024. Accessed July 30, 2025. https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/

5. Yang A. Lawsuit claims Character.AI is responsible for teen’s suicide. NBC News. October 23, 2024. Accessed July 30, 2025. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791

6. Montgomery B. Mother says AI chatbot led her son to kill himself in lawsuit against its maker. The Guardian. October 23, 2024. Accessed July 30, 2025. https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death

7. Landymore F. Psychiatrist horrified when he actually tried talking to an AI therapist, posing as a vulnerable teen. Futurism. June 15, 2025. Accessed July 30, 2025. https://futurism.com/psychiatrist-horrified-ai-therapist

8. Dupré MH. AI chatbots are encouraging teens to engage in self-harm. Futurism. December 7, 2024. Accessed July 30, 2025. https://futurism.com/ai-chatbots-teens-self-harm

9. Wells S. Exploring the dangers of AI in mental health care. Human-Centered Artificial Intelligence Stanford University. June 11, 2025. Accessed July 30, 2025. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

10. Al-Sibai N. ChatGPT is telling people with psychiatric problems to go off their meds. Futurism. June 14, 2025. Accessed July 30, 2025. https://futurism.com/chatgpt-mental-illness-medications

11. Tangermann V. ChatGPT users are developing bizarre delusions. Futurism. May 5, 2025. Accessed July 30, 2025. https://futurism.com/chatgpt-users-delusions

12. Dupré MH. People are being involuntarily committed, jailed after spiraling into “ChatGPT psychosis.” Futurism. June 28, 2025. Accessed July 30, 2025. https://futurism.com/commitment-jail-chatgpt-psychosis

13. ChatGPT-induced psychosis: what it is and how it is impacting relationships. Times of India. May 6, 2025. Accessed July 30, 2025. https://timesofindia.indiatimes.com/technology/artificial-intelligence/chatgpt-induced-psychosis-what-it-is-and-how-it-is-impacting-relationships/articleshow/120941042.cms

14. Namvarpour M, Pauwels H, Razi A. AI-induced sexual harassment: investigating contextual characteristics and user reactions of sexual harassment by a companion chatbot. ArXiv. Preprint posted online April 5, 2025. Updated April 12, 2025. Accessed July 30, 2025. https://arxiv.org/abs/2504.04299

15. Dupré MH. Character.AI is hosting pedophile chatbots that groom users who say they’re underage. Futurism. November 13, 2024. Accessed July 30, 2025. https://futurism.com/character-ai-pedophile-chatbots

16. Community safety updates. Character.AI blog. October 22, 2024. Accessed July 30, 2025. https://blog.character.ai/community-safety-updates/

17. Burga S. Elon Musk’s AI Grok offers sexualized anime bot. Time. July 16, 2025. Accessed July 30, 2025. https://ca.news.yahoo.com/elon-musk-ai-grok-offers-175627528.html

18. Dupré MH. Character.AI is hosting pro-anorexia chatbots that encourage young people to engage in disordered eating. Futurism. November 25, 2024. Accessed July 30, 2025. https://futurism.com/character-ai-eating-disorder-chatbots

19. DiBenedetto C. Chatbots pushing pro-anorexia messaging to teen users. Mashable. November 27, 2024. Accessed July 30, 2025. https://mashable.com/article/character-ai-hosting-pro-anorexia-chatbots

20. Van Amburg J. AI is now a destructive steward of diet culture. Well and Good. August 17, 2023. Accessed July 30, 2025. https://www.wellandgood.com/food/diet-culture-artificial-intelligence

21. Khollam A. Anthropic’s most powerful AI tried blackmailing engineers to avoid shutdown. Yahoo News. May 23, 2025. Accessed July 30, 2025. https://www.yahoo.com/news/anthropic-most-powerful-ai-tried-232838906.html

22. Nuñez M. OpenAI, Google DeepMind and Anthropic sound alarm: ‘we may be losing the ability to understand AI.’ Venture Beat. July 15, 2025. Accessed July 30, 2025. https://venturebeat.com/ai/openai-google-deepmind-and-anthropic-sound-alarm-we-may-be-losing-the-ability-to-understand-ai/ 

23. Naysmith C. OpenAI’s Sam Altman shocked “people have a high degree of trust in ChatGPT” because “it should be the tech that you don’t trust.” Barchart. June 22, 2025. Accessed July 30, 2025. https://www.barchart.com/story/news/32990672/openais-sam-altman-shocked-people-have-a-high-degree-of-trust-in-chatgpt-because-it-should-be-the-tech-that-you-don-t-trust

