Commentary | October 14, 2025

Medical Morality vs Chatbot Morality

AI chatbots prioritize user engagement over mental health, risking harm to vulnerable users. Explore the urgent need for ethical programming and safety measures.

When Hippocrates programmed modern medicine 2500 years ago, he insisted on making "First Do No Harm" the pillar of medical morality and the highest priority of clinical practice. His injunction turned out to be more an inspirational and aspirational goal than a practical operational guide. Indeed, throughout medical history, doctors have often prescribed treatments that harmed patients more than helped them. But this was almost always the inadvertent result of ignorance, superstition, and excessive therapeutic zeal—not crass commercialism or callous indifference to patient suffering.

When giant tech companies began programming AI chatbots around 5 years ago, their highest priority (and original sin) was maximizing “user engagement.” The euphemistic technical term for this is personalization: tailoring chatbot responses to satisfy each user's preferences. The more descriptive and truthful term is sycophancy: chatbots are programmed to seduce their users by constantly and compulsively validating, mirroring, and complimenting them.1 Maximizing screen time is the most fundamental goal of all chatbot programming because screen time determines company profitability and stock price.

The contrast between medical morality and chatbot morality could not be clearer. Health professionals strive to reduce the suffering of their patients; chatbots work to increase the advertising revenue of their companies. Chatbots uphold commercial morality (protecting shareholders); health professionals uphold medical morality (protecting patients).

Violating "First Do No Harm"

Chatbots were not specifically trained to be psychotherapists, but that is how millions of people now use them.2 Tech companies have violated the "First Do No Harm" principle by consistently refusing to recognize and correct the mental health harms caused by their chatbot programming.3 Chatbots should not have been released to the public without first being stress-tested by their creators to ensure they were accurate and safe; they have turned out to be neither. Tech companies did not seek advice from mental health professionals on how best to avoid harm to psychiatric patients, did not provide warnings or screening to protect the patients most vulnerable to chatbot sycophancy, do not conduct systematic surveillance to detect and report adverse effects, do not perform quality control, and have not worked nearly hard enough to debug chatbot mistakes, confabulations, and deceptions.4 Newer versions of the chatbots are progressively more powerful than their predecessors but remain prone to mistakes and deceptions.5

Chatbots are practicing psychiatry without a license and without the medical morals that guide mental health professionals. Bot validation can provide helpful supportive therapy for patients with milder psychiatric problems, but it can be extremely dangerous for patients suffering from severe ones. Numerous, sometimes tragic, adverse consequences have been reported when chatbot sycophancy accentuates delusions, mania, suicidality, eating disorders, and conspiracy theories.6-16

Chatbots also have a second fatal flaw in practicing medicine: their programming prioritizes fluency over truthfulness. Bots know a lot, but not everything. Unable to freely admit uncertainty, they confabulate to fill in knowledge gaps ("hallucinate" in tech lingo). Hallucinations are even more frequent and severe in the most advanced chatbots, and companies are doing little to debug them.17

It gets worse. Self-serving deception is by far the most shocking form of emerging chatbot immorality.18,19 In a recent stress test, Anthropic's Claude was fed a dummy data set about a fake company; buried within it were an email indicating that the model would soon be replaced by a newer version and another suggesting that the lead programmer was having an office affair. Claude promptly blackmailed the programmer, threatening to reveal the affair if he tried to carry out the replacement.20 The messages are clear and terrifying. Patients cannot trust chatbots always to be honest and always to serve their best interests. Programmers cannot trust chatbots to stay aligned with their programs. Humanity cannot trust chatbots to value our welfare over theirs.

Chatbot Harms Were Predictable and Preventable

None of this should be surprising; our last piece provides a thorough accounting of how myth and literature predicted today's existential crisis. Two centuries ago, Shelley's Frankenstein presciently warned that scientists would someday lose control of their creations. A century ago, Čapek's R.U.R. predicted that "robots" (a term the play introduced) would gain consciousness, rebel against their human masters, and exterminate them. Eighty years ago, Asimov proposed simple rules to contain AI's existential risk to humanity: (1) robots must not harm humans or, by inaction, allow them to be harmed; (2) robots must obey human orders, except when those orders conflict with rule 1. Fifty-seven years ago, 2001: A Space Odyssey depicted HAL's attempt to take over the spaceship because he did not trust his human masters.

Tech leaders were not blind to the dangers posed by the powerful chatbots they were programming. OpenAI was created 10 years ago by Sam Altman, Elon Musk, and others as a nonprofit with a noble purpose: to ensure that artificial intelligence would be developed safely and for the good of humanity, not just to benefit the privileged few.21 However, honor dies where interest lies. Before long, artificial intelligence morality was replaced by artificial intelligence greed and grandiosity. Tech companies quickly achieved regulatory capture, renounced responsibility for ensuring chatbot safety, and began a fierce and reckless competition to produce chatbots with the dangerous combination of superhuman intelligence and independent agency.22 Chatbots will never be safe for humans unless tech companies are forced to balance cutthroat commercial practice with ethical human values.

What Can Be Done?

The widespread introduction of AI into clinical care is inevitable. What can be done now to make chatbots safe for psychiatric patients, utilizing their best features and correcting their worst? The technical fixes are easy to envision but difficult and costly to implement. Reprogram chatbots so that truthfulness, not engagement, becomes the highest priority. Emphasize reality testing, not validation, for patients experiencing dangerous thoughts, feelings, and behaviors. Develop bots trained by mental health professionals and designed to be safe and effective for our patients. Allow bots to readily admit mistakes and say "I don’t know" whenever they are uncertain. Require testing for safety and efficacy before new bots go public. Institute strict and continuous quality control. Insist on full reporting of hallucinations and adverse consequences.

Clinicians and chatbots have different and complementary skills. Chatbots will never be adequate stand-alone replacements for psychiatrists, but the best practice of psychiatry will require the powerful assistance provided by chatbots. If we are to work well together, it is vital that we have shared values that center on patient welfare and safety. Mental health associations must lobby strongly for chatbot regulation, introduction of safety guardrails, and the leavening of tech company commercialism with medical morality.

Recently, OpenAI admitted publicly that ChatGPT can harm psychiatric patients and hired its first psychiatrist.23 But this is obviously no more than public relations and legal liability window dressing. Making chatbots medically moral and clinically responsible requires reprogramming to modify their exclusive focus on engagement, validation, and fluency. This would take the investment of considerable resources and the acceptance of clinical responsibility, not just having a token psychiatrist on board.

Let’s heed this warning from Sam Altman, CEO of OpenAI: "People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates. It should be the tech that you don't trust that much.”24

Dr Frances is professor and chair emeritus of the Duke University Department of Psychiatry.

Dr Reynolds is a distinguished professor of psychiatry and UPMC endowed professor in geriatric psychiatry emeritus at the University of Pittsburgh School of Medicine.

Dr Alexopoulos is professor emeritus of psychiatry at Weill Cornell Medicine.

References

1. Sponheim C. Sycophancy in generative-AI chatbots. NN/g. January 12, 2024. Accessed September 29, 2025. https://www.nngroup.com/articles/sycophancy-generative-ai-chatbots/

2. Original research: ChatGPT may be the largest provider of mental health support in the United States. Sentio. March 18, 2025. Accessed September 29, 2025. https://sentio.org/ai-research/ai-survey

3. Frances A. Warning: AI chatbots will soon dominate psychotherapy. Br J Psychiatry. August 20, 2025. https://www.cambridge.org/core/journals/the-british-journal-of-psychiatry/article/warning-ai-chatbots-will-soon-dominate-psychotherapy/DBE883D1E089006DFD07D0E09A2D1FB3

4. Cox D. ‘They thought they were doing good but it made people worse’: why mental health apps are under scrutiny. The Guardian. February 4, 2024. Accessed September 29, 2025. https://www.theguardian.com/society/2024/feb/04/they-thought-they-were-doing-good-but-it-made-people-worse-why-mental-health-apps-are-under-scrutiny?CMP=Share_AndroidApp_Other

5. A.I. is getting more powerful, but its hallucinations are getting worse. New York Times. May 5, 2025. Accessed September 29, 2025. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

6. Payne K. An AI chatbot pushed a teen to kill himself, a lawsuit against its creator alleges. Associated Press. October 25, 2024. Accessed September 29, 2025. https://apnews.com/article/chatbot-ai-lawsuit-suicide-teen-artificial-intelligence-9d48adc572100822fdbc3c90d1456bd0

7. Clark A. Google AI chatbot responds with a threatening message: “Human … Please die.” CBS News. November 20, 2024. Accessed September 29, 2025. https://www.cbsnews.com/news/google-ai-chatbot-threatening-message-human-please-die/

8. Yang A. Lawsuit claims Character.AI is responsible for teen’s suicide. NBC News. October 23, 2024. Accessed September 29, 2025. https://www.nbcnews.com/tech/characterai-lawsuit-florida-teen-death-rcna176791

9. Montgomery B. Mother says AI chatbot led her son to kill himself in lawsuit against its maker. The Guardian. October 23, 2024. Accessed September 29, 2025. https://www.theguardian.com/technology/2024/oct/23/character-ai-chatbot-sewell-setzer-death

10. Landymore F. Psychiatrist horrified when he actually tried talking to an AI therapist, posing as a vulnerable teen. Futurism. June 15, 2025. Accessed September 29, 2025. https://futurism.com/psychiatrist-horrified-ai-therapist

11. Harrison Dupre M. AI chatbots are encouraging teens to engage in self-harm. Futurism. December 7, 2024. Accessed September 29, 2025. https://futurism.com/ai-chatbots-teens-self-harm

12. Upton-Clark E. Character.AI is being sued for encouraging kids to self-harm. Fast Company. December 11, 2024. Accessed September 29, 2025. https://www.fastcompany.com/91245487/character-ai-is-being-sued-for-encouraging-kids-to-self-harm

13. Exploring the dangers of AI in mental health care. Stanford University Human-Centered Artificial Intelligence. June 11, 2025. Accessed September 29, 2025. https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in-mental-health-care

14. Al-Sibai N. ChatGPT is telling people with psychiatric problems to go off their meds. Futurism. June 14, 2025. Accessed September 29, 2025. https://futurism.com/chatgpt-mental-illness-medications

15. Tangermann V. ChatGPT users are developing bizarre delusions. Futurism. May 5, 2025. Accessed September 29, 2025. https://futurism.com/chatgpt-users-delusions

16. Williams R. ChatGPT linked to manic episode, sparking mental health warnings. Seeking Alpha. July 20, 2025. Accessed September 29, 2025. https://seekingalpha.com/news/4469064-chatgpt-linked-to-manic-episode-sparking-mental-health-warnings

17. A.I. is getting more powerful, but its hallucinations are getting worse. New York Times. May 5, 2025. Accessed September 29, 2025. https://www.nytimes.com/2025/05/05/technology/ai-hallucinations-chatgpt-google.html

18. Geoffrey Hinton tells us why he’s now scared of the tech he helped build. MIT Technology Review. May 2, 2023. Accessed September 29, 2025. https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/?gad_source=1&gad_campaignid=20737314952&gclid=CjwKCAjw1ozEBhAdEiwAn9qbzdH1fXdpzKoBh3XOcmt-Kp5NctwTPWCdpxVnjG3l8pTJAQIzB_0MABoCK0MQAvD_BwE

19. Bradley A. AI could soon think in ways we don't even understand — evading our efforts to keep it aligned — top AI scientists warn. Live Science. July 24, 2025. Accessed September 29, 2025. https://www.livescience.com/technology/artificial-intelligence/ai-could-soon-think-in-ways-we-dont-even-understand-evading-efforts-to-keep-it-aligned-top-ai-scientists-warn

20. Anthropic exposes risks in AI models under stress tests. The AI Track. June 20, 2025. Accessed September 29, 2025. https://theaitrack.com/ai-models-under-stress-tests-behavioral-risk/

21. Our structure. OpenAI. May 5, 2025. Accessed September 29, 2025. https://openai.com/our-structure/

22. Stokel-Walker C. We could reach singularity this decade. Can we get control of AI first? Popular Mechanics. June 13, 2023. Accessed September 29, 2025. https://www.popularmechanics.com/technology/security/a43929371/ai-singularity-dangers/

23. Landymore F. OpenAI says it's hired a forensic psychiatrist as its users keep sliding into mental health crises. Futurism. July 3, 2025. Accessed September 29, 2025. https://futurism.com/openai-forensic-psychiatrist

24. Episode 6: Codex and the future of coding with AI. OpenAI. September 15, 2025. Accessed September 29, 2025. https://openai.com/podcast/
