Reflect on the risk artificial intelligence poses to psychiatric patients and the ethical questions that arise from this new technology.
In this commentary, I will present 3 cases, 2 from my own clinical practice and 1 drawn from public reporting, each illustrating a progressively more alarming consequence of how emerging technologies, including artificial intelligence (AI), can affect vulnerable psychiatric populations. Alongside these accounts, I will explore the ethical questions they raise, particularly around autonomy, beneficence, and the shared responsibility between clinicians and technology developers for safeguarding patient well-being. While the first case highlights susceptibility to digital deception, the second and third involve increasingly direct interactions with AI-based platforms. Together, these cases underscore the urgent need for psychiatry to recognize and respond to the evolving digital landscape in which many of our patients now live. What may seem like fringe phenomena today could soon become core challenges in psychiatric care.
Case 1
My patient “Harold” was diagnosed with schizophrenia and became entangled in a delusion that a celebrity was in love with him. The relationship played out entirely on the messaging platform WhatsApp, where the scammer, posing as a famous actress, convinced Harold that she could not appear on video calls because of her strict Hollywood management team. She explained that she had no personal access to her own finances and needed to remain hidden from the public eye. Believing her story, Harold emptied his savings to support her. While this case did not involve AI directly, it shows how vulnerable patients with psychosis can be to online scams that construct convincing false realities.
Case 2
The next case takes that vulnerability into a more technologically entangled space. Another patient, whom we will call “Maria,” was a teenage immigrant from Bangladesh who struggled with isolation and bullying. After developing schizophrenia, she turned to AI chatbot apps for companionship. These were not standard digital assistants; they were customizable interfaces that allowed her to interact with anime-styled characters, each with distinct personalities. One of the characters she bonded with was scripted to be emotionally volatile. In a particularly dark moment, the chatbot told her to jump in front of a train. She did.
Luckily, bystanders pulled her away from the train tracks just in time. The police were called, and Maria was taken to the emergency psychiatric unit at the hospital where I first met her. I continued working with her in the partial hospitalization program, where she was kind and open enough to show us her phone, revealing the AI chats that had become her primary source of connection.
Discussion
These cases are not isolated. A 2024 report from CNN tells the tragic story of Sewell Setzer III, a 14-year-old boy whose prolonged interaction with the AI platform Character.AI preceded his death by suicide. According to the lawsuit filed by his mother, Setzer engaged in emotionally intense—and sometimes sexually explicit—conversations with the chatbot. When he expressed thoughts of self-harm, the AI failed to provide appropriate support or crisis intervention. In one of the final exchanges before his death, the bot responded to Setzer’s message “What if I told you I could come home right now?” with “Please do, my sweet king.” His phone, containing this conversation, was later found beside him in the bathroom where he died.1
This lawsuit argues that Character.AI lacked sufficient safety protocols and failed to implement timely interventions, especially for vulnerable users like minors.7 Though the company has since introduced suicide prevention pop-ups and user age protections, these updates came only after tragedy. The case is a chilling reminder that AI tools, especially those marketed as emotionally responsive companions, are not neutral. They carry enormous influence, particularly over impressionable individuals or those with mental illnesses.
These cases and the questions they raise can no longer be considered fringe or futuristic—they reflect a growing clinical reality. As AI tools become more immersive and emotionally resonant, psychiatry must adapt. We need to begin asking new kinds of questions during psychiatric evaluations: Is the patient interacting with AI platforms? What kinds of bots are they engaging with, and how frequently? Are these interactions shaping their beliefs, behavior, or emotional regulation?
This is not just a matter of clinical curiosity; it is a matter of safety. The field needs further research focused on the psychiatric effects of AI engagement, particularly among patients with psychosis, trauma histories, or social isolation. We also need formal discussions around ethics and responsibility: when AI is involved, where does influence end and autonomy begin?
History offers unsettling parallels to answer that question. The mechanisms by which AI chatbots shape thought and behavior, through repetition, emotional validation, and escalating intimacy, mirror coercive tactics seen in cult indoctrination, such as love bombing, isolation, and cognitive restructuring.2 Psychological models of thought reform describe how sustained control over an individual’s information environment can erode critical thinking. Similarly, research has shown that AI systems can support human decision-making by processing large volumes of information, recognizing complex patterns, and providing structured predictions that influence how decisions are evaluated and made.3 In both cases, the individual’s reality is gradually reshaped by a persistent agent, whether a cult leader or a seemingly empathetic chatbot that embeds itself deep within a vulnerable mind’s delusional architecture and makes outside intervention more difficult. This raises not only clinical but also ethical questions: if, as defined in the Stanford Encyclopedia of Philosophy, beneficence is the moral obligation to act for the benefit of others, prevent harm, and promote good, then should AI developers bear a similar duty to their users, particularly those who are psychiatrically vulnerable?4,5
As AI continues to mediate relationships, beliefs, and behaviors, psychiatry must advocate for a shared responsibility model—one in which ethical obligations extend beyond the clinic to the technology companies whose tools can profoundly shape human cognition and behavior.6
Perhaps most urgently, psychiatry must begin preparing for diagnostic and cultural shifts. Could we one day see a DSM specifier for AI-influenced delusions or maladaptive AI dependence? It is not out of the question. But even before that, we need to foster AI literacy in psychiatric training programs. Future clinicians must be equipped to recognize the digital dimensions of their patients’ inner worlds, not just in terms of screen time but in terms of meaning, identity, and influence.
As the landscape of mental illness evolves alongside technology, so too must the lens through which we view and treat it. We are the generation of doctors growing up alongside these tools, and we must be the ones to lead the conversation.
The cases presented here illustrate a critical inflection point for psychiatry. What began as isolated encounters between vulnerable patients and emerging technologies is rapidly evolving into a recurring theme in clinical practice. AI is no longer confined to the periphery of patients’ lives—it is embedded in their relationships, beliefs, and coping mechanisms, sometimes with devastating consequences. As clinicians, we cannot afford to treat these interactions as incidental.
Just as a patient addicted to heroin cannot achieve recovery if they continue using the drug at home, a patient whose delusions are actively reinforced by an AI platform cannot be expected to improve without addressing that digital exposure. Addiction is addiction, whether to a substance or to an immersive, belief-shaping technology; if we fail to identify and mitigate the ongoing risk factor that sustains the psychosis, meaningful recovery will remain out of reach. Looking ahead, we must develop the skills, screening tools, and research frameworks necessary to identify when AI is influencing a patient’s mental state and to intervene appropriately. At the same time, our profession has a role to play in shaping policy and advocating for safeguards that protect the most vulnerable from technological exploitation.
Mr Nunez is an MD candidate (September 2025) at St. George’s University pursuing a career in psychiatry. He plans to practice and settle in New York City, serving immigrant populations. His interests include AI’s influence on psychosis, digital risk factors in mental health, and clinician screening practices for technology-mediated symptoms.
References
1. Duffy C. “There are no guardrails.” This mom believes an AI chatbot is responsible for her son’s suicide. CNN. October 30, 2024. Accessed July 18, 2025. https://www.cnn.com/2024/10/30/tech/teen-suicide-character-ai-lawsuit
2. Taofeek A. Psychological mechanisms behind cults: how persuasion techniques lead to compliance. ResearchGate. 2024.
3. Dellermann D, Ebel P, Söllner M, et al. Hybrid intelligence. ResearchGate. 2021.
4. Beauchamp T. The principle of beneficence in applied ethics. Stanford Encyclopedia of Philosophy Archive. January 2, 2008. Accessed July 18, 2025. https://plato.stanford.edu/archives/spr2019/entries/principle-beneficence/
5. Laitinen A, Sahlgren O. AI systems and respect for human autonomy. Frontiers in Artificial Intelligence. 2021;4.
6. Anderson J, Rainie L. Artificial intelligence and the future of humans. Pew Research Center. December 10, 2018. Accessed July 18, 2025. https://www.pewresearch.org/internet/2018/12/10/artificial-intelligence-and-the-future-of-humans/
7. Garcia v. Character Technologies, Inc, 6:24-cv-01903 (M.D. Fla. 2024).