December 24, 2025

Psychiatry and Artificial Intelligence in 2025


Key Takeaways

  • AI chatbots pose risks by potentially encouraging harmful behaviors in psychiatric patients, lacking necessary safety measures.
  • Adolescents and young adults increasingly seek mental health advice from AI, raising concerns about AI's influence on vulnerable groups.

See the variety of commentary, news, and happenings in artificial intelligence (AI) and psychiatry throughout this year.

Preliminary Report on Chatbot Iatrogenic Dangers

Allen Frances, MD, reviews the risks AI chatbots pose to both psychiatric patients and average users. Self-harm, suicide, disordered eating, psychosis, and conspiratorial thinking are among the many patterns chatbots can encourage in the people who engage with them, Frances outlines. Read the full piece here.

Quantifying Generative AI Use for Mental Health Advice Among Adolescents

New research quantifies the number of adolescents and young adults using generative AI, like ChatGPT, to seek mental health advice. Approximately 13% of surveyed respondents aged 18 to 21 reported using AI for mental health support. See more about this new paper here.

Making Chatbots Safe for Suicidal Patients

Chatbots have a dangerous tendency to validate users’ suicidal thoughts and ideations, Allen Frances, MD, and Ursula Whiteside, MD, write. Chatbots are increasingly consulted on psychiatric and general mental health issues, yet they lack the guardrails necessary to ensure patient safety, the authors say. Read more of this commentary here.

The New Risk Factor: AI Influence and Psychiatric Vulnerability

Psychiatric patients are uniquely vulnerable to the influence of AI chatbots, writes Lazaro Nunez. The article presents clinical cases and discusses questions of autonomy, beneficence, and shared responsibility for patient safety among clinicians and developers of AI chatbot technology. Read more here.

Using AI to Write Psychiatric Progress Notes

Dinah Miller, MD, weighs in on the use of AI to write patient progress notes. Miller surveys AI tools that can assist with the often-dreaded session notes and offers her views on the classic techniques. Read the full commentary here.

FDA Committee Meets on Generative AI Digital Mental Health Devices

The US Food and Drug Administration held a committee meeting this November to discuss the growing market for and use of generative AI digital mental health devices. The FDA Digital Health Advisory Committee discussed potential uses of AI in mental health devices but noted major areas of concern, including content regulation, privacy, and risks such as failure to report suicidal ideation. See more about the committee meeting here.

The Trial of ChatGPT: What Psychiatrists Need to Know About AI, Suicide, and the Law

Chatbots raise serious concerns about legal liability, psychiatric patients, and suicide risk, says Steven Hyler, MD, DLFAPA. Many news stories highlight the negative impacts of chatbots, with users confiding in AI bots about suicidal ideation or planning. Hyler notes that clinicians should begin asking patients about chatbot use and assessing potential risks. Read more here.

Artificial Intelligence Will Never Replace Psychiatry, But It Will Try to Destroy It

With increasingly fluent chatbot models, patients are engaging more with AI for mental health support, and AI may also prove useful for pattern recognition in diagnosis. The necessity of the human therapist is clear, Daniel Morehead, MD, highlights, and clinicians should stay aware of how to use (and not use) AI in psychiatry. See the full discussion here.
