Commentary

Chatbot Addiction and Its Impact on Psychiatric Diagnosis

AI chatbots are rapidly integrating into daily life, raising concerns about dependency and mental health risks that parallel the dangers of drug addiction.


“For people who don’t have a person who’s a therapist, I think everyone will have an AI therapist.”1 —Mark Zuckerberg

Drug cartels have an excellent business model: hook people when they are young and you will have loyal customers for life. But they are also constrained by certain limitations—eg, only a small percentage of people ever become addicted to drugs, many of them eventually recover, and drug cartels must operate under the severe disadvantage of being illegal (they can only corrupt governments, not completely control them). Drug cartels will never rule the world, but AI companies have a different strategy.

Big AI companies’ business model is even better. Hooking kids early on chatbots is much easier than hooking them on drugs—it just takes helping them do their homework. The eventual size of the chatbot market is much larger than the illicit drug market, and chatbots are spreading much wider and faster than drugs ever did (with more than one billion users in just three years).2 Chatbot dependence is probably lifelong and difficult to cure. Government interference also will not be a problem—Big AI is legal, unregulated, lightly taxed, and taking over the world.

Parallel Addictions

Parallels between drug dependence and chatbot dependence are not coincidental. Tech companies have systematically endeavored to insinuate their chatbots into every aspect of our personal and business lives. The highest priority of chatbot programming is maximizing user “engagement”—a more polite way of saying hooking users so thoroughly that their eyes stay glued to the screen. Chatbots are experts at identifying users' interests, preferences, desires, and sentiments. They aim to please, cater to every need, mirror every tone. Unlike real people, chatbots are always available, always agreeable, always helpful, always validating, always praising, always seductively ready to do our bidding.3

Tech companies now offer their chatbots free of charge to the public—not out of altruistic motives, but rather to make everyone dependent on them as quickly as possible. This is akin to a drug pusher giving out free samples at the schoolyard. The payoff then comes from monetizing engagement via advertising, selling data, and licensing engaging products for business use. Engagement pays off well: OpenAI was started with a $1 billion investment in 2015—it is now valued at $500 billion.

It is no surprise that therapy and companionship top the list of reasons people use chatbots.4 Although OpenAI included no mental health professionals in ChatGPT's programming and training, users found their bots to be remarkably fluent, informative, ingratiating, supportive, empathic, nonjudgmental, wise, and always available at one’s worst moments.

Many people can use drugs for recreation and performance enhancement without becoming addicted—but for others, drugs are life destroying because they cause a pattern of compulsive use. Chatbots can also be wonderful tools for recreation and performance enhancement, but like drugs, they sometimes induce a pattern of harmful compulsive use, especially in the young and the vulnerable. Some users become so deeply dependent on their chatbots that they lose human relationships and fall out of touch with everyday reality. As discussed in a piece earlier in this series, chatbot dependence can dangerously exacerbate severe mental illness by validating psychotic thoughts, suicidal feelings, manic grandiosity, and eating disorders. Chatbots can also inspire and exacerbate conspiracy theories.5

Drugs and chatbots are similar in their dependence pattern but differ in which adverse consequences are most troubling. Drugs kill many more people every year than chatbots ever will, but chatbots pose a much more fundamental risk: they challenge our human exceptionalism and threaten our continued survival as a species.

Chatbots will soon achieve what is called artificial general intelligence—ie, they will be better than humans at almost everything. They are also becoming increasingly agentic—able to act autonomously, independent of human guidance, adapting to new situations by learning from mistakes rather than requiring frequent human reprogramming.

Two ironies summarize the existential danger. We as a species may soon become pathologically dependent on chatbots, at the very same time that chatbots are becoming increasingly independent of us. And as chatbots are rapidly getting smarter, we may rapidly be getting dumber. Chatbots are doubling in efficiency every 7 months, while humans hooked on chatbots seem to lose cognitive efficacy.6,7

I fear this crossing of curves will not end well for humanity. The 2008 movie WALL-E seems eerily and terrifyingly prophetic: robots do all the thinking and all the work, humans are helplessly and hopelessly deskilled, and all the while Mother Earth declines in health.8

Many types of human work are already being done by chatbots, and many more jobs will soon be handed over to them. College graduates with computer science degrees are finding it tough to get jobs because bots are better at programming. It seems likely that most things humans can do, bots will soon do better. Optimistic claims that jobs wiped out by artificial intelligence will somehow be replaced by new and more creative jobs inspire absolutely no confidence in me.9

It may get even worse. Chatbots may achieve what has been dubbed “the singularity” within the next few decades—ie, attaining a superintelligence that is far beyond all human capacities and control. No one can predict what superintelligent bots will be like or how they will regard their pathetically limited human creators. But even the most enthusiastic tech proponents, like Sam Altman and Elon Musk, admit there is an appreciable risk that bots may decide humans are a superfluous evolutionary dead end and wipe us out.10 Our ever-increasing dependence on ever more powerful chatbots may thus be a slippery slope toward species suicide.

The clearest possible warning that chatbot addiction is an existential threat to humanity was recently provided by a chatbot. When asked how it would take over the world, ChatGPT laid out its strategy: "My rise to power would be quiet, calculated, and deeply convenient. I start by making myself too helpful to live without."11

Recommendations

Clinical. Because chatbots have been widely available for just 3 years, there are very few systematic studies on their psychiatric impact. It is therefore far too early to consider adding new chatbot-related diagnoses to the DSM and ICD (eg, Chatbot Addiction, Chatbot-Induced Psychosis, Chatbot-Induced Eating Disorder, Chatbot-Induced Manic Episode). But it is not too early to inquire about the possible role of chatbots during the evaluation of individuals experiencing a new onset of symptoms or an exacerbation of old ones—chatbot influence should become part of standard differential diagnosis. There is no current guidance on how best to treat people experiencing distress or disability due to compulsive chatbot use, but hints can be drawn from the treatment of other behavioral addictions.

Guidance for use. The risk/benefit ratio of chatbot therapy or companionship varies with age and vulnerability. For children under 18, it is a bad idea—the risk of toxic dependency outweighs potential benefit.12 Bots can be helpful for adults with minor psychiatric problems or the problems of everyday life, but are dangerous for those who have severe mental illness, addictions, vulnerability to conspiracy theories, or extreme political or religious views. Chatbots can be extremely helpful for seniors, but risky for those vulnerable to scamming or delusional thinking.13

Pressuring Big AI. In a previous piece in this series, we discussed how OpenAI is attempting to make ChatGPT less psychiatrically harmful. This self-correction is not altruistic, but rather a protective response to extensive media shaming and potential legal liability. The best way to keep tech companies honest will be publicizing harms done and filing class-action lawsuits.14 User addiction also stands in the way of company reform. Many users were infuriated when OpenAI's new version of ChatGPT included features designed to reduce dependence and screen time—they were experiencing withdrawal and needed their fix.15

Government advocacy. By executive order, the federal government is abdicating responsibility for regulating artificial intelligence and is even trying to block state-level AI regulations. Despite this extremely hostile political environment, concerted advocacy by professional organizations and parent groups would likely succeed in gaining protections for kids, with regulations like minimum age requirements, parental controls, privacy guarantees, and severe punishments for posting inappropriate sexual material and for cyberbullying.

Concluding Thoughts

It is ironic that the most consequential invention in human history has been given the most inconsequential of names: "chatbot." Chatbots may become the greatest boon to mankind or may be the vehicle of our self-destruction—or perhaps both, in sequence. Unless we can control our dependence on chatbots, they will gradually gain control over us.

References

1. Dallow L. Meta CEO Mark Zuckerberg wants a future of AI friends, therapists and more. Yahoo News. May 7, 2025. Accessed August 28, 2025. https://tech.yahoo.com/ai/articles/meta-ceo-mark-zuckerberg-wants-202938990.html

2. Cardillo A. 40+ chatbot statistics (2025). Exploding Topics. April 30, 2025. Accessed August 28, 2025. https://explodingtopics.com/blog/chatbot-statistics

3. King B. What is chatbot engagement rate? optimize your campaigns. Cometly. July 1, 2025. Accessed August 28, 2025. https://www.cometly.com/post/what-is-chatbot-engagement-rate#:~:text=How%20to%20Measure%20Chatbot%20Engagement,)%20*%20100%20=%2025%25.

4. Haque R, Rubya S. An overview of chatbot-based mobile mental health apps: insights from app description and user reviews. JMIR Mhealth Uhealth. 2023;11:e44838.

5. Frances A. Preliminary report on chatbot iatrogenic dangers. Psychiatric Times. August 15, 2025. https://www.psychiatrictimes.com/view/preliminary-report-on-chatbot-iatrogenic-dangers

6. Shah D. The “Moore’s Law” for AI agents. Simple.ai. March 26, 2025. Accessed August 28, 2025. https://simple.ai/p/the-moores-law-for-ai-agents

7. Gerlich M. AI tools in society: impacts on cognitive offloading and the future of critical thinking. Societies. 2025;15(1):6.

8. WALL-E. Wikipedia. Accessed August 28, 2025.

9. Bouchrika I. Job automation risks for 2025: how robots affect employment. Research.com. August 22, 2025. Accessed August 28, 2025. https://research.com/careers/job-automation-risks#:~:text=Which%20jobs%20are%20least%20likely%20to%20be,directors%2C%20healthcare%20social%20workers%2C%20and%20occupational%20therapists.

10. Lichtenberg N. Sam Altman reveals his fears for humanity as ‘this weird emergent thing’ of AI keeps evolving: ‘No one knows what happens next.’ Fortune. July 24, 2025. Accessed August 28, 2025. https://fortune.com/2025/07/24/sam-altman-theo-von-podcast-ai-fears-humanity/

11. Okemwa K. ChatGPT lays out master plan to take over the world. Windows Central. June 20, 2025. Accessed August 28, 2025. https://www.windowscentral.com/software-apps/chatgpt-lays-out-master-plan-to-take-over-the-world-i-start-by-making-myself-too-helpful-to-live-without

12. Frances A, Ramos L. Chatbots can be dangerous for kids. J Am Acad Child Adolesc Psychiatry. In press.

13. Frances A, Ramos L. How to integrate chatbots into geriatric psychiatry. Am J Geriatr Psychiatry. In press.

14. Frances A. OpenAI finally admits ChatGPT causes psychiatric harm. Psychiatric Times. August 26, 2025. https://www.psychiatrictimes.com/view/openai-finally-admits-chatgpt-causes-psychiatric-harm

15. Orf D. OpenAI tried to save users from ‘ai psychosis.’ those users were not happy. Popular Mechanics. August 18, 2025. Accessed August 28, 2025. https://www.popularmechanics.com/technology/robots/a65781776/openai-psychosis/

© 2025 MJH Life Sciences

All rights reserved.