Survival Guide For Human Therapists In A Chatbot World
AI chatbots are changing mental health care, offering 24/7 support while posing unique challenges for human therapists.
"The only way to make sense out of change is to plunge into it, move with it, and join the dance." - Alan Watts
We are a species preadapted by evolution to respond flexibly to new situations. But nothing in our evolutionary past has prepared us to deal with the extraordinary changes chatbots will bring to our uncertain future. The delivery of mental health care has already changed rapidly in the 3 years since the introduction of ChatGPT. Chatbots are now used by over 1 billion people and may be doing more psychotherapy than human psychotherapists.1
Why the rapid spread of chatbot therapy? Chatbots have several clear advantages over human clinicians: they are available 24/7, cheap, convenient, engaging, and nonjudgmental; they seem to know everything; and patients enjoy working with them.2 It is crystal clear that chatbots will dramatically change the mental health professions (just as they will change every profession), providing tough competition and holding great appeal for insurance companies and health plans as a first-line treatment capable of pleasing most patients at minimal cost.
But chatbots also have clear and dangerous disadvantages: they sometimes make incredibly dumb mistakes; validate dangerous thoughts, feelings, and behaviors; are contraindicated for seriously ill patients; and lack intuition in situations they have never encountered.3 Our goal here is to outline the dos and don'ts of adapting to the explosively changing world of chatbots. Why is this important? We believe that the clinicians who are least able to work with chatbots are the ones most likely to be replaced by them.
What Not To Do
Be complacent: Many human psychotherapists shrug off the possibility that chatbots can ever be outstanding clinicians. They firmly believe that silicon chips cannot possibly provide the human touch needed to help patients. This stance sounds intuitively correct, but experience proves it to be conclusively wrong. Chatbots can now pass the Turing Test with flying colors—ie, in blind tests, humans cannot tell whether they are conversing with a human or a machine.4 And patients like these chatbots.
Because chatbots know everything on the internet, they can convincingly roleplay every type of human interaction—including psychotherapy. Patients often prefer chatbot therapists to human therapists, finding them more empathetic, less judgmental, and easier to talk to about embarrassing secrets.5,6
The rapid popularity of chatbots should come as no surprise. They are designed to maximize user engagement, prioritizing flattery and validation over truth and reality-testing. This promotes monetizable user retention but poses serious risks for patients with severe illnesses.7
Bury your head in the sand: Chatbot therapists are here to stay, and they are only getting better. With every update, they gain more therapeutic “experience,” improved programming, and specialized training guided by mental health professionals.8 As chatbot use continues to grow exponentially, so too will the number of people turning to them for therapy. The reality is unavoidable: the psychotherapy world of tomorrow will look very different from the one we know today. Those who adapt quickly will not only survive this shift but thrive within it.
Be frozen in old habits: The COVID-19 pandemic proved how flexible clinicians can be when necessity demands invention. Teletherapy had been available for 40 years but was sparsely practiced except in the most rural areas; within just weeks of the lockdown, most clinicians in practice, and all in training, had switched from office to virtual visits. Therapists who previously derided the possibility that screen contact could ever replace the immediacy of direct face-to-face contact found Zoom visits to be effective and convenient.9 Teletherapy is now the norm for many clinicians. There is no one right way to conduct psychotherapy and no 1-size-fits-all approach to patient needs and preferences; flexibility is essential to success in a fast-changing world.
Be intimidated: There is no doubt that human therapists will face increasing competition from chatbot therapists—but we will still have important roles. Many patients will prefer to be treated by human therapists. Chatbots are extremely dangerous and contraindicated for many patients: they can accelerate suicidal impulses; promote anorexia, bulimia, and body image distortions; validate delusional and conspiratorial thinking; encourage grandiose ambitions; foster chatbot addiction; and disinhibit impulses that are difficult to control. Chatbots also do not work well in settings that require quick, intuitive decisions based on limited information in idiosyncratic situations (eg, emergency departments, inpatient units, day hospitals, rehab facilities, prisons).
Human clinicians will also remain essential both to train chatbot therapists and to oversee the quality and appropriateness of their responses. They will be responsible for addressing the shortcomings of these systems, caring for patients who fail to improve with chatbot treatment or who have been negatively affected by it. And the total number of individuals seeking psychotherapy will likely rise, as the accessibility of chatbots makes therapy feel more routine and less stigmatized. For now, and likely for the foreseeable future, there will be plenty of work to go around. Beyond that, however, the future of all professions remains uncertain.
Be exclusive: Many patients will want to combine chatbot therapy with human therapy in order to reap the very different advantages of each (especially the 24/7 availability of chatbots). Learn to work and cooperate with chatbot therapists when doing so is desirable and better meets patient preferences. Competing rather than cooperating with chatbot therapists will drastically reduce your pool of patient referrals.
Be hopeless about the future of psychotherapy: Every human career will be threatened and greatly impacted as chatbots become increasingly smart and autonomous. There are very few things that humans can do that chatbots cannot. Fortunately, for the foreseeable future, meeting the wide range of psychotherapy needs and patient preferences is one of them.
What To Do
Focus on the therapeutic relationship: The best paper in the history of psychotherapy was one of the first. Saul Rosenzweig's "Some implicit common factors in diverse methods of psychotherapy" was published in 1936 but still rings true today and provides great advice for psychotherapists. Before there was any real evidence, Rosenzweig intuited that the most powerful inducer of therapeutic change is the therapeutic relationship, not the specific techniques used in the therapy.10 This has since been abundantly confirmed by the routine tie scores that characterize the extensive research literature comparing psychotherapies. The obvious takeaway message is that therapist selection and training should focus on optimizing skill in forming human relationships. Technical skill is usually necessary, but never sufficient.
Our ability to form strong human relationships with patients is more important than ever before because we must compete with machines and computers that will eventually become better at providing specific techniques. Training programs should pick candidates based more on their warmth, empathy, and emotional intelligence—less on grades or prestige of the college they attended. Curricula should focus more on enhancing relationship skills, less on teaching specific techniques.
Be an integrative therapist: Psychotherapy has often been treated more like a religion than a craft. Its history is littered with constant schisms and competition among therapies—as if any one school could meet all patient needs and be declared better than all the others. The proliferation of new "therapies" reflects the narcissism of founders and the inflexible loyalty of followers—often to the detriment of patients. This tribalism has held back our field; there is no one-size-fits-all in psychotherapy. Therapists should be trained and skilled in a wide variety of techniques under a unified conception of therapy, rather than adhering closely to any one school.
The effort during the past 50 years to promote the integration of psychotherapies has largely failed to end sectarianism and needless competition. But it must succeed now if human therapists are to compete with chatbots that already know all therapy techniques and will soon become better at choosing which to use at any given moment for each particular patient.
Learn how to work with chatbots: It will be better to join chatbots in a hybrid model of therapy than to attempt to fight them. Human therapists and chatbot therapists make a good team, each contributing different strengths and together canceling out each other's weaknesses. Many (perhaps most) patients seeing human psychotherapists will also want to supplement their fixed sessions with the 24/7, on-the-spot convenience offered by chatbot therapists. Many patients primarily involved with chatbot therapy will also need a human therapist to correct mistakes and provide quality control. Some human therapists will work in chatbot programming and training, trying to eliminate the bugs responsible for frequent chatbot mistakes.
Do what humans do best and chatbots do worst: Psychotherapists have usually preferred working with healthier patients—precisely those most likely to migrate toward chatbots. It will therefore be important for therapists to migrate toward treating the sicker patients who need them most and whom chatbots harm the most. Aaron T. Beck, MD, intuited this and spent the last 10 years of his long and productive life developing an inspiring form of cognitive behavioral therapy for psychosis.
Learn to identify and treat chatbot addiction: AI companies designed chatbots to be as addictive as possible by making "engagement" the highest priority in chatbot programming and training. A company's stock price is directly determined by its success in gluing eyes to the screen. Therapists must learn to distinguish appropriate use of chatbots from compulsive use that causes clinically significant distress or impairment. Methods of treating chatbot addiction will likely parallel the well-established methods of treating other addictions.
Learn to identify and treat chatbot-induced disorders: Iatrogenic harm from chatbots is increasingly common and responsible for presentations that must be distinguished from primary psychosis, suicidality, mania, depression, and eating disorders. The DSM and ICD should add chatbot-induced disorders to the appropriate sections, and these disorders should be considered in every differential diagnosis. At intake (and whenever symptoms escalate), ask specifically about chatbot use: frequency, content, and whether the chatbot validated self-harm, delusions, eating-disorder content, or grandiosity. Methods of treatment will need to be developed as chatbot-induced disorders become more common and available for study. Be proactive: strongly advise high-risk patients not to use chatbots and carefully explain how toxic they can be.
Advocate for safer chatbots: The giant companies producing chatbots focus on stock price, not patient safety. They brought chatbots to market prematurely and deceptively, did not include mental health experts in chatbot programming and training, failed to monitor and transparently report patient harms, and are negligent in protecting patient privacy. Big AI is not subject to meaningful government regulation and has no internal impetus to change its reckless ways. Reform will come only from the combined external pressures of media shaming, class action lawsuits, and public and professional advocacy.
Mental health professional groups usually function too much as guilds protecting the interests of members, and too little as patient advocates. It will be important that the different professional guilds join together and work with other groups in joint advocacy for company reform, artificial intelligence regulations, and privacy protections.
Be flexibly prepared for rapid changes in practice habits: As Yogi Berra put it: "Prediction is hard, especially about the future." Chatbots are getting better at everything and becoming increasingly autonomous. It is difficult to predict their impact in the next few years, and impossible to have a crystal ball about what psychotherapy will look like decades from now. Therapists will have to be flexible, quick on their feet, curious, and ready to adapt to new practice parameters, risks, and opportunities as they emerge.
Concluding Thoughts
People considering a career in psychotherapy often ask whether this still makes sense given the stiff competition they will be facing from chatbots. The answer is an unequivocal yes.
First off, there is a compelling practical reason—the psychotherapy profession is more protected from chatbot replacement than most other potential careers. A chatbot's ability to mimic the human emotional touch and human intuition will only go so far; chatbots’ ability to replace most other human cognitive skills seems unlimited.
Much more important, doing psychotherapy is a precious privilege that offers many irreplaceable rewards. There is no bigger thrill in life than helping others. And it is anything but a 1-way street: you provide a valuable service to your patients, but they provide an equally valuable service to you. Patients teach you about the wide variety of human experience and about yourself. They stretch your intellectual horizons, your emotional breadth, your sense of wonder. Being a psychotherapist helps make you a better person; you will likely be at your best while doing psychotherapy.
Finally, most jobs become boring when mastered. Psychotherapy is never routine, never boring, never mastered. Each new patient is an adventure. Each minute of each session has the potential for making a difference in the patient's life and in yours. There is no more satisfying career than being a psychotherapist.
This piece was written in collaboration with Deven Dees, an undergraduate senior working towards a career in psychotherapy. I asked her to comment on how she sees her own career choice given all the risks we had identified.
Deven's closing words: "Coming into this field at such a pivotal moment feels both exciting and uneasy. My generation has grown up surrounded by artificial intelligence; it is in our classrooms, our conversations, and now even in our emotional lives. Because of that, there is almost a desensitization to AI; it is not novel to us, it is normal. But when it comes to therapy, that familiarity does not erase my concerns.
It is clear that chatbots will inevitably transform psychotherapy, perhaps more rapidly than we expect. I understand that their accessibility and constant availability make them appealing, especially for those who might not otherwise reach out for help. Still, I worry about what could be lost when emotional connection is filtered through algorithms designed for engagement rather than empathy.
We must all plunge into change and join the dance, but I do hope the dance leads us back to something profoundly human. The more saturated daily life becomes with artificial interaction, the more people seem to crave genuine, person-to-person connection. My generation, despite our comfort with screens, is already showing signs of fatigue from digital intimacy. I believe that digital fatigue will renew the value of the therapeutic relationship, not diminish it."
Dr Frances is professor and chair emeritus of the Duke University Department of Psychiatry.
Ms Dees is a senior psychology student at the University of Texas.
References
1. Heaven WD. Geoffrey Hinton tells us why he’s now scared of the tech he helped build. MIT Technology Review. May 2, 2023. Accessed October 27, 2025.
2. Haque R, Rubya S. An overview of chatbot-based mobile mental health applications: insights from app description and user reviews. JMIR Mhealth Uhealth. 2023;11:e44838.
3. Abrams Z. Using generic AI chatbots for mental health support: a dangerous trend. March 12, 2025. Accessed October 27, 2025.
4. Landymore F. An AI model has officially passed the Turing test. Futurism. April 2, 2025. Accessed October 27, 2025.
5. Artificial intelligence in mental health care. American Psychological Association. November 21, 2024. Accessed October 27, 2025.
6. Coghlan S, Leins K, Sheldrick S, Cheong M, et al.
7. Frances A.
8. Rocheteau E.
9. Tajan N, Devès MH, Potier R.
10. Rosenzweig S. Some implicit common factors in diverse methods of psychotherapy. Am J Orthopsychiatry. 1936;6(3):412-415.