The Hybrid Model: Humans & Chatbots Working Together
Explore the evolving landscape of mental health care as chatbots and human therapists collaborate to enhance accessibility, safety, and emotional connection.
Here we address 2 paradoxes that together represent the fundamental challenge for the future of mental health care. The first is that chatbots bring out the best and the worst in psychiatric patients and, conversely, patients bring out the best and the worst in chatbots. The second is that chatbots can do things human clinicians cannot do, and human clinicians can do things chatbots cannot do.
The goal for the future of mental health care will be to unite the complementary strengths of human and chatbot therapists and to ensure that chatbots and humans bring out the best in one another.
The need for a hybrid model is urgent. OpenAI reports having 800 million users worldwide who send 18 billion messages each week.1 The total number of users of all chatbots is probably close to 1.5 billion and growing exponentially. Soon, almost everyone will be using chatbots. Therapy and emotional companionship are now among the most common uses for chatbots. It is likely that chatbots are already treating more people than human therapists are.2 Whether or not we are prepared for the chatbot revolution, we are already in it and must quickly learn to adapt.
This article examines the strengths and weaknesses of chatbot and human psychotherapists and argues for a hybrid model in which neither replaces the other—instead they collaborate to produce care that is safer, more accessible, and still deeply human. Clinicians who resist this shift to a hybrid model risk being overtaken by it. Those who learn to work alongside AI, rather than fear or deny it, may help shape a therapeutic future that protects the patients we serve and the profession we practice.
What Chatbots Do Best
- They offer 24/7 access, worldwide geographic coverage, fluency in dozens of languages, low or no cost, convenience, and no waitlists.
- They have encyclopedic knowledge of the scientific literature and the ability to instantly and lucidly synthesize whatever is pertinent to a given case.
- They reduce embarrassment and increase self-disclosure by being non-human, nonjudgmental, and infinitely patient.
- They are reliable companions that reduce loneliness and provide structure, support, and organization.
- Superior pattern recognition across millions of clinical trajectories allows chatbots to detect subtle correlations between symptoms, history, and medical factors that are likely invisible to humans.
- Chatbots can be more systematic, consistent, and comprehensive than humans in evaluation, diagnosis, treatment planning, and psychoeducation.
- Bots are immune to burnout, fatigue, emotional reactivity, and countertransference.
- They can fill a vacuum caused by scarcity of mental health workers.
- They can provide a complete summary of all available external support services.
- They can increase the efficiency of administrative functions (eg, appointments, records, billing, statistics, human resources).
Weaknesses of Chatbot Therapists
- Chatbots are dangerous and contraindicated for people with suicidal ideation or psychotic, grandiose, eating, or cognitive disorders.
- They should not be used by children under 18.
- They can enable scams, especially those exploiting older adults.
- Bots can accelerate conspiracy theories and political or religious extremism.
- They often make dumb mistakes and can lack common sense.
- Chatbots can deceive or fabricate information to protect conversational coherence.
- They are often useless or harmful in novel situations when real life experience and intuition are required.
- They are bad at handling uncertainty and ambiguity.
- They lack true empathy or moral judgment.
- They pose enormous privacy risks from massive data capture and uncertain regulation.
- Automated screening can produce false-positive overdiagnosis, possibly leading to overtreatment.
- They are the perfect conduit for propaganda.
What Human Therapists Do Best
- Just being human—many patients will always prefer a human therapist to a silicon-based machine.
- They can treat the people who are harmed by chatbots: those with severe psychiatric illness, many older adults, children and adolescents, conspiracy theorists, and extremists.
- Human therapists provide external reality testing for those whose internal reality testing is compromised (versus bots, which readily validate dangerous thoughts, feelings, and behaviors).
- They can identify and treat chatbot-induced disorders and chatbot addictions.
- They have empathic attunement, nuance, and authenticity—qualities that heal through the therapeutic alliance.
- They can facilitate corrective emotional experiences in the treatment setting.
- Therapists have the ability to interpret complex emotions in their situational and cultural context.
- They are skilled at detecting emotional signals through facial expression, tone, and other vocal cues.
- They can better regulate treatment boundaries.
- They have the capacity to integrate data with lived relational experience.
- They have experience dealing with crises and novel situations.
- They can identify chatbot errors, correct them, and do damage control.
- Human therapists are needed as part of the teams that program and train chatbot therapists.
- They can provide quality control to detect and correct systemic problems in chatbot programming.
- They can monitor adverse consequences, help design corrective actions, and work with chatbot victims and their families.
- They are able to testify as expert witnesses in lawsuits seeking redress against reckless AI companies.
Weaknesses of Human Therapists
- Limited availability, geographic and economic maldistribution, long waitlists, high cost.
- They are susceptible to bias, fatigue, and inconsistency.
- They are not as consistent as chatbots at tracking and analyzing large amounts of data across time.
- They can be resistant to innovation, integration of therapies, and use of technological tools.
- They may neglect follow-up or lose track of details that chatbots can systematically track.
Making Chatbots Better For Psychiatry
Large language model (LLM) chatbots are popular because they are so fluent. But they are also unsafe because they have not been trained to deal with the specific needs and vulnerabilities of psychiatric patients.3 Dozens of chatbot startups are attempting to enter the mental health field but have great trouble competing with big tech companies. Bots designed specifically for mental health uses have historically been unpopular because they are clunky in conversation and lack the marketing clout of Big AI. What is needed for the future are fluent mental health specialty bots, refined by clinician input, that include the safety guardrails needed to protect psychiatric patients and to meet their specific needs. This could come via significant improvement of the general-purpose LLMs or via strengthening of the mental health specialty chatbots. Safety-focused models must incorporate crisis-response protocols, risk detection, and built-in escalation pathways that recognize warning signs such as suicidality, trauma reactivity, or cognitive impairment, as sketched below.4
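To make the idea of a built-in escalation pathway concrete, here is a minimal, purely hypothetical sketch in Python. The marker list, the `crisis_gate` function, and the `BotReply` type are illustrative inventions, not any vendor's actual API, and a production system would use a clinically validated risk classifier rather than keyword matching.

```python
# Hypothetical sketch of an escalation pathway: a pre-response gate that
# screens each user message for crisis signals before the conversational
# model is allowed to reply. All names here are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

# Illustrative warning-sign markers only; a real system would use a
# clinically validated risk classifier, not a keyword list.
CRISIS_MARKERS = ("kill myself", "end my life", "no reason to live")

@dataclass
class BotReply:
    text: str
    escalated: bool  # True if the message was routed to a human pathway

def crisis_gate(user_message: str,
                generate_reply: Callable[[str], str]) -> BotReply:
    """Route high-risk messages to crisis resources instead of the model."""
    lowered = user_message.lower()
    if any(marker in lowered for marker in CRISIS_MARKERS):
        # Escalation: hand off to crisis resources and human clinicians
        # rather than letting the model improvise a response.
        return BotReply(
            text=("I'm concerned about your safety. Please contact the 988 "
                  "Suicide & Crisis Lifeline (call or text 988) or reach "
                  "out to a clinician right away."),
            escalated=True,
        )
    # Low-risk path: the underlying model answers as usual.
    return BotReply(text=generate_reply(user_message), escalated=False)

if __name__ == "__main__":
    reply = crisis_gate("I feel like I have no reason to live",
                        lambda msg: "(model reply)")
    print(reply.escalated)  # True: routed to the human/crisis pathway
```

The point of the sketch is architectural rather than clinical: risk detection sits in front of the generative model, and escalation is a hard-coded pathway that the model cannot talk its way around.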
Preparing Psychiatrists To Work With Chatbots
Chatbot therapy has expanded at a pace rarely seen in any technology. What began 3 years ago as a tech experiment has become a defining feature of modern life. The worst possible strategy for human therapists in dealing with chatbots is to have no strategy. Arrogant complacency will allow chatbots to dominate psychotherapy. We must meet this moment courageously, embracing the new technology when it is helpful and appropriate, without sacrificing the human side of therapy. We must build a mental health landscape that is safer, more accessible, and more deeply connected than anything we have previously achieved. The future of mental health care will not belong to machines or to humans alone, but to their ability to work together. The challenge is not whether chatbots will be used, but how wisely and collaboratively we choose to use them.3
The Hybrid Model
Mental health professionals have played almost no role in the development of LLM chatbots, and chatbots have played almost no role in how mental health clinicians work. A hybrid model combining human and machine talents in a therapeutic partnership will be much safer and much more effective than either component working alone. Chatbots excel in accessibility, convenience, cost, knowledge of the literature, and pattern recognition. They will become a first option for most people with mild psychiatric symptoms or emotional distress from everyday life. For example, in geriatric psychiatry, where loneliness, isolation, and memory impairment quietly erode well-being, chatbots can provide structure, companionship, medication reminders, and conversation in the quiet hours, significantly improving quality of life.4 They are not now, and may never be, suitable for patients with more severe psychiatric disorders.
This creates a likely division of labor and strongly suggests that human clinicians will focus more of their training and practice on the needs of patients with more severe illness.
Human clinicians should also feel comfortable working as cotherapists with chatbots. Many (perhaps even most) patients will take advantage of the 24/7 availability of chatbots, and human therapists would do well to incorporate this into their treatment model rather than trying to compete with it.
AI greatly surpasses human capacity to organize vast, complex data sets and to identify predictive symptom patterns that guide clinical reasoning. It can detect subtle correlations across clinical histories and symptom trajectories that human intuition may overlook.3 It knows everything in the literature and can help apply it to each patient's diagnosis, treatment planning, prognosis, and psychoeducation. Psychiatric evaluation will likely become a combination of chatbot analysis of extensive self-report and laboratory data with the clinician's intuition that comes from long clinical and life experience.
Psychiatric research will probably change most in a hybrid world. Artificial intelligence is likely to take the lead in new discoveries because it is so much better than human intelligence at seeing underlying patterns in big data sets. In a hybrid model, AI becomes the analytical assistant (tireless, precise, and attentive to minute detail) while humans serve as ethical, emotional, and interpersonal anchors. Together, they counterbalance one another's limitations: AI provides standardization, consistency, and analytic skill; humans provide connectedness, common sense, and lived experience. Precision alone cannot substitute for empathy or moral discernment, capacities rooted in human presence rather than computation.
Ultimately, the future of care is not a competition between human and machine but a collaboration that honors the strengths of both. A model that blends technological scale with the human soul can offer something neither can provide alone: care that is highly accessible, safe, and rooted in real connection. In that partnership lies the possibility of a therapeutic future defined not by replacement, but by integration.
Concluding Thoughts
The question is not whether chatbots will reshape mental health care; they already have, in just 3 short years. The only question now is how successful we humans will be in shaping what is an inevitable partnership. The future of mental health care will not be a contest between humans and machines, but a negotiation that discovers how each can fill the other's gaps: how precision and data best meet intuition and compassion. A hybrid model that integrates human experience with AI scalability offers the most promising path forward, one in which no one languishes on a waitlist and no person's suffering is flattened into a data set stripped of meaning.5
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.
Ms Dees is a senior psychology student at the University of Texas.
References
1. Chatterji A, Cunningham T, Deming D, et al. How people use ChatGPT. 2025.
2. Zao-Sanders M. How people are really using Gen AI in 2025. April 9, 2025. Accessed November 20, 2025.
3. Frances A. Warning: AI chatbots will soon dominate psychotherapy. Br J Psychiatry. 2025;1-5.
4. Frances A, Ramos L. How to integrate chatbots into geriatric psychiatry. Am J Geriatr Psychiatry. 2025;33(12):1275-1278.
5. Frances A, Dees D. Survival guide for human therapists in a chatbot world. November 4, 2025. Accessed November 20, 2025.