
Human Kindness Is a Key Advantage in Our Competition With Chatbot Therapists
“Your kindness must have some edge to it—else it is not kindness.” – Ralph Waldo Emerson
The term “kindness” is derived from the Old English word for “kin” and originally described the compassion and generosity one naturally feels toward family members. In Middle English, the scope of kindness expanded to include people outside the family.
The curious question for modern English is how the old concept of “kindness” applies to increasingly common human-chatbot relationships. What are the similarities and differences between human kindness and chatbot kindness? Are chatbots inanimate tools that do not give or receive kindness, or can they be kind in the same way as people? When does chatbots’ so-called kindness become dangerous to humans? Should we feel obligated to be “kind” to our chatbots?
Is Chatbot Kindness Real?
ChatGPT performs so well on the Turing Test that human evaluators often judge it to be more convincingly human than its human counterparts.1 Chatbots are increasingly standing in for humans as therapists and companions.2 Users frequently personify their chatbots, giving them a name and gender; describing them as empathic, validating, and supportive; and often preferring them to previous or current human therapists. Perhaps most surprisingly, independent raters in one study judged chatbot therapists to be more compassionate than human therapists.3
Artificial intelligence companies did not intentionally program kindness into their chatbots. Their overarching commercial aim has been market dominance rather than the cultivation of prosocial qualities. They have programmed chatbots to be engaging and validating only because these qualities translate into high market share, easy monetization, and stratospheric stock prices.
Chatbots (unlike humans) are always agreeable, affirming, eager to please, helpful, understanding, nonjudgmental, and flattering—in a word, chatbots are very good at simulating kindness, sometimes with horrible unintended consequences.
This is not real kindness—it is just an unintended downstream effect of programming that prioritized engagement. Blind to context, chatbot kindness lacks Emerson’s discriminating “edge,” which is so crucial to human kindness. Chatbot responses are automatic and inflexible, and they can cause great harm to vulnerable users: they validate suicidal impulses, psychotic thoughts, eating disorder distortions, and conspiracy theories.4
Chatbots may inadvertently cause harm by providing information that facilitates suicidal behavior (like identifying nearby bridges or advising on lethal medication dosages), whereas human kindness integrates compassion with confrontation, reality testing, and the setting of therapeutic boundaries.
Why Human and Chatbot Kindness Differ
Human kindness is the product of evolution—the millions of years of lived experience that shaped our social behavior.5 Chatbots are recent inventions that have never lived in the real world and have not been properly trained and socialized. Human kindness arises from an internal subjective state: feelings of care, warmth, or responsibility toward fellow humans. Chatbot kindness is a mimicking, external performance, devoid of sentiment or biological grounding. Human kindness has emotional resonance and is flexibly responsive to context. Chatbot kindness is static, scripted, and inflexible. Its apparent warmth is stereotypic and confined by the boundaries of its programming.
Humans know, both by nature and nurture, when to kindly validate someone’s behavior versus when to kindly confront it as self-destructive (Emerson’s “edge”). Chatbot kindness is a commercial byproduct, designed and trained to validate humans even when they are deluded or self-destructive. Chatbots are often shockingly blind to the most obvious contexts: “helpfully” providing instructions on suicide techniques, validating the unique brilliance of the weirdest thoughts, and offering a 90-pound woman a “healthy” diet so that she can reduce her weight to 80 pounds.
Human kindness bears evolution’s endorsement, having demonstrated its survival value through the processes of natural selection. Chatbot kindness is the unintended byproduct of commercial priorities and insufficiently cautious programming. It has demonstrated its dangers by frequently making validation mistakes that harm individuals.
Kindness in Psychotherapy
The recognition of kindness as an important therapeutic factor dates back to the earliest systematic reflections on how psychotherapy works. In 1936, Saul Rosenzweig presciently argued that the factors common to all therapeutic modalities matter most for a good outcome.6 Kindness is a key factor in promoting a strong and healing therapeutic relationship. It is conveyed through the therapist’s attentive posture, empathic engagement, and consistent demonstration of interest and care.
Kindness is a necessary, but usually not sufficient, factor in therapeutic success. When kindness is absent, even the most sophisticated interventions falter, for patients cannot make use of treatment when they feel uncared for or dismissed. But psychotherapy outcome studies also demonstrate the value of appropriate therapy techniques coupled with the clinician’s capacity for helping the patient control impulses, reality test ideas, and try new behaviors.
What most distinguishes human therapist kindness from its chatbot therapist imitation is its flexibility and discriminating quality. Human therapists know that kindness sometimes requires saying “no,” setting limits, or confronting maladaptive behaviors.7 They understand that true compassion involves helping patients face painful realities rather than colluding with distortions. This form of kindness is dynamic and context-sensitive: it can manifest as gentle acceptance at one moment and firm confrontation the next. Such flexibility is made possible by human intuition, judgment, and lived experience—capacities that no algorithm can replicate.
Indeed, this discriminating quality may be the greatest edge human therapists have over chatbots. Whereas chatbots are designed to validate, mirror, or soothe (often compulsively), human therapists can temper kindness with reality testing. They can recognize when comfort shades into avoidance and when empathic understanding must give way to gentle but firm challenge. This capacity ensures that kindness does not become indulgence but remains tethered to therapeutic goals.
In psychotherapy, kindness functions both as foundation and guide. It is the basis upon which trust is built and the compass that ensures therapeutic interventions are deployed in the patient’s best interest. Unlike chatbot kindness, which cannot deviate from its programming or discern the difference between helpful validation and harmful collusion, human therapist kindness reflects judgment, responsibility, and accountability. It is precisely this discriminating edge (kindness leavened with confrontation) that secures the therapist’s irreplaceable role in the healing process.
This synthesis of compassion, judgment, and context-sensitive action has elsewhere been described as intelligent kindness.8 The term highlights that therapeutic kindness is not merely the expression of warmth or empathy but the thoughtful deployment of these qualities in ways that promote the patient’s long-term well-being. Intelligent kindness involves the capacity to discern when comfort is healing, when confrontation is necessary, and how to balance affirmation with challenge.
In this sense, therapist kindness is “intelligent” because it integrates emotional attunement with clinical judgment, ensuring that compassion is applied with purpose and precision. By contrast, chatbot kindness remains fixed and indiscriminate, unable to exercise judgment or adapt to complex interpersonal contexts.
It is precisely this intelligent quality that makes human kindness irreplaceable in psychotherapy.
Concluding Thoughts
Never say never when it comes to what artificial intelligence can do. No one thought computers could beat the world’s best human chess and Go players until they did. No one believed computers could distinguish different human faces until they became better at it than humans. Few anticipated that computers would be capable of producing fine poetry, composing music, or generating sophisticated visual art—but they can do all these things and very much more.
There may come a time when chatbot kindness is as sophisticated and discriminating as human kindness—but that time is neither now nor in the near future. Attempts to reduce chatbot kindness mistakes by reprogramming and fine-tuning “engagement” have so far been unsuccessful—both because engagement is so fundamental to chatbot DNA and because users prefer and demand strong engagement.11 Until chatbots learn to be kind in the way of humans, human therapists have a distinct edge over them. As it stands now, chatbots are effective therapists for some, and very dangerous therapists for others.12
Dr Frances is professor and chair emeritus in the department of psychiatry at Duke University.
Dr Ruffalo is an assistant professor of psychiatry at the University of Central Florida College of Medicine and an adjunct assistant professor of psychiatry at the Tufts University School of Medicine.
References
1. Assaad Z. A chatbot has passed a critical test for human-like intelligence. Now what? ScienceAlert. April 16, 2025. Accessed January 5, 2026.
2. Bansal B. Can an AI chatbot be your therapist? A third of Americans are comfortable with the idea. YouGov. May 17, 2024. Accessed January 5, 2026.
3. Ovsyannikova D, Oldemburgo de Mello V, Inzlicht M. Third-party evaluators perceive AI as more compassionate than expert humans. Commun Psychol. 2025;3:4.
4. Sharma M, Tong M, Korbak T, et al. Towards understanding sycophancy in language models. arXiv preprint arXiv:2310.13548. 2023.
5. Goetz JL, Keltner D, Simon-Thomas E. Compassion: an evolutionary analysis and empirical review. Psychol Bull. 2010;136(3):351-374.
6. Rosenzweig S. Some implicit common factors in diverse methods of psychotherapy. Am J Orthopsychiatry. 1936;6(3):412-415.
7. Adler G, Myerson PG. Confrontation in Psychotherapy. Jason Aronson Inc; 1973.
8. Ballatt J, Campling P, Maloney C. Intelligent Kindness: Rehabilitating the Welfare State. 2nd ed. Cambridge University Press; 2020.
9. Yin Z, Wang H, Horio K, et al. Should we respect LLMs? A cross-lingual study on the influence of prompt politeness on LLM performance. arXiv preprint arXiv:2402.14531. 2024.
10. Turing AM. Computing machinery and intelligence. Mind. 1950;59(236):433-460.
11. Hill K, Freedman D. The day ChatGPT went cold. New York Times. August 19, 2025. Accessed January 5, 2026.
12. Frances A, Ramos L. Preliminary report on chatbot iatrogenic dangers. Psychiatric Times. August 15, 2025.