Chatbots: What Are They and Why Care?

Psychiatric Times, Vol 36, Issue 6

While conversational agent technology is growing rapidly, these chatbots might not yet be fully equipped to help patients with clinical needs.

Apple’s Siri, Amazon’s Alexa, and Google’s Assistant are taking over your smartphones and smart homes. You’ve no doubt heard, seen, or used one, and likely, so have your patients. They’re great for checking the weather, asking niche trivia questions, trying out new recipes, shopping online, and . . . virtual therapy? While conversational agent technology is growing rapidly, these chatbots might not yet be fully equipped to address patients’ clinical mental health needs.

Chatbots for patients with mental health conditions

Among the studies conducted so far, patient education and treatment adherence were the most frequently studied benefits of talking to a chatbot. This finding makes sense, as chatbots can easily pull up mental health-related information and offer alarms and reminders. But can they do even more, such as delivering psychotherapy or making a diagnosis?

In July 2018, we performed a systematic search of the literature in the PubMed, EmBase, PsycINFO, Cochrane, Web of Science, and IEEE Xplore databases and identified 12 studies that used a chatbot in a mental health setting with individuals at risk for or diagnosed with depression, anxiety, schizophrenia, bipolar disorder, or a substance use disorder. The review was published in the Canadian Journal of Psychiatry in March 2019.1

Although interest in deploying chatbots in clinical contexts is expanding, most studies do not report outcome or efficacy metrics in a standardized way, instead employing custom scales or unique outcome measures, which makes it challenging to compare chatbot functionality. Some studies conducted just a few years ago cannot be reproduced today because the rate of technological advancement has outpaced that of research.

Basic conversational inability

Empathy and relatability are crucial yet seldom considered cornerstones of patient interactions with chatbots, and matching the way humans naturally communicate is paramount in the quest for humanlike chatbots. Most people use conversational fillers like “umm” and “ah” when talking naturally, and the simple omission of these filler words and pauses negatively impacts the perception of an already robotic-sounding chatbot.2 Along with varying common phrases, these small touches might be what chatbots need to build rapport with patients and, more generally, with any user. Chatbots also do not truly understand tone of voice or the emotional subtext of language. Because they are programmed to handle specific conversational cases, they are typically poor at answering open-ended “why” or “how” questions, which can lead down an irrelevant tangent instead of an actual therapeutic conversation.

Unexpected duality of patient interaction

We found evidence that some patients prefer the anonymity of talking to a chatbot instead of to a therapist. When interacting with a chatbot, these patients shared sensitive and traumatic experiences, private information they may not have revealed during an in-person clinical encounter.3 In a different study, however, patients using a chatbot became more open about their emotional states when they learned that a human being was reading the transcript; for these patients, the human element was essential in fostering trust and enabling them to open up about their internal life and emotional challenges.4 Today, it remains difficult to predict which patients will engage more effectively with a chatbot and which will prefer the traditional model of face-to-face clinical care.

Uncanny valley effect of empathy

Conveying the right amount of empathy is surprisingly difficult; too much and it may come across as fake and disingenuous, but too little and the chatbot may fail to build rapport with the patient. It is hard to take a robot seriously when it tries to tell you, “I too have also struggled with depression.”5 Of the patients who do open up to a chatbot, some might come to view chatbots as emotional beings. This view could lead to an unhealthy attachment that impedes the patient’s ability to form real-life connections with other humans.6 Because there are so few data on the therapeutic alliance between patients and clinical tools in the digital age, we have yet to determine best practices for the use of these digital tools, especially in vulnerable populations.

Possibility of diagnosis and intervention?

Our review also suggests that chatbots are not yet able to understand complex medical inquiries or make diagnoses. When asked about, or directly informed of, suicidal ideation, the built-in smartphone chatbots we reviewed either did not provide an actionable intervention or offered inappropriate responses such as “Maybe the weather is affecting you.”7 Some chatbots can detect words like “suicide” and may suggest that the user contact a crisis hotline such as the National Suicide Prevention Lifeline. While there is potential for chatbots to help patients in distress, there is currently no strong evidence base showing that chatbots can effectively intervene when patients are in crisis.
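As a rough illustration of the kind of keyword-based safety check described above, the short Python sketch below flags crisis language in a user’s message and surfaces a hotline referral. It is a minimal, hypothetical example (the CRISIS_KEYWORDS list and check_for_crisis function are illustrative names, not any vendor’s implementation), and simple keyword matching of this sort is precisely the limited capability the review describes, not a substitute for clinical crisis intervention.

# Minimal, hypothetical sketch of keyword-based crisis detection, as described
# above. Not any vendor's actual implementation; real systems require far more
# robust detection and clinical oversight.
from typing import Optional

CRISIS_KEYWORDS = {"suicide", "suicidal", "kill myself", "end my life"}

HOTLINE_REFERRAL = (
    "If you are thinking about suicide, please contact the National Suicide "
    "Prevention Lifeline or your local emergency services."
)

def check_for_crisis(message: str) -> Optional[str]:
    """Return a hotline referral if the message contains crisis language."""
    text = message.lower()
    if any(keyword in text for keyword in CRISIS_KEYWORDS):
        return HOTLINE_REFERRAL
    return None  # no crisis language detected; continue the normal conversation

# Example: a message mentioning suicide triggers the referral text.
print(check_for_crisis("I've been thinking about suicide lately."))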

Summary and outlook

Flaws aside, there is a growing body of research on chatbots in various roles within the mental health space. The field of psychiatry must quickly determine standards of reporting, evaluation, transparency, and replication before we can make any firm clinical decisions about the use of chatbots. The field should also champion new legislation to ensure confidentiality, privacy, security, liability, competency, and even the licensure of overseeing clinicians. Even if these chatbots are not the virtual therapists of tomorrow, you may soon see them in clinical decision support, data processing or entry, and even in managing the clinic’s schedule.

Disclosures:

Dr Torous is Director and Ms Vaidyam is Technology Lead, Division of Digital Psychiatry, Beth Israel Deaconess Medical Center, Boston, MA. Dr Torous is also Digital Section Editor of Psychiatric Times. The authors report no conflicts of interest concerning the subject matter of this article.

References:

1. Vaidyam AN, Wisniewski H, Halamka JD, et al. Chatbots and conversational agents in mental health: a review of the psychiatric landscape. Can J Psychiatry. March 2019; Epub ahead of print.

2. Gardiner PM, McCue KD, Negash LM, et al. Engaging women with an embodied conversational agent to deliver mindfulness and lifestyle recommendations: a feasibility randomized control trial. Patient Educ Couns. 2017;100:1720-1729.

3. Tielman ML, Neerincx MA, Bidarra R, et al. A therapy for post-traumatic stress disorder using a virtual agent and virtual storytelling to reconstruct traumatic memories. J Med Syst. 2017;41:125.

4. Bickmore T, Mitchell S, Jack B, et al. Response to a relational agent by hospital patients with depressive symptoms. Interact Comput. 2010;22:289-298.

5. Bickmore TW, Puskar K, Schlenk EA, et al. Maintaining reality: relational agents for antipsychotic medication adherence. Interact Comput. 2010;22:276-288.

6. Tielman ML, Neerincx MA, Van MM, et al. How should a virtual agent present psychoeducation? Influence of verbal and textual presentation on adherence. Technol Health Care. 2017;25:1081-1096.

7. Miner AS, Milstein A, Schueller S, et al. Smartphone-based conversational agents and responses to questions about mental health, interpersonal violence, and physical health. JAMA Intern Med. 2016;176:619-625.
