An interdisciplinary gathering at the 2025 virtual conference of the Society for Digital Mental Health heard presentations and panel discussions on digital therapeutics, along with reports of studies of the technology and its outcomes by students and young researchers.
The fourth annual virtual conference of the Society for Digital Mental Health (SDMH), June 9-10, 2025, featured presentations and panel discussions on innovative online therapeutics, and on policy affecting their approval, access, and application.
Investigations by students and young researchers were also presented; the organizing committee recognized the following 5 as the best of the conference.
Assessing Acceptance in Minorities
While digital therapeutics have the potential to increase access to mental health care, 3 of the featured studies examined how both access to and acceptance of the technology can be problematic for minority groups.
Sigdha Kamarsu, Anxiety Research, Treatment, and Technology (ARTT) Lab, Georgia State University, Atlanta, GA, presented their investigation1 into whether sexual and gender minority groups encounter additional or different barriers than their majority group peers. The investigators applied standardized self-report measures of attitudes toward online psychological intervention and of perceived barriers to psychological treatment in the context of internet-based cognitive behavioral therapy (iCBT). The cohort of 632 participants was grouped by gender into women, men, and gender-queer categories, and by sexual orientation into heterosexual, lesbian/gay/bisexual (LGB), and other sexual minority identities (OSMI).
Kamarsu reported that concerns about the lack of personal contact when using iCBT were greater for gender-queer individuals compared with men and women, and greater for OSMI individuals compared with heterosexual individuals. OSMI individuals also indicated greater confidence in the effectiveness of iCBT than LGB individuals.
"Our results corroborate our hypothesis that sexual and gender minority individuals experience greater negative attitudes, greater attitudinal barriers, and greater practical barriers than their majority group peers," Kamarsu indicated.
Taylor Myers-Brower, Program for Anxiety, Cognition and Treatment (PACT) Lab, University of Virginia, Charlottesville, described their efforts2 to culturally tailor MindTrails, a cognitive bias modification program targeting anxious thinking patterns, for Latinx Spanish-speaking individuals.
Eleven treatment-seeking participants provided feedback on a Spanish version of the app over the course of 3 interviews. Topics addressed in the interviews ranged from technical issues, sources of confusion, and experiences of discrimination to factors affecting engagement and suggestions for personalizing content. The investigators used the feedback to guide iterative changes to the app.
"Early findings suggest positive impressions of a culturally-tailored MindTrails app and highlight potential enhancements to better address the needs of the target audience," Myers-Brower reported.
Olive Chung-Hui Huang, Department of Psychology, University of Windsor, Ontario, Canada, reported on a single-arm clinical trial3 conducted with the Centre for Addiction and Mental Health (CAMH), Toronto, Ontario, that investigated how adoption and acceptability of mindfulness-based apps are affected by social determinants of health such as race, gender, and education.
A mindfulness-based app was provided to a cohort of 183 treatment-seeking adults (65% women; 38.5% sexual minority; 36.1% Black, Indigenous, or People of Color [BIPOC]; and 35.5% without college education). In the 1-month trial and 3-month follow-up, adoption was indicated by completion of any in-app exercise; acceptability was assessed by questionnaire; engagement was reflected in the number of app exercises completed and the number of minutes spent in meditation; and attrition was determined from the number of assessments left uncompleted.
Huang reported that women, sexual minorities, and individuals without college education were more likely to engage with the mindfulness app, while BIPOC persons tended toward less engagement. She anticipates that their future research will "evaluate how mindfulness apps meet the needs of BIPOC or non-binary individuals, and the capacity for apps to be more gender-, trauma-, or culturally-informed."
Adopting AI and ML
Tyler Schmitt, University of Pittsburgh, reported progress on a project4 utilizing a youth research collaborative to validate machine learning (ML) classifiers of social media content. Their goals included identifying social media interactions that expose youth to loneliness and social isolation and developing a mitigating algorithm to improve connectedness and social support.
"Loneliness and perceived isolation are linked to disproportionately higher rates of mental health problems in marginalized youth," Schmitt related. "These youth may use social media to seek support and connectedness that they cannot find offline."
The youth collaborative of 14 participants was constituted in partnership with 3 Pittsburgh-based community organizations serving Black, Latinx, and LGBTQ+ youth. The youth held monthly meetings, alternating between in-person and online, that focused on the 4 key concepts of loneliness, social isolation, connectedness, and social support.
"Our youth research partners married research-informed theory and their personal experiences to create working definitions of mental health risk and protective factors that they will use to validate ML classifiers of social media content," Schmitt explained.
The project is ongoing, with 11 meetings held as of the presentation, 9 of which directly focused on the stated goals. To date, the project has established consensus on which parts of the working definitions were accurate and has identified gaps in those definitions to be further analyzed in building the ML parameters.
Steven Siddals, King's College London, London, UK, described a study5 of the experiences of 19 participants who used generative AI chatbots, such as ChatGPT, for mental health issues ranging from symptoms of anxiety and depression to distress over romantic relationships, dealing with loss, and resolving stress and conflict. Siddals and colleagues compared the experiences of participants using rule-based chatbots with those using generative AI chatbots, and compared experiences with generative AI chatbots with those with therapist-guided treatment.
Utilizing semi-structured interviews and reflexive thematic analysis, the investigators derived 4 themes from the reported experiences: emotional sanctuary (validating, nonjudgmental); insightful guidance (valued advice, new perspectives); joy of connection (human-like companionship beyond traditional apps); and "AI Therapist" (augmented therapy, creative new uses). Within these 4 themes, the investigators noted caveats: for emotional sanctuary, appropriate guardrails could be absent; for insightful guidance, there could be insufficient challenge; for joy of connection, it could have been more accessible; and the "AI Therapist" did not lead the process.
In comparing the reported experiences with rule-based vs generative AI chatbots, Siddals related that the former were perceived as "more predictable" and "more explainable" and to have a lower risk of bias, while the latter appeared to offer "better engagement," "deeper understanding," "more flexibility," and "better quality advice."
Comparing the experiences with generative AI chatbots to those with clinician-guided treatment, the former were valued by users for around-the-clock availability, lower cost, feeling "less judgmental," and offering "creative new uses." In comparison with the bot, the human clinician could lead the process and provide real human connection, and was appreciated for "deeper empathy" and a "sense of commitment."
In an earlier published report6 on this study, the investigators addressed the safety concerns raised when users in crisis access generative AI chatbots. They relate that several of the study participants experienced meaningful crisis support from generative AI, and they advocate for its use in these circumstances as long as guardrails remain in the algorithm.
While acknowledging that early generative AI chatbots had tragically supported users in dying by suicide, Siddals and colleagues argue against "underestimating the capabilities of generative AI to respond to crisis, and...limiting those capabilities at the times that matter most."
Dr Bender reports on medical innovations and advances in practice and edits presentations for news and professional education publications. He previously taught and mentored pharmacy and medical students, and he provided and managed pharmacy care and drug information services.
References
1. Quirk HD, Anderson PL. Do sexual and gender minority (SGM) groups endorse different barriers and attitudes toward mental health treatment than majority group peers? Presented at the annual conference of the Society for Digital Mental Health; June 9-10, 2025.
2. Taylor L, Myers-Brower E, Englander-Fuentes E, et al. User perceptions of a culturally tailored digital cognitive bias modification for interpretation program for anxious Hispanic individuals. Presented at the annual conference of the Society for Digital Mental Health; June 9-10, 2025.
3. Huang O, Patel A, Daros A, et al. Social determinants of health are associated with differences in acceptability and engagement with a mindfulness app: moderation by gender, sexual orientation, race/ethnicity, educational attainment. Presented at the annual conference of the Society for Digital Mental Health; June 9-10, 2025.
4. Schmitt TN, Win E, Sreenivasan L, et al. Establishing a youth research collaborative to validate machine learning classifiers of social media content: leveraging human centered design and youth research partnership. Presented at the annual conference of the Society for Digital Mental Health; June 9-10, 2025.
5. Siddals S, Torous J, Coxon A. ChatGPT for mental health? Start by listening to real-life stories. Presented at the annual conference of the Society for Digital Mental Health; June 9-10, 2025.
6. Siddals S, Torous J, Coxon A. "It happened to be the perfect thing": experiences of generative AI chatbots for mental health. npj Ment Health Res. 2024;3(1):48.