
The Empathy Crisis: How Social Media Algorithms Drive Emotional Numbing


Key Takeaways

How social media algorithms push violent clips, fueling desensitization, irritability, and compulsive scrolling—plus practical screening and treatment tips for clinicians.

Most of our patients assume they control their online experience, but social media platforms now use algorithms that quietly shape what they see. Increasingly, psychiatrists encounter patients whose affect, cognition, and behavior appear shaped by what these algorithmic systems feed them.

Consider one of my patients, a middle-aged Army veteran with posttraumatic stress disorder (PTSD), major depressive disorder, and polysubstance use disorder who works as an armed security guard. He initially used social media to enjoy music videos and explore guitar lessons, but over time his feed became dominated by progressively violent clips, most commonly police body-cam footage, security altercations, and street fights. He said he also occasionally encountered combat footage, but because it triggered his PTSD symptoms, he actively avoided it; the other violent content, by contrast, he could not stop watching. Across visits, his emotional reactions flattened, and he increasingly described the people in these videos and in news stories in dehumanizing, harsh terms. He also reported growing irritability, especially while driving and when dealing with people at work.

This vignette illustrates a pattern increasingly relevant to clinical practice: algorithm-driven exposure to violent or high-arousal content reshaping emotional reactivity, moral judgment, and behavioral regulation.

How Algorithms Work

Social media platforms today use recommendation systems designed to reinforce whatever keeps the user on the platform. These systems maximize engagement and attention rather than user wellbeing. Decades of media psychology research show that threatening, shocking, and morally provocative material captures attention more reliably than neutral content.1 Exploiting this principle, platforms create a reinforcement loop that gradually shifts users' feeds toward more intense or provocative content.

For many patients, this shift is not intentional. The algorithms infer "interest" from micro-engagement, such as slowing down when scrolling, replaying videos, or hovering. Over time, algorithms can move users from benign content to high-arousal material and eventually to graphic or extreme videos. A one-time disturbing video may upset someone, but it is the repeated, algorithmic recommendation of similar clips that drives escalation and gradual desensitization. Importantly, this drift is not evenly distributed. The Facebook Papers revealed that the company's own research identified vulnerable communities as disproportionately exposed to "borderline" content that caused severe harm to users.2 Investigations into TikTok's For You Page found that engagement signals, including likes and watch time, shape content recommendations, contributing to the amplification of problematic material.3 Independent audits across multiple platforms show the same pattern: research examining YouTube's recommendation system found pathways that can lead users from mainstream content toward more extreme material, and simulated accounts mimicking vulnerable teen male profiles showed a 4-fold increase in misogynistic content (from 13% to 56%) within 5 days of account creation.4,5 In other words, the system detects vulnerability and amplifies whatever sustains engagement because that is what benefits the platform.
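To make this feedback loop concrete, the following minimal sketch, written in Python with hypothetical category names, weights, and an invented update rule (it does not reproduce any platform's actual code), shows how a recommender that treats micro-engagement as "interest" can drift an initially neutral feed toward whatever a user lingers on:

```python
import random

# A deliberately simplified, hypothetical sketch of an engagement-driven
# recommender. Categories, weights, and the update rule are invented for
# illustration and do not describe any specific platform's system.

CATEGORIES = ["music", "tutorials", "news", "altercations", "graphic_violence"]

def update_interest(interest, category, watch_fraction, replayed):
    """Nudge inferred 'interest' toward whatever held attention the longest."""
    signal = watch_fraction + (0.5 if replayed else 0.0)   # micro-engagement proxy
    interest[category] += 0.1 * signal                      # reinforce that category
    total = sum(interest.values())
    return {c: w / total for c, w in interest.items()}      # renormalize

def next_clip(interest):
    """Sample the next clip in proportion to inferred interest, not wellbeing."""
    return random.choices(CATEGORIES, weights=[interest[c] for c in CATEGORIES])[0]

# Start from a neutral profile and simulate a user who merely lingers longer
# on violent clips without ever searching for them.
interest = {c: 1 / len(CATEGORIES) for c in CATEGORIES}
for _ in range(500):
    clip = next_clip(interest)
    lingered = clip in ("altercations", "graphic_violence")
    interest = update_interest(
        interest, clip,
        watch_fraction=0.9 if lingered else 0.3,
        replayed=lingered,
    )

print(interest)  # violent categories come to dominate the simulated feed
```

Even in this toy model, the user never requests violent content; slightly longer watch times are enough for the loop to reshape the feed, which is the drift the vignette above describes.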

Among youth, these patterns are pronounced. In a study of young Instagram users, Arendt and colleagues found that 43% had encountered self-harm content despite most not actively searching for it, and more than half of those exposed reported subsequent emotional distress.6 The Center for Countering Digital Hate reported that TikTok recommended suicide-related videos within minutes, even to accounts posing as vulnerable teens.7 The US Surgeon General's 2023 Advisory on Social Media and Youth Mental Health identifies algorithmic content delivery as a distinct pathway through which youth are exposed to harmful material they did not actively seek.8 A 2024 Youth Endowment Fund survey of over 10,000 teenagers found that 70% encountered real-world violence on social media, with 25% of those exposed reporting the content was algorithmically promoted rather than sought. Among teens who perpetrated violence, 64% reported that social media played a role in their behavior.9

Neurobiology of Desensitization and Compulsive Exposure

A growing body of research explores why patients exposed to violent or emotionally charged online content develop patterns of emotional numbing, irritability, and difficulty disengaging. Longitudinal research by Huesmann and colleagues shows that repeated exposure to media violence in childhood predicts aggressive behavior in adulthood, a pattern consistent with desensitization processes that reduce emotional and behavioral inhibitions.10 Neuroimaging findings align with this: with repeated exposure, the amygdala becomes less reactive and emotion-processing regions of the prefrontal cortex show reduced engagement.11 Patients often describe this as "getting used to it" or "not feeling anything."

At the same time, these short, intense videos engage dopamine-driven reward circuits.12 The fast-paced, unpredictable nature of these clips creates reinforcement patterns similar to behavioral addictions, such as slot machine reward cycles.13 Many patients report being pulled toward material they do not even enjoy, reflecting reward-system activation instead of genuine interest. This aligns with the "wanting without liking" pattern described in behavioral addictions, where craving persists even when pleasure does not.14 Clinically, this manifests as time distortion, compulsive checking, and an inability to stop scrolling.

Additionally, exposure to real-world violence online can activate stress-regulatory brain systems including the amygdala and hypothalamic-pituitary-adrenal axis. While initial exposure to traumatic content produces elevated cortisol responses, repeated exposure is associated with blunted physiological stress responses, reflecting desensitization at the neuroendocrine level.15

Finally, repeated exposure to violent or high-arousal content has been shown to decrease functional connectivity within the default mode network (DMN), a system involved in internally directed thought.16 At the neurobiological level, this reflects reduced activation of the brain's reflective circuitry during rest. Adolescents may be more vulnerable because the systems involved in impulse control, reflection, and empathy are still developing, with genetic evidence showing addiction-related genes are overexpressed during this developmental period.12 This creates patterns that are harder to reverse later in life.

Moral Psychology and Shifts in Empathy and Judgment

Repeated exposure to violent content not only blunts emotional reactivity but also shifts how patients interpret others’ behavior. Classic social learning research by Bandura demonstrated that children imitate observed aggressive behavior, and subsequent work has shown that such exposure shapes attitudes, expectations, and behavioral scripts for evaluating conflict.17,18 More recent work in media psychology shows that media violence predicts later aggression, but not the reverse, and decreases empathy.19 Studies on moral contagion show that moral-emotional language spreads more quickly online, meaning patients are repeatedly exposed to collective reactions that model hostility rather than reflection.20 Algorithms often pair violent clips with comment sections filled with ridicule, blame, and certainty; these comments model how viewers are expected to interpret the situation and create a social reinforcement loop that rewards harsh conclusions.

Clinically, this pattern becomes more concerning when combined with DMN suppression. The DMN supports mentalizing, which is the capacity to understand others’ mental states and emotional experiences and is essential for empathic responding and moral evaluation. During rapid, continuous scrolling, this neural circuitry remains inactive. As a result, patients may intellectually know that what they are watching is wrong, but they no longer feel that it is wrong. The emotional signal normally tied to moral judgment becomes muted. Over time, this narrows how patients understand the intentions and situations of others and pushes people toward in-group and out-group thinking. This may appear as harsher interpretations of others' motives, reduced empathy, increased irritability, or a growing sense that victims "deserve" what happens to them. These behavioral scripts become increasingly resistant to change as they are rehearsed and enacted, creating a self-reinforcing cycle that can persist into adulthood.19

Clinical Assessment

Patients rarely mention their online exposure unless asked, so screening must be direct. This should include questions about emotional numbing, irritability, and difficulty stopping once they begin scrolling.

Behavioral clues also warrant assessment. Late-night viewing, compulsive refreshing, and increased vigilance or confrontational thinking often emerge after repeated exposure to violent or high-arousal content. Listen for shifts toward rigid, punitive interpretations and reduced empathic language.

It is also important to consider how this interacts with existing diagnoses. Patients with PTSD may avoid trauma-specific material but still compulsively watch adjacent violence. Depressed patients may show increased irritability and blunted affect. Patients with substance use disorders may be more vulnerable to compulsive late-night engagement. Clinicians should clarify whether media exposure is a symptom of distress or a cause of symptom escalation, because each requires a different intervention. Without screening, clinicians may misattribute changes to personality pathology or treatment-resistant conditions rather than modifiable algorithmic overexposure.

Treatment and Management

Treatment does not require complete social media avoidance. The goal is to help patients regain a sense of control and reduce passive, automatic exposure to high-arousal material. Psychoeducation is often the most effective starting point. Many patients assume the content in their feed reflects their own preferences, so explaining how algorithms operate can increase insight and decrease shame.

Because the compulsive scrolling pattern parallels other behavioral addictions, some patients benefit from applying addiction strategies, such as identifying triggers, reducing cues, and building intentional replacement behaviors. Specific behavioral interventions include limiting use when tired or emotionally dysregulated, creating defined viewing windows rather than scrolling intermittently throughout the day, and intentionally engaging with neutral or prosocial content so that the recommendation system recalibrates. Motivational interviewing can be particularly helpful when patients are ambivalent about implementing these changes.

Therapeutic work should also address shifts in empathy, judgment, and anger that arise from repeated exposure to violent or dehumanizing material. Complementary interventions such as improving sleep routines and stabilizing mood can reduce vulnerability to compulsive late-night viewing, which many patients identify as the most difficult pattern to interrupt.

Future Directions

As awareness of algorithm-driven exposure grows, several broader changes are being discussed. Researchers have called for greater transparency around how recommendation systems steer users toward high-arousal content, and there is increasing interest in independent auditing of algorithms that disproportionately impact vulnerable groups. Several proposals center on age restrictions and more reliable age verification processes. Given the emerging clinical impact, several groups have recommended developing standardized clinician guidance on how to screen for and discuss algorithmic exposure during routine visits. While these policy directions are still evolving, individual clinical interventions will remain important. Long-term solutions, however, will also require changes in how platforms design and regulate their recommendation systems.

Concluding Thoughts

Stepping back, these patterns suggest we are witnessing a shift in human emotional and moral processing driven by technology. This extends beyond individual mental health concerns. Desensitization to violent, high-arousal content is now a measurable neurobiological phenomenon that reshapes how people experience empathy, form moral judgments, and understand the suffering of others. These changes have implications not only for psychiatric practice but for the human connection and moral reasoning on which society depends.

For psychiatrists, this represents an opportunity to serve as first responders to these changes. The clinical implications are both urgent and actionable. Algorithm-driven exposure alters diagnosis and treatment, can worsen existing symptoms, and changes the emotional ground on which the therapeutic alliance rests. The good news is that it is also a modifiable risk factor. We can routinely screen for what appears in patients' feeds, treat compulsive viewing with the same behavioral frameworks we use for other addictions, and deliberately help patients rebuild reflection and empathy rather than assuming these changes are permanent.

Individual clinical work will not solve platform design or business models, but it does matter. In parallel, psychiatrists have a role in advocating for safer defaults, greater transparency, and youth protections so that the systems shaping attention do not silently erode the very capacities our field depends on.

Editor’s Note: The author reports no conflicts of interest. Artificial intelligence tools were used to assist with literature review, citation formatting, and editing. All clinical observations, recommendations, and interpretations are the author’s own.

Dr Batarse is a psychiatry resident at the Detroit Medical Center and attended Wayne State University School of Medicine.

References

  1. Baumeister RF, Bratslavsky E, Finkenauer C, et al. Bad is stronger than good. Rev Gen Psychol. 2001;5(4):323-370.
  2. McCluskey M. Why some people see more disturbing content on Facebook than others, according to leaked documents. Time. November 3, 2021. Accessed February 9, 2026. https://time.com/6111310/facebook-papers-disturbing-content/
  3. Boeker M, Urman A. An empirical investigation of personalization factors on TikTok. Proc ACM Web Conf. 2022:2298-2309.
  4. Ribeiro MH, Ottoni R, West R, et al. Auditing radicalization pathways on YouTube. Proc Conf Fairness Account Transp. 2020:131-141.
  5. Regehr K, Shaughnessy C, Zhao M, et al. Safer scrolling: how algorithms popularize and gamify online hate and misogyny for young people. Association of School and College Leaders. January 2024. Accessed February 9, 2026. https://www.ascl.org.uk/ASCL/media/ASCL/Help%20and%20advice/Inclusion/Safer-scrolling.pdf
  6. Arendt F, Scherr S, Romer D. Effects of exposure to self-harm on social media: evidence from a two-wave panel study among young adults. New Media Soc. 2019;21(11-12):2422-2442.
  7. Deadly by design. Center for Countering Digital Hate. December 15, 2022. Accessed February 9, 2026. https://counterhate.com/research/deadly-by-design/
  8. Office of the Surgeon General. Social media and youth mental health: the US surgeon general's advisory. US Department of Health and Human Services. 2023. Accessed February 9, 2026. https://www.hhs.gov/sites/default/files/sg-youth-mental-health-social-media-advisory.pdf
  9. Youth Endowment Fund. Children, violence and vulnerability 2024: what role does social media play in violence affecting young people? November 2024. Accessed February 9, 2026. https://youthendowmentfund.org.uk/wp-content/uploads/2024/11/CVV24_R2_Online.pdf
  10. Huesmann LR, Moise-Titus J, Podolski CL, et al. Longitudinal relations between children's exposure to TV violence and their aggressive and violent behavior in young adulthood: 1977-1992. Dev Psychol. 2003;39(2):201-221.
  11. Mathiak K, Weber R. Toward brain correlates of natural behavior: fMRI during violent video games. Hum Brain Mapp. 2006;27(12):948-956.
  12. Gao Y, Hu Y, Wang J, et al. Neuroanatomical and functional substrates of the short video addiction and its association with brain transcriptomic and cellular architecture. NeuroImage. 2025;307:121029.
  13. Griffiths MD. Adolescent social networking: how do social media operators facilitate habitual use? Educ Health. 2018;36(3):66-69.
  14. Berridge KC, Robinson TE. Liking, wanting, and the incentive-sensitization theory of addiction. Am Psychol. 2016;71(8):670-679.
  15. Reyes-Mota AR, Cerda-Molina AL, Mayagoitia-Novales L, et al. Reduced cortisol response to traumatic images, self-esteem and stress levels in emergency medical technicians from the Red Cross. Pers Individ Differ. 2021;179:110929.
  16. Zvyagintsev M, Klasen M, Weber R, et al. Violence-related content in video game may lead to functional connectivity changes in brain networks as revealed by fMRI-ICA in young men. Neuroscience. 2016;320:247-258.
  17. Bandura A, Ross D, Ross SA. Transmission of aggression through imitation of aggressive models. J Abnorm Soc Psychol. 1961;63(3):575-582.
  18. Huesmann LR. The role of social information processing and cognitive schema in the acquisition and maintenance of habitual aggressive behavior. In: Human Aggression. Academic Press; 1998.
  19. Krahé B, Möller I. Longitudinal effects of media violence on aggression and empathy among German adolescents. J Appl Dev Psychol. 2010;31(5):401-409.
  20. Brady WJ, Wills JA, Jost JT, et al. Emotion shapes the diffusion of moralized content in social networks. Proc Natl Acad Sci USA. 2017;114(28):7313-7318.
