Author(s): Medical Economics
Language used to describe patients in electronic health records (EHRs) may be perpetuating racial bias and other negative stereotypes in health care, according to results of a new study.1
Using machine learning techniques, investigators analyzed potentially stigmatizing language in more than 40,000 history and physical notes in EHRs from 18,459 adult patients at an academic medical center in Chicago, Illinois. Sun et al found the odds of Black patients having 1 or more negative descriptors such as “refused,” “(not) adherent,” “(not) compliant,” or “agitated” noted in their EHRs were more than twice (odds ratio, 2.54; 95% CI, 1.99-3.24) the odds of white patients, even after adjusting for sociodemographic and health characteristics.
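For readers curious how such an adjusted odds ratio is typically produced, the sketch below is a minimal illustration, not the authors' pipeline: it flags notes containing any of the listed descriptors with simple keyword matching (the study used its own machine learning methods) and estimates an adjusted odds ratio via logistic regression on entirely synthetic, hypothetical data and covariates.

```python
# Illustrative sketch only: keyword flagging plus an adjusted odds ratio from
# logistic regression. Data, variable names, and covariates are hypothetical.
import re
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

NEGATIVE_DESCRIPTORS = ["refused", "not adherent", "nonadherent",
                        "not compliant", "noncompliant", "agitated"]
pattern = re.compile("|".join(re.escape(t) for t in NEGATIVE_DESCRIPTORS),
                     re.IGNORECASE)

def has_negative_descriptor(note_text: str) -> bool:
    """Return True if the note contains at least one listed descriptor."""
    return bool(pattern.search(note_text))

print(has_negative_descriptor("Patient refused physical therapy today."))  # True

# Synthetic patient-level table: one row per patient, with an indicator for
# whether any of that patient's notes contained a negative descriptor.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "black": rng.integers(0, 2, n),     # 1 = Black, 0 = white (illustrative)
    "age": rng.integers(18, 90, n),
    "medicaid": rng.integers(0, 2, n),  # example sociodemographic covariate
})
true_logit = -2.0 + 0.9 * df["black"] + 0.01 * (df["age"] - 50)
df["neg_descriptor"] = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

# Adjusted odds ratio for Black vs. white patients, controlling for covariates.
fit = smf.logit("neg_descriptor ~ black + age + medicaid", data=df).fit(disp=False)
or_black = np.exp(fit.params["black"])
ci_low, ci_high = np.exp(fit.conf_int().loc["black"])
print(f"Adjusted OR: {or_black:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```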
The differences were not solely based on race. Medicare and Medicaid beneficiaries, for example, were more likely to have negative descriptors applied to them than were patients with commercial or employer-based insurance, as were unmarried patients compared with patients who were married, the study results showed.
The findings are especially troubling, Sun and colleagues wrote, given that other studies have found that less than 20% of text in inpatient progress notes is original, with much of the information imported from prior documentation. As a result, “subsequent providers may read, be affected by, and perpetuate the negative descriptors, reinforcing stigma to other health care teams,” Sun et al wrote.
The investigators found that notes written for outpatient visits were less likely to contain negative descriptors than notes from inpatient encounters. They theorized this resulted from inpatient settings being inherently more stressful, thereby increasing the risk of doctors using stereotypes as cognitive shortcuts.
Sun et al also found that race-based differences in the use of negative descriptors narrowed after March 1, 2020. They theorized this was due to the stark racial differences in care and outcomes highlighted by the COVID-19 pandemic and the pandemic’s overlap with “a historically defining moment of national response to racialized state violence” created by the killings of George Floyd and other Black Americans.
“These social pressures may have sensitized providers to racism and increased empathy for the experiences of racially minoritized communities,” Sun et al wrote.
The authors further noted it is imperative that medical institutions better address the issue of implicit racial bias and institute provider training. For example, they noted a physician’s use of the term “aggressive” may reflect the physician’s personal bias regarding Black men, adding that once a negative label becomes part of the patient’s record, “it potentially affects the perceptions and decisions of future providers regardless of whether future providers hold a bias about Black men being aggressive.” Similarly, they recommended “people first” language (eg, saying a patient has an alcohol use disorder instead of labeling them an alcoholic) be used in interprofessional communication.
In addition, Sun et al suggested that regulatory bodies such as the Accreditation Council for Graduate Medical Education develop specific recommendations for the use of nonstigmatizing, patient-centered language to prevent the transmission of bias.
The need is especially pressing, they wrote, in light of the OpenNotes policies adopted by medical institutions. These policies allow patients full access to their EHRs, including chart notes. Sun et al added that inappropriate and negative language used to describe patients may risk “harming the patient-provider relationship with downstream effects on patient satisfaction, trust, and even potential litigation.”
Reference
1. Sun M, Oliwa T, Peek ME, Tung EL. Negative patient descriptors: documenting racial bias in the electronic health record. Health Affairs. January 19, 2022. Accessed January 26, 2022. https://www.healthaffairs.org/doi/full/10.1377/hlthaff.2021.01423
READ MORE: https://bit.ly/3G4hy63