The scientific method has delivered vaccines and other life-saving medicine, but not everybody trusts it.
The results of the Pfizer and Moderna COVID-19 vaccine studies were in the news. At the dinner table, my teenaged daughter asked me, “Can you explain how they did the studies?”
I replied, “Well, they showed that individuals who received the active vaccine injection were much less likely to get the infection than those who got a placebo or dummy injection without active vaccine.”
She listened intently with a slightly worried expression.
“But to make the study completely objective, it was double-blind,” I said.
“What’s that?” she asked.
“Double-blind means that both sides are blinded, or not given information about who got the real vaccine and who got the placebo,” I replied. “Neither the clinical team, meaning the doctor, nurse, and other research staff, on one side, nor the patient and their family and friends on the other. No one directly involved with the patient in the study is allowed to know.”
Her reaction was spontaneous and swift: “But that’s against human nature! How can we be prevented from knowing if we are getting the real vaccine?”
As a researcher, I have conducted many clinical trials with patients who express concern about the possibility of receiving a placebo, or who find it hard to comprehend the restrictive rules of a double-blind design. These methods are counterintuitive, but they are required by the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), and other agencies worldwide. They are intended to increase objectivity because there is ample evidence that a positive bias naturally occurs in most patients and clinicians involved in a study. They expect the vaccine to be safe and effective, and their expectation often has a healing power of its own.
A long history confirms the need for research objectivity. Until the 18th century, average human longevity ranged from 30 to 50 years; it barely budged despite the thousands of new therapies pursued across the world, some based on ancient texts from a variety of cultures. But scientific proof gradually replaced belief. Revolutionary advances like the discovery of antibiotics in the mid-20th century transformed human survival, and experimental methods to improve the accuracy of research findings, including the objective methods used routinely in clinical studies today, were codified by scientists in the 20th century. During the last 2 centuries, the average human lifespan nearly doubled, to over 75 years.
The FDA adopted scientific methods by requiring that research studies in humans be conducted in phases. Phase 1 involves small-scale studies to determine safety at different doses; phase 2 involves mid-sized, double-blind studies with a placebo group to obtain initial information on the efficacy and safety of the optimal doses identified in phase 1; and phase 3 requires large-scale, double-blind, placebo-controlled clinical trials built upon the results of phases 1 and 2. If phase 3 results are positive, demonstrating sufficient safety and efficacy, the vaccine or drug is approved. But the work is not done: phase 4 then begins, with post-approval monitoring of side effects and adverse events in those receiving the approved treatment or vaccine in standard clinical care.
Why are there so many hurdles to cross before approval, and more afterwards? Fewer than 1 in every 100 potential treatments makes it all the way from the basic science laboratory through clinical trials to regulatory approval for patient care. In other words, without the current regulatory system, tens of thousands of new vaccines, drugs, and medical devices would be used widely, despite more than 99% of them not meeting the necessary standards of safety and efficacy. That would throw us back to the healthcare standards of medieval and even earlier times.
My experience during the early days of the COVID-19 pandemic highlights the struggle to balance scientific knowledge with private beliefs and ingrained daily habits. In late February 2020, I acquired N-95 masks and face shields from our hospital’s central supply and instructed all my clinical research staff to wear them when seeing patients. In March 2020, just before the lockdown, we saw our last outpatient, who was hospitalized a few days later with COVID-19. I did not become ill, but a few of my research staff members did, fortunately only with moderate symptoms that resolved over several weeks. Judging by when they became ill, they probably got the infection from that patient. When I asked, they told me that they had not worn the mask and face shield continuously, as I had instructed. These team members routinely follow my instructions in their daily work, but wearing a mask and face shield went against their usual habits and beliefs.
If this could happen in a healthcare setting, the problems we have seen with the public’s response to the pandemic should not come as a surprise. To ensure a robust pandemic response in 2021, we will all need to better understand why so many people do not act in their own interest, or that of others. Public education about the science and the studies behind FDA approval is essential, as is modeling by trustworthy, influential, and diverse people doing the right thing.
But will that be enough? What do we need to know about the individuals (and groups) who demonstrate vaccine hesitancy, as has long been evident in refusals of flu and even measles vaccinations? The beliefs that underlie vaccine hesitancy may seem counterintuitive, but they are prevalent and constitute a danger to the individual’s health and to that of others. These refusals may seem to go against the self-protective instincts and social compact in human nature, except that it is other aspects of human nature, an instinctive fear of the new and unknown along with entrenched beliefs and habits, that drive them. Let us hope it does not take centuries to understand what is going on and what works to alter vaccine hesitancy. We do not have the time for that.
Dr Devanand is professor of Psychiatry and Neurology and Director of Geriatric Psychiatry, Columbia University Irving Medical Center.