One of Westen and Morrison's major points is that researchers need to report more of their data if clinicians are to draw clinically meaningful conclusions from the research literature. They advocate reporting, in addition to effect size, indices such as the percent of patients improved or recovered; the mean level of symptomatology at the end of treatment; the percent who remain improved after treatment ends; the percent of those initially screened who were included and excluded, along with the reasons for exclusion; and the percent seeking additional treatment.
Effect size alone "does not yield information on variability of response among subjects," wrote Westen and Morrison (2001). A study could have an impressive effect size if a few patients recovered completely even if the majority were only modestly affected. Reporting the percent of patients improved provides a more comprehensive sense of the treatment's overall efficacy, but "about half to two-thirds of the time this number is not reported in the initial study," Westen explained to PT, "and in meta-analyses, it's never reported."
The mean level of symptomatology at the end of treatment also provides important information. "If you're going to describe a therapy as an empirically supported treatment for depression," explained Westen, "you ought to specify if the treatment alleviates depression entirely or only decreases it, a significant distinction." In their meta-analysis, depressed patients post-treatment averaged 8.68 (SD=6.49) on the Hamilton Rating Scale for Depression (HAM-D) and 10.98 (SD=8.60) on the Beck Depression Inventory (BDI), which are arguably still clinically significant levels of depression (Westen and Morrison, 2001).
If these patients were being treated in the community, Westen and Morrison wrote, their clinicians would continue to treat them. For GAD, the average patient continued to score 11.03 (SD=6.18) on the HAM-D and 47.45 (SD=9.33) on the State-Trait Anxiety Inventory-Trait Version (STAI-T). Westen and Morrison concluded, "These findings, relative to published norms [where available], suggest that the average patient receives substantial benefit but continues, even at termination, to have mild symptoms of the disorder for which he or she was treated."
By far, Westen and Morrison's most provocative proposal was their notion of calculating outcomes on the basis of an "effective efficacy" quotient. Ordinarily, when calculating the success of a treatment, the denominator, or n, in the equation "the number improved/n" represents either those who completed treatment or, more conservatively, the total number of those who started the therapy (the so-called intent-to-treat group). The effective efficacy quotient would take as its denominator all of those screened for a trial, even if they were excluded from the study. According to Westen and Morrison, this number may more accurately reflect the treatment's efficacy in the real world since "clinicians in everyday practice do not have the luxury of screening out patients who they have reason to believe will not respond."
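The three competing denominators can be illustrated with a short sketch. The patient counts below are hypothetical, chosen only to show how the choice of denominator changes the reported success rate; they are not drawn from Westen and Morrison's data:

```python
# Hypothetical counts for a single trial (illustrative only).
screened = 200    # everyone initially screened for the trial
accepted = 60     # passed exclusion criteria (the intent-to-treat group)
completed = 48    # finished the full course of treatment
improved = 24     # met the study's criterion for improvement

# Same numerator, three denominators:
completer_rate = improved / completed       # most generous: completers only
intent_to_treat_rate = improved / accepted  # more conservative: all who started
effective_efficacy = improved / screened    # Westen and Morrison's proposal

print(f"Completer rate:     {completer_rate:.0%}")
print(f"Intent-to-treat:    {intent_to_treat_rate:.0%}")
print(f"Effective efficacy: {effective_efficacy:.0%}")
```

With these illustrative numbers, the same treatment looks 50% effective among completers, 40% effective on an intent-to-treat basis, and only 12% effective by the effective-efficacy quotient.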
The notion of effective efficacy generated shock and even outrage among the reviewers. Commentary co-author Robert J. DeRubeis, Ph.D., told PT, "There are some good ideas in this paper, but this is not one of them. [They are] the first ever to suggest that you should take the excluded patients and treat them as if they were not helped at all by the therapy. That's as far-fetched as the notion that all of them would have gotten better."
Stewart Agras, M.D., professor of psychiatry at Stanford University, concurred that there was a failure of logic. He explained to PT, "You don't know how many of them would get better, especially since one of the main reasons for excluding [subjects] is that they are not sick enough."
While Westen and Morrison may have been playing devil's advocate with their effective efficacy quotient, they contend that the large number of patients excluded from trials casts the generalizability of findings into doubt. Westen explained, "If you get rid of 70% of the patients who walk in the door because they have comorbid conditions or don't meet your experimental criteria in some other way, and then you get rid of the people who didn't complete the treatment even though they passed a rigorous screening procedure, then what you're really saying is that 50% were treated successfully of the 80% who completed the treatment of the 30% who were accepted into the study in the first place."
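Westen's compounding percentages can be checked with a line of arithmetic, using the figures from his quote:

```python
# Westen's chain: 50% treated successfully, of the 80% who completed
# treatment, of the 30% accepted into the study in the first place.
overall = 0.50 * 0.80 * 0.30
print(f"{overall:.0%} of those who initially walked in the door")
```

The product works out to 12%, which is the point of the example: a success rate that sounds respectable in the published report may describe only a small fraction of the patients who originally presented for treatment.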