Antidepressants and Bipolar Disorder: What Do Recent Studies Tell Us?

The Islamic philosopher and physician Abu'l-Walid Ibn Rushd, better known as Averroës, said that the art of healing requires "the acquisition of universal principles . . . coupled with prolonged experience."1 In this article, we review some of the recent research on the use of antidepressants in bipolar disorder (BD), and we also discuss briefly the methodologic principles that should guide us in this aspect of psychopharmacology. Our sense is that differing approaches to this controversial practice stem more from differences in methodologic assumptions (Averroës's "universal principles") than from differences in the research studies themselves.2


Most clinicians prescribe antidepressants extensively in the treatment of BD. In one study, antidepressants had been prescribed for more than 80% of patients with BD, but only 55% had received mood stabilizers, and only one third had ever been given mood stabilizer monotherapy.3

In the world of academic research, 2 basic perspectives have been laid out: that antidepressants are effective and largely safe,4 and that antidepressants are largely ineffective and mostly unsafe.5 There is evidence for and against each perspective.

While there might be a misperception that the debate is about whether to use antidepressants at all in BD, no expert denies that antidepressants have some role in managing this condition. The questions regarding antidepressant use in BD involve the frequency of their use, for which kinds of patients they should be used, and under what circumstances they should be used.6


A key feature of evidence-based medicine is the concept of levels of evidence (Table 1 [please see Psychiatric Times, May 2006, page 66]).2,7 Levels of evidence give clinicians and researchers a road map for comparing and contrasting the findings of different studies on a consistent, justified basis.

The key point to keep in mind is that each level of evidence has its own strengths and weaknesses; as a result, no single level is completely useful or useless. For example, meta-analyses and large randomized studies may obscure subtle differences among subgroups. Other variables being equal, rigor and probable scientific accuracy increase as one moves from level V to level I.

The recognition of levels of evidence allows one to have a guiding principle by which to assess the literature. The basic rules are:

  • Other variables being equal, a study at a higher level of evidence provides more valid (or powerful) results than one at a lower level.
  • As much as possible, judgments should be based on the highest levels of evidence.
  • Levels II and III are often the highest levels of evidence attainable for complex conditions and are to be valued in those circumstances.
  • Higher levels of evidence do not guarantee certainty. Since any one study can be wrong, it is important to look for replicability.
  • Within any level of evidence, studies may conflict based on methodologic issues not captured by the parameters used in the general outlines of levels of evidence.

The key reason that randomized studies are more valid than nonrandomized or observational studies is confounding bias, a systematic error (as opposed to the random error of chance). "Confounding" means that some factor other than the one thought to be at issue explains the result (Figure); confounding bias is a potential problem in any observational study. Observational studies are conducted under normal clinical conditions: a physician decides to give certain treatments to specific patients. Confounding bias can be removed by designing a randomized study or, in the case of observational studies, addressed by statistical analysis using regression models.8
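Confounding can be made concrete with a small invented example. The sketch below uses stratification, a simple analogue of the regression adjustment mentioned above: it is plain Python with entirely made-up counts (not data from any study), in which sicker patients are both more likely to receive an antidepressant and more likely to relapse. The crude comparison then makes the drug look harmful, while the within-severity comparison shows no difference.

```python
# Hypothetical counts illustrating confounding by severity of illness.
# All numbers are invented for illustration only.
# stratum -> arm -> (patients, relapses)
data = {
    "severe": {"treated": (80, 40), "untreated": (20, 10)},
    "mild":   {"treated": (20, 2),  "untreated": (80, 8)},
}

def crude_rate(arm):
    """Relapse rate ignoring severity (the confounded comparison)."""
    n = sum(data[s][arm][0] for s in data)
    events = sum(data[s][arm][1] for s in data)
    return events / n

def stratum_rate(stratum, arm):
    """Relapse rate within one severity stratum (the adjusted comparison)."""
    n, events = data[stratum][arm]
    return events / n

print(f"crude:  treated {crude_rate('treated'):.0%} vs untreated {crude_rate('untreated'):.0%}")
for s in data:
    print(f"{s}: treated {stratum_rate(s, 'treated'):.0%} "
          f"vs untreated {stratum_rate(s, 'untreated'):.0%}")
# Crude rates differ (42% vs 18%) only because severe patients cluster in
# the treated arm; within each stratum the rates are identical.
```

A regression model performs the same kind of adjustment in one step, estimating the treatment effect while holding measured confounders such as severity constant; unmeasured confounders, however, can only be balanced by randomization.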

