Last fall semester I once again taught my general introductory course: The History of Madness and Psychiatry in the Western World. The course covers a vast swath of time, beginning with ancient Palestine and Greece and ending with debate over the famous case of Osheroff v. Chestnut Lodge. As I sat reading responses to a final exam essay question in which I asked students to discuss long-term patterns and trends in the history of the handling of mental illness, I was struck by a recurring tendency. Most students portrayed the history of mental health in one of two ways.
One group of students was prone to interpreting the history of mental illness and its treatment as one of progressive successes. Their thinking went something like this. Ancient and medieval healers misguidedly turned to spurious supernatural and uninformed somatic explanations to understand and treat disorders. Their ignorance led these healers to routinely mistreat their patients (eg, inserting pessaries into “hysteric” women, using leeches to remove “bad humors”).
The foolishness of healers, the essays continued, finally began to be corrected with the rise of pathological anatomy, neurology, and laboratory science in the 18th and 19th centuries. Professionalized psychiatry, psychotherapy, and academic research are the heroes of these students’ stories: Pinel is credited with developing the first ethical form of talk therapy; dissection and animal experimentation made it possible to dispense with occult explanations; and Kraepelin’s invention of the “surveillance ward” was a result of his care and concern for the welfare of his patients. The 20th century then represented the culmination of this work, as clinicians and researchers became increasingly adept at identifying the causes of mental illness and at treating patients in an effective and ethically sound manner.
The second group of students began with basically the same assessment of ancient and medieval medicine. They, too, considered healers at that time to be prisoners of their superstitions and stupidity (after all, they noted, there are no such things as “humors”).
This set of students parted ways with their counterparts, however, in not acknowledging any major sea change in how “madness” was understood and treated thereafter. Instead, as they portrayed it, Pinel deliberately terrorized his patients; asylums locked people away for years at a time; and often painful somatic therapies—electroconvulsive therapy, malaria-induced fever, insulin-induced coma, metrazol-induced convulsions—were developed and routinely administered after the First World War. In general, in these students’ estimation, modern psychiatry showed little interest in treating patients humanely.
While I partly delighted in the fact that students were voicing differences of opinion—after all, education should be about informed and open debate—I was mostly surprised and frustrated by the response, because I do not aim to teach the history of psychiatry along either of these lines. From my perspective, reducing the complex history of psychiatry to either an unqualified good or an unqualified evil is hardly defensible. What, then, was going on? From where did they get these notions?
I quickly recognized something that most college educators come to realize after teaching awhile: students come in with all sorts of assumptions learned through years of exposure to schooling, family and friends, and popular culture. These assumptions are rarely upended within a matter of weeks. The student essays were, I concluded, providing me with a glimpse into the conventional ways in which a good many people—both those with and those without much experience of mental illness and its treatments—tend to think about the history of psychiatry and mental health.
A feature common to the viewpoints of both sets of students is something we historians confront regularly: faith in “progress”—or better put, faith in the innate superiority of our present day over the past. Historians continue to debate exactly when this widespread confidence in human progress first appeared. But at least since the 18th century, there has been a tendency in the Western world for people to believe that, by and large, the human condition is getting better and that each succeeding generation is wiser, healthier, more tolerant, less cruel, more comfortable, and generally happier than its predecessors.
This faith in the superiority of our way of life assumes that we in the present are the measure of all things, an attitude one historian dubbed the “whig interpretation of history.” As Herbert Butterfield1 explained in 1931:
It is part and parcel of the whig interpretation of history that it studies the past with reference to the present . . . Through this system of immediate reference to the present day, historical personages can easily and irresistibly be classed into the men who furthered progress and the men who tried to hinder it; so that a handy rule of thumb exists by which the historian can select and reject, and can make his points of emphasis. On this system the historian is bound to construe his function as demanding him to be vigilant for likenesses between past and present, instead of being vigilant for unlikeness; so that he will find it easy to say that he has seen the present in the past, he will imagine that he has discovered a “root” or an “anticipation” of the twentieth century, when in reality he is in a world of different connotations altogether, and he has merely tumbled upon what could be shown to be a misleading analogy.
Most historians now agree with Butterfield, rejecting the whig interpretation of history as biased in both its methodology and its selection of evidence.
Without knowing it, both groups of students had adopted a whig approach to the history of psychiatry, albeit along different lines and with different outcomes. The first group was inclined to see contemporary science and medicine as the measure of all success and lauded past efforts appearing to contribute directly to today’s thinking and practices. For them, historical figures who appeared to thwart this standard were guilty of ignorance.
The second group tended to treat contemporary moral sensibilities and politics as the standard by which to measure the past, decrying anything seeming to undermine personal freedom or to inflict unjustified suffering. For them, historical figures who seem to have impeded ethical progress were guilty of having done evil. In each of these cases, historically specific values and ideals were being taken as universal and correct without ever being made explicit or being questioned.
While historians have found this self-congratulatory approach to be especially common among observers of the history of science and medicine, the whig interpretation of history is hardly confined to the pre-med, psychology, and life science students who generally take my course. No, whiggish thinking is endemic in modern society, and academics and non-academics alike rarely call it into question.
That said, science and medicine do have uniquely close ties to whiggish thinking. In the 19th century, scientists and physicians successfully portrayed their knowledge and labor as resources in the service of human progress. As professional classes, they effectively linked themselves and their work to the story of advancement, casting themselves as the engines of improvement.
As long as Western society continues to see its future in terms of the ideal of progress, scientists and physicians—and along with them, psychiatrists—will continue to enjoy their exceptional social status . . . and whig interpretations of the history of medicine and psychiatry will continue to be popular.
1. Butterfield H. The Whig Interpretation of History. 1931. http://www.eliohs.unifi.it/testi/900/butterfield/chap_2.html. Accessed February 22, 2012.