Among the many changes in psychiatric practice often attributed to the psychotherapist Carl Rogers was a shift in the therapist’s relationship with the patient. In particular, Rogers is rightly famous for advocating a “client-centered” or “person-centered” therapy.
As he put it, “This formulation would state that it is the counselor’s function to assume, in so far as he is able, the internal frame of reference of the client, to perceive the world as the client sees it, to perceive the client himself as he is seen by himself, and to lay aside all perceptions from the external frame of reference while doing so.”1 In calling for a change in the therapist’s attitude and orientation toward those being treated, client-centered therapy has most often been seen as a response to overbearing psychoanalytic and behavioral therapies, ie, therapies built around rigid models applied by self-assured counselors.
What is often forgotten, however, is that things like client-centered therapy, “mental hygiene,” and “self-help” were also responses to another phenomenon, one with an even longer history: the figure of the charismatic physician. In the pre-modern world, before medicine embraced science and doctors began using an arsenal of instruments, physicians relied primarily on their experience, skills of observation, and bedside manner to treat patients: the key was to establish and maintain the patient’s trust.2
The role of the treating physician was partly performative, meaning that it was necessary to actively instill confidence in his or her talents and expertise. At the same time, patients were notorious for their reluctance to follow orders. They had a habit of complaining and a penchant for self-diagnosis and self-medication. These patterns continued throughout the 1800s, when academically trained physicians experienced marked upward social mobility, entering the ranks of the privileged middle class. The result was that doctors in the 19th and early 20th centuries often assumed an officious air of superiority toward their patients.2
Things were no different within the specialty of psychiatry. When the first specialists referred to as “mad-doctors” began to emerge in the late 18th century, the force of their personalities was often seen as the key to the success of their treatments. Reverend Dr Francis Willis (1718-1807)—the man enlisted to treat King George III’s raving madness from 1788 to 1789—was purported to have an almost magical gaze, described by one contemporary this way:
On his first meeting a new patient, his usually friendly and smiling countenance changed its expression. He suddenly became a different figure commanding the respect even of maniacs. His piercing eye seemed to read their hearts and divine their thoughts as they formed and before they were even uttered. In this way he gained control over them which he used as a means of cure.3
Philippe Pinel (1745-1826), credited with removing the use of chains and introducing talk therapy (“the moral treatment”) in asylums, nevertheless insisted on the necessity of applying a mental “force evidently and convincingly superior”4 when confronting “maniacs.” This included, among other things, the staging of what he termed “pious frauds,” such as giving one delusional patient the impression that staff would beat him up if he continued to refuse to eat his meals.4
The power psychiatrists wielded over their charges was hardly deemed an ethical challenge or a necessary evil. Far from it. In Germany—where the asylum director was referred to as the “asylum father” (Anstaltsvater)—the role of the alienist or psychiatrist was modeled on the prevailing ideal of the domestic household: the wise director treated his residents as naïve and misguided children requiring constant supervision and correction.
Influenced by Christian theology and Romanticism, the German “mental physician” Johann Christian Heinroth (1773-1843) extended this analogy still further in 1818, claiming an almost Christ-like status for psychiatrists. “From the very outset,” he said, “[the mental physician] influences the patient by virtue of his, one may be permitted to say, holy presence, by the sheer strength of his being, his glance, and his will.”5
Perhaps no one exemplified the figure of the charismatic psychiatrist in the 19th century more than Jean-Martin Charcot (1825-1893). Trained in neurology, the Frenchman conducted research and taught at the famous Salpêtrière Hospital in Paris from 1862 to 1893, where he made his name studying and treating hysteria. He turned the hospital into a public showcase for himself and his methods.
Charcot introduced a set of weekly lectures featuring staged presentations, during which he hypnotized ostensibly hysterical (mostly female) patients in order to dramatically provoke and alleviate acute symptoms. While his Tuesday lectures were reserved for students and physicians, his Friday lessons were open to the public and attended by a mix of avidly curious spectators, including artists and writers from all over Europe.6 They, of course, only helped extend his fame as an accomplished researcher and healer.
The First World War, with its total mobilization of resources and its heavy casualties, ushered in a period of heroic medicine that lasted well into the 1940s. In the years 1914 to 1918, war psychiatry was overwhelmed by tens of thousands of “shell shock” victims, otherwise healthy soldiers showing all the signs of hysteria (a malady commonly associated with women and girls).
In Germany, the Hamburg neurologist Max Nonne (1861-1959) developed what he called “the lightning cure” for “war neurotics,” professing to cure ailing soldiers almost instantly through the use of hypnosis. His counterpart in Ludwigshafen, physician Fritz Kaufmann (1875-1941), boasted of similar results by applying painful electric shocks to various body parts, while barking out military orders to the man: “[O]ne must seek to convince the patient in every way possible that you are in the position to force your own strong will upon him,” Kaufmann insisted. “In short, you must participate in the treatment with your entire personality.”7
In the 3 decades following the end of the Great War, international psychiatry took an increasingly aggressive and invasive turn. Mass compulsory sterilization, malaria fever therapy, metrazol shock therapy, insulin therapy, electroconvulsive therapy, and lobotomy—these emerged and spread during the years 1920 to 1950. A variety of interrelated institutional, social, political, and economic factors played critical roles in the rise of what I have termed “militant psychiatry.”
One of these factors was the continued acceptance of the ideal of the compelling and forceful psychiatrist. To be sure, the clinician’s aura and willpower were now rarely cast as inherently therapeutic, except within psychoanalytic circles. But the successful careers of the self-promoting, media-savvy developers of prefrontal lobotomy, Egas Moniz (1874-1955) and Walter Freeman (1895-1972), for instance, attest to the fact that self-confidence, conviction, and zeal did much to compensate for less-than-overwhelming clinical results in persuading others of the efficacy of the operation.8
It would be wrong to assume that patients universally and uncritically accepted these claims to authority by clinicians. On the contrary, there is ample evidence that many patients—and, though perhaps fewer, family members—were skeptical, if not outright defiant, toward those charged with treating them. But if we wish to understand historically the agenda of not only client-centered therapy, but also social psychiatry and anti-psychiatry, during the second half of the 20th century, we need to be sensitive to how these new attitudes and methods represented a response to the historical disparity in power between physicians and patients.