Quality Still Counts

June 1, 2002

How has the emphasis on economics in mental health care affected the quality of care? Has quality improved under managed care? Implementing prevention strategies and improving quality, although initially costly, may save money in the long term.

A decade ago, economic problems were paramount in society in general and in health and mental health care in particular. Rising costs, especially striking in mental health care, helped fuel the rapid development of managed care and carve-out managed behavioral health care organizations. There were temporary successes: health care costs were reduced in the 1990s while the general economy boomed. But as the general economy has slumped more recently, health care costs are again rising dramatically.

What was unclear in the past decade was how this economic emphasis affected quality. In society in general, it was not clear that an improved economy actually improved quality of life across the board, especially in the increasing numbers of families with two parents working full time (and thus less time for their children) and in those poorest pockets of society untouched by the economic boom. In health care and mental health care, there is certainly no evidence (and some anecdotal information to the contrary) that quality of care improved under managed care.

Perhaps quality is where more attention is needed, both in society and in health care. Irrespective of managed care, data have indicated that about 30% of health care is unnecessary and another 30% is insufficient (Becher and Chassin, 2001). In addition, there seems to be wide variation among practitioners and locations, and a tendency for poorer treatment to be received by patients who are poor or of a minority ethnic background. The results for mental health care in particular seem no better (Lehman, 2001). Anyone who has done a significant amount of utilization review of community practice would probably agree with these conclusions (Mohl, 1996).

Would any of us find these figures acceptable? If not, we have work to do to improve our quality of care. In our field, this responsibility takes on extra weight because of the very nature of the problems we encounter. Given that our patients have significant problems in their thinking, emotional responsiveness, behavior and even reality testing, it would be a cruel irony to rely solely on their subjective sense of being well served by their treatment. Patient satisfaction can be affected by a variety of factors and should therefore be only one part of an assessment of quality of care.

Our own subjective assessment of patient improvement also has significant limitations. Normal professional narcissism and countertransference may impair our assessment of our own success with patients.

We all need to do more: as individual practitioners, as managed behavioral health care organizations (MBHOs) and as professional organizations. While there is not much indication that private practices are trying to measure quality, several large health maintenance organizations are indeed rewarding practitioners for documented quality of care (Freudenheim, 2001). Although long-term data are sparse, HMOs assume that more prevention and improved quality are achievable and, although costly to implement, would ultimately save money by preventing more expensive treatment (Mason et al., 2001). The possibilities are many. Among them, in our individual work with patients, are:

  • Use some kind of standardized patient satisfaction survey;
  • Try to get feedback from significant others;
  • Document the use of practice guidelines and medication algorithms;
  • Use pre- and post-rating scales; and
  • Seek supervision and consultation.

Documented positive results from the above may even help demonstrate your relative value to managed care organizations.

If one is part of a system of care, whether a clinic or an HMO, that system can institute various quality measures, such as:

  • Evidence-based treatment guidelines that allow for individual variation;
  • Outcome studies;
  • Benchmark comparisons with comparable systems; and
  • Case conferences.

One MBHO has reported a sophisticated outcome management system that appears to improve clinical outcomes and allocation of resources (Brown et al., 2001). (For more information on this system, please see Implementing Outcomes Management Programs: Learning What Works in the June issue of Mental Health Outcomes -- Ed.) Some sort of measurement, such as the Outcome Questionnaire-45 (OQ-45), administered at the first session and repeated at relevant intervals, is crucial.

As to our professional organizations, whether our local psychiatric association or the American Psychiatric Association, the No. 1 priority and focus, apart from their collegial functions, should be quality of care. Among the possibilities here are:

  • Continued development of practical, evidence-based practice guidelines;
  • Establishment of a mechanism to address quality of care violations, akin to that for addressing ethical violations;
  • Political action;
  • Education of the payors of care;
  • Definition of "medical necessity"; and
  • Establishment of objective quality review committees.

The possibilities for trying to improve quality of care are numerous. The following is one case example to indicate how this may work:

IC, a 32-year-old African American woman, has just been evaluated by a 60-year-old white psychiatrist. After going to her workplace Employee Assistance Program, she was referred by the intake coordinator of a national MBHO to this psychiatrist because he was in the network, had an opening within the week, and the severity of symptoms described over the phone indicated a likely need for medication. The company also allowed psychiatrists to provide combined medication and psychotherapy, given its finding that such combined treatment was cheaper than having a psychiatrist provide medication and a social worker provide psychotherapy. The managed care company had also begun to measure quality and paid 10% more when standards were met.

The patient met the DSM-IV diagnostic criteria for major depressive disorder, single episode, moderate. The MBHO had a clinical guideline for this disorder, which recommended a combination of medication and cognitive-behavioral therapy (CBT). This guideline was also consistent with the more extensive one of the APA. For medication, the guideline recommended -- though did not insist upon -- the generic selective serotonin reuptake inhibitor antidepressants for their lower cost and equivalent quality. For monitoring of outcome, the MBHO used the Axis V Global Assessment of Functioning (GAF), the Beck Depression Inventory (BDI) and a National Committee for Quality Assurance (NCQA)-approved patient satisfaction survey. To receive the 10% bonus, the patient would need to show a 25% improvement on the GAF and BDI at the end of four months. Four months was deemed long enough to get past any placebo effect. The clinical guidelines did permit individual variation with justification, but the 25% improvement would still be expected.
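One subtlety in this bonus criterion is that the two scales move in opposite directions: the GAF rises with improvement, while the BDI falls. The following minimal sketch (the function names and example scores are hypothetical, not from the MBHO's actual system) shows how a "25% improvement on both scales" rule might be computed:

```python
# Hypothetical sketch of the MBHO's bonus criterion described above:
# >= 25% improvement on both GAF and BDI at the four-month mark.
# GAF improves as the score rises; BDI improves as the score falls.

def percent_improvement(baseline: float, followup: float,
                        higher_is_better: bool) -> float:
    """Relative change, in percent, in the direction of improvement."""
    change = followup - baseline if higher_is_better else baseline - followup
    return 100.0 * change / baseline

def meets_bonus_criterion(gaf_0: float, gaf_4: float,
                          bdi_0: float, bdi_4: float,
                          threshold: float = 25.0) -> bool:
    """True when both scales show at least `threshold` percent improvement."""
    gaf_gain = percent_improvement(gaf_0, gaf_4, higher_is_better=True)
    bdi_gain = percent_improvement(bdi_0, bdi_4, higher_is_better=False)
    return gaf_gain >= threshold and bdi_gain >= threshold

# Illustrative scores: GAF 50 -> 65 (+30%), BDI 28 -> 18 (about a 36% drop).
print(meets_bonus_criterion(50, 65, 28, 18))  # True
```

Tying the bonus to both instruments guards against crediting improvement that appears on only one measure, such as better functioning with unchanged depressive symptoms.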

The psychiatrist agreed to these measures since they were similar to what he would have done anyway. He had come to accept the necessity of having some kind of assessment of his competence (Epstein and Hundert, 2002). He also decided to use the simple mood scale he normally used: having the patient rate her degree of depression each month on a 1-to-10 scale. He was also in the habit of getting feedback from a patient's significant other, although, of course, only with the patient's permission (other than in an emergency). He had also begun to use the Texas Medication Algorithm (available online at: <mhmr.state.tx.us/CentralOffice/MedicalDirector/TMAPover.html>), which allowed the choice of generic fluoxetine or fluvoxamine. He had also begun to use a simple patient satisfaction survey, a five-point scale on which patients indicated how satisfied they were with their care during each visit. He felt this was especially crucial in situations where trust was liable to be lower, such as the current cross-cultural match, for patients who had been abused and for patients who seemed paranoid. If treatment was not successful, he hoped the MBHO would allow him to use the consultant he tended to use for treatment-resistant depression.

Although the psychiatrist longed for the "good old days," when he could simply practice as his judgment dictated and did not have to monitor anything, he came to understand that more accountability to payors was justified. After all, normal narcissism makes most psychiatrists feel they are doing a good job, even when they may not be. He also came to see that patients seemed to like the attention paid to assessing their improvement. He could even foresee the day when it would be clear that his quality of care had improved and he had made a little more money at the same time.

In the case in question, the rating scales indeed indicated over 25% improvement at four months on 40 mg of fluoxetine plus CBT. However, the patient's satisfaction did not parallel that apparent improvement. Upon gentle questioning, the patient relayed an expectation that she would be much better by then and, in fact, had recently stopped the medication, feeling it was "not working." This perplexed the psychiatrist, since he had earlier tried to educate the patient about expectable outcomes, but on further exploration he discovered mild sexual side effects, which the patient described as affecting her "nature." The medication was then switched to bupropion (Wellbutrin), which still fit the algorithm, and trust seemed to improve. After six months, the patient had improved over 50% and was equally satisfied, as were the MBHO and the psychiatrist.

Since the patient now seemed so satisfied and trusting, the psychiatrist felt it was the right time to bring up possible prevention in her children, another item in the MBHO's clinical guideline for major depression. The psychiatrist offered a brief screening or referral of the children to see how they were adapting to their mother's illness and whether they needed any intervention, also keeping in mind a possible inherited vulnerability to depression. He knew that if significant problems were discovered, the MBHO had a wraparound program that could be used.

While all this attention to quality took more time, and the expectations varied with insurance coverage, the psychiatrist felt that both the sentinel effect of quality monitoring and the specific expectations would be worth the effort.

For what we do not yet know to do to improve quality of care, research is essential. Eventually more standardized performance measures may emerge (Hermann and Palmer, 2002). We can and should do better.


References

1. Becher EC, Chassin MR (2001), Improving the quality of healthcare: who will lead? Health Aff (Millwood) 20(5):164-179.
2. Brown GS, Burlingame GM, Lambert MJ et al. (2001), Pushing the quality envelope: a new outcomes management system. Psychiatr Serv 52(7):925-934.
3. Epstein RM, Hundert EM (2002), Defining and assessing professional competence. JAMA 287(2):226-235 [see comment].
4. Freudenheim M (2001), In a shift, an H.M.O. rewards doctors for quality care. New York Times July 11:C1,4.
5. Hermann RC, Palmer RH (2002), Common ground: a framework for selecting core quality measures for mental health and substance abuse care. Psychiatr Serv 53(3):281-287.
6. Lehman AF (2001), Keeping practice current. Psychiatr Serv 52(9):1133 [editorial].
7. Mason J, Freemantle N, Nazareth I et al. (2001), When is it cost-effective to change the behavior of health professionals? JAMA 286(23):2988-2992.
8. Mohl PC (1996), Confessions of a concurrent reviewer. Psychiatr Serv 47(1):35-40.