Measuring Outcome in Psychiatric Private Practice Using Outpatient Self-Reports

Psychiatric Times, Vol 23, No 7

Increased demand for accountability is requiring more clinicians to supplement their judgments of patient outcome with standardized and objective protocols. The protocol outlined here is a model or jumping-off point for outcome evaluation.

Increased demand for accountability is requiring more clinicians to supplement their judgments of patient outcome with standardized and objective protocols. In 2004, Massachusetts required its behavioral health contractor to have all its providers conduct outcome assessments.1 At the federal level, the Health Care Financing Administration mandated in 1998 that contractors include outcome evaluations,2 and now Medicare is moving rapidly toward differential payments based on performance and outcome.3

Although the pressure for standardized assessment comes from external forces, clinical and scientific benefits will result. Repeated assessments will enable dynamic adjustment of treatment, using either formal algorithms of evidence-based practice or individualized clinical pathways; and such assessments will help answer the key question in all health interventions: are our patients getting better?

This question is not easy to answer in real-world (ie, nonacademic) psychiatry,4,5 where patients differ greatly in compliance, comorbidity, and other parameters that are well controlled in academic research. Moreover, dissemination is imperfect. Even when evidence-based educational interventions (EBEIs) are delivered, the impact on clinician behavior varies greatly.6,7 For example, a recent study showed that only half of the clinicians who received EBEI training applied the knowledge, while half had a "knowledge-behavior gap."8 Thus, truly unfiltered translation of best practices from the laboratory to the field is only rarely possible.4

Since treatment and patient factors in real-life practice environments vary from those in laboratory studies,9 knowing whether real-world patients get better requires assessment in the real-world environment. Knowledge about what works in the real world (effectiveness) would complement the increasing knowledge about what works in the laboratory (efficacy).

Outcome evaluation in the real world

Outcome evaluation has not been widespread in psychiatric practices in the United States, because the costs have generally been viewed as outweighing the benefits. Even when outcome evaluation has been instituted by public agencies, it has sometimes been curtailed because of fiscal constraints.10 When such evaluation becomes required, providers have objected to the increased demands, since they are unaccompanied by sufficient fees.1 As psychiatrists face increasing pressures ranging from reduced reimbursement (with compensatory increases in the number of patients seen) to increased paperwork and regulations, the time to measure outcome becomes ever harder to carve out. Moreover, the decision to evaluate outcome subsumes more demanding detailed questions,11,12 including:

Methodologic


  • What variables should be measured: level of distress (level of anxiety or depression), functioning (work, school, or social situations), or quality of life?
  • Should the same data be collected from all patients or individualized (are the data from patients with agoraphobia the same as from patients with major depression)?
  • Data source: patient (self-report instrument), significant other, or clinician (structured interview, clinician rating scale).
  • When should the data be collected: pretreatment, frequency thereafter, termination, follow-up?
  • Properties of the instrument: reliability, validity, existence of appropriate norms, sensitivity, specificity, ability to measure change.


Practical


  • Costs of the instrument, copyright, licensing fees.
  • Ease and time to complete: how well tolerated is the instrument, and how will this contribute to compliance and accuracy?
  • Method of data collection, storage, and analysis (all electronic vs all paper and pencil vs a combination).


Finally, as Klein and Smith4 point out, "if patient evaluation is not unobtrusively incorporated into normal clinical activities, it is impossible to know if outcome findings are an artifact of the supervening assessment process."

It is no surprise, then, that psychiatrists and the mental health professions in general have not rushed to embrace outcome evaluation. Presently, the fragmented system of health care delivery and reimbursement in America (fee-for-service, HMOs, capitated models, public sector) provides varying incentives and disincentives to conduct outcome evaluation. Capitated practices that compete for contracts based on service, outcome, and price have a strong incentive to assess outcome.

Alabama Psychiatric Services (APS), a multioffice private practice providing psychiatric care for more than 1 million covered individuals under a capitated model, has been routinely doing outcome evaluations for 5 years. To illustrate one way to consistently integrate meaningful outcome evaluation into clinical care, we show how APS addressed the methodologic and practical questions outlined above and describe our protocol for data collection.

Outcome assessment in a private practice

APS consists of 36 adult and pediatric psychiatrists, 34 therapists, and 45 nurses in 11 psychiatric offices, situated in rural, suburban, and urban settings across Alabama. Care is provided for patients from a pool of 1 million covered individuals, about 22% of the population of Alabama. Services include office care, partial hospitalization and intensive outpatient treatment programs, and inpatient services.

All APS patients over 18 years were asked to complete the assessments; no exclusion criteria were applied. The mean age was 36.9 years, and just 2% were older than 60 years. Women made up nearly two thirds (64.4%) of the sample, a proportion that held roughly true within all diagnostic categories except substance abuse disorders, in which 69.6% were men. Insurance was provided through either the patient's or a family member's employer, and many were self-referred. A depressive disorder was diagnosed in more than 51% of patients, and anxiety disorders were diagnosed in an additional 14.7%. The complete distribution of psychiatric disorders is shown in the Table.

 
Diagnosis                           Percent of population
Major depression, recurrent         30.4
Major depression, single episode    12.2
Depressive disorder, NOS             6.3
Dysthymic disorder                   2.5
Anxiety disorder                    14.7
Substance abuse disorder             4.2
Adjustment disorder                  8.4
Other                               21.3

NOS, not otherwise specified.

Our decision about what to measure was guided by the facts that: (1) self-referred adults present most often because of subjective distress; and (2) anxiety and depression represent the most frequent symptoms across disorders. Therefore, anxiety and depression change scores were chosen to assess the overall improvement of all patients. Other clinicians/practices will make different choices about what to measure, based on philosophical/practical differences (eg, clinicians working with a population of the seriously mentally ill might choose parameters such as quality of life, hope, or functioning9).

We elected to collect data directly from patients, rather than use clinician rating instruments such as the Hamilton Rating Scale for Depression.13 This decision was driven partly by practical considerations, because clinician rating scales are generally time-intensive and not useful for routine repeated assessments.14 Seligman15 supported the use of a survey based on self-assessment, arguing that: (1) self-report is the basis of clinical diagnosis to begin with, especially in mental health; and (2) the correlations between self-report and diagnosis are usually quite high. Moreover, in the era of evidence-based medicine, and especially given our consumer-driven health environment, increasing emphasis is put on the patient's own impressions of improvement, choice of treatment, and satisfaction with treatment.

It was then crucial to find a self-rating instrument that could be seamlessly integrated into the clinical routine while being sensitive, reliable, brief, and well tolerated. The Quick PsychoDiagnostics (QPD) Panel16 is a fully automated true/false test with good psychometric properties that is given on a handheld electronic unit and takes an average of 6.2 minutes to complete. It is used at Kaiser Permanente and in other primary care environments.

Internal logic allows the test to customize questions based on the responses to previous questions (ie, healthier patients are spared irrelevant questions and more ill patients are further assessed). Thus, the QPD is both a categorical diagnostic instrument that gives diagnostic suggestions based on DSM-IV criteria and a dimensional one that measures symptom severity.16
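The QPD's branching logic is proprietary, but the general idea of response-driven skip logic can be sketched as follows. Everything in this sketch is hypothetical: the item names, the gateway/follow-up split, and the `screen_depression` helper are illustrative, not the QPD's actual content or algorithm.

```python
def screen_depression(answers):
    """Toy response-driven skip logic: gateway items are always asked;
    the full symptom block is administered only if a gateway item is
    endorsed. Items and branching are illustrative, not the QPD's."""
    gateway = ["depressed_mood", "loss_of_interest"]
    followup = ["sleep_change", "fatigue", "worthlessness", "concentration"]

    asked = list(gateway)
    score = sum(answers.get(item, False) for item in gateway)

    # Healthier patients are spared the follow-up block entirely.
    if score > 0:
        asked += followup
        score += sum(answers.get(item, False) for item in followup)

    return asked, score

# A patient denying both gateway items answers only 2 items;
# a patient endorsing a gateway item answers all 6.
healthy_asked, healthy_score = screen_depression(
    {"depressed_mood": False, "loss_of_interest": False})
ill_asked, ill_score = screen_depression(
    {"depressed_mood": True, "loss_of_interest": False,
     "sleep_change": True, "fatigue": True,
     "worthlessness": False, "concentration": False})
```

This is how adaptive instruments keep completion times short for healthier patients while still yielding a dimensional severity score for those who need fuller assessment.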

The test screens for 7 mental disorders, with sensitivity ranging from 69% to 98% and specificity ranging from 90% to 97%, when compared with the Structured Clinical Interview for DSM-IV-TR.16 When the test unit is docked to a computer, a report provides numeric scores with a graphic illustration indicating the severity of depression and anxiety. As with the depression scale, change scores of 5 points on the anxiety scale on retest are considered clinically significant.

Our group purchased the QPD (including the program, with an annual licensure fee, and the computer boxes) with our own funds, enabling us to use it independently. Our information systems technicians programmed a spreadsheet database used for internal trending reports and for the outcome study. In our practice, we use the QPD as a measure of severity and a source of diagnostic suggestions; actual diagnoses are made by board-certified psychiatrists. Patients complete the assessment in the waiting room, and change scores are routinely reported to the clinician before the actual visit.
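The kind of internal trending report such a spreadsheet database can feed is easy to sketch. The column names, office names, and sample rows below are invented for illustration; the article does not describe the actual APS schema.

```python
import csv
import io
import statistics
from collections import defaultdict

# Hypothetical extract: one row per completed QPD assessment.
DATA = """office,patient_id,visit,depression_score
Birmingham,101,1,18
Birmingham,101,2,11
Huntsville,202,1,22
Huntsville,202,2,15
Huntsville,203,1,14
Huntsville,203,2,12
"""

def mean_change_by_office(csv_text):
    """Mean baseline-to-follow-up change in depression score, per office.
    Positive values indicate improvement (score decreased)."""
    first, second = {}, {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["office"], row["patient_id"])
        score = int(row["depression_score"])
        if row["visit"] == "1":
            first[key] = score
        else:
            second[key] = score
    changes = defaultdict(list)
    for key, baseline in first.items():
        if key in second:  # only patients with a completed follow-up
            changes[key[0]].append(baseline - second[key])
    return {office: statistics.mean(vals) for office, vals in changes.items()}

report = mean_change_by_office(DATA)
```

Aggregating per office rather than per clinician mirrors the group-level reporting policy discussed under Limitations.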

Outcome

First, we focused on the depression self-rating scores over time to illustrate the outcome data that emerged from this approach. Results were obtained for 10,648 adult patients seen in the practice from July 2001 through December 2004 who completed at least one follow-up assessment; 6432 of these patients completed a second follow-up assessment. To give an idea of the results, we report here on patients whose condition was diagnosed as "major depression, single episode." The Figure shows outcome (both improvement and recovery) at 2 time points for these patients: 55.8% had improved to a clinically significant degree at Time 2 (after about 150 days), and a subset of those, 38.4% of the total sample, recovered (depression scale in the normal range in addition to clinically significant improvement). At Time 3 (after about 280 days), these figures increase to 65.3% and 47.7%, respectively.
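These improvement and recovery tallies can be reproduced mechanically from paired scores. A minimal sketch, using the 5-point change criterion described above; the normal-range cutoff and the sample score pairs are hypothetical, since the QPD's actual scale boundaries are not given here:

```python
SIGNIFICANT_CHANGE = 5  # change score considered clinically significant
NORMAL_CUTOFF = 9       # hypothetical upper bound of the normal range

def classify(baseline, followup):
    """Classify a patient at follow-up. Recovery requires clinically
    significant improvement AND a score in the normal range."""
    improved = (baseline - followup) >= SIGNIFICANT_CHANGE
    recovered = improved and followup <= NORMAL_CUTOFF
    return improved, recovered

def rates(pairs):
    """Percent improved and percent recovered across (baseline, followup) pairs."""
    results = [classify(b, f) for b, f in pairs]
    n = len(results)
    improved = 100 * sum(i for i, _ in results) / n
    recovered = 100 * sum(r for _, r in results) / n
    return improved, recovered

# Four illustrative patients: a large drop into the normal range, a large
# drop while still symptomatic, a subthreshold drop, and no change.
improved_pct, recovered_pct = rates([(20, 6), (25, 14), (12, 9), (15, 15)])
```

By construction, recovered patients are always a subset of improved patients, which is why the recovery percentage in the Figure is always the smaller of the two.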

Results for diagnoses not shown in the Figure show relatively minor deviations. Nevertheless, these deviations increase confidence in the validity of our results, since they appear to follow clinically meaningful patterns. For example, persons with a diagnosis of "major depression, recurrent" recover less frequently than the entire sample, and patients with adjustment disorders recover more frequently.

Comparing our results with those of others is not straightforward because of significant differences in methodology, case mix, and improvement criteria. These variables may account for the wide range of reported rates of recovery from major depression in outpatients (from sustained improvement in 39% to improvement in 76%).17-19 Yet Posternak and associates20 observed a striking similarity in recovery rates between the Collaborative Depression Study21 (41% at 13 weeks and 54% at 26 weeks) and their small-sample private practice study (38% at 13 weeks and 57% at 26 weeks). Our own results (we report on both improvement and recovery) are similar to these findings and show that outcome measurement in a large psychiatric private practice is both possible and meaningful.

Limitations

Although our data show that it is indeed possible to routinely assess and monitor patients' well-being and treatment effectiveness in a large practice using a computer-based patient self-rating instrument, more information (eg, which treatment modality [medication, psychotherapy, or a combination] was most successful for which type of patient; control groups; correlation with external ratings) would clarify the results on improvement and remission. Nevertheless, these effectiveness data support the assumption that real-world patients with psychiatric symptoms of depression and anxiety do in fact improve, similar to results in efficacy studies. This is encouraging, since more negative findings have also been reported: a recent review of the effectiveness of community-based psychotherapy services for children found only 9 scientifically sound studies, with an average effect size close to zero.22

Another important aspect is that our practice group functions as a full-risk capitation model, so our reimbursement is not affected by an individual physician's performance. Indeed, we only share group outcome statistics with insurance companies; no individual physician's results are reported externally. Internally, the goal is to improve the practice as a whole, not to use the information to penalize an individual physician. However, in a fee-for-service environment (perhaps even with the pay-for-performance model), there may be relevant ethical concerns about how much detail of such outcome information is shared, even when this information appears to be highly valid.

Conclusions

We believe the mandate to evaluate outcome will eventually reach all psychiatrists who are not paid exclusively from a patient's own funds. Our protocol may serve as a model or jumping-off point for psychiatrists wishing to add outcome evaluation to their practice. Our experience shows that this can be done at reasonable cost; we encourage others to forge ahead now and reap the clinical benefits, while potentially contributing to policy (how pay-for-performance will evolve) and scientific knowledge.

Dr Oepen is assistant medical director for clinical services and medical education at Alabama Psychiatric Services, PC; clinical professor of psychiatry at the University of Alabama, Birmingham; and research affiliate for the consolidated department of psychiatry at McLean Hospital of Harvard Medical School.

Dr Federman is a psychologist in private practice and on the faculty of Boston University School of Medicine; he is also a consultant on statistics and outcome evaluation at Alabama Psychiatric Services.

Dr Akins is President and Medical Director of Managed Healthcare Administration, Inc in Birmingham, Alabama, and Medical Director and CEO of Alabama Psychiatric Services, PC.

The authors report no conflicts of interest concerning the subject matter of this article.

References


1. Barlow DH. What's new about evidence-based assessment? Psychol Assess. 2005;17:308-311.
2. Rapp CA, Bond GR, Becker DR, et al. The role of state mental health authorities in promoting improved client outcomes through evidence-based practice. Community Ment Health J. 2005;41:347-363.
3. McClellan M. Testimony on Value-Based Purchasing for Physicians Under Medicare. Washington, DC: House Ways and Means Subcommittee on Health; 2005.
4. Klein DF, Smith LB. Organizational requirements for effective clinical effectiveness studies. Prevent Treatment [serial online]. March 1999;2(1). Available from APA Online. Accessed April 21, 2006.
5. Price CS. Are our patients getting better? Psychiatric News. March 1, 2002;37(5):18. Available at: http://pn.psychiatryonline.org/cgi/content/full/37/5/18-a. Accessed April 21, 2006.
6. Davis DA, Thomson MA, Oxman AD, Haynes RB. Evidence for the effectiveness of CME: a review of 50 randomized controlled trials. JAMA. 1992;268:1111-1117.
7. Davis DA, Thomson MA, Oxman AD, Haynes RB. Changing physician performance: a systematic review of the effect of continuing medical education strategies. JAMA. 1995;274:700-705.
8. Kennedy T, Regehr G, Rosenfield J, et al. Exploring the gap between knowledge and behavior: a qualitative study of clinician action following an educational intervention. Acad Med. 2004;79:386-393.
9. Salzer MS, Blank M, Rothbard A, Hadley T. Adult mental health services in the 21st century. In: Status of Mental Health Services at the Millennium. Available at: http://www.mentalhealth.samhsa.gov/publications/allpubs/SMA01-3537/chapter11.asp. Accessed April 21, 2006.
10. McPheeters HL. Statewide mental health outcome evaluation: a perspective of two southern states. Community Ment Health J. 1984;20:44-55.
11. Speer DC. Mental Health Outcome Evaluation. San Diego: Academic Press; 1998.
12. Bufka LF, Crawford JL, Levitt JT. Brief screening instruments for managed care and primary care. In: Antony MM, Barlow DH, eds. Handbook of Assessment and Treatment Planning for Psychological Disorders. New York: Guilford Press; 2004:38-66.
13. Hamilton M. A rating scale for depression. J Neurol Neurosurg Psychiatry. 1960;23:56-61.
14. Joiner TE Jr, Walker RL, Pettit JW, et al. Evidence-based assessment of depression in adults. Psychol Assess. 2005;17:267-277.
15. Seligman ME. The effectiveness of psychotherapy: the Consumer Reports study. Am Psychol. 1995;50:965-974.
16. Shedler J, Beck A, Bensen S. Practical mental health assessment in primary care: validity and utility of the Quick PsychoDiagnostics Panel. J Fam Pract. 2000;49:614-621.
17. Hoencamp E, Haffmans PM, Griens AM, et al. A 3.5-year naturalistic follow-up study of depressed out-patients. J Affect Disord. 2001;66:267-271.
18. Maj M, Veltro F, Pirozzi R, et al. Pattern of recurrence of illness after recovery from an episode of major depression: a prospective study. Am J Psychiatry. 1992;149:795-800.
19. Schulberg HC, Block MR, Madonia MJ, et al. Treating major depression in primary care practice: eight-month clinical outcomes. Arch Gen Psychiatry. 1996;53:913-919.
20. Posternak MA, Zimmerman M, Solomon DA. Integrating outcomes research into clinical practice. Psychiatr Serv. 2002;53:335-336.
21. Solomon DA, Keller MB, Leon AC, et al. Recovery from major depression: a 10-year prospective follow-up across multiple episodes. Arch Gen Psychiatry. 1997;54:1001-1006.
22. Weisz JR, Weiss B. Effects of Psychotherapy With Children and Adolescents. London: Sage Publications, Inc; 1993.
