Increased demand for accountability
is requiring more clinicians
to supplement their judgments
of patient outcome with standardized
and objective protocols. In 2004,
Massachusetts required its behavioral
health contractor to have all its providers
conduct outcome assessments.1 At the
federal level, the Health Care Financing
Administration mandated in 1998 that
contractors include outcome evaluations,2
and now Medicare is moving
rapidly toward differential payments
based on performance and outcome.3
Although the pressure for standardized
assessment comes from external
forces, clinical and scientific benefits
will result. Repeated assessments will
enable dynamic adjustment of treatment,
using either formal algorithms
of evidence-based practice or individualized
clinical pathways; and such
assessments will help answer the key
question in all health interventions: are
our patients getting better?
This question is not easy to answer
in real world (ie, nonacademic) psychiatry,
4,5 where patients differ greatly in
compliance, comorbidity, and other
parameters that are well controlled in
academic research. Moreover, dissemination
is imperfect. Even when
evidence-based educational interventions
(EBEI) are delivered, the impact
on clinician behavior varies greatly.6,7
For example, a recent study showed that
only half of the clinicians who received
EBEI training applied the knowledge,
while half had a “knowledge-behavior
gap.”8 Thus, truly unfiltered translation
of best practices from the laboratory to
the field is only rarely possible.4
Since treatment and patient factors
in real-life practice environments vary
from those in laboratory studies,9 knowing
whether real-world patients get
better requires assessment in the real-world
environment. Knowledge about
what works in the real world (effectiveness)
would complement the increasing
knowledge about what works
in the laboratory (efficacy).
Outcome Evaluation in the Real World
Outcome evaluation has not been widespread
in psychiatric practices in the
United States, because the costs have
generally been viewed as outweighing
the benefits. Even when outcome evaluation has been instituted by public agencies, it has sometimes been curtailed
because of fiscal constraints.10 When
such evaluation becomes required,
providers have objected to the increased
demands, since they are unaccompanied
by sufficient fees.1 As psychiatrists face
increasing pressures ranging from reduced
reimbursement (with compensatory
increase in the number of patients
seen) to increased paperwork and regulations,
the time to measure outcome
becomes ever harder to carve out. Moreover,
the decision to evaluate outcome
subsumes a set of more demanding, detailed
methodologic questions:
- What variables should be measured:
level of distress (anxiety or depression),
functioning (work, school, or social
situations), or quality of life?
- Should the same data be collected
from all patients or individualized
(are the data from patients with
agoraphobia the same as from patients
with major depression)?
- Data source: patient (self-report
instrument), significant other, or
clinician (structured interview or
clinician rating)?
- When should the data be collected:
pretreatment, and how frequently
thereafter?
- Properties of the instrument: reliability,
validity, existence of appropriate
norms, sensitivity, specificity, and
ability to measure change.
- Costs of the instrument and copyright.
- Ease and time to complete: how well
tolerated is the instrument, and how
will this contribute to compliance?
- Method of data collection, storage
and analysis (all electronic vs all
paper and pencil vs combination).
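The design decisions enumerated above can be made concrete as a small configuration sketch. The following Python is purely illustrative (the class names, fields, and assessment schedule are our assumptions, not the actual APS system); it shows one way a practice might encode what to measure, from whom, when, and how, and then compute which assessments are due for a given patient.

```python
from dataclasses import dataclass
from datetime import date
from typing import Dict, List

# Illustrative sketch only: these names and fields are assumptions,
# not the actual protocol described in the article.

@dataclass
class OutcomeProtocol:
    variables: List[str]       # what to measure: distress, functioning, quality of life
    source: str                # "patient", "significant other", or "clinician"
    schedule_weeks: List[int]  # when to collect (0 = pretreatment)
    instrument: str            # which instrument is administered
    collection_method: str     # "electronic", "paper", or "combination"

@dataclass
class Assessment:
    patient_id: str
    visit_date: date
    scores: Dict[str, float]   # variable name -> score at this visit

def due_assessments(protocol: OutcomeProtocol, weeks_in_treatment: int) -> List[int]:
    """Return the scheduled collection points already reached."""
    return [w for w in protocol.schedule_weeks if w <= weeks_in_treatment]

# Example: a depression protocol assessed pretreatment and every 4 weeks.
protocol = OutcomeProtocol(
    variables=["depression severity", "work functioning"],
    source="patient",
    schedule_weeks=[0, 4, 8, 12],
    instrument="self-report questionnaire",
    collection_method="electronic",
)
print(due_assessments(protocol, 9))  # → [0, 4, 8]
```

A structure like this makes the trade-offs in the list explicit: each question becomes a field that must be decided once, up front, rather than renegotiated at every visit.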
Finally, as Klein and Smith4 point
out, if patient evaluation is not unobtrusively
incorporated into normal clinical
activities, it is impossible to know
if outcome findings are an artifact of
the supervening assessment process.
It is no surprise, then, that psychiatrists
and the mental health profession
in general have not rushed to embrace
outcome evaluation. Presently, the fragmented
system of health care delivery
and reimbursement in America (fee-for-service,
HMOs, capitated models,
public sector) provides varying incentives
and disincentives to conduct outcome
evaluation. Capitated practices
that compete for contracts, based on
service, outcome, and price, provide a
strong incentive to assess outcome.
Alabama Psychiatric Services (APS),
a multioffice private practice providing
psychiatric care for more than 1 million
covered individuals with a capitated
model, has been routinely conducting outcome
evaluations for 5 years. To illustrate one
way to consistently integrate meaningful
outcome evaluation into clinical
care, we show how APS addressed the
methodologic and practical questions
outlined above and describe our protocol
for data collection.
1. Barlow DH. What’s new about evidence-based
assessment? Psychol Assess. 2005;17:308-311.
2. Rapp CA, Bond GR, Becker DR, et al. The role of
state mental health authorities in promoting improved
client outcomes through evidence-based practice.
Community Ment Health J. 2005;41:347-363.
3. McClellan M. Testimony on Value-Based Purchasing
for Physicians Under Medicare. Washington, DC:
House Ways and Means Subcommittee on Health; 2005.
4. Klein DF, Smith LB. Organizational requirements
for effective clinical effectiveness studies. Prevention
& Treatment [serial online]. March 1999;2(1). Available
from APA Online. Accessed April 21, 2006.
5. Price CS. Are our patients getting better? Psychiatric
News. March 1, 2002;37(5):18. Available at: http://pn.psychiatryonline.org/cgi/content/full/37/5/18-a. Accessed April 21, 2006.
6. Davis DA, Thomson MA, Oxman AD, Haynes RB.
Evidence for the effectiveness of CME: a review of
50 randomized controlled trials. JAMA. 1992;268:
7. Davis DA, Thomson MA, Oxman AD, Haynes RB.
Changing physician performance: a systematic review
of the effect of continuing medical education strategies.
8. Kennedy T, Regehr G, Rosenfield J, et al. Exploring
the gap between knowledge and behavior: a qualitative
study of clinician action following an educational
intervention. Acad Med. 2004;79:386-393.
9. Salzer MS, Blank M, Rothbard A, Hadley T. Adult
mental health services in the 21st century. In: Status
of Mental Health Services at the Millennium. Available
at: allpubs/SMA01-3537/chapter11.asp. Accessed April
10. McPheeters HL. Statewide mental health outcome
evaluation: a perspective of two southern states.
Community Ment Health J. 1984;20:44-55.
11. Speer DC. Mental Health Outcome Evaluation.
San Diego: Academic Press; 1998.
12. Bufka LF, Crawford JL, Levitt JT. Brief screening
instruments for managed care and primary care. In:
Antony MM, Barlow DH, eds. Handbook of Assessment
and Treatment Planning for Psychological Disorders.
New York: Guilford Press; 2004:38-66.
13. Hamilton M. A rating scale for depression. J Neurol
Neurosurg Psychiatry. 1960;23:56-61.
14. Joiner TE Jr, Walker RL, Pettit JW, et al. Evidence-based
assessment of depression in adults. Psychol
15. Seligman ME. The effectiveness of psychotherapy:
the Consumer Reports study. Am Psychol.
16. Shedler J, Beck A, Bensen S. Practical mental
health assessment in primary care: validity and utility
of the Quick PsychoDiagnostics Panel. J Fam Pract.
17. Hoencamp E, Haffmans PM, Griens AM, et al. A
3.5-year naturalistic follow-up study of depressed
out-patients. J Affect Disord. 2001;66:267-271.
18. Maj M, Veltro F, Pirozzi R, et al. Pattern of recurrence
of illness after recovery from an episode of
major depression: a prospective study. Am J
19. Schulberg HC, Block MR, Madonia MJ, et al.
Treating major depression in primary care practice:
eight-month clinical outcomes. Arch Gen Psychiatry.
20. Posternak MA, Zimmerman M, Solomon DA.
Integrating outcomes research into clinical practice.
Psychiatr Serv. 2002;53:335-336.
21. Solomon DA, Keller MB, Leon AC, et al. Recovery
from major depression: a 10-year prospective follow-up
across multiple episodes. Arch Gen Psychiatry.
22. Weisz JR, Weiss B. Effects of Psychotherapy
With Children and Adolescents. London: Sage
Publications, Inc; 1993.