How Do Physicians Use Practice Guidelines?

There are still many questions about how to develop useful guidelines, introduce them into clinical practice, and measure their effectiveness. What does it mean to use a guideline in practice?

August 2006, Vol. XXIII, No. 9

A clinical practice guideline (CPG) is a set of rules or suggestions for the treatment of a clinical disorder. In my last article, I asked whether physicians used CPGs. I reviewed several studies of the problems with guidelines, and discussed some of the barriers to their use by physicians.

The studies demonstrated that there are still many questions about how to develop useful guidelines, introduce them into clinical practice, and measure their effectiveness. In this article, I want to discuss a more fundamental question: What does it mean to use a guideline in practice? Do clinicians have to consult it daily or turn to it only when they have a question? Do they follow it exactly or use it only as a guide?

CPGs and clinical judgment

Any discussion of CPGs has to consider the ongoing debate about the relative roles of evidence-based medicine and clinical judgment in the treatment of patients.1-4 The argument is between evidence-based advocates, who want to model patient care on statistical information derived from therapeutic randomized controlled trials (RCTs) of large groups of patients, and those who think that clinical judgment or common sense gained in the process of caring for individual patients should be sufficient.5

CPGs are based on interpretations of RCT results by clinicians who are experts in the treatment of the disorder. Many physicians think that this process produces valid and reliable standards of care that should be followed as closely as possible. They argue that clinical judgment alone can be misleading because humans commonly make cognitive errors, particularly in the assessment of probability, that might bias their clinical decisions about the effectiveness of a specific therapy.6,7

Other physicians are skeptical about the increasing emphasis on RCTs and the CPGs derived from them. They argue that RCTs are not always as scientific, objective, or comprehensive as they are purported to be because researchers can be biased in the way in which they conduct their studies, and experts can be biased in how they interpret the results of those studies.8 Skeptics claim that guidelines are often too simplistic to deal with the variability of expression of an illness in different patients.

Most CPGs state that physicians should use their clinical judgment in applying the guidelines.9 Critics, however, contend that CPGs have the subtle effect of deemphasizing the importance of a physician's judgment in the treatment of a patient. Moreover, they argue that many important clinical decisions are based on common sense and don't require a CPG.3,10 Some have even charged that the evidence-based medicine movement has become a cult that hinders the provision of appropriate care.11

There is some truth on each side of the argument. Clinicians make common errors in judgment and experts can be biased in their interpretation of research data. Guidelines are unlikely to explicitly address the variability of expression of illness in every patient. It is also true that many clinical decisions do not require a CPG; few physicians need a guideline to justify suturing a large wound to stop bleeding and promote healing. Yet few CPGs are that simplistic. Most are developed to address more complex treatment issues that contain too many variables to be easily resolved by common sense. At the same time, physicians always have an obligation to use clinical judgment to question the validity of these guidelines for their patients.12

Types of CPGs

The 2 main types of CPGs, comprehensive guidelines and clinical algorithms, differ in the amount and detail of information they provide and in the comprehensiveness, specificity, and strength of their treatment recommendations.

Comprehensive guidelines are detailed clinical monographs that discuss the advantages and disadvantages of the various treatment options for a disorder and rate the strength of the scientific evidence for each of the treatments.13 They typically make general recommendations and cede broad discretion to the clinician to decide which of several treatment approaches to prescribe.

Clinical algorithms are narrower in scope and more structured than comprehensive guidelines.14 They usually focus on a specific type of treatment. Medication algorithms, for example, provide step-by-step flowcharts that specify the types of medication, doses to be prescribed, number of treatment sessions, and other factors. They allow patients and physicians to make treatment choices, but these choices are more restricted than those in the comprehensive guidelines. Algorithms try to explicitly outline every step in the provision of care so that treatment can be standardized and data can be collected to monitor physician compliance with the guideline and patient response to treatment.
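To make the contrast concrete, the defining feature of an algorithm is that its steps can be written down explicitly, which is what makes standardization and adherence monitoring possible. The sketch below is a minimal, hypothetical illustration in Python: the step names, drugs, dose ranges, and response categories are invented for this example and do not represent any published algorithm (such as the Texas algorithms cited in the references).

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical, simplified algorithm step; names, doses, and review
# intervals are invented for illustration only.
@dataclass
class Step:
    name: str
    drug: str
    dose_range_mg: tuple   # (minimum, maximum) daily dose
    weeks_before_review: int

# An illustrative three-step sequence; a real algorithm would be far richer.
ALGORITHM = [
    Step("Step 1", "drug_a", (50, 200), 4),
    Step("Step 2", "drug_b", (25, 100), 6),
    Step("Step 3", "drug_a_plus_augmentation", (50, 200), 6),
]

def next_step(current: int, response: str) -> Optional[int]:
    """Return the index of the next step for a coarse response category.

    'remission' ends the algorithm, 'partial' stays on the current step
    (e.g., for dose optimization), and 'none' or 'intolerable' advances
    to the next step if one exists.
    """
    if response == "remission":
        return None                 # treatment goal reached
    if response == "partial":
        return current              # optimize dose at this step
    if current + 1 < len(ALGORITHM):
        return current + 1          # move to the next step
    return None                     # algorithm exhausted; clinical judgment takes over

# Example: a patient with no response at Step 1 moves to Step 2.
print(ALGORITHM[next_step(0, "none")].name)   # -> "Step 2"
```

The point of the sketch is only that once the steps are explicit, deviations from them, and the reasons for them, become visible and recordable; as the next paragraph notes, that is also exactly where clinical judgment re-enters.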

Once clinicians try to apply these CPGs, they soon realize that even the most comprehensive or structured guideline cannot anticipate every variation in a patient's clinical response to treatment. Some patients will respond to medication at a lower-than-recommended dose. Others will have partial responses to medication, take much longer than expected to respond to treatment, or require higher doses than those specified in the guideline. Still others will have debilitating adverse effects. At each step along the way, physicians will have to make decisions about many factors, including the type and dosage of medication. Given these problems, how do physicians incorporate guidelines into their practices and integrate them with their clinical judgment?

Mindlines, not guidelines

One intriguing answer to this question comes from an ethnographic study by Gabbay and le May15 of how physicians in 2 British general medical practices used CPGs in the course of making clinical decisions. The investigators discovered that the physicians in the study examined CPGs informally (eg, if they came from an authoritative source such as a professional group) to make sure that their own practice was in reasonable compliance with the standards of care. They had access to the Internet and several expert systems, but rarely used information from these sources or from research in their work.

Instead, they relied on what the authors called mindlines, “collectively reinforced, internalized tacit guidelines,” that were based on some reading but mainly on interactions with colleagues, especially with “trusted” sources such as opinion leaders. These mindlines were adjusted over time by experiences with patients, reading, and continuing meetings and discussions with colleagues within their “communities of practice.” CPGs were only one of the many sources of information that the clinicians used to inform their clinical practice.

These results are consistent with the findings of an earlier study of the diffusion of medical innovation.16 That study of the prescription of a new antibiotic by a large group of general practitioners in the Midwest also showed that discussions with colleagues who had prescribed the antibiotic were one of the major factors influencing physicians to adopt the new drug.

The concept of tacit knowledge used by Gabbay and le May requires some elaboration. Tacit knowledge was defined by Polanyi17,18 as knowledge gained in practice as opposed to empirical knowledge gained from formal sources, such as research, lectures, or reading. Clinicians usually cannot explain the basis of their tacit knowledge because it often contains hazy ideas that do not follow the rigorous forms of traditional logic. It could be thought of as professional intuition or common sense based on extended clinical experience.

In Gabbay and le May's model, CPGs are digested in the context of a clinician's tacit knowledge and that of his professional community to produce mindlines. The authors caution, however, that the 2 medical practices they studied may not be representative of other clinical practices. Yet the results do raise interesting questions about how clinicians gather and use information and knowledge in their practices.

Other authors have studied the use of brief heuristics, or rules of thumb, in treating patients. Andre and colleagues19,20 used focus groups to investigate the use of clinical rules of thumb by 23 Swedish general practitioners. The rules were grouped into 2 categories: those used to evaluate potential medical emergencies and those used to evaluate routine somatic and psychosocial problems. Examples included: “For me a chest pain in someone whom I've never seen before becomes . . . a quick thought toward the hospital”; “Flashes in the eye and some spots, it's of course vitreous detachment”; and “So when I notice that I've gone over the line, I say, 'Now maybe you think what I've said has been quite troublesome? Shall we step back a little?'”

Some of the rules have the rather inarticulate, fuzzy nature of tacit knowledge. They are often used to make clinical decisions that involve assessments of risk and probability. Gabbay and le May argued that mindlines were similar to, but more complex and comprehensive than, the rules of thumb. Both are forms of clinical judgment; mindlines have the advantage of validation by the peer professional community.

A false dichotomy

It might be useful to think of a CPG as an intellectual construct analogous to a legal constitution. Both systems represent general rules that govern human behavior. Each has to be interpreted for the individual event. Lawyers use their legal judgment to apply the constitution to specific cases. Physicians use clinical judgment to apply the guidelines to the treatment of specific patients. In that sense, a CPG represents the broad outline of treatment for a specific disorder and defines the limits of acceptable care as determined by the profession. The role of clinical judgment is to fine-tune this outline for each patient.

The debate between evidence-based medicine and clinical judgment represents a false dichotomy. The 2 approaches address different levels of clinical information. They overlap and periodically encroach on each other's territory. They represent a constant, sometimes contentious dialog about the most appropriate care, but they are not mutually exclusive.

Dr Fauman is the author of Negotiating Managed Care: A Manual for Clinicians and Study Guide to DSM-IV-TR, both published by American Psychiatric Publishing, Inc. He is adjunct clinical associate professor of psychiatry at the University of Michigan in Ann Arbor and medical director of Magellan Health Services of Michigan.

References

1. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn't. BMJ. 1996;312:71-72.
2. Jones M, Higgs J. Will evidence-based practice take the reasoning out of practice? In: Higgs J, Jones M, eds. Clinical Reasoning in the Health Professions. 2nd ed. Oxford, England: Butterworth-Heinemann; 2000:307-315.
3. Michelson J. Critique of (im)pure reason: evidence-based medicine and common sense. J Eval Clin Pract. 2004;10:157-161.
4. Porta M. Is there life after evidence-based medicine? J Eval Clin Pract. 2004;10:147-152.
5. Miles A, Grey JE, Polychronis A, et al. Developments in the evidence-based health care debate-2004. J Eval Clin Pract. 2004;10:129-142.
6. Tversky A, Kahneman D. Judgment under uncertainty: heuristics and biases. Science. 1974;185:1124-1131.
7. Chapman GB, Elstein AS. Cognitive processes and biases in medical decision making. In: Chapman GB, Sonnenberg FA, eds. Decision Making in Health Care: Theory, Psychology, and Applications. New York: Cambridge University Press; 2000:183-210.
8. Hampton JR. Guidelines-for the obedience of fools and the guidance of wise men? Clin Med. 2003;3:279-284.
9. Suppes T, Dennehy EB, Hirschfeld RM, et al; Texas Consensus Conference Panel on Medication Treatment of Bipolar Disorder. The Texas Implementation of Medication Algorithms: update to the algorithms for treatment of bipolar I disorder. J Clin Psychiatry. 2005;66:870-886.
10. Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003;327:1459-1461.
11. Clinicians for the Restoration of Autonomous Practice (CRAP) Writing Group. EBM: unmasking the ugly truth. BMJ. 2002;325:1496-1498.
12. Genuis SK, Genuis SJ. Exploring the continuum: medical information to effective clinical practice. Paper I: the translation of knowledge into clinical practice. J Eval Clin Pract. 2006;12:49-62.
13. American Psychiatric Association. Practice guideline for the treatment of patients with major depressive disorder (revision). Am J Psychiatry. 2000;157(suppl 4):1-45.
14. Dennehy EB, Suppes T, Rush AJ, et al. Development of a computerized assessment of clinician adherence to a treatment guideline for patients with bipolar disorder. J Psychiatr Res. 2004;38:285-294.
15. Gabbay J, le May A. Evidence based guidelines or collectively constructed "mindlines?" Ethnographic study of knowledge management in primary care. BMJ. 2004;329:1013.
16. Coleman JS, Katz E, Menzel H. Medical Innovation: A Diffusion Study. New York: Bobbs-Merrill; 1966.
17. Polanyi M. The Tacit Dimension. Garden City, New York: Anchor Books, Doubleday & Company, Inc; 1966.
18. Scott D. Everyman Revived: The Common Sense of Michael Polanyi. Grand Rapids, Mich: William B. Eerdmans Publishing Company; 1995.
19. Andre M, Borgquist L, Foldevi M, Molstad S. Asking for 'rules of thumb': a way to discover tacit knowledge in general practice. Fam Pract. 2002;19:617-622.
20. Andre M, Borgquist L, Molstad S. Use of rules of thumb in the consultation in general practice: an act of balance between the individual and the general perspective. Fam Pract. 2003;20:514-519.
