In recent years, we have witnessed an increasing focus on “evidence-based medicine.” Indeed, for the first time, the American Diabetes Association (ADA) this year has provided evidence gradings for its position statement on “Standards of Medical Care for Patients With Diabetes Mellitus.” This position statement is reprinted in abridged form in this issue (page 24). The entire document can be found in Diabetes Care1 or on the ADA Web site at http://care.diabetesjournals.org/cgi/content/full/25/1/213.

What is evidence-based medicine? What are its strengths and limitations? Does it reflect a passing craze or a true evolution in clinical practice?

Sackett and colleagues defined evidence-based medicine as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”2 The strength of evidence-based medicine is that it moves clinical practice from anecdotal experience and expert opinion to a strong scientific foundation. It integrates clinical medicine with basic and clinical research and thus enhances the effectiveness and safety of diagnostic, preventive, and therapeutic measures.

In general, evidence-based medicine advocates that experimental methods—that is, randomized, controlled clinical trials (RCTs)—provide the gold standard for evaluation and the basis for clinical practice. The strength of RCTs lies in their internal validity. In RCTs, randomization ensures that treatment groups differ only in their exposure to the intervention, and hence, differences in observed effects can be attributed to differences in the intervention.

Conducting systematic reviews of the literature and developing hierarchies for grading evidence have been necessary components of evidence-based medicine.3 The latter have often assessed both the strength of the effect and the quality of the evidence. Using these systems, evidence-based guidelines have evolved and proliferated.

The ADA is moving in this direction. Already, the National Guideline Clearinghouse, an Internet-based public resource, offers access to evidence-based clinical practice guidelines and allows comparisons of recommendations produced by different organizations.3 Commercial groups have also sought to facilitate access to high-quality, evidence-based information. For example, Ovid’s Evidence-Based Medicine Reviews enables searches of databases for articles that meet criteria for evidence-based decision-making.3 Similarly, the Cochrane Collaboration prepares and maintains rigorous, systematic, and up-to-date reviews and meta-analyses of the benefits and risks of health care interventions.3

Yet it is clear that there are limitations to evidence-based medicine. RCTs may not meet all of the needs of patients, practitioners, and policy makers.4 The task of conducting all of the required RCTs is overwhelming. Health care comprises a huge number of interventions, each with many components, and it is simply impossible to subject all of them to experimental evaluation.

One result is that some interventions are studied, and some are not. In addition, some types of interventions are more likely to be studied than others. For example, drug interventions are studied more extensively than nonpharmacological interventions, both because of regulatory requirements and industry support and because of the technical and methodological difficulties in designing RCTs for nondrug interventions. As a result, the literature often fails to provide convincing evidence for complex behavioral interventions, such as education, diet, and lifestyle modification.

In some instances, RCTs may be unnecessary (as in the use of insulin in type 1 diabetes), inappropriate (as for accurately measuring infrequent adverse outcomes, such as liver failure associated with troglitazone [Rezulin] therapy), or impossible.4 The latter may occur if there are political, ethical, or legal obstacles; if some interventions cannot be allocated on a random basis; or if potential subjects and investigators refuse to participate.4

Finally, RCTs may simply be inadequate to answer the question at hand. Comorbidities are a frequent reason for excluding patients from RCTs. Unfortunately, they are also a common feature of patients in clinical practice. How do the findings of RCTs apply to patients with multiple comorbidities? Such patients may require multiple elementary interventions, and combinations of evidence-based interventions may not themselves be evidence-based.5

Perhaps the major limitation of RCTs lies in their external validity, that is, the extent to which their results are generalizable to all potential subjects, practitioners, and settings.4 When the patients who participate in RCTs are not typical, the health care professionals who participate are unrepresentative, or the setting is atypical, the external validity or “generalizability” of the results may be low. As a result, an RCT generally offers an indication of the efficacy of an intervention (what can be achieved in the most favorable circumstances) rather than its effectiveness (what can be achieved in everyday clinical practice).

Although RCTs may be the best way to assess whether interventions work, they may not adequately assess who will benefit from them. Well-designed observational studies may provide useful guidance as to who is most likely to adopt and benefit from an intervention.6 

A further limitation of evidence-based medicine derives from an understanding of the limits of the scientific method in clinical practice. Clinical decisions involve people, and the application of results from clinical trials and basic research to clinical practice must take into account people in their social context.7 

Clinical judgment is central to clinical practice and involves weighing the benefits and risks in any medical choice. Clinical trials explicitly focus on hard endpoints, such as physiological measures and disease incidence or mortality. They often fail to focus on soft endpoints, such as patient preferences or quality of life. To the extent that the latter influence clinical decision-making, nonscientific mechanisms may guide decisions.7 Indeed, as stated by Sackett, “External clinical evidence can inform, but can never replace, individual clinical expertise. It is this expertise that decides whether the external evidence applies to the individual patient at all and, if so, how it should be integrated into a clinical decision. Any external guideline must be integrated with individual clinical expertise in deciding whether and how it matches the patient’s clinical state, predicament, and preferences and thus whether it should be applied. Clinicians who fear top-down cookbooks will find the advocates of evidence-based medicine joining them at the barricades.”2 

In summary, clinical medicine should be subject to empirical assessment, and RCTs provide the most rigorous evaluation of clinical effectiveness. Evidence-based medicine should not diminish the importance of human relationships or ignore the fact that clinical decisions in primary care involve consideration of the unique problems and concerns of individual patients. Evidence-based medicine has the potential to decrease practice variation and to improve the effectiveness and efficiency of care. Evidence-based medicine should be welcomed as a positive development in clinical medicine.

William H. Herman, MD, MPH, is a professor of internal medicine and epidemiology in the Division of Endocrinology and Metabolism and Interim Director of the Michigan Diabetes Research and Training Center at the University of Michigan in Ann Arbor.

1. American Diabetes Association: Standards of Medical Care for Patients With Diabetes Mellitus (Position Statement). Diabetes Care 25:213–229, 2002
2. Sackett DL, Rosenberg WMC, Gray JAM, Haynes RB, Richardson WS: Evidence-based medicine: what it is and what it isn’t. BMJ 312:71–72, 1996
3. Jadad AR, Haynes RB, Hunt D, Browman GP: The Internet and evidence-based decision-making: a needed synergy for efficient knowledge management in health care. Can Med Assoc J 162:362–365, 2000
4. Black N: Why we need observational studies to evaluate the effectiveness of health care. BMJ 312:1215–1218, 1996
5. van Weel C, Knottnerus JA: Evidence-based interventions and comprehensive treatment. Lancet 353:916–918, 1999
6. McKee M, Britton A, Black N, McPherson K, Sanderson C, Bain C: Methods in health services research: interpreting the evidence: choosing between randomised and non-randomised studies. BMJ 319:312–315, 1999
7. Kenny NP: Does good science make good medicine? Incorporating evidence into practice is complicated by the fact that clinical practice is as much art as science. Can Med Assoc J 157:33–36, 1997