Editorial | Free access | 10.1172/JCI59185
L. Turka and A. Caplan
Published July 1, 2011
The term evidence-based medicine is overused, abused, and beginning to ring hollow. It is not that evidence (or at least what most people in biomedicine think evidence-based medicine should strive to be) is a bad thing. Rather, there is more rhetoric about evidence than there is actual evidence to support the degree of talk.
In medical and/or graduate school, we are taught to seek new knowledge and to question common wisdom. We learn that therapies for disease have evolved over time. Some outmoded approaches, such as bloodletting, purging, the Feingold diet, and gastric freezing, did not live up to their initial promise. These techniques (presumably) all were tested before being applied to patients, but, given their ultimate lack of efficacy, just how rigorous was the testing? Other, more contemporary treatments, such as chelation therapy, acupuncture, biofeedback, transcutaneous electrical nerve stimulation, and laser spine surgery, remain highly controversial and may never undergo the type of evaluation that provides convincing evidence of efficacy.
Medical students today are taught to practice evidence-based medicine. This philosophy was pioneered by Archie Cochrane in the late 1950s while he was studying lung diseases in residents of the Welsh coal mining community of Rhondda Fach. This work culminated in his classic 1972 book Effectiveness and Efficiency: Random Reflections on Health Services. Another pioneer in the field of evidence-based medicine, David Sackett, has observed that “good doctors use both individual clinical expertise and the best available external evidence, and neither alone is enough” (1), suggesting a role for individual judgment, values, and experience (2).
Yet exactly what “evidence” means is not clear. Nor is it clear that there is all that much actual evidence to guide practice. While virtually all agree that the ideal evidence comes in the form of randomized, placebo-controlled, double-blinded clinical trials, there are huge problems if evidence of this sort is really to be our guide.
First, a surprisingly and dismayingly large percentage of clinical trials fail to fully enroll. As most trials are not greatly overpowered to begin with, this means that they often fail to reach a convincing conclusion, wasting money and time and often violating the implied bargain we make with subjects, whose good-faith efforts in agreeing to participate come to little.
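As a purely illustrative sketch of this point (the effect size and enrollment figures below are hypothetical and not drawn from any trial discussed here), the following Python snippet shows how sharply statistical power erodes when a trial enrolls only part of its planned sample:

    # Hypothetical power calculation (assumed effect size and enrollment),
    # using statsmodels' two-sample t-test power calculator.
    from statsmodels.stats.power import TTestIndPower

    analysis = TTestIndPower()
    effect_size = 0.3   # assumed small-to-moderate standardized difference
    alpha = 0.05        # conventional two-sided significance level

    # Per-arm enrollment target for 90% power (about 235 patients per arm).
    planned_n = analysis.solve_power(effect_size=effect_size, power=0.90, alpha=alpha)
    print(f"planned per-arm n for 90% power: {planned_n:.0f}")

    # If only 60% of the target enrolls, power falls to roughly 70%.
    actual_power = analysis.power(effect_size=effect_size,
                                  nobs1=0.6 * planned_n, ratio=1.0, alpha=alpha)
    print(f"power at 60% enrollment: {actual_power:.2f}")

Under these assumptions, a trial that reaches only 60% of its enrollment target has perhaps a 70% chance of detecting a true effect, which is why under-enrolled trials so often end inconclusively.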
Second, and of even more fundamental importance, many illnesses are treated with therapies that have never been tested in any type of clinical trial. Some therapies seem patently obvious (e.g., dialysis is superior to no treatment for patients with end-stage renal failure); others are probably harmless (e.g., hot tea with lemon for a stuffy nose). Yet there is a lot of room between these two extremes, where a great number of treatments are still based on conventional wisdom but little hard evidence. One might argue that such treatments are experimental and, as such, should even require informed consent. However, because they are considered standard of care, no such consent is sought. While we are not suggesting that patients should sign a waiver before applying Neosporin to a blister, we doubt that most patients understand that the effectiveness of many treatments has never been rigorously proven.
Third, many treatments rest on one or two clinical trials, which often involve a narrow sample of those who ultimately receive the treatment. Women, children, the elderly, and the institutionalized are notoriously underrepresented in most trials. Trials are often conducted on a narrow range of subjects in terms of age, ethnicity, race, class, geography, and sex. Yet a demonstration of efficacy in a focused population leads rather rapidly to use in a much more diverse population of patients. The road from efficacy to effectiveness, in terms of the evidence used, is too often both narrow and short.
We need a wholesale rethinking of how to approach evidence. If definitive trials exist, then the indicated therapies should be the norm. Anything different is a deviation from the standard of care and should be undertaken only as part of a clinical trial. Conversely, when there is no clear consensus about existing treatments, we must strive for proper trials. In instances in which such trials face huge challenges, we must develop databases that can track anonymized patient information and outcomes so as to identify both problems and successes. Patients should be told in general terms just how solid the evidence is for their treatments: what is based on gold-standard trials and what is based more on personal clinical experience, custom, or intuition.
We recognize that there are many problems with these proposals. Trials are expensive; they can be hard to conduct when diseases are rare; physicians are reluctant to enroll patients in trials of long-standing treatments; and patients may be even more reluctant to participate when they believe that what providers customarily offer has already been proven effective, though often it has not. Even the compilation of patient results into databases for subsequent analysis, when trials simply cannot be done, raises legitimate concerns about privacy. IRBs are already overwhelmed, finding independent trialists to do the mountain of studies needed is next to impossible, and the cost involved is staggering. Need we go on?
Yet, if we simply shrug our shoulders and accept that grounding practice on evidence is difficult to achieve, then we certainly will never succeed. What to do?
Researchers must support efforts to encourage efficacy and effectiveness assessment as part of health reform. Clinicians must understand the protection they have and the vulnerability they face in terms of liability when operating outside a widely recognized standard of care. Students must not simply be taught to respect evidence but must be taught more about how to interpret it, how to generate it, and where to find it. The evidence that we are doing what needs to be done to put health care on a firm foundation of evidence is not yet persuasive.