Placebo: Mind over Matter in Modern Medicine. Dylan Evans

most likely explanation. Despite what some people may say, there is no evidence that the placebo response can cure cancer.

      NO EVIDENCE?

       ‘In my experience’ is a phrase that usually introduces a statement of rank prejudice or bias. The information that follows it cannot be checked, nor has it been subjected to any analysis other than some vague tally in the speaker’s memory.

      MICHAEL CRICHTON, New England Journal of Medicine (1971)

      To say that there is ‘no evidence’ that the placebo response can cure cancer might seem too strong. After all, there is the story of Mr Wright. Surely, it might be objected, that has some evidential value. Andrew Weil, one of the most famous proponents of alternative approaches to medicine, claims that individual case-histories and personal testimonials should be taken more seriously by medical scientists. He values ‘anecdotal evidence’ and wonders, with a hint of Freudian suspicion, ‘why so many doctors have a hard time with it’.21

      In fact, the scepticism shown by many doctors today towards claims based on individual case-histories has nothing to do with any emotional unease. If anything, it is statistics that doctors have a hard time with, rather than individual case-histories. Doctors have to learn to override their natural tendencies to be swayed by personal narrative and anecdote, and it is not an easy lesson. It is a vitally important one, though, for the history of medicine has shown, over and over again, that anecdotes are worthless without a proper statistical analysis. Many hundreds of ideas about the origins of disease and claims for surefire remedies have been accepted by doctors on the basis of ‘anecdotal evidence’, only to be shown, by eventual statistical analysis, to be completely false. Take bloodletting, for example. The technique was first introduced in Egypt around 1000 BC, and then spread to Europe via Greece. For almost three thousand years it was the mainstay of medical practice in the West. Every doctor could testify to its efficacy from his own experience, and tell dozens of anecdotes about how a certain patient got better after being bled. No attempt was made to evaluate bloodletting by statistical methods until the nineteenth century, when the French physician Pierre Louis and others found that it was useless at best, and at worst positively harmful. Only then did doctors finally abandon the ancient technique that had been handed down to them by generations of physicians, all of whom had been convinced it was therapeutic.

      As has already been noted, the statistical methods of modern medical research have attracted more than their fair share of critical remarks. These criticisms reveal much about human preferences, but nothing about the value of statistics. Certainly, stories of individual patients and their triumphs over disease grip us in a way that statistics do not. This is what makes the self-help books and the New Age treatises so convincing. These volumes are littered with anecdotes about this person’s miraculous recovery from cancer, or that person’s astonishing triumph over arthritis. Such books are notoriously lacking in statistics. The serious scientific books that do contain statistics, on the other hand, leave most of us cold and unconvinced. The personal immediacy of a single human narrative tends to have more impact than the dry numerical objectivity of a mass of statistics.

      It takes a real effort of will to pay more attention to the statistical information, but this is what we must do if we are to make our decisions on a rational basis rather than by hearsay and rumour. Statistics may be unromantic, but they are a vital remedy for the instinctive human tendency to be persuaded by isolated cases and individual stories. Of course, the statistics need to be interpreted with care, and this requires skill, intelligence and patient attention to mathematical detail. And not even the most sophisticated clinical trial can guarantee truth. It follows from the very nature of statistical research that some clinical trials are bound to generate false conclusions. The doctor and writer James Le Fanu has observed that statistical research ‘has been shown to result in the adoption of ineffective treatments in 32 per cent of cases’.22 The irony of this remark should be clear; we only know that statistical research is flawed because of statistics. There is a more serious point, however, and that is that ‘anecdotal evidence’ is even less reliable than statistical evidence. Statistics are not infallible, but when it comes to medical research, they are the best tool we have.

      THE HIERARCHY OF EVIDENCE

      The hard-won lessons about the relative value of anecdotal and statistical evidence have been condensed by medical researchers into a simple formula that is now referred to as the ‘hierarchy of evidence’.23 Individual case-histories and clinical vignettes are quite properly located at the bottom of the ladder. Strictly speaking, then, we should not dismiss such stories altogether, but rather emphasise their limited evidential value. Various statistical methods of research are assigned different grades on the hierarchy of evidence, with randomised controlled trials coming very near the top. The pinnacle of the hierarchy, however, is reserved not for individual clinical trials, but for systematic reviews and meta-analyses. In these research papers, all the clinical trials on a particular topic are hunted down and their results analysed by means of yet more statistical devices.

      The prestige that medical researchers attach to meta-analysis, a set of statistical techniques developed in the 1970s, has not met with universal agreement. One epidemiologist, for example, has written that ‘meta-analysis begins with scientific studies, usually performed by academics or government agencies, and sometimes incomplete or disputed. The data from the studies are then run through computer models of bewildering complexity, which produce results of implausible precision.’24 It is certainly true that the techniques of meta-analysis have, since they first began to emerge, been refined into a somewhat arcane art form. Yet the fundamental idea that rigorous numerical methods should be used in summarising the results of clinical trials is surely sound.
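
      To see what those numerical methods amount to in their simplest form, here is a minimal sketch, in Python, of the pooling step behind a fixed-effect meta-analysis: each trial’s estimated effect is weighted by the inverse of its variance, so that large, precise trials count for more than small, noisy ones. The trial figures below are invented purely for illustration.

          import math

          # Hypothetical trial results (invented for illustration): each entry is
          # (estimated treatment effect, standard error of that estimate).
          trials = [
              (0.30, 0.15),  # a small, imprecise trial
              (0.12, 0.05),  # a large, precise trial
              (0.20, 0.10),
          ]

          # Fixed-effect (inverse-variance) pooling: weight each trial by 1 / variance.
          weights = [1.0 / se ** 2 for _, se in trials]
          pooled = sum(w * effect for (effect, _), w in zip(trials, weights)) / sum(weights)
          pooled_se = math.sqrt(1.0 / sum(weights))

          # Approximate 95 per cent confidence interval for the pooled effect.
          low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
          print(f"Pooled effect: {pooled:.3f} (95% CI {low:.3f} to {high:.3f})")

      Real systematic reviews add a great deal on top of this (tests for heterogeneity, random-effects models, checks for publication bias), but an inverse-variance weighted average of this kind is the numerical core of the exercise.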

      Nevertheless, there is a delicious irony about the search for ever greater evidential support in medicine that is behind the rise of meta-analysis. The discounting of anecdotal evidence is certainly in accord with the spirit of science. When the Royal Society, Britain’s premier scientific institution, was founded in 1662, it adopted as its motto the Latin phrase Nullius in verba – nothing by word alone. Rejecting the deference to authority that had stifled the advance of knowledge for so long, the motto nicely sums up the emphasis on experiment and observation that lies at the heart of the scientific endeavour. Yet in those days it was much simpler for scientists to observe things for themselves. There were only a handful of them, so they could all fit quite comfortably in the same room, and witness important experiments directly. It seemed as if the vagaries and Chinese whispers that beset the reliance on word of mouth had been forever vanquished. The old days, when knowledge was all about scholarship – reporting and commenting on the reports and commentaries of others – had been superseded by an insistence on first-hand observation.

      Today we seem to have come full circle. The new regents of medical research can compile their meta-analyses without putting a foot outside their office, let alone actually speaking to a real patient. The papers that sit at the top of the hierarchy of evidence are works of pure scholarship, reports of reports. The ‘methods’ section contains, not a description of a laboratory procedure, but a string of terms that make up the ‘search strategy’ used to extract references to medical papers from one or more of the huge electronic databases, such as Medline, that are the present-day equivalent of the vast medieval libraries. And the conclusions of these papers must clearly be taken on trust, as it is impossible for readers – busy consultants and harried doctors – to check the sources for themselves.

      Beecher’s infamous 1955 paper on the ‘powerful placebo’ is a case in point. Although lacking the sophisticated statistical apparatus of current meta-analyses, it contains the seeds of the modern idea. It collects a set of clinical trials – no mean feat in those days, when clinical trials themselves were relatively few and far between – and extracts one or two simple figures which everybody remembers. A placebo effect of 35 per cent! This astonishing figure soon became set in stone, transformed

