Placebo: Mind over Matter in Modern Medicine. Dylan Evans
two knights and then sending one out hunting while ordering the other to bed. After several hours, he killed both and examined the contents of their alimentary canals; digestion had, apparently, proceeded further in the stomach of the sleeping knight.7 A century later, Petrarch reported, in a letter to Boccaccio, a remark by a fellow physician that explicitly recommended experimental studies of therapeutic methods by comparative means.8 But, like the story of Daniel, these early gestures toward comparative studies lack one of the most distinctive features of the modern clinical trial – a formal mathematical treatment. This had to wait until the birth of statistics in the seventeenth century.

      Some philosophers and historians of science have argued that the development of statistics and probability theory in the eighteenth and nineteenth centuries constituted a revolution no less dramatic and influential than the ‘scientific revolution’ of the seventeenth century.9 In reality the so-called ‘probabilistic revolution’ was a pretty slow affair, more akin to the stately orbit of a celestial body than to a political upheaval. Its impact on medical research was positively sluggish. Statistical methods were not explicitly used to investigate a therapeutic intervention until the 1720s, when the English physician James Jurin showed that smallpox inoculation was a safe procedure by comparing the mortality of inoculated people with the death rates of those with natural smallpox. Even then, the new methods did not meet with much respect; Jurin’s findings were ignored by his colleagues, and smallpox inoculation remained illegal in France until 1769.

      A hundred years later, the same mistrust of statistical methods led Viennese physicians to reject the recommendations of Ignaz Semmelweis on the need for better hygiene by doctors. In 1847 Semmelweis noticed that there were marked differences between the death rates on two wards in the obstetric hospital in Vienna. Mortality was much higher on the ward run by physicians and medical students than on the ward run by student midwives. Moreover, the difference between the two wards had only begun in 1841, when courses in pathology were included in medical training. Semmelweis guessed that physicians and students were coming to the obstetric ward with particles of corpses from the dissection room still clinging to their fingers. He made them wash more thoroughly with chlorinated lime (which, by luck, just happened to be a disinfectant), and the death rate on the medical ward immediately returned to the same level as on that run by the midwives. Despite this startling evidence, the antiseptic measures proposed by Semmelweis were not embraced by his colleagues for several decades, by which time Semmelweis had, quite understandably, gone insane.

      British doctors were, in general, more accepting of statistical research than were their colleagues on the Continent. In the eighteenth century, a few physicians on board British naval vessels employed comparative methods to study the effects of various treatments for scurvy and fever. James Lind noted that sailors on his ship who had scurvy recovered when given citrus fruits, and the navy responded by issuing lemons (and later limes) to all sailors – which is, of course, the origin of the epithet ‘limey’. But the British were not so open-minded with all such statistical research. In the late 1860s, Joseph Lister published a series of articles showing that the use of antiseptics at the Glasgow Royal Infirmary had reduced the mortality from amputations, but his findings were not universally accepted by the British medical establishment until the end of the century.

      By the first half of the twentieth century, there was a growing acceptance of comparative methods in medical research among doctors in Europe and America, but even then it was a slow process. The term ‘clinical trial’ does not appear in the medical literature until the early 1930s, and when Linford Rees presented the results of a trial comparing electro-convulsive therapy (ECT) with insulin coma therapy to a meeting of the Royal Medico-Psychological Association in 1949, his research methodology caused as much of a stir as his results.10 Very few of the psychiatrists at that meeting could have guessed that, within half a century, the randomised clinical trial would have become the standard tool for medical research.

      THE PLACEBO CONTROL

      The pre-twentieth-century progenitors of the clinical trial established the basic principle of comparing various groups of patients undergoing different treatment regimes. The twentieth century added two more refinements: randomisation and the placebo control. Randomisation simply means that patients are assigned to the various groups on a random basis. The placebo control means that the control group is treated with a fake version of the experimental therapy – one which, ideally, should be identical in every way to the treatment being tested with the exception of the crucial component. With one or two notable exceptions, the few clinical trials that were carried out before World War II did not include a placebo control group. Rather, they compared one treatment with another, or with no treatment at all. Placebos had been used as controls in studies of the effects of substances such as caffeine on healthy volunteers, but the idea of deliberately withholding a treatment believed to be active from someone who was ill and in danger of death was felt by most doctors to be unethical.
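
      To make the idea of randomisation a little more concrete, here is a minimal sketch in Python (a hypothetical illustration; the patient labels and the fifty-fifty split are assumptions of mine, not details from any trial discussed in this book). It assigns patients to a treatment group or a placebo group by chance alone, so that neither the patients’ hopes nor the doctors’ preferences can influence who receives the real therapy:

import random

def randomise(patients, seed=None):
    # Shuffle the patient list and split it in half: one half receives the
    # experimental therapy, the other an identical-looking placebo.
    rng = random.Random(seed)
    shuffled = list(patients)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "placebo": shuffled[half:]}

# Hypothetical example: twenty anonymous patients, assigned by chance alone.
groups = randomise(["patient_%02d" % i for i in range(1, 21)], seed=42)
print(len(groups["treatment"]), len(groups["placebo"]))  # 10 10

      Because the assignment depends on nothing but chance, any systematic difference that later emerges between the two groups is more plausibly due to the treatment itself than to the way patients were selected.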

      Beecher played a major role in persuading doctors that placebo controls were both ethical and scientifically necessary. He countered the ethical objections by arguing that the administration of a placebo was far from ‘doing nothing’. If placebos could provide at least half as much relief as a real drug, and often even more, then the patients in the control group would not be that much worse off than those in the experimental arm. Similar considerations were used to support the claim that placebo-controlled studies were the most sound from a scientific point of view. After all, if a therapy was simply shown to be better than no treatment at all, how could doctors be sure that the effect was not due to the placebo response? And if one therapy were compared to another and found to be equally effective, how could scientists be sure that both were not placebos? By the end of the 1950s, the work by Beecher, Gold and others had convinced most medical researchers that only by comparing a therapy with a placebo could they discover its specific effect.

      Beecher argued that all kinds of treatment, even active drugs and invasive surgery, produced powerful placebo effects in addition to their specific effects. Therefore, to determine the specific effect of a treatment, medical researchers would have to subtract the placebo effect from the total therapeutic effect of the treatment being tested. If they simply compared the experimental treatment with a no-treatment control group, they would overestimate the specific effect by confounding it with the placebo effect. To support this argument, Beecher needed to provide evidence showing that the placebo effect was large enough to worry about. This was the whole point of the 1955 article whose many flaws we have briefly glimpsed. Without misquotation and systematic misrepresentation, the original studies that Beecher cited would not have provided the evidence he needed.
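
      Beecher’s subtraction argument amounts to a simple piece of arithmetic. On his assumption that the placebo effect and the specific effect simply add together, the relation can be written as:

\[
\text{specific effect} \;=\; \text{total therapeutic effect} \;-\; \text{placebo effect}
\]

      If, to take purely illustrative figures, 70 per cent of patients improve on a drug while 35 per cent improve on a placebo, only the remaining 35 percentage points can be credited to the drug itself; a comparison with no treatment at all would, on Beecher’s argument, overstate the drug’s specific effect by lumping the two together.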

      At the time, nobody noticed the flaws in Beecher’s article. His evidence was cited again and again in support of the placebo-controlled clinical trial, which continued its rise to dominance. Crucial in this process was the decision in the 1970s by the US Food and Drug Administration (FDA) that new drugs be tested by clinical trials before they could be licensed. As one expert on the history of psychiatry has remarked, the FDA occupies something of a magisterial role in global medicine.11 It has no legal powers to control the health policies of nations other than the United States, yet its influence is enormous. The decision of the FDA to require new drugs to prove their mettle in randomised, placebo-controlled clinical trials paved the way for similar policies in other countries. During the 1980s, scientific journals followed suit by requiring that claims for the efficacy of new drugs be backed up by evidence from clinical trials. Finally, the 1990s saw the emergence of a movement known as ‘evidence-based medicine’ whose proponents urged GPs to make use of the evidence from clinical trials in their everyday clinical practice.12

      A FLAW IN THE METHOD

       A physician who tries a remedy and cures his patients, is inclined to believe that the cure is due to his treatment. But the first thing to ask them is whether they have tried doing nothing,