a secret contract, with secret terms, requiring secrecy about trial data, in a discussion about the safety and efficacy of a drug that has been taken by hundreds of thousands of people around the world. Jefferson asked for clarification, and never received a reply.
Then, in October 2009, the company changed tack: they would like to hand the data over, they explained, but another meta-analysis was being conducted elsewhere. Roche had given them the study reports, so Cochrane couldn’t have them. This was a simple non-sequitur: there is no reason why many groups shouldn’t all work on the same question. In fact, quite the opposite: replication is the cornerstone of good science. Roche’s excuse made no sense. Jefferson asked for clarification, but never received a reply.
One week later, unannounced, Roche sent seven short documents, each around a dozen pages long. These contained excerpts of internal company documents on each of the clinical trials in the Kaiser meta-analysis. This was a start, but they contained nothing like enough information for Cochrane to assess the benefits, or the rate of adverse events, or to understand fully what methods were used in the trials.
At the same time, it was rapidly becoming clear that there were odd inconsistencies in the information on this drug. Firstly, there was considerable disagreement at the level of the broad conclusions drawn by different organisations. The FDA said there were no benefits on complications, while the Centers for Disease Control and Prevention (in charge of public health in the USA – some wear nice naval uniforms in honour of their history on the docks) said it did reduce complications. The Japanese regulator made no claim for complications, but the EMA said there was a benefit. In a sensible world, we might think that all these organisations should sing from the same hymn sheet, because all would have access to the same information. Of course, there is also room for occasional, reasonable disagreement, especially where there are close calls: this is precisely why doctors and researchers should have access to all the information about a drug, so that they can make their own judgements.
Meanwhile, reflecting these different judgements, Roche’s own websites said completely different things in different jurisdictions, depending on what the local regulator had said. It’s naïve, perhaps, to expect consistency from a drug company, but from this and other stories it’s clear that industry utterances are driven by the maximum they can get away with in each territory, rather than any consistent review of the evidence.
In any case, now that their interest had been piqued, the Cochrane researchers also began to notice that there were odd discrepancies between the frequency of adverse events in different databases. Roche’s global safety database held 2,466 neuropsychiatric adverse events, of which 562 were classified as ‘serious’. But the FDA database for the same period held only 1,805 adverse events in total. The rules vary on what needs to be notified to whom, and where, but even allowing for that, this was odd.
In any case, since Roche was denying them access to the information needed to conduct a proper review, the Cochrane team concluded that they would have to exclude all the unpublished Kaiser data from their analysis, because the details could not be verified in the normal way. People cannot make treatment and purchasing decisions on the basis of trials if the full methods and results aren’t clear: the devil is often in the detail, as we shall see in Chapter 4, on ‘bad trials’, so we cannot blindly trust that every study is a fair test of the treatment.
This is particularly important with Tamiflu, because there are good reasons to think that these trials were not ideal, and that published accounts were incomplete, to say the least. On closer examination, for example, the patients participating were clearly unusual, to the extent that the results may not be very relevant to normal everyday flu patients. In the published accounts, patients in the trials are described as typical flu patients, suffering from normal flu symptoms like cough, fatigue, and so on. We don’t do blood tests on people with flu in routine practice, but when these tests are done – for surveillance purposes – then even during peak flu season only about one in three people with ‘flu’ will actually be infected with the influenza virus, and most of the year only one in eight will really have it. (The rest are sick from something else, maybe just a common cold virus.)
Two thirds of the trial participants summarised in the Kaiser paper tested positive for flu. This is bizarrely high, and means that the benefits of the drug will be overstated, because it is being tested on perfect patients, the very ones most likely to get better from a drug that selectively attacks the flu virus. In normal practice, which is where the results of these trials will be applied, doctors will be giving the drug to real patients who are diagnosed with ‘flu-like illness’, which is all you can realistically do in a clinic. Among these real patients, many will not actually have the influenza virus. This means that in the real world, the benefits of Tamiflu on flu will be diluted, and many more people will be exposed to the drug who don’t actually have flu virus in their systems. This, in turn, means that the side effects are likely to creep up in significance, in comparison with any benefits. That is why we strive to ensure that all trials are conducted in normal, everyday, realistic patients: if they are not, their findings may not be relevant to the real world.
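To see roughly how this dilution works, consider a simple illustrative calculation (the numbers are hypothetical, chosen only to make the arithmetic clear, and are not taken from the Tamiflu trials). Suppose the drug shortens illness by one day, but only in patients who genuinely have the influenza virus; and suppose that in routine practice only one in three patients diagnosed with ‘flu-like illness’ actually have it, as the surveillance figures above suggest for peak season. Then the average benefit across everyone treated is:

average benefit per patient treated ≈ 1/3 × 1 day ≈ a third of a day

while every patient treated, infected or not, is exposed to the drug’s side effects. Outside peak season, with only one in eight patients truly infected, the same one-day benefit dilutes to roughly an eighth of a day. The trials, by recruiting patients who mostly did have the virus, sit at the most flattering end of this calculation.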
So the Cochrane review was published without the Kaiser data in December 2009, alongside some explanatory material about why the Kaiser results had been excluded, and a small flurry of activity followed. Roche put the short excerpts it had sent over online, and committed to make full study reports available (it still hasn’t done so).
What Roche posted was incomplete, but it began a journey for the Cochrane academics of learning a great deal more about the real information that is collected on a trial, and how that can differ from what is given to doctors and patients in the form of brief, published academic papers. At the core of every trial is the raw data: every single record of blood pressure of every patient, the doctors’ notes describing any unusual symptoms, investigators’ notes, and so on. A published academic paper is a short description of the study, usually following a set format: an introductory background; a description of the methods; a summary of the important results; and then finally a discussion, covering the strengths and weaknesses of the design, and the implications of the results for clinical practice.
A clinical study report, or CSR, is the intermediate document that stands between these two, and can be very long, sometimes thousands of pages.78 Anybody working in the pharmaceutical industry is very familiar with these documents, but doctors and academics have rarely heard of them. They contain much more detail on things like the precise plan for analysing the data statistically, detailed descriptions of adverse events, and so on.
These documents are split into different sections, or ‘modules’. Roche has shared only ‘module 1’, for only seven of the ten study reports Cochrane has requested. These modules are missing vitally important information, including the analysis plan, the randomisation details, the study protocol (and the list of deviations from that), and so on. But even these incomplete modules were enough to raise concerns about the universal practice of trusting academic papers to give a complete story about what happened to the patients in a trial.
For example, looking at the two papers out of ten in the Kaiser review which were published, one says: ‘There were no drug-related serious adverse events,’ and the other doesn’t mention adverse events. But in the ‘module 1’ documents on these same two studies, there are ten serious adverse events listed, of which three are classified as being possibly related to Tamiflu.79
Another published paper describes itself as a trial comparing Tamiflu against placebo. A placebo is an inert tablet, containing no active ingredient, that is visually indistinguishable from the pill containing the real medicine. But the CSR for this trial shows that the real medicine was in a grey and yellow capsule, whereas the placebos were grey and ivory. The ‘placebo’ tablets also contained something called dehydrocholic acid, a chemical which encourages the gall bladder to empty.80 Nobody has any clear idea of why, and it’s not even mentioned in the academic paper; but it seems that this was not actually an inert, dummy pill placebo.
Simply making a list of all the trials conducted on a subject is vitally important if we want to avoid seeing only a biased summary of the research; but in the case of Tamiflu even