Randomised Clinical Trials. David Machin
R.A. Fisher (1890–1962), in laying the foundations of good experimental design, albeit in an agricultural and biological context, advocated the use of randomisation in allocating experimental treatments. Thus, for example, in agricultural trials the various plots in a field are randomly assigned to the different experimental interventions. The argument for randomisation is that it prevents systematic differences between the plots allocated to the different interventions, whether or not these differences can be identified by the investigator concerned, before the experimental treatment is applied. Then, once the experimental treatments are applied and the outcomes observed, the randomisation enables any differences between treatments to be estimated objectively and without bias. In these and many other contexts, randomisation has long been a keystone of good experimental design.
The need for random allocation extends to all experimental situations including those concerned with patients as opposed to agricultural plots of land. The difficulty arises because clinical trials (more emotive than experiments) do indeed concern human beings who cannot be regarded as experimental units and so should not be allocated the interventions without their consent. The consent process clearly complicates the allocation process and, at least in the past, has been used as a reason to resist the idea of randomisation of patients to treatment. Unfortunately, the other options, perhaps a comparison of patients receiving a ‘new’ treatment with those from the past receiving the ‘old’, are flawed in the sense that any observed differences (or lack thereof) may not reflect the true situation. Thus, in the context of controlled clinical trials, Pocock (1983) concluded, many years ago and some 30 years after the first randomised trials were conducted, that:
The proper use of randomization guarantees that there is no bias in the selection of patients for the different treatments and so helps considerably to reduce the risk of differences in experimental environment. Randomized allocation is not difficult to implement and enables trial conclusions to be more believable than other forms of treatment allocation.
As a consequence, we are focussing on randomised controlled trials and not giving much attention to less scientifically rigorous options.
1.3.3 Design hierarchy
The final choice of design for a clinical trial will depend on many factors; key amongst these are the specific research question posed, the practicality of recruiting patients to such a design and the resources necessary to support the conduct of the trial. We shall discuss these and other issues pertinent to the choice of design in later chapters. Nevertheless, we can catalogue the main types of design options available, and these are listed in Figure 1.2, which gives a relative weight to the evidence obtained from the different types of clinical trial. All other things being equal, the design that maximises the weight of the resulting evidence should be chosen. For expository purposes, we assume that a comparison is being made between a new test treatment and the current standard for the specific condition in question.
1.3.3.1 Randomisation
The design that provides the strongest type of evidence is the double‐blind (or double‐masked) randomised controlled trial (RCT). In this, the patients are allocated to treatment at random, which ensures that, in the long run, the patients in the test and standard groups will be comparable before treatment commences. Clearly, if the important prognostic factors that influence outcome were known, one could match the patients in the standard and test groups in some way. However, the advantage of randomisation is that it balances both the known and the unknown prognostic factors, something that matching cannot achieve. Thus the attraction of the randomised trial is that it is the only design that guarantees there is no systematic bias in favour of one group over the other at the start of the trial. Indeed, in Example 1.12, Erbel, Di Mario, Bartunek, et al. (2007), who essentially conducted a single‐arm prospective case study, admitted that the failure to conduct a randomised comparison compromised their ability to draw definitive conclusions concerning the stent under test.
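By way of illustration only (the function names, arm labels and block size below are our own, not from the text, and a real trial would use a validated randomisation service), simple randomisation and the commonly used permuted‐block variant might be sketched as:

```python
import random

def simple_randomisation(n_patients, seed=None):
    """Allocate each patient independently, with probability 1/2, to each arm."""
    rng = random.Random(seed)
    return [rng.choice(["test", "standard"]) for _ in range(n_patients)]

def block_randomisation(n_patients, block_size=4, seed=None):
    """Permuted blocks: each block contains equal numbers of each arm in a
    random order, so the two groups stay close to balanced throughout accrual."""
    assert block_size % 2 == 0, "block size must be even for two equal arms"
    rng = random.Random(seed)
    allocations = []
    while len(allocations) < n_patients:
        block = ["test"] * (block_size // 2) + ["standard"] * (block_size // 2)
        rng.shuffle(block)
        allocations.extend(block)
    return allocations[:n_patients]
```

Permuted blocks are often preferred in smaller trials because simple randomisation can, purely by chance, leave the arms noticeably unbalanced part‐way through recruitment.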
Figure 1.2 The relative strength of evidence obtained from alternative designs for comparative clinical trials
1.3.3.2 Blinding or masking
For the simple situation in which the attending clinician is also the assessor of the outcome, the trial should ideally be double‐blind (or double‐masked). This means that neither the patient nor the attending clinician knows the actual treatment allocated. Having no knowledge of which treatment has been taken, neither the patient nor the clinician can be influenced by such knowledge at the assessment stage. In this way, an unprejudiced evaluation of the patient response is obtained. Thus Meggitt, Gray and Reynolds (2006) used double‐blind formulations of Azathioprine or Placebo so that neither the patients with moderate‐to‐severe eczema, nor their attending clinical team, were aware of who received which treatment. Although they did not give details, the blinding is best broken only at the analysis stage, once all the data have been collated.
Despite the inherent advantage of the double‐blind design, most clinical trials cannot be conducted in this way because, for example, a means has to be found for delivering the treatment options in an identical manner. This may be possible if the standard and test treatments are available in tablet form of identical colour, shape, texture, smell and taste. If such ‘identity’ cannot be achieved, then a single‐blind design may ensue, in which either the patient or the clinical assessor (but not both) has knowledge of the treatment being given. In trials with patient survival time as the endpoint, double‐blind usually means that the patient, the treating physician and the other staff are all blinded. However, the assessment itself (death) is objective, and the blinding is irrelevant by that stage.
Finally, and this is possibly the majority situation, there will be circumstances in which neither the patient nor the assessor can be blind to the treatments actually received. Such designs are referred to as ‘open’ or ‘open‐label’ trials.
1.3.3.3 Non‐randomised designs
In certain circumstances, when a new treatment has been proposed for evaluation, all patients are recruited prospectively but allocation to treatment is not made at random. In such cases, the comparisons may well be biased and hence unreliable. The bias arises because the clinical team choose which patients receive which intervention, and in so doing may favour (even subconsciously) giving one treatment to certain patient types and not to others. In addition, the requirement that all patients should be suitable for all options may not be fulfilled: if it is known that a certain option is to be given to a particular subject, then one may not check so rigorously whether the other options are equally appropriate. Similar problems arise if investigators have recruited patients into a single‐arm study and the results from these patients are then compared with information on similar patients who (usually in the past) received a relevant standard therapy for the condition in question. Such historical comparisons are also likely to be biased, and to an unknown extent, so again it will not be reasonable to ascribe any observed difference entirely to the treatments themselves. Of course, there will be situations when one of these designs is the only option available. In such cases, a detailed justification for not using the ‘gold standard’ of the randomised controlled trial is required.
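The selection bias just described can be made concrete with a small simulation (the numbers are entirely hypothetical: we assume the new treatment truly improves the response probability by 0.1, that a ‘frailty’ factor reduces it, and that clinicians preferentially give the new treatment to fitter patients):

```python
import random

random.seed(1)

def outcome(treated, frailty):
    # Hypothetical model: response probability rises by 0.1 with the new
    # treatment and falls with increasing patient frailty.
    p = 0.5 + (0.1 if treated else 0.0) - 0.3 * frailty
    return 1 if random.random() < p else 0

# Non-randomised comparison: the new treatment goes to fitter patients
# (lower frailty), the historical controls were on average more frail.
biased_new = [outcome(True, random.uniform(0.0, 0.4)) for _ in range(5000)]
biased_old = [outcome(False, random.uniform(0.3, 0.7)) for _ in range(5000)]

# Randomised comparison: frailty has the same distribution in both arms.
rand_new, rand_old = [], []
for _ in range(10000):
    frailty = random.uniform(0.0, 0.7)
    (rand_new if random.random() < 0.5 else rand_old).append(
        outcome(random.random() >= 0.0 and True, frailty)
        if False else outcome(len(rand_new) >= 0 and True, frailty))

mean = lambda xs: sum(xs) / len(xs)
print("non-randomised difference:", round(mean(biased_new) - mean(biased_old), 3))
print("randomised difference:   ", round(mean(rand_new) - mean(rand_old), 3))
```

Because frailty differs systematically between the two historical groups, the non‐randomised comparison overstates the benefit of the new treatment, whereas the randomised comparison recovers something close to the true effect.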
Understandably, in this era of EBM, information from non‐randomised comparative studies is categorised as providing weaker evidence than that from randomised trials.
The before‐and‐after design is one in which, for example, patients are treated with the Standard option for a specified period and then, at some fixed point