
Randomized Controlled Trials

Chapter:
Study Design and Statistics
Author(s):

Margaret A. Winker

and Stephen J. Lurie


The randomized controlled trial (RCT) generally leads to the strongest inferences about the effect of medical treatments.3 Randomized controlled trials assess the efficacy of a treatment intervention in controlled, standardized, and highly monitored settings, usually among highly selected samples of patients. Thus, their results might not reflect the effects of the treatment in real-world settings or in other groups of individuals who were not enrolled in the trial. Information from RCTs may thus be supplemented by results of observational studies (see 20.3, Observational Studies) as well as other types of studies.

The methods of RCTs must be described in detail to allow the reader to judge the quality of the study, replicate the study intervention, and extract pertinent information for comparison with other studies. The CONSORT statement4 provides a checklist (Table 1) to help ensure complete reporting of RCTs. JAMA and the Archives Journals require that authors complete the checklist, and the International Committee of Medical Journal Editors (ICMJE) (www.icmje.org) recommends following this reporting procedure. While completing the checklist does not guarantee that a study has been performed well, it can help ensure that the information critical to interpretation of the study is provided and accessible to editors, reviewers, and, if published, readers. Journal editors may nonetheless also ask authors to provide a more detailed description of the study protocol. Although such information may not necessarily appear in the published article, it may help reviewers and editors to more thoroughly evaluate the manuscript.

Table 1. Checklist of Items to Include When Reporting a Randomized Trial^a

Section and Topic | Item No. | Descriptor | Reported on Page No.

Title and abstract | 1 | How participants were allocated to interventions (eg, “random allocation,” “randomized,” or “randomly assigned”).

Introduction
    Background | 2 | Scientific background and explanation of rationale.

Methods
    Participants | 3 | Eligibility criteria for participants and the settings and locations where the data were collected.
    Interventions | 4 | Precise details of the interventions intended for each group and how and when they were actually administered.
    Objectives | 5 | Specific objectives and hypotheses.
    Outcomes | 6 | Clearly defined primary and secondary outcome measures and, when applicable, any methods used to enhance the quality of measurements (eg, multiple observations, training of assessors).
    Sample size | 7 | How sample size was determined and, when applicable, explanation of any interim analyses and stopping rules.

Randomization
    Sequence generation | 8 | Method used to generate the random allocation sequence, including details of any restriction (eg, blocking, stratification).
    Allocation concealment | 9 | Method used to implement the random allocation sequence (eg, numbered containers or central telephone), clarifying whether the sequence was concealed until interventions were assigned.
    Implementation | 10 | Who generated the allocation sequence, who enrolled participants, and who assigned participants to their groups.

Blinding (masking) | 11 | Whether or not participants, those administering the interventions, and those assessing the outcomes were blinded to group assignment. If done, how the success of blinding was evaluated.

Statistical methods | 12 | Statistical methods used to compare groups for primary outcome(s); methods for additional analyses, such as subgroup analyses and adjusted analyses.

Results
    Participant flow | 13 | Flow of participants through each stage (a diagram is strongly recommended). Specifically, for each group report the numbers of participants randomly assigned, receiving intended treatment, completing the study protocol, and analyzed for the primary outcome. Describe protocol deviations from study as planned, together with reasons.
    Recruitment | 14 | Dates defining the periods of recruitment and follow-up.
    Baseline data | 15 | Baseline demographic and clinical characteristics of each group.
    Numbers analyzed | 16 | Number of participants (denominator) in each group included in each analysis and whether the analysis was by “intention-to-treat.” State the results in absolute numbers when feasible (eg, 10 of 20, not 50%).
    Outcomes and estimation | 17 | For each primary and secondary outcome, a summary of results for each group, and the estimated effect size and its precision (eg, 95% confidence interval).
    Ancillary analyses | 18 | Address multiplicity by reporting any other analyses performed, including subgroup analyses and adjusted analyses, indicating those prespecified and those exploratory.
    Adverse events | 19 | All important adverse events or side effects in each intervention group.

Comment
    Interpretation | 20 | Interpretation of the results, taking into account study hypotheses, sources of potential bias or imprecision, and the dangers associated with multiplicity of analyses and outcomes.
    Generalizability | 21 | Generalizability (external validity) of the trial findings.
    Overall evidence | 22 | General interpretation of the results in the context of current evidence.

^a From Piaggio et al.10

A flow diagram is also important to outline the flow of participants in the study, including when and why participants dropped out or were lost to follow-up and how many participants were evaluated for the study end points. Authors should include a flow diagram (Figure 1), and, if the manuscript is accepted for publication, the flow diagram generally should be published with the study. The number of groups after randomization shown in the diagram should correspond to the number of intervention and control groups in the study. CONSORT continues to be adapted to specific types of RCTs.5 Current information is available from the CONSORT website (www.consort-statement.org).

Figure 1. CONSORT flow diagram showing the progress of patients throughout the trial. From Instructions for Authors. JAMA. 2006;296(1):107-115.

The report should include a comparison of the characteristics of the participants in the different groups in the trial, usually as a table. However, performing significance testing on the baseline differences between groups is controversial. (Even with perfect random assignment, an average of 1 in every 20 comparisons will appear to be “significant” at the .05 level by chance alone; such random findings illustrate the dangers of post hoc analyses.) Furthermore, in small studies, large differences may be statistically nonsignificant because of limited statistical power. Nonetheless, it is usually helpful for authors to report statistical comparisons between groups, either in a table or in running text. Such information should be interpreted not as a test of a null hypothesis of baseline differences between groups, but rather as a general estimate of the magnitude of any baseline differences that may have been confounded with the intervention. This information helps the reader decide whether the authors should have accounted for these baseline differences in their statistical analysis of the prespecified outcomes.
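The 1-in-20 arithmetic above can be made concrete with a small simulation (illustrative only; the sample sizes, number of baseline variables, and seed are invented for this sketch). Two groups are drawn from the same population, so every “significant” baseline difference is a false positive; with 20 comparisons per trial, roughly 1 − 0.95^20 ≈ 64% of trials will flag at least one.

```python
import random
import statistics

# Illustrative simulation (not from the chapter): with perfectly random
# assignment, ~5% of baseline comparisons are "significant" at the .05
# level by chance, so a table of 20 comparisons often flags at least one.
random.seed(1)

def spurious_difference(n=50):
    """Two-sample t test on samples drawn from the SAME distribution."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = ((statistics.variance(a) + statistics.variance(b)) / n) ** 0.5
    t = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(t) > 1.984  # two-sided .05 critical value, df ~= 98

trials = 2000
flagged = sum(
    any(spurious_difference() for _ in range(20))  # 20 baseline variables
    for _ in range(trials)
)
# Analytically, the fraction of trials with >= 1 spurious "difference"
# should be near 1 - 0.95**20 ~ 0.64.
print(f"{flagged / trials:.2f}")
```

The point is not the exact rate but that baseline “significance” is expected by chance, which is why such tests should be read as estimates of imbalance rather than hypothesis tests.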

In analyzing the data from a randomized trial, it is usually best to report the results of an intention-to-treat (ITT) analysis; that is, the final results are based on analysis of data from all of the participants who were originally randomized, whether or not they actually completed the trial. Such participants may have varying degrees of missing data, however, and thus ITT analyses usually involve some method of imputing the missing results. For noninferiority and equivalence trials, however, ITT analysis may overstate the equivalence of the experimental conditions. In these trial designs, results should also be reported for only those participants who completed the trial (as-treated analysis, completers analysis, etc). (See 20.2.3, Randomized Controlled Trials, Equivalence and Noninferiority Trials.)
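The ITT vs as-treated distinction can be sketched with hypothetical data (all names and numbers here are invented; real ITT analyses use principled imputation methods, whereas this sketch uses the simple convention of counting dropouts as failures):

```python
# Hypothetical two-arm trial with a binary outcome.
# Each record: (assigned_group, completed_trial, outcome_success_or_None)
participants = [
    ("treatment", True, True), ("treatment", True, True),
    ("treatment", True, False), ("treatment", False, None),  # dropout
    ("control", True, True), ("control", True, False),
    ("control", True, False), ("control", False, None),      # dropout
]

def success_rate(records, group, itt):
    """ITT: denominator = everyone randomized to the group;
    as-treated: denominator = completers only."""
    rows = [r for r in records if r[0] == group and (itt or r[1])]
    # Simplistic imputation for this sketch: dropouts count as failures.
    successes = sum(1 for _, completed, ok in rows if completed and ok)
    return successes / len(rows)

for group in ("treatment", "control"):
    print(group,
          round(success_rate(participants, group, itt=True), 2),   # ITT
          round(success_rate(participants, group, itt=False), 2))  # as-treated
```

Note how the as-treated rates are higher in both arms because dropouts leave the denominator, which is exactly why an as-treated analysis can make two arms look more alike (or more favorable) than the ITT analysis does.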

There is ongoing debate about the circumstances in which it may be unethical to perform an RCT.6,7 There is general agreement, however, that RCTs are unethical if the intervention is already known to be superior to the control in the population under investigation, or if participants could be unduly harmed by any condition in the experiment.

The decision to perform specific interim analyses is usually made before the study begins.8(pp130,258) (Data and safety monitoring boards, however, may monitor adverse events continually throughout the course of the study.) Investigators also usually define prospective stopping rules for such analyses; if a stopping rule is met, this generally means that collection of additional data would not change the interpretation of the study. If the criteria for the stopping rules have not been met, the results of interim analyses should not be reported unless the treatment has important adverse effects and reporting is necessary for patient safety. If a report is an interim analysis, this should be clearly stated in the manuscript, along with the reason for reporting the interim results. The plans for interim analyses and reports contained in the original study protocol should be described and, if the interim analysis deviated from those plans, the change should be justified. If a manuscript reports the final results of a study for which an interim analysis was previously published, the reason for publishing both reports should be stated and the interim analysis referenced.
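A prespecified stopping rule of the kind described above can be sketched as a lookup checked at each planned interim analysis. The boundary values below are invented for illustration; real trials derive them from a group-sequential design (eg, O'Brien-Fleming or Pocock) so that the overall type I error stays at the planned level.

```python
# Hypothetical group-sequential stopping rule (all values invented).
PLANNED_LOOKS = [0.25, 0.50, 0.75, 1.00]           # fractions of target enrollment
EFFICACY_BOUNDARY = [0.0001, 0.001, 0.008, 0.041]  # nominal p-value thresholds

def stop_early(look_index, interim_p):
    """Return True if the prespecified rule is met at this planned look,
    ie, collecting further data would not change the interpretation."""
    return interim_p < EFFICACY_BOUNDARY[look_index]

# At the second planned look (half of target enrollment), p = 0.0005
# crosses the prespecified boundary of 0.001, so the rule is met.
print(stop_early(1, 0.0005))  # True
```

Defining the looks and boundaries in the protocol, before any data are seen, is what makes an early stop interpretable; the same p value found in an unplanned peek would not carry the same meaning.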

Publication bias is the tendency of authors to submit and journals to preferentially publish studies with statistically significant results (see also 20.4, Meta-analysis). To address the problem of publication bias, the ICMJE now requires, as a condition of publication, that a clinical trial be registered in a public trials registry.9 The ICMJE policy applies to any clinical trial starting enrollment after July 1, 2005. For trials that began enrollment prior to this date, the ICMJE member journals required registration by September 13, 2005. The policy defines a clinical trial as “any research project that prospectively assigns human subjects to intervention or comparison groups to study the cause-and-effect relationship between a medical intervention and a health outcome.”
