# Principles of New Trial Designs and their Practical Implications

## 3. Possible Approaches in Adaptive Design

The term ‘adaptive’ covers a varied set of designs, but most of them follow a simple structure. Within an adaptive trial there are learning and confirming stages, mirroring the way the overall clinical development programme progresses across multiple trial settings (Phase I, Phase II, and Phase III). As the trial accumulates data, changes may be made to the hypotheses or to the design parameters.

##### Learning stages

- Major design elements may be changed (for instance, dropping treatment arms).
- Sources of statistical uncertainty (for instance bias, variability, incorrect selection) are assessed.
- Treatment effects (beneficial or adverse) are estimated.

How the ‘learning’ part is carried out is crucial to the integrity and validity of the trial results. Modifications based on blinded interim results have little impact on the trial's operating characteristics. However, adaptations based on unblinded comparative interim analyses can introduce various forms of bias.

##### Confirming stages

- Control of statistical errors and operational biases is of utmost importance.
- Strong control of the Type I error is required (that is, guarding against concluding that a treatment is effective when in fact it is not).

Stopping rules are pre-specified and applied at one or more interim analyses. They prevent participants from continuing to take a medicine that provides no beneficial effect or is unsafe. Conversely, if the trial medicine is found to be clinically more effective than the control, it would be unethical to continue administering the less-effective control medicine, so early stopping rules for efficacy allow the trial to end once superiority is established. Early stopping rules for futility halt the administration of an experimental medicine that is unlikely to show benefit.
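The stopping logic above can be sketched as a simple decision rule applied to the interim test statistic. The boundary values below are illustrative assumptions, not values from the text; in a real trial they would come from a pre-specified group-sequential design (for example O'Brien–Fleming or Pocock boundaries) so that the overall Type I error remains controlled:

```python
def interim_decision(z, efficacy_bound=2.8, futility_bound=0.0):
    """Classify a one-sided interim z-statistic against pre-specified bounds.

    The boundary values here are illustrative; a real design derives them
    from a group-sequential method so that repeated looks at the data do
    not inflate the overall Type I error.
    """
    if z >= efficacy_bound:
        return "stop for efficacy"
    if z <= futility_bound:
        return "stop for futility"
    return "continue"


print(interim_decision(3.1))   # overwhelming benefit -> stop for efficacy
print(interim_decision(-0.5))  # no sign of benefit   -> stop for futility
print(interim_decision(1.2))   # inconclusive         -> continue
```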

There are also designs where treatment arms are modified or dropped over the course of a trial, or where a sub-population is selected based on a biomarker of interest (‘pick the winner’).
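A naive version of the ‘pick the winner’ selection can be sketched as follows. The function and its inputs are hypothetical illustrations: it simply keeps the arm with the highest observed interim response rate, and real designs must additionally account for this selection step in the confirmatory analysis to avoid bias:

```python
def pick_the_winner(responses, enrolled):
    """Return the index of the arm with the highest observed response rate.

    responses[i] and enrolled[i] are the interim counts for arm i.  This
    naive rule ignores sampling error; a real adaptive design adjusts the
    final analysis for having selected the best-looking arm.
    """
    rates = [r / n for r, n in zip(responses, enrolled)]
    return max(range(len(rates)), key=rates.__getitem__)


# Three arms with 50 patients each and 12, 20, and 15 responders:
winner = pick_the_winner([12, 20, 15], [50, 50, 50])
print(winner)  # arm 1 has the highest response rate
```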

Some designs allow for sample size re-estimation, for instance an increase in the patient population if the results appear promising, or to maintain overall statistical power. At the interim analysis, a check is done for efficacy or futility, and the trial can be stopped in the presence of overwhelming evidence of either. If not, the conditional power (CP) is determined: the probability that the final study results will be statistically significant, given the data observed so far. If the CP is either very high or very low, the trial continues as planned; if it falls in an intermediate ‘promising’ zone, the sample size is increased.
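The conditional-power check can be sketched under the common ‘current trend’ assumption, where the interim effect estimate is taken as the true effect. The formula below is the standard Brownian-motion expression for a one-sided test; the zone thresholds (0.36 and 0.80) are illustrative assumptions in the spirit of promising-zone designs, not values from the text:

```python
from math import erf, sqrt


def normal_cdf(x):
    """Standard normal CDF via the error function (no external libraries)."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))


def conditional_power(z_interim, info_frac, z_alpha=1.96):
    """Conditional power under the current-trend assumption.

    z_interim : one-sided z-statistic at the interim analysis
    info_frac : information fraction t (0 < t < 1), e.g. n_interim / n_final
    z_alpha   : final critical value (1.96 ~ one-sided alpha of 0.025)
    """
    t = info_frac
    return normal_cdf((z_interim / sqrt(t) - z_alpha) / sqrt(1.0 - t))


def zone(cp, low=0.36, high=0.80):
    """Illustrative promising-zone rule; the thresholds are assumptions."""
    if cp < low:
        return "continue as planned (unfavourable)"
    if cp > high:
        return "continue as planned (favourable)"
    return "promising: increase sample size"


cp = conditional_power(z_interim=1.5, info_frac=0.5)
print(round(cp, 2), "->", zone(cp))
```

With half the information observed and an interim z of 1.5, the conditional power lands near 0.59, i.e. in the intermediate zone where the sample size would be increased.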

During trial recruitment, if the expected number of participants cannot be enrolled under the original eligibility criteria, modifications to non-critical eligibility criteria can be made based on examination of baseline characteristics.

Adaptive randomisation is another example of an intuitively appealing design. In this design, a higher proportion of patients is allocated to the ‘better’ arm (if there is one). These adaptive trial designs are mostly based on unblinded interim analyses that estimate the treatment effects, meaning that the analysts are aware of which treatment participants have been allocated to.
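One way response-adaptive randomisation is often implemented for binary outcomes is Thompson sampling, sketched below. This is an assumed implementation choice, not a method named in the text: each arm's response rate gets a Beta posterior, and the next patient is allocated to whichever arm produces the highest posterior draw, so better-performing arms gradually receive more patients:

```python
import random


def next_allocation(successes, failures, rng=random):
    """Response-adaptive allocation via Thompson sampling (binary outcomes).

    For each arm, draw once from its Beta(1 + successes, 1 + failures)
    posterior (uniform prior assumed) and allocate the next patient to the
    arm with the highest draw.  Arms that are doing well are sampled high
    more often and therefore recruit a larger share of patients.
    """
    draws = [rng.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)


random.seed(0)
# Arm 0: 45/50 responders; arm 1: 5/50 responders.
picks = [next_allocation([45, 5], [5, 45]) for _ in range(1000)]
print(picks.count(0))  # the clearly better arm 0 is chosen almost always
```

Note that each allocation uses the unblinded outcome counts, which is exactly why this class of design needs careful firewalls between the analysts performing the interim estimation and the trial team.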