Content area
Background
Value-adaptive designs for clinical trials are an emerging set of methods for delivering greater value from clinical research. There is increasing interest in using them within publicly funded health systems. A value-adaptive design permits ‘in progress’ changes to be made to the trial according to criteria which reflect its overall value to the healthcare system, including the cost-effectiveness of the technologies under investigation, the cost of running the trial and the total health benefit delivered to patients. These trial designs offer the potential to explicitly balance the costs and benefits of adaptive clinical trials with the health economic benefits expected for populations that are affected by any subsequent health technology adoption decisions. They may also improve the expected value of learning from the budget that is spent within a trial.
Main body
This paper introduces value-adaptive designs for publicly funded clinical trials. It discusses the idea of delivering ‘value for money’ in health technology assessment, what is meant by being ‘value-adaptive’ and the key features that characterise these designs. The methodology behind one kind of value-adaptive design – the value-based sequential model of a two-armed clinical trial proposed by Chick et al. (2017) – is described and illustrated using three retrospective case studies from the United Kingdom. The paper concludes by reviewing a range of perspectives provided by stakeholders, together with our own thoughts, on the practical opportunities and changes required for implementing a value-adaptive approach.
Conclusions
Value-adaptive clinical trial designs offer the potential to align health research funding allocations with population health economic goals. Many of the systems required to deploy value-adaptive designs within a publicly funded health system already exist and, with increased application, experience and refinement, they have the potential to deliver improved value for money.
Introduction
Healthcare systems are experiencing rapid technological change and their ability to evaluate the potential offered by new treatments is under increasing scrutiny. Clinical trials have traditionally focused on assessing patient-level clinical effectiveness. However, in recent years, publicly funded healthcare systems have become increasingly focused on estimating the value for money offered by new health technologies [1, 2]. The observation that clinical trials and health technology adoption decisions are typically driven by different metrics – clinical effectiveness on the one hand and cost-effectiveness on the other – suggests an opportunity for designing clinical trials in a way that incorporates both health-related and cost-based criteria. This is the crux of taking a value-based approach to designing a clinical trial. Value-adaptive designs aim to make the value-based approach to assessing health technologies ‘adaptive’, that is, to exploit the flexibility offered by adaptive trials [3] in a cost-effective manner, to align the current value-for-money trend in healthcare delivery with that of trial design.
This paper explores value-adaptive designs and summarises the results of recent research which applies a specific kind of value-adaptive design – a sequential clinical trial with two arms whose stopping rule is determined by value-adaptive criteria – within the context of publicly funded trials in a single-payer system. The value-adaptive approach places additional demands on a clinical trial’s data collection processes, because the costs of both the health technologies and the research process must be estimated, together with the size of the population of patients to benefit from the technology adoption decision [4,5,6,7,8]. As it is becoming increasingly common to measure the costs of treatments as part of health technology assessments, it is natural to ask whether taking a value-adaptive approach can improve the value for money of publicly funded clinical research.
We consider value-adaptive designs from the perspective of a large public funder such as the UK’s National Institute for Health and Care Research (NIHR). Alongside its clinical research responsibilities, the NIHR is directly involved in the design and running of health technology assessments (HTAs). The NIHR supports the delivery of novel, complex and innovative clinical trials, including adaptive trials (e.g., STAMPEDE [9], the Randomised Evaluation of COVID-19 Therapy (RECOVERY) trial [10]). Additionally, evidence from NIHR-funded studies is used to inform national clinical guidelines and HTA decisions for new and existing health technologies (Footnote 1). The NIHR has prioritised improving the efficiency of clinical trials [11], stating that it is “keen to see the design, development and delivery of more efficient, faster, innovative studies to provide robust evidence to inform clinical practice and policy” [12].
Sect."Background"of this paper introduces the background to value-adaptive trials. Sect."Value-adaptive clinical trials"discusses a range of aspects of clinical trial design which could benefit from the value-adaptive approach, describes the methodology behind the value-based sequential design that is the focus of this paper and applies it to three retrospective case studies using data from UK clinical trials. Sect."Implications of taking a value-adaptive approach in publicly funded research"summarises the views of stakeholders including funders, clinicians, trial teams, the public and healthcare decision makers, as well as our own thoughts, on opportunities and changes required to adopt a value-adaptive approach in a publicly funded healthcare system such as the NHS.
This paper presents results from the EcoNomics of Adaptive Clinical Trials (ENACT) project. ENACT was part of the NIHR’s Efficient Studies funding call (2019) for Clinical Trials Units (CTUs). The ENACT project team undertook a series of workshops with key stakeholders from across the NIHR on the potential use and implementation of value-adaptive methods in NIHR research. It also funded two of the retrospective case studies whose results are reported in this paper.
Background
Taking a ‘value-adaptive’ perspective to designing a clinical trial means being both ‘adaptive’ and ‘value-based’. An adaptive clinical trial analyses data as it accumulates over the course of the trial to inform changes which meet pre-determined objectives for the healthcare system and/or its funder. An adaptive design might offer the option to stop the trial earlier or run the trial longer than planned, or to maintain or change the ratio of patients allocated to its arms, according to how persuasive the accumulated evidence is at one of the trial’s ‘interim analyses’. An adaptive trial differs from what we refer to in this paper as a ‘fixed sample size’ trial, which recruits to a predetermined sample size, using fixed allocation ratios to the trial’s arms, and for which no changes are permitted in response to accumulating evidence. Adaptive designs have the potential to prevent patients from being needlessly allocated to unpromising treatment arms and to deliver the better treatment to patients sooner. They are becoming more common [3, 13, 14] and there now exists guidance on their operation, as well as discussion of the methodological challenges that they pose [15, 16].
Adaptive trials can be designed according to frequentist or Bayesian principles. The frequentist approach assesses accumulating evidence using hypothesis tests which meet predefined criteria for statistical significance and power. The Bayesian approach uses evidence from the trial to update a so-called ‘prior probability distribution’ for an unknown value of interest—such as the difference between the average efficacy of two or more health technologies—in the population of patients which meet the trial’s inclusion criteria.
In a ‘value-based’ adaptive clinical trial – which we refer to hereafter as a ‘value-adaptive’ clinical trial – changes to the way the trial operates are informed by estimates of the costs and benefits of the health technologies and potentially the costs and benefits of the trial itself. In the approach that we take in this paper, value-adaptive designs are Bayesian designs. We discuss this approach in Sect. "Value-adaptive clinical trials".
One of the key requirements of taking a value-based approach is that patients’ health outcomes are valued in monetary terms. This permits the costs and benefits of the health technologies, as well as the costs of carrying out the trial, to be valued in a common metric. Valuing health outcomes in monetary terms is becoming increasingly common in the HTA literature [17]. For example, when a body such as the UK’s National Institute for Health and Care Excellence (NICE) assesses new health technologies, it typically estimates the additional Quality Adjusted Life Years (QALYs) gained for a patient receiving the new technology, compared to existing care, as well as the additional cost that is likely to be incurred by the NHS and social services in providing that technology [18]. NICE uses this information to inform a decision about whether the new technology is cost-effective and whether it should be approved for use in the NHS. Typically, a new treatment is considered cost-effective for the NHS if it is expected to deliver one additional QALY at an additional cost of no more than £20,000 to £30,000 [18] (Footnote 2).
Valuing health outcomes in monetary terms and comparing an accumulating estimate of cost-effectiveness with the cost of continuing the trial in its current form, versus changing it, permits a value-adaptive design to be informed by ‘value of information’ (VoI) methods. These methods have seen increasing use in UK HTAs in recent years [19], with published guidance on their use in non-adaptive settings [20, 21] and pre-trial cost-effectiveness modelling [22, 23]. The basic idea behind a VoI approach is that, as more patients are recruited into a trial, the estimate of cost-effectiveness becomes more precise, which reduces the risk of making an incorrect decision about which health technology is superior. However, the ‘value-added’ of information resulting from recruiting an additional patient declines as the trial’s sample size increases. As a result, the trial’s so-called ‘optimal’ sample size is determined when the expected benefit of recruiting another patient is equal to the expected cost of doing so.
Despite the interest in using VoI methods to design value-based fixed sample size trials [4, 19,20,21, 24], little work has considered extending the ideas to adaptive clinical trials. Flight et al. [24] found that cost-effectiveness criteria are not routinely incorporated into the design of adaptive trials and that adaptive trials rarely account for the costs of the research process; among those interviewed in a qualitative study, there was a perceived potential benefit in incorporating such issues into the design of future trials [25].
Value-adaptive clinical trials
A value-adaptive approach can be applied to a range of features of clinical trials, some of which are summarised in Table 1. These include stopping a two-armed sequential trial using value-based criteria (discussed in Sect. "The Bayesian value-based approach to designing an adaptive clinical trial").
[TABLE 1 OMITTED: SEE PDF]
Regardless of the precise feature being addressed, a value-adaptive design focuses on estimating the cost-effectiveness of the health technologies under investigation, measured using the incremental net monetary benefit (INMB). This means that the patient-level costs of the technologies must be measured or estimated, in addition to the health outcomes. An HTA agency may also place a monetary valuation on a measured health outcome, using a societal maximum ‘willingness to pay’ for one unit of the health outcome, such as a maximum willingness to pay for one QALY. Furthermore, the value-adaptive design’s focus on the cost-effectiveness of the research process means that the fixed and variable costs of carrying out the clinical trial should be estimated, because they inform the decision about whether to adapt the trial as it progresses. Finally, the focus on the overall benefit of the trial to the healthcare system requires an estimate of the size of the population that is expected to benefit from the technology adoption decision that the trial informs. The next two sections examine these ideas in more detail.
Estimating the cost-effectiveness of a health technology
We can measure the value that a health technology is expected to generate for a patient by converting its estimated health benefit – in the UK, NICE’s standard approach is to use the Quality Adjusted Life Year (QALY) – into a monetary measure [30]. In a clinical trial, this is achieved by calculating the average number of QALYs for patients who are treated with one of the health technologies of interest and multiplying the result by the maximum amount a decision maker, such as NICE, is willing to pay for one additional QALY (the so-called ‘willingness to pay’ (WTP) threshold). The estimated cost of providing the technology to a patient is obtained by calculating the average cost of treating the patients who have received that technology. These costs include the cost of the technology itself and the costs of administration, staff time and other resources.
These costs and benefits can be used to estimate the expected net monetary benefit (NMB) of the health technology for one patient. This is equal to the expected monetary benefit of the technology minus the expected cost:
$$ENMB=WTP\times EQALY-ECost.$$
One of the simplest trial designs compares two health technologies, such as a new health technology ‘A’ and one that is already in use (‘B’). We can estimate the expected incremental net monetary benefit (EINMB) from treating a patient with technology A instead of technology B (\({EINMB}_{per\;patient}\)), by subtracting the estimate of the ENMB for B from that of A:
$${EINMB}_{per\;patient}={ENMB}_{A}-{ENMB}_{B}$$
(1)
If there are no costs to the health service of switching from B to A, the new health technology A is adopted in preference to B if \({EINMB}_{per\;patient}\) in Eq. (1) exceeds zero. We can say that technology A is ‘cost-effective’, that is, it is expected to deliver a higher NMB than technology B. If \({EINMB}_{per\;patient}\) = 0, the technologies are expected to perform equally well, and B is said to be cost-effective if \({EINMB}_{per\;patient}\) < 0. The value-based sequential design that we consider in Sect. "The Bayesian value-based approach to designing an adaptive clinical trial" uses the cost and benefit data collected during the trial to estimate \({EINMB}_{per\;patient}\). A description of estimating the EINMB using data from a two-arm clinical trial is provided in Table 2.
[TABLE 2 OMITTED: SEE PDF]
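As a concrete illustration, the following minimal sketch estimates the per-patient EINMB of Eq. (1) from simulated two-arm data. All values here – the willingness to pay, the QALY and cost distributions and the sample sizes – are illustrative assumptions, not data from any trial discussed in this paper:

```python
import numpy as np

# Minimal sketch: per-patient EINMB from two-arm data (all values assumed).
rng = np.random.default_rng(seed=1)

WTP = 20_000.0  # assumed willingness to pay per QALY (GBP)
qaly_A = rng.normal(0.70, 0.10, size=200)  # QALYs per patient, technology A
qaly_B = rng.normal(0.65, 0.10, size=200)  # QALYs per patient, technology B
cost_A = rng.normal(3_000, 500, size=200)  # treatment cost per patient, A
cost_B = rng.normal(2_500, 500, size=200)  # treatment cost per patient, B

# Expected net monetary benefit per arm: ENMB = WTP x E[QALY] - E[Cost]
enmb_A = WTP * qaly_A.mean() - cost_A.mean()
enmb_B = WTP * qaly_B.mean() - cost_B.mean()

# Per-patient expected incremental net monetary benefit, Eq. (1)
einmb = enmb_A - enmb_B
print(f"Estimated EINMB per patient: {einmb:,.0f} GBP")
print("Technology A estimated cost-effective" if einmb > 0
      else "Technology B estimated cost-effective")
```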
These methods measure cost-effectiveness at the level of the individual patient. A range of approaches can be used to calculate the total, or population, expected incremental value of the technology to the healthcare system. Under the assumption that the number of patients who will benefit from the technology adoption decision, P, is not related to the estimate of EINMB that results from the trial, population EINMB can be calculated by multiplying the per patient expected incremental net monetary benefit in Eq. (1) by P. One way to estimate P is to multiply the annual incidence of the condition that is being studied by the number of years over which the adoption decision is expected to apply. If P and EINMB are related – for example, if a higher EINMB is believed to lead to a larger population to benefit – it is straightforward to model EINMB as a function of P and then calculate the population benefit as E[P × EINMB(P)].
Finally, if the cost of switching from technology B to technology A is greater than zero, the population incremental benefit becomes E[P × EINMB(P)] − C, where C is the switching cost. Accounting for the total benefit provided by the health technology, and subtracting the cost of switching, therefore permits the total economic value of the technology adoption decision to reflect societal costs and benefits.
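Continuing the sketch above, the population-level calculation under a fixed population size P reduces to a few lines; the incidence, decision horizon and switching cost below are again illustrative assumptions:

```python
# Population-level incremental benefit (continuing the sketch above;
# all values are illustrative assumptions).
annual_incidence = 5_000   # assumed new patients per year
decision_horizon = 10      # assumed years the adoption decision applies
P = annual_incidence * decision_horizon

switching_cost = 1_000_000.0  # assumed one-off cost of switching from B to A

population_benefit = P * einmb - switching_cost
print(f"Population EINMB net of switching cost: {population_benefit:,.0f} GBP")
```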
A Bayesian value-based approach to designing a fixed sample size clinical trial
One of the key ideas underlying a value-based clinical trial is that the uncertainty surrounding a health technology assessment decision can be reduced by paying to recruit more patients to the trial. The presence of uncertainty means that there is a risk that better outcomes for patients, on average, could be achieved if an alternative technology adoption decision is made [20]. Reducing this uncertainty, by recruiting more patients, reduces this risk. However, because running the trial costs money, the ‘added value’ provided by recruiting additional patients, and allocating them to the arms of the trial where they provide the maximum value, should be compared with the cost of acquiring and retaining those patients, to judge their ‘value for money’ in reducing uncertainty.
Rooted in Bayesian decision theory, VoI analysis provides a framework for comparing the costs and benefits of running a fixed sample size clinical trial. Table 3 shows four levels at which VoI analysis can be conducted for such trials [34]. Evidence available before the start of the trial can be used to specify a ‘prior probability distribution’ for the unknown value of EINMB defined in Eq. (1). The prior probability distribution reflects the evidence available to the researchers at the start of the trial. Data accumulating during the trial are then used to update the prior distribution and obtain a ‘posterior distribution’ for the EINMB, using standard Bayesian methods [35]. The resulting expected value of the posterior distribution is a weighted average of prior information and the sampled data. Bayesian updating can take place on multiple occasions as the trial progresses, with multiple updates being made to the original prior distribution, giving a succession of posterior probability distributions.
[TABLE 3 OMITTED: SEE PDF]
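For illustration, under a commonly used conjugate normal model with known sampling variance (an assumption made here for exposition; the cited methods [7, 35] cover more general cases), the posterior mean and variance of EINMB after observing \(n\) patients are precision-weighted combinations of the prior and the data:

$$m_n=\frac{\mu_0/\sigma_0^2+n\bar{x}/\sigma^2}{1/\sigma_0^2+n/\sigma^2},\qquad v_n=\left(\frac{1}{\sigma_0^2}+\frac{n}{\sigma^2}\right)^{-1},$$

where \(\mu_0\) and \(\sigma_0^2\) are the prior mean and variance of EINMB, \(\bar{x}\) is the sample mean of the observed incremental net monetary benefits and \(\sigma^2\) is the sampling variance. As \(n\) grows, \(v_n\) shrinks and \(m_n\) is increasingly dominated by the data.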
As more patients are recruited into the trial, the estimate of cost-effectiveness becomes more precise and this is reflected in the reduced variance in the posterior distribution for EINMB. Increased precision reduces the risk of making an incorrect decision about which health technology is superior on cost-effectiveness grounds, but it reduces the ‘value-added’ of recruiting an additional patient. Eventually, a point is reached at which the expected cost of recruiting another patient is equal to the expected benefit. This point defines the fixed sample size trial’s so-called ‘optimal’ sample size: prior to reaching the optimal sample size, the expected benefits of recruiting another patient outweigh the expected costs; beyond it, the expected costs outweigh the expected benefits.
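The sketch below shows how an optimal fixed sample size emerges from this trade-off under the normal model sketched above, using standard value-of-information quantities (the expected value of sample information, EVSI, and the expected net benefit of sampling, ENBS). Every parameter value is an assumption chosen purely for illustration:

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch of a value-based fixed sample size calculation under
# the conjugate normal model. All parameter values below are assumptions.
mu0 = 500.0              # prior mean of per-patient EINMB (GBP)
v0 = 2_000.0 ** 2        # prior variance of EINMB
sigma2 = 10_000.0 ** 2   # sampling variance of per-patient EINMB observations
P = 50_000               # population to benefit from the adoption decision
cost_per_patient = 400.0 # variable cost of recruiting one patient

def enbs(n):
    """Expected net benefit of sampling n patients (normal-normal model)."""
    v_n = 1.0 / (1.0 / v0 + n / sigma2)  # posterior variance after n patients
    s_n = np.sqrt(v0 - v_n)              # sd of the preposterior posterior mean
    # EVSI: expected value of the adopt/reject decision made after n
    # observations, minus the value of deciding now on the prior mean alone.
    evsi = P * (mu0 * norm.cdf(mu0 / s_n) + s_n * norm.pdf(mu0 / s_n)
                - max(mu0, 0.0))
    return evsi - cost_per_patient * n

ns = np.arange(1, 5001)
values = np.array([enbs(n) for n in ns])
n_star = ns[values.argmax()]
print(f"Optimal sample size: {n_star}, ENBS: {values.max():,.0f} GBP")
```

In this stylised example, ENBS rises while the marginal value of an extra patient exceeds the per-patient cost and falls thereafter, so the grid search recovers the ‘optimal’ sample size described in the text.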
This value-based approach has the advantage of incorporating parameters that are all meaningful from a clinical, medical and health policy standpoint because it includes measures of patients’ health outcomes, treatment costs, research costs, technology switching costs and the size of the population to benefit from the technology adoption decision, as well as the willingness to pay of the HTA agency for health gain. These measures reflect the increasing emphasis on delivering value to publicly funded health care systems. In particular, the design of the trial is not focused solely on estimating an incremental treatment effect at the level of an individual patient, nor is it governed by the traditional, frequentist, type I and type II error criteria. These might not adequately represent quantities such as disease prevalence, average health benefit and the incremental costs generated by the health technologies.
The Bayesian value-based approach to designing an adaptive clinical trial
The Bayesian value-based approach to designing an adaptive clinical trial extends the value-based approach to designing a fixed sample size trial by allowing the trial’s accruing cost-effectiveness evidence to be compared with the cost of running the trial to inform changes to the trial as it progresses. The expected costs of continuing the trial in its current form versus changing it can be calculated and used to decide whether to change the trial or maintain the status quo.
If desired, simulations of value-adaptive trials can be run to produce estimates of frequentist power, bias and other characteristics, in line with published guidelines for reporting the characteristics of complex innovative trials [36].
In the next two sections, we illustrate an application of the value-adaptive approach to designing an adaptive clinical trial by reviewing the value-based design of a two-armed clinical trial with pairwise allocations to the arms proposed by Chick and collaborators [5, 7]. In Sect. "The value-based sequential two-arm clinical trial design with adaptive sample size" we present an overview of the methodology and in Sect. "Application of the Bayesian value-based sequential design to three published retrospective case studies" we summarise the results of three published applications of the model to retrospective data from UK clinical trials.
The value-based sequential two-arm clinical trial design with adaptive sample size
The value-based sequential clinical trial design proposed by Chick et al. (2017) [7] is a specific type of value-adaptive design. In this design, patients are randomised to one of two treatments and the trial’s stop/continue decisions are informed by collecting information on the accumulating estimate of individual and population EINMB, the number of patients whose outcomes have been observed (which determines the precision of the estimate of EINMB) and the expected costs of running the trial and of switching technologies.
The design assumes that follow-up of the cost-effectiveness data for each patient takes place after a defined period. This can be as small as a couple of hours, or as large as several years. For a value-based sequential design, the follow-up period must be smaller than the trial’s planned recruitment length so that, given the cost-effectiveness evidence accumulated at a given interim analysis, a ‘stop trial/continue trial’ decision makes sense; if the follow-up period is greater than the recruitment period, there is no opportunity to stop the trial before it reaches its maximum planned sample size.
The value-based sequential design uses the VoI methods described in Sect. "Estimating the cost-effectiveness of a health technology", but within a dynamic framework. It uses what is termed a ‘dynamic programming’ approach to define the trial’s ‘stopping rule’ [7, 40,41,42]. The stopping rule halts recruitment of further patients when the expected benefit of continuing is not worth the expected cost. In this way, the overall expected value of the trial to the funder is maximised. The expected value is measured by the total net monetary benefit that patients expect to receive from the health technology assessment decision, less the cost of the research and, if relevant, of adopting one of the two technologies.
The trial’s stopping rule is operationalised by defining a ‘stopping boundary’, which is best viewed graphically as in Fig. 1. The stopping boundary is obtained at the start of the trial and, as the trial progresses, the research team compares the expected value of the posterior distribution for EINMB after outcomes for n patients have been observed with the stopping boundary. If the posterior mean goes outside of the continuation region defined by the stopping boundary, recruitment to the trial halts and the remaining patients ‘in the pipeline’ – those who have been treated but whose outcomes are yet to be observed – are all followed up, prior to the adoption decision being made.
[FIGURE 1 OMITTED: SEE PDF]
Figure 1 illustrates a typical stopping boundary for this kind of trial. The vertical axis displays the prior/posterior mean of EINMB. The horizontal axis shows the trial’s sample size, measured in terms of the number of pairs of patients recruited and randomised into the trial, up to the maximum sample size for the trial (marked ‘Maximum sample size’). The axis extends beyond this point to permit patient data to be monitored when there exists delay in observing the patient-level cost-effectiveness data. The figure shows that the design has three distinct stages:
1.
During Stage I, patients are recruited to the trial and randomised to the two treatments, but cost-effectiveness data are not available until the defined follow-up point – labelled ‘Delay’ in Fig. 1 – is reached for the first pair of patients recruited to the trial. For example, if the time to follow-up of cost-effectiveness data is one year, Delay is equal to the number of pairs of patients recruited to the trial over one year.
2.
During Stage II, outcome data and treatment cost data are accruing and are used to update the prior distribution for cost-effectiveness. There is the option to recruit and randomise more patients to the two arms of the trial, or to stop recruitment. The stopping boundary demarcates the region where the trial should continue to recruit patients (the shaded ‘continuation region’) from the region where the trial should stop. The shape of the stopping boundary and the size of the continuation region will depend on the trial-specific parameters that are used to solve the value-based sequential model. These include P, the variance of EINMB in the population (sometimes called the ‘sampling variance’), the time to follow-up of the cost-effectiveness data, the cost of sampling, the cost of switching technologies and the societal willingness to pay for one unit of health, such as a QALY.
3.
During Stage III, recruitment to the trial has finished (either because the Stage II stopping boundary has been crossed or because the trial’s maximum planned sample size has been reached). Data from patients who were ‘in the pipeline’ when the trial stopped are observed and recorded as their time to follow-up is reached. When all data for all recruited patients have been observed, the technology that is estimated to be cost-effective, according to the criteria in Sect. "Estimating the cost-effectiveness of a health technology", is considered for adoption.
As well as providing a rule for stopping the sequential trial using value-based criteria, the choice of prior mean for the trial may be used to choose the best kind of trial design from the following choices: run no trial at all; run a non-adaptive value-based design of the kind described in Sect. "A Bayesian value-based approach to designing a fixed sample size clinical trial"; run the value-based sequential trial. These ideas are also illustrated in Fig. 1: if the prior mean is sufficiently high or low (that is, above A or below B in Fig. 1), the expected value of immediately adopting one of the two technologies exceeds the expected value of running any trial. This might happen, for example, if earlier-stage trial data were extremely favourable towards one of the two technologies, warranting an immediate adoption recommendation. For values of the prior mean between the points labelled C and D, it is optimal to run a value-based sequential design. For intermediate values – between points A and C or between B and D – it is optimal to run a design with a fixed sample size, selected by maximising the expected net benefit of sampling as described in Sect. "A Bayesian value-based approach to designing a fixed sample size clinical trial".
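The sketch below illustrates the Stage I–III monitoring logic in simplified form. The real stopping boundary is computed by dynamic programming [7] and depends on all the trial-specific parameters listed above; here a hypothetical triangular boundary stands in for that solution, so the code shows only the shape of the monitoring loop, not the published method:

```python
import numpy as np

# Stylised sketch of the Stage I-III monitoring logic. The triangular
# boundary below is a hypothetical stand-in for the dynamic-programming
# solution of Chick et al. [7], used only to illustrate the loop.
rng = np.random.default_rng(seed=2)

n_max = 200   # maximum number of pairs (assumed)
delay = 20    # pairs 'in the pipeline' owing to follow-up delay (assumed)
mu0, v0 = 0.0, 2_000.0 ** 2   # assumed prior on per-patient EINMB
sigma2 = 10_000.0 ** 2        # assumed sampling variance per pair

def boundary(n):
    """Hypothetical continuation half-width, shrinking to zero at n_max."""
    return 4_000.0 * (1.0 - n / n_max)

true_einmb = 1_500.0  # unknown in practice; set here only to simulate data
x = rng.normal(true_einmb, np.sqrt(sigma2), size=n_max)  # per-pair INMB

for n in range(delay, n_max, 10):   # interim analyses every ten pairs
    seen = x[: n - delay]           # pipeline outcomes not yet observed
    v_post = 1.0 / (1.0 / v0 + seen.size / sigma2)
    m_post = v_post * (mu0 / v0 + seen.sum() / sigma2)  # posterior mean
    if abs(m_post) > boundary(n):   # outside the continuation region
        print(f"Stop recruitment at {n} pairs; posterior mean {m_post:,.0f} GBP")
        break
else:
    print(f"Reached the maximum sample size of {n_max} pairs")

# Stage III: the remaining pipeline pairs are followed up before the
# adoption decision is made.
```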
Application of the Bayesian value-based sequential design to three published retrospective case studies
We review the application of the Bayesian value-based sequential design to three published case studies using retrospective clinical trial data from the United Kingdom: the ProFHER pragmatic trial, funded by the NIHR to compare surgical and nonsurgical intervention (sling immobilisation) for the treatment of proximal humerus fracture [44]; the CACTUS trial, funded by the NIHR and the North of Tyne PCT to evaluate the clinical and cost-effectiveness of computer-based speech and language therapy (CSLT) in patients with aphasia following stroke [45]; and the HERO trial, which evaluated whether hydroxychloroquine is superior to placebo for the treatment of hand osteoarthritis [46]. More detail about these analyses can be found in: for the ProFHER trial, Forster et al. [47]; for the HERO trial, Welch et al. [48]; for the CACTUS trial, Flight et al. [49]. Key features of the three trials are presented in Table 4. For all three trials, our analyses take a UK perspective. The three case studies were carried out at different times and built upon previous health economic evaluation work, so the willingness to pay for one QALY differs across the case studies, although all values lie within the typical threshold of £20,000 to £30,000 used by NICE [18] (£30,000 per QALY for the ProFHER and HERO evaluations; £20,000 per QALY for CACTUS).
[TABLE 4 OMITTED: SEE PDF]
For each application, we compare performance characteristics for three different clinical trial designs:
1.
The original, frequentist, fixed sample size trial, designed according to traditional frequentist principles for power and Type I error probabilities and not value-based principles;
2.
A Bayesian value-based one-stage design that maximises the expected net benefit of sampling, as described in Sect. "A Bayesian value-based approach to designing a fixed sample size clinical trial";
3.
The Bayesian value-based sequential design of Sect. "The value-based sequential two-arm clinical trial design with adaptive sample size", with a maximum sample size chosen to be equal to the optimal sample size of the value-based one-stage design.
We assessed the performance of these designs by running Monte Carlo simulations based on 5000 bootstrapped samples for each simulated trial. To understand the general idea behind how these Monte Carlo simulations work, consider Fig. 2. This shows the stopping boundary for the value-based sequential design from the HERO application of Welch et al. [48], when the maximum sample size is equal to 124 pairwise allocations (red, continuous boundary; this is the sample size chosen for the HERO trial) and 177 pairwise allocations (blue, dashed boundary; this is the optimal sample size of the value-based one-stage design). The trial data path for the posterior mean of the expected incremental net monetary benefit of hydroxychloroquine compared with placebo is shown in black, with interim analyses marked using small circles. Each circle is labelled with the number of pairwise allocations which contribute to the respective interim analysis, which we set at every ten pairwise allocations. No interim analysis took place after ten pairwise allocations owing to data sparsity (further details can be found in Welch et al. [48]).
[FIGURE 2 OMITTED: SEE PDF]
Two bootstrap sample paths are also shown in Fig. 2. ‘Resampled path 1’ is a dashed, pink line and ‘Resampled path 2’ is a dashed, green line. The point at which each of the paths starts marks the start of Stage II of the sequential trial. The black and pink paths remain in the Stage II continuation region for a value-based sequential design whose maximum sample size is 124 pairwise allocations (red stopping boundary), meaning that if either of these two paths had been the path from the clinical trial, it would not have stopped early and would, instead, have run to the maximum sample size of 124 pairwise allocations. The green path crosses the upper stopping boundary between the interim analyses for 30 and 40 pairwise allocations, so if it had been the path from the trial, recruitment would have stopped at 40 pairwise allocations and the sample size of the trial would have been 114 pairwise allocations.
Once all pipeline data have been observed, the final points on both the black and pink paths are negative, meaning that hydroxychloroquine is estimated not to be cost-effective. In contrast, the final point on the green path is positive, meaning that hydroxychloroquine is estimated to be cost-effective. For each of our three applications, the proportion of bootstrapped paths which show the new technology to be cost-effective is used as the estimate of the probability that the new technology is cost-effective. The trial’s expected sample size (measured in the number of pairs of patients recruited) is calculated by averaging the sample sizes of the bootstrapped paths; the expected cost of the trial is calculated by multiplying the average sample size by the estimated cost of randomising a pair of patients into the trial and adding the estimated fixed costs of the trial. Finally, the expected net benefit is calculated by multiplying the final value of the posterior mean for each path by the willingness to pay of the funder and the number of patients expected to benefit from the technology adoption decision and subtracting the estimated cost of the trial.
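A simplified, self-contained version of this bootstrap procedure is sketched below; it reuses the stylised boundary and conjugate model from the previous sketch rather than the published dynamic-programming boundary, and all inputs are assumptions rather than case study data:

```python
import numpy as np

# Simplified bootstrap evaluation of operating characteristics. The data
# vector, boundary, costs and population size are assumptions standing in
# for the trial-specific inputs of the published case studies [47-49].
rng = np.random.default_rng(seed=3)

observed = rng.normal(800.0, 9_000.0, size=150)  # stand-in per-pair INMB data
mu0, v0, sigma2 = 0.0, 2_000.0 ** 2, 9_000.0 ** 2
P = 50_000                # assumed population to benefit from the decision
fixed_cost = 250_000.0    # assumed fixed research costs
cost_per_pair = 4_000.0   # assumed cost of randomising one pair
n_max, delay, step = 200, 20, 10

def run_path(path):
    """Monitor one bootstrapped path; return (sample size, final post. mean)."""
    for n in range(delay, n_max + 1, step):
        seen = path[: n - delay]
        v_post = 1.0 / (1.0 / v0 + seen.size / sigma2)
        m_post = v_post * (mu0 / v0 + seen.sum() / sigma2)
        if abs(m_post) > 4_000.0 * (1.0 - n / n_max):  # stylised boundary
            break
    seen = path[:n]   # pipeline pairs are followed up after stopping
    v_post = 1.0 / (1.0 / v0 + seen.size / sigma2)
    return n, v_post * (mu0 / v0 + seen.sum() / sigma2)

results = [run_path(rng.choice(observed, size=n_max, replace=True))
           for _ in range(5_000)]
sizes = np.array([n for n, _ in results])
finals = np.array([m for _, m in results])

print(f"P(new technology cost-effective): {(finals > 0).mean():.2f}")
print(f"Expected sample size (pairs): {sizes.mean():.0f}")
expected_cost = fixed_cost + cost_per_pair * sizes.mean()
print(f"Expected trial cost: {expected_cost:,.0f} GBP")
print(f"Expected net benefit: {(P * finals).mean() - expected_cost:,.0f} GBP")
```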
Table 5 summarises some of the operating characteristics from the bootstrap analysis of these trials. In all three case studies, it shows that the value-based sequential design delivers the highest expected net benefit for the healthcare system: the sample sizes for the original trial designs were not chosen according to value-based criteria, so it is no surprise that they deliver less value. The largest gain in expected net benefit of the value-based sequential design versus the original design is found for the CACTUS case study (+ 6.7%). Regarding the comparison of the non-adaptive value-based design with the value-based sequential design, the value-based sequential design offers the flexibility to stop the trial when the expected benefits of randomising a further pair of patients are not worth the cost, an option that is not available in the value-based one-stage design. The additional value generated by the value-based sequential design comes from this flexibility. For the ProFHER and HERO case studies this gain is very small, being less than 1%.
[TABLE 5 OMITTED: SEE PDF]
In the CACTUS case study, the optimal sample size of the value-based sequential design is 74.7% higher than the sample size of the original Big CACTUS trial. This greater sample size is due to the considerable residual uncertainty as to which arm is more cost-effective, and the extra observations result in a substantial additional expected net benefit (+ 6.7% when compared with the original design of the trial). In contrast, for the HERO case study, the optimal sample sizes of the value-based one-stage design and value-based sequential design are 40–43% higher, but they deliver little additional expected net monetary benefit (less than 1%). The ProFHER case study value-based designs have a smaller expected sample size; however, they deliver only a small additional expected net monetary benefit.
The analysis reported in Table 5 was carried out under the assumption that the research costs of each trial, as actually incurred, could be used to inform the research costs of the value-based designs. However, the research costs of the value-based designs could differ in practice. For example, extra data collection and analysis costs might accompany the more frequent interim analyses. Those costs should also be considered when choosing an appropriate design and would be straightforward to incorporate (we note that developments in digital technology are reducing those costs over time).
Making trials value-adaptive
The three case studies illustrate that, by taking care in choosing and valuing parameters which appropriately measure the overall value of health technologies and the clinical trial to the healthcare system, it is possible to obtain retrospective applications of a value-adaptive design using real-world data. It is likely that retrospective results such as these could be achieved for the other value-adaptive designs listed in Table 1 of this paper.
Unsurprisingly, the retrospective application results suggest that a value-adaptive design can deliver varying degrees of ‘value’ to the healthcare system, according to the strength of the cost-effectiveness signal that arises in the trial and the precise parameter values that apply to the health technology assessment. For example, had the value-based sequential trial been used instead of the traditional fixed sample design that was used in the ProFHER pragmatic trial, results suggest that the trial could have stopped earlier than planned with a sample size 42% lower than that used in the trial itself, reducing the time to adoption of the more cost-effective treatment and delivering a modest saving in research costs to the healthcare system. This result is primarily due to the strong cost-effectiveness signal favouring one of the two health technologies (non-surgical intervention) that emerged during the trial. In contrast, the weak cost-effectiveness signal from the HERO trial – the cost-effectiveness evidence suggested that hydroxychloroquine was little different from placebo for the treatment of hand osteoarthritis – suggests that a value-based sequential approach would not have led to earlier stopping.
There are many areas for future research in value-adaptive designs. The retrospective applications assume that there is a fixed size of population to benefit from the treatment adoption decision. Alban et al. [5] illustrate how patent protection periods that shorten as the trial lengthens affect the optimal duration of a value-based trial design. Expert elicitation techniques might be used to assess the requisite prior distributions [29, 54]. Pilot data and machine learning techniques could also be used to inform the choice of prior distribution, even for multi-arm value-adaptive trials [8]. Bias is a known issue in the analysis of adaptive trials [3] and has been shown to affect health economic analyses of adaptive trials [55]. Existing corrections to the mean EINMB can be used if the trial participant population differs somewhat from the population of patients to be treated post-adoption. Further work should consider how to incorporate bias adjustment of primary and secondary trial endpoints into the value-adaptive calculations.
One challenge for researchers in this field is to establish to what extent value-adaptive designs can deliver greater value to publicly funded healthcare systems through prospective application. It is likely that ‘increased application, experience and refinement’ of such methods is required to answer this question. One sensible ‘next step’ would be to take a value-adaptive design, use Monte Carlo simulation to generate power curves for the trial, and then revisit the per patient trial costs or the size of the adopting population so that any relevant additional power constraints are met. Another option might be to use a value-adaptive design to ‘shadow’ a clinical trial which has been approved according to more traditional design criteria. This would allow a research team to run the value-based sequential design in the context of prospective data collection, to understand the processes that need to be in place for a future clinical trial using solely the value-based design.
Implications of taking a value-adaptive approach in publicly funded research
This section discusses some of the practical considerations for taking a value-adaptive approach in a publicly funded health system. These were developed as part of the ENACT project: in discussions with stakeholders from clinical trials units, research networks and the NIHR during two workshops in November 2019; through the experience of the ENACT collaborators who worked on the CACTUS and HERO case studies reported above; and from feedback on the ENACT project report that was prepared for the NIHR [57] (Footnote 3). Reflecting the importance of accounting for cross-stakeholder views, as highlighted by the joint guidance from the NIHR and NHS England on “baking in” assessment of the value and real world cost of research as part of clinical research projects [58], we also considered the roles and activities of the following important stakeholder groups: research funders, trial research teams, patients and the public, Research Delivery Teams in health and care organisations, Health and Care Commissioners, HTA decision makers and clinicians.
Table 6 summarises some of the actions needed to implement value-adaptive designs, classified into three stages: design and funding; conduct and analysis; and reporting and implementation. Regarding the design and funding stage, by reflecting the overall objectives of the public healthcare system, a value-adaptive approach can provide useful guidance about the best design of the clinical trial, using the information that is available at the planning stage. For example, it can help guide a decision about whether it is worth making a technology adoption decision immediately, whether a value-based fixed sample size design is best or whether a value-adaptive design is preferable. The value delivered by the trial could be considered one of several important criteria for trial funding decisions, with others including (but not limited to) fairness and access to, and exploration of, new health technologies [3, 15]. Stakeholders also noted that value-based designs could help inform analysis of the potential health gain and value delivered by different trial designs across intervention/disease areas, thereby helping to prioritise research topics across a portfolio of studies in a particular disease area or funded by programmes such as the NIHR Health Technology Assessment funding stream [59,60,61, 63].
[TABLE 6 OMITTED: SEE PDF]
The main opportunities arising at the conduct and analysis stage of a trial come from the flexibility of being able to amend the trial dynamically as data accrue, using the value-adaptive criteria described above. It was noted that more traditional, frequentist, adaptive designs already offer the flexibility to make amendments to the trial, using clinical effectiveness criteria alone, and that value-based methods could be assessed alongside traditional adaptive designs such as those using the O’Brien-Fleming or Pocock stopping rules [25, 63]. Discussions noted that a value-adaptive design is likely to cost more than a traditional trial design, owing to the added complexity of introducing interim analyses, although these additional costs are likely to be mitigated if a value-adaptive design is used alongside a traditional, adaptive, clinical trial. The requirement for ongoing data monitoring to update the health economic outcomes would most likely build on existing data monitoring processes that already take place during the trial. There is a trade-off between these additional costs and the expected value of being value-adaptive: “How much value is your value-based method expected to create for me, over an existing design?” is an important question.
A further challenge, common to all Bayesian approaches (whether they are adaptive or not), is that it is necessary to obtain valid and justifiable prior distributions to describe the key existing uncertainties in the likely trial outcomes before the trial happens. This process has been well discussed in the literature [35]. The evidence to inform current uncertainty in outcomes could build on existing literature for the specification of prior distributions and come from expert elicitation processes [54], from relevant grey or archival literature, or from data from a pilot study [63]. The research funding body could invest in funding for short pilot studies as part of providing such prior evidence.
Patients and the public are an important stakeholder group and research teams might benefit from actively engaging them in any value-adaptive design proposal early. As discussed by Flight et al. [25], value-based designs shift the focus of clinical trials somewhat from the traditional clinical effectiveness viewpoint, and the acceptability of this to the public and potential trial patients would need to be explored.
Regarding the reporting and implementation stage, as with all clinical trials research, methods and results should be reported in a transparent and understandable way for all stakeholders [63]. Clear sections reporting the value-adaptive approach would be necessary. To facilitate this, our ENACT project has added two case studies to the available worked examples, along with the open-source code [43, 48, 49].
Stakeholders also identified that planning communication to the clinical community by key opinion leaders could be important. This would include the methods, their use in practice and understanding relevant case study results and decisions. The work of the Health Innovation Network and analogous agencies in diffusing innovations in complex systems could be helpful [63].
Another implementation challenge concerned clinician familiarity with evidence from fixed sample size trial designs: this familiarity might generate some reluctance to implement an intervention based on findings from a value-adaptive design that terminates earlier than a fixed design might have done. Appropriate training and understanding of the methods should mitigate this.
A strength, and a limitation, of our ENACT study programme was its strong focus on publicly funded research, which left little room for detailed discussion of commercially funded research. Much of the methodology and statistical approach could apply to commercially funded research, although work would be needed to consider how commercial companies’ objectives could be incorporated, building on initial work that allows collaborative bargaining of prices as a function of the health value created [32].
Potential activities towards implementing value-adaptive designs in publicly funded research
Stakeholders were also asked to consider the activities and actions which could be useful or needed to enable implementation of value-adaptive designs (see Table 7). The suggestions reflect many of the recommendations reported by Blagden et al. in relation to the effective delivery of complex and innovative clinical trial designs [63], and also the work of Jaki and of Dimairo et al., who considered why uptake of adaptive approaches has lagged behind their methodological development [63].
[TABLE 7 OMITTED: SEE PDF]
Research funders are in a strong position to support initiatives for greater use of value-adaptive designs. One way to encourage this would be to include an indication in calls that funders would welcome trial applications which included a value-adaptive perspective. For example, a funding call could focus on projects which develop a pilot study to inform prior probability distributions for use in the value-adaptive design.
Research funding bodies might also provide guidance for researchers on how to set out the plans for a value-adaptive trial in research proposals. Activities in reporting progress to the funding body would build on existing processes and infrastructure. The trial team would need to undertake and report pre-specified analyses. An independent group, usually the data monitoring and ethics committee (DMEC), would still need to receive reports on trial progress and help to make recommendations on study adaptations to the trial team and funder. When the trial concludes, the DMEC, trial management group and trial steering committee would need to agree and approve the final analysis and the reasoning for the criteria used to end the trial, given the available evidence. Stakeholders believed that the final reporting of trial results and the use of evidence in health technology assessment and clinical commissioning would differ very little from current practice. The full research report for the funding body and the associated peer reviewed journal articles would follow usual procedures.
Finally, it was considered important that patients, clinicians, funders and health technology assessors can understand the approaches taken, to critically assess them and to interpret the results. This understanding could benefit from careful presentation by research teams including standardised reporting of relevant aspects, which could well be facilitated, for example, by the Adaptive Designs CONSORT Extension [63].
Conclusions
Value-adaptive approaches to clinical trial design provide a range of novel techniques to improve the societal value of clinical trials by seeking to improve the expected learning for trial budgets relative to population health goals. This can include stopping the trial early, running the trial longer, changing the fraction of patients allocated to the arms, or making other adaptations which better align the current value-for-money trend in healthcare delivery with that of the design of the trial. This paper sets out the key methods involved, summarises the methods and results from three case studies and assesses the opportunities and challenges which arise for publicly funded research, using the UK NIHR as an exemplar. Many of the systems needed to deploy value-adaptive designs already exist, although some refinements to processes are likely to be needed. Increased experience and application of these methods will be useful on the pathway to implementation of value-adaptive design approaches, which offer the potential for more efficient publicly funded health research.
Data availability
No datasets were generated or analysed during the current study.
Notes
1.
Methodological innovations from the NIHR are often adopted by the Association of Medical Research Charities (AMRC) and UK Research and Innovation (UKRI), which play an important part in the UK non-commercial research sector. Similar public funders exist in other countries with developed healthcare systems, including Canada (Canadian Agency for Drugs and Technologies in Health. Guidelines for the Economic Evaluation of Health Technologies: Canada. 2017. https://www.cadth.ca/guidelines-economic-evaluation-health-technologies-canada-0) and Australia (Pharmaceutical Benefits Advisory Committee. Guidelines for preparing submissions to the Pharmaceutical Benefits Advisory Committee. Version 4.3. 2008. https://pbac.pbs.gov.au/), making the findings relevant internationally.
2.
Although the QALY is one of the most common health outcomes used to provide a monetary valuation, it is not the only one. For example, Meltzer et al. (2011) used a monetary estimate of a day free from sinusitis in their study of the impact of antibiotics on the average time to recovery from acute bacterial sinusitis.
3.
We structured these workshop discussions around the three stages of clinical trials research identified in the “NIHR Clinical Trials Toolkit Route map” [63]. Stage 1, Design and Funding, covers trial planning and design, funding proposal development, funding panel review; Stage 2, Conduct and Analysis, includes protocol and trial documentation development, ethics approval, internal pilot, trial management and monitoring, safety monitoring, statistical analysis, health economic analysis, and monitoring by funders; Stage 3, Reporting and Implementation, covers reporting results, health technology assessment and implementation of proven interventions into clinical practice.
Abbreviations
CONSORT:
Consolidated Standards of Reporting Trials
CRN:
Clinical Research Network
DMEC:
Data Monitoring and Ethics Committee
ENACT:
EcoNomics of Adaptive Clinical Trials
EINMB:
Expected Incremental Net Monetary Benefit
ENBS:
Expected Net Benefit of Sampling
EVPI:
Expected Value of Perfect Information
EVPPI:
Expected Value of Partial Perfect Information
EVSI:
Expected Value of Sample Information
HTA:
Health Technology Assessment
INMB:
Incremental Net Monetary Benefit
NIHR:
National Institute for Health and Care Research
NHS:
National Health Service
NICE:
National Institute for Health and Care Excellence
NMB:
Net Monetary Benefit
QALY:
Quality Adjusted Life Year
UK:
United Kingdom
UKRI:
UK Research and Innovation
VoI:
Value of Information
WTP:
Willingness to Pay
Pitt C, Goodman C, Hanson K. Economic Evaluation in Global Perspective: A Bibliometric Analysis of the Recent Literature. Health Econ. 2016 Feb 25;25(S1):9–28. Available from: https://doi.org/10.1002/hec.3305
Husereau D, Drummond M, Augustovski F, de Bekker-Grob E, Briggs AH, Carswell C, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) 2022 Explanation and Elaboration: A Report of the ISPOR CHEERS II Good Practices Task Force. Value Heal. 2022;25(1):10–31. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1098301521017952
Pallmann P, Bedding AW, Choodari-Oskooei B, Dimairo M, Flight L, Hampson LV, et al. Adaptive designs in clinical trials: why use them, and how to run and report them. BMC Med. 2018;16(1):29.
Pertile P, Forster M, La TD. Optimal Bayesian sequential sampling rules for the economic evaluation of health technologies. J R Stat Soc Ser A Stat Soc. 2014;177(2):419–38.
Alban A, Chick S, Forster M. Value-based clinical trials: selecting recruitment rates and trial lengths in different regulatory contexts. Manage Sci. 2023;69(6):3516–35.
Chick S, Gans N, Yapar O. Sequential, value-based designs for certain clinical trials with multiple arms having correlated rewards. In: Winter Simulation Conference (WSC); 2019. 2019. Available from: https://doi.org/10.1287/mnsc.2021.4137
Chick S, Forster M, Pertile P. A Bayesian decision theoretic model of sequential experimentation with delayed response. J R Stat Soc Ser B Stat Methodol. 2017;79(5):1439–62.
Chick S, Gans N, Yapar O. Bayesian sequential learning for clinical trials of multiple correlated medical interventions. Manage Sci. 2022;68(7):4919–38.
Sydes M, Parmar M, Mason M, Clarke N, Amos C, Anderson J. Flexible trial design in practice-stopping arms for lack-of-benefit and adding research arms mid-trial in STAMPEDE: a multi-arm multi-stage randomized controlled trial. Trials. 2012;13(1):168.
Nuffield Department of Population Health. Randomised Evaluation of COVID-19 Therapy (RECOVERY) trial. 2020. Available from: https://www.recoverytrial.net/. Cited 2020 Nov 14.
National Institute for Health Research. Delivering complex and innovative trials. 2020. Available from: https://www.nihr.ac.uk/partners-and-industry/industry/run-your-study-in-the-nhs/complex-innovative-trials.htm. Cited 2020 Nov 13.
National Institute for Health Research. Annual Efficient Studies funding calls for CTU projects. 2019. Available from: https://www.nihr.ac.uk/documents/ad-hoc-funding-calls-for-ctu-projects/20141
Hatfield I, Allison A, Flight L, Julious SA, Dimairo M. Adaptive designs undertaken in clinical research: a review of registered clinical trials. Trials. 2016;17(1):150.
Thorlund K, Haggstrom J, Park J, Mills E. Key design considerations for adaptive clinical trials: a primer for clinicians. BMJ. 2018;360:k698. https://doi.org/10.1136/bmj.k698.
U.S. Food and Drug Administration. Guidance for Industry: Adaptive Design Clinical Trials for Drugs and Biologics. 2019. Available from: http://www.fda.gov/downloads/Drugs/Guidances/ucm201790.pdf. Cited 2020 Mar 26.
Committee for Medicinal Products for Human Use, (CHMP) European Medicines Agency. Reflection Paper on Methodological Issues in Confirmatory Clinical Trials Planned with an Adaptive Design. 2007. Available from: https://www.ema.europa.eu/en/documents/scientific-guideline/reflection-paper-methodological-issues-confirmatory-clinical-trials-planned-adaptive-design_en.pdf
Lakdawalla DN, Doshi JA, Garrison LP, Phelps CE, Basu A, Danzon PM. Defining Elements of Value in Health Care—A Health Economics Approach: An ISPOR Special Task Force Report [3]. Value Heal. 2018;21(2):131–9. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1098301517338925
National Institute for Health and Care Excellence (NICE). NICE health technology evaluations: the manual. 2022. Available from: https://www.nice.org.uk/process/pmg20/chapter/incorporating-economic-evaluation
Mohiuddin S, Fenwick E, Payne K. Use of value of information in UK health technology assessments. Int J Technol Assess Health Care. 2014;30(6):553.
Fenwick E, Steuten L, Knies S, Ghabri S, Basu A, Murray JF, et al. Value of Information Analysis for Research Decisions - An Introduction: Report 1 of the ISPOR Value of Information Analysis Emerging Good Practices Task Force. Value Health. 2020;23(2):139–50.
Rothery C, Strong M, Koffijberg HE, Basu A, Ghabri S, Knies S, et al. Value of information analytical methods: report 2 of the ISPOR value of information analysis emerging good practices task force. Value Health. 2020;23(3):277–86.
Ahern A, Woolston J, Wells E, Sharp S, Islam N, Lawlor E, et al. Clinical and cost-effectiveness of a diabetes education and behavioural weight management programme versus a diabetes education programme in adults with a recent diagnosis of type 2 diabetes: study protocol for the glucose lowering through weight management trial. BMJ Open. 2020;10(4):e035020. https://doi.org/10.1136/bmjopen-2019-035020.
Pollard D, Brennan A, Coates L, Heller S. PDB65 Pretrial Modelling Methods to Justify and Inform the Design of Large RCTs - Expected Value of Sample Information for the DAFNEPLUS Diabetes Education Cluster RCT. Value Health. 2019;S584.
Flight L, Arshad F, Barnsley R, Patel K, Julious S, Brennan A, et al. A Review of Clinical Trials With an Adaptive Design and Health Economic Analysis. Value Health. 2019;22(4):391–8. https://doi.org/10.1016/j.jval.2018.11.008.
Flight L, Julious SA, Brennan A, Todd S, Hind D. How can health economics be used in the design and analysis of adaptive clinical trials? A qualitative analysis. Trials. 2020;21(1):1–12.
Wu J, Frazier P. The parallel knowledge gradient method for batch Bayesian optimization. In: Advances in Neural Information Processing Systems. 2016.
Villar S, Rosenberger W. Covariate-adjusted response-adaptive randomization for multi-arm clinical trials using a modified forward looking Gittins index rule. Biometrics. 2018;74(1):49–57.
Chick S, Inoue K. New two-stage and sequential procedures for selecting the best simulated system. Oper Res. 2001;49(5):732–43.
Oakley J, O’Hagan A. SHELF: the Sheffield elicitation framework (version 2.0). School of Mathematics and Statistics, University of Sheffield, UK; 2010.
Ahuja V, Birge J. Simultaneous learning from multiple patients. Eur J Oper Res. 2016;248(2):619–33.
Ryzhov I, Powell W, Frazier P. The knowledge gradient algorithm for a general class of online learning problems. Oper Res. 2012;60(1):180–95.
Yapar Ö, Chick SE, Gans N. Conditional Approval and Value-Based Pricing for New Health Technologies. Manage Sci. 2024 Nov 19. Available from: https://doi.org/10.1287/mnsc.2022.03628
Alban A, Chick SE, Zoumpoulis S. Learning Personalized Treatment Strategies with Predictive and Prognostic Covariates in Adaptive Clinical Trials. INSEAD Working Paper No. 2022/33/TOM/DSC. 2024. Available from: https://ssrn.com/abstract=4160045
Raiffa H, Schlaifer R. Applied statistical decision theory. 1961.
Spiegelhalter D, Freedman L, Parmar M. Bayesian approaches to randomized trials. J R Stat Soc Ser A Stat Soc. 1994;157(3):357–87.
U.S. Food and Drug Administration. Draft guidance for industry: Interacting with the FDA on Complex Innovative Trial Designs for drugs and biological products. 2019.
Strong M, Oakley JE, Brennan A, Breeze P. Estimating the expected value of sample information using the probabilistic sensitivity analysis sample: A fast, nonparametric regression-based method. Med Decis Mak. 2015;35(5):570–83.
Brennan A, Kharroubi S, O’Hagan A, Chilcott J. Calculating Partial Expected Value of Perfect Information via Monte Carlo Sampling Algorithms. Med Decis Mak. 2007;27(4):448–70.
Brennan A, Kharroubi S. Efficient computation of partial expected value of sample information using Bayesian approximation. J Health Econ. 2007;26(1):122–48.
Alban A, Chick S, Forster M. Extending a Bayesian decision-theoretic approach to value-based sequential clinical trial design. In: Winter Simulation Conference (WSC); 2018. Available from: https://www.informs-sim.org/wsc18papers/includes/files/216.pdf
Powell W, Ryzhov I. Optimal learning. Wiley; 2012. p. 414. ISBN: 978-0-470-59669-2.
DeGroot M. Optimal statistical decisions. Wiley; 2005. p. 512. ISBN: 978-0-471-68029-1.
Forster M, Flight L, Corbacho B, Keding A, Ronaldson S, Tharmanathan P, et al. Report for the EcoNomics of Adaptive Clinical Trials (ENACT) project: Application of a Bayesian Value-Based Sequential Model of a Clinical Trial to the CACTUS and HERO Case Studies (with Guidance Material for Clinical Trials Units). 2021. Available from: https://eprints.whiterose.ac.uk/180084/
Handoll H, Brealey S, Rangan A, Keding A, Corbacho B, Jefferson L. The ProFHER (PROximal Fracture of the Humerus: Evaluation by Randomisation) trial - a pragmatic multicentre randomised controlled trial evaluating the clinical effectiveness and cost-effectiveness of surgical compared with non-surgical treatment for proximal fracture of the humerus in adults. Health Technol Assess. 2015;19(27):1.
Palmer R, Enderby P, Cooper C, Latimer N, Julious S, Paterson G, et al. Computer therapy compared with usual care for people with long-standing aphasia poststroke: A pilot randomized controlled trial. Stroke. 2012;43(7):1904–11.
Kingsbury S, Tharmanathan P, Adamson J, Arden N, Birrell F, Cockayne S. Hydroxychloroquine effectiveness in reducing symptoms of hand osteoarthritis (HERO): study protocol for a randomized controlled trial. Trials. 2013;14:64.
Forster M, Brealey S, Chick S, Keding A, Corbacho B, Alban A, et al. Cost-Effective Clinical Trial Design: Application of a Bayesian Sequential Stopping Rule to the ProFHER Pragmatic Trial. Clin Trials. 2021;18(6):647–56.
Welch C, Forster M, Ronaldson S, Keding A, Corbacho-Martín B, Tharmanathan P. The performance of a Bayesian value-based sequential clinical trial design in the presence of an equivocal cost-effectiveness signal: evidence from the HERO trial. BMC Med Res Methodol. 2024;24(1):155. Available from: https://doi.org/10.1186/s12874-024-02248-9
Flight L, Brennan A, Wilson I, Chick SE. A Tutorial on Value-Based Adaptive Designs: Could a Value-Based Sequential 2-Arm Design Have Created More Health Economic Value for the Big CACTUS Trial? Value Health. 2024;27(10):1328–37. Available from: https://linkinghub.elsevier.com/retrieve/pii/S1098301524027426
Handoll H, Keding A, Corbacho B, Brealey S, Hewitt C, Rangan A. Five-year follow-up results of the PROFHER trial comparing operative and non-operative treatment of adults with a displaced fracture of the proximal humerus. Bone Joint J. 2017;99(3):383–92.
National Institute for Health Research. Funding and Awards: Clinical and cost effectiveness of aphasia computer therapy compared with usual stimulation or attention control long term post stroke (CACTUS). 2022. Available from: https://fundingawards.nihr.ac.uk/award/12/21/01
Latimer N, Bhadhuri A, Alshreef A. Self-managed, computerised word finding therapy as an add-on to usual care for chronic aphasia post-stroke: An economic evaluation. Clin Rehabil. 2020.
Ronaldson S, Keding A, Tharmanathan P, Arundel C, Kingsbury S, Conaghan P, et al. Cost-effectiveness of hydroxychloroquine versus placebo for hand osteoarthritis: economic evaluation of the HERO trial. F1000Res. 2021;10:821. https://doi.org/10.12688/f1000research.55296.1.
O’Hagan A, Buck C, Daneshkhah A, Eiser J, Garthwaite P, Jenkinson D. Uncertain judgements: eliciting experts’ probabilities. John Wiley & Sons; 2006.
Flight L. The use of health economics in the design and analysis of adaptive clinical trials. PhD thesis. University of Sheffield; 2020. Available from: https://etheses.whiterose.ac.uk/id/oai_id/oai:etheses.whiterose.ac.uk:27924.
Chick SE, Forster M, Pertile P. htadelay package. 2017. Available from: https://github.com/sechick/htadelay
Flight L. Report to NIHR: Value-based Adaptive Clinical Trial Designs for Efficient Delivery of NIHR Research - EcoNomics of Adaptive Clinical Trials (ENACT). 2021. Available from: https://www.nihr.ac.uk/documents/explore-nihr/Efficient studies/Sheffield CTRU ENACT - NIHR Final Report 31March2021.docx
NHS England. 12 Actions to support and apply research in the NHS. 2017. Available from: https://www.england.nhs.uk/publication/12-actions-to-support-and-apply-research-in-the-nhs/.
Claxton K, Eggington S, Ginnelly L, Griffin S, McCabe C, Philips Z. A pilot study of value of information analysis to support research recommendations for NICE. 2017.
Chilcott J, Brennan A, Booth A, Karnon J, Tappenden P. The role of modelling in prioritising and planning clinical trials. Health Technol Assess. 2003;7(23):iii, 1–125. https://doi.org/10.3310/hta7230.
Claxton K, Sculpher M. Using value of information analysis to prioritise health research: some lessons from recent UK experience. Pharmacoeconomics. 2006;24(11):1055–68. Available from: http://www.ncbi.nlm.nih.gov/pubmed/17067191
Woods B, Schmitt L, Rothery C, Phillips A, Hallett T, Revill P. Practical metrics for establishing the health benefits of research to support research prioritisation. BMJ Glob Health. 2020;5(8):e002152. https://doi.org/10.1136/bmjgh-2019-002152.
Welton N, Ades A. Research decisions in the face of heterogeneity: what can a new study tell us? Health Econ. 2012;21(10):1196–200.
Pocock SJ. Group sequential methods in the design and analysis of clinical trials. Biometrika. 1977;64:191–9.
O’Brien PC, Fleming TR. A multiple testing procedure for clinical trials. Biometrics. 1979;35:549–56.
Gelman A, Carlin J, Stern H, Dunson D, Vehtari A, Rubin DB. Bayesian data analysis. CRC Press; 2013.
Dimairo M, Pallmann P, Wason J, Todd S, Jaki T, Julious SA, et al. The Adaptive designs CONSORT Extension (ACE) Statement: a checklist with explanation and elaboration guideline for reporting randomised trials that use an adaptive design. BMJ. 2020;369:m115.
Lamont T, Barber N, de Pury J, Fulop N, Garfield-Birkbeck S, Lilford R, Mear L, Raine R, Fitzpatrick R. New approaches to evaluating complex health and care systems. BMJ (Clinical research ed). 2016;352:i154. https://doi.org/10.1136/bmj.i154.
Blagden SP, Billingham L, Brown LC, Buckland SW, Cooper AM, Ellis S, et al. Effective delivery of Complex Innovative Design (CID) cancer trials - A consensus statement. Br J Cancer. 2020;122:473–82.
Jaki T. Uptake of novel statistical methods for early-phase clinical studies in the UK public sector. Clin Trials. 2013;10(2):344–6.
Dimairo M, Boote J, Julious SA, Nicholl JP, Todd S. Missing steps in a staircase: a qualitative study of the perspectives of key stakeholders on the use of adaptive designs in confirmatory trials. Trials. 2015;16(1):1–16.
Morris TP, White IR, Crowther MJ. Using simulation studies to evaluate statistical methods. Stat Med. 2019;38(11):2074–102.
NIHR/NETSCC. The Clinical Trials Toolkit - Routemap. 2020. Available from: https://www.ct-toolkit.ac.uk/routemap/dissemination-of-results/downloads/ct-toolkit-v1.1.pdf. Cited 2020 Nov 14.