This vignette demonstrates how to use the R package bhmbasket
to reproduce parts of “Robust exchangeability designs for early phase clinical trials with multiple strata” by Neuenschwander et al. (2016).
Two examples are shown in this vignette: the analysis of a basket trial’s outcome with the ExNex approach, and the assessment of the operating characteristics of a planned basket trial under several scenarios.
library(bhmbasket)
rng_seed <- 123
set.seed(rng_seed)
Section 2 of Neuenschwander et al. (2016) discusses the application of the ExNex approach to the results of the phase II sarcoma basket trial by Chugh et al. (2009). The prior specifications and the estimated response rates per cohort are shown in the Appendix of Neuenschwander et al. (2016).
A single trial with a given outcome can be created with the function createTrial()
, which takes as arguments the number of subjects and the number of responders of the trial. The outcome of the basket trial by Chugh et al. (2009) can be set up as follows:
chugh_trial <- createTrial(
  n_subjects   = c(15, 13, 12, 28, 29, 29, 26, 5, 2, 20),
  n_responders = c(2, 0, 1, 6, 7, 3, 5, 1, 0, 3))
The analysis of the trial is handled by the function performAnalyses(), which is provided with the chugh_trial, the name of the method "exnex", and the prior parameters as shown in Appendix 5.3 of Neuenschwander et al. (2016):
chugh_prior_parameters <- setPriorParametersExNex(
  mu_mean   = c(-1.735, 0.847),
  mu_sd     = c(0.146, 0.266)^-0.5,
  tau_scale = 1,
  mu_j      = rep(-1.734, 10),
  tau_j     = rep(0.128^-0.5, 10),
  w_j       = c(0.5, 0, 0.5))
if (file.exists("chugh_analysis.rds")) {
<- readRDS("chugh_analysis.rds")
chugh_analysis
else {
}
<- performAnalyses(
chugh_analysis scenario_list = chugh_trial,
method_names = "exnex",
prior_parameters_list = chugh_prior_parameters,
seed = rng_seed,
n_mcmc_iterations = 5e4,
n_cores = 2L,
verbose = FALSE)
saveRDS(chugh_analysis, "chugh_analysis.rds")
}
One can use the function getEstimates()
to get point estimates and credible intervals of the response rates:
getEstimates(analyses_list = chugh_analysis)
#> $exnex
#> Mean SD 2.5% 50% 97.5%
#> p_1 0.15077467 0.06293024 0.0392848259 0.14745358 0.2915622
#> p_2 0.05287414 0.05895604 0.0002959401 0.02684776 0.1941751
#> p_3 0.12699904 0.06781268 0.0123062080 0.12706481 0.2672502
#> p_4 0.18849350 0.06009882 0.0930300121 0.18020901 0.3305355
#> p_5 0.20512931 0.06550208 0.1060958755 0.19408070 0.3631065
#> p_6 0.13140558 0.05094556 0.0383863302 0.13079420 0.2355352
#> p_7 0.17669509 0.05732098 0.0821657158 0.17034967 0.3101239
#> p_8 0.17857554 0.10398925 0.0271685311 0.16135326 0.4669779
#> p_9 0.13019866 0.11375943 0.0010129288 0.12100970 0.4291611
#> p_10 0.15732452 0.05807734 0.0552504578 0.15348422 0.2879676
By comparing these values to the respective numbers provided by Neuenschwander et al. (2016) in Appendix 5.3, one can see that the results agree to the second decimal place.
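To ease this comparison, the estimates can also be rounded to two decimal places, for example with scaleRoundList(), which is used again in the next example (scale_param = 1 is assumed here to leave the values unscaled):
# Round the estimates for a side-by-side comparison with Appendix 5.3;
# scale_param = 1 is assumed to leave the values on the original scale.
scaleRoundList(
  list         = getEstimates(analyses_list = chugh_analysis),
  scale_param  = 1,
  round_digits = 2)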
Section 3.1 of Neuenschwander et al. (2016) presents the design and the operating characteristics of a phase II basket trial in four indications. In their publication, the authors present the scenarios with the respective true response rates per cohort in Table V along with the respective cohort-wise go probabilities, biases, and mean squared errors. The number of subjects is 20 each for the first two cohorts and 10 each for the last two cohorts.
The scenarios are simulated with the function simulateScenarios()
. There are seven different scenarios presented in Table V. In this example, the Scenarios 1, 3, and 4 are reproduced. For each scenario, 10000 basket trial realizations are simulated:
scenarios_list <- simulateScenarios(
  n_subjects_list     = list(c(20, 20, 10, 10),
                             c(20, 20, 10, 10),
                             c(20, 20, 10, 10)),
  response_rates_list = list(c(0.1, 0.1, 0.1, 0.1),
                             c(0.1, 0.1, 0.3, 0.3),
                             c(0.1, 0.1, 0.1, 0.5)),
  scenario_numbers    = c(1, 3, 4),
  n_trials            = 1e4)
The analysis of each simulated basket trial is performed with the function performAnalyses()
. Neuenschwander et al. (2016) use the posterior means of the cohorts’ response rates to calculate the biases and mean squared errors, as well as to derive cohort-wise decisions. Setting the argument post_mean
to TRUE
will save the posterior means along with the posterior probability thresholds, which are required for the decision making laid out below. The prior parameters for the ExNex model are taken from Section 3.1 and Appendix 5.1.2 of Neuenschwander et al. (2016).
prior_parameters <- setPriorParametersExNex(
  mu_mean   = c(logit(0.1), logit(0.3)),
  mu_sd     = c(3.18, 1.94),
  tau_scale = 1,
  mu_j      = rep(logit(0.2), 4),
  tau_j     = rep(2.5, 4),
  w_j       = c(0.25, 0.25, 0.5))
if (file.exists("analyses_list.rds")) {
<- readRDS("analyses_list.rds")
analyses_list
else {
}
<- performAnalyses(
analyses_list scenario_list = scenarios_list,
method_names = "exnex",
prior_parameters_list = prior_parameters,
seed = rng_seed,
n_mcmc_iterations = 5e4,
n_cores = 2L)
saveRDS(analyses_list, "analyses_list.rds")
}
Note that although the total number of simulated trials is 3 x 10000, there are only 2314 unique trial realizations among the three scenarios. As the function performAnalyses()
takes advantage of this, its run time is reduced when compared to applying the model to all simulated trials. However, some additional time is required to map the results from the unique trials back to the simulated trials of each scenario.
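To get an intuition for why there are far fewer unique realizations than simulated trials, consider the following minimal sketch (independent of bhmbasket, and only an illustration of the counting argument): it draws the responder counts of Scenario 1 directly from binomial distributions and counts the unique outcome vectors.
# Illustration only (not part of bhmbasket): draw 1e4 outcome vectors for
# Scenario 1 from binomial distributions and count the unique realizations.
set.seed(rng_seed)
outcomes <- replicate(1e4, rbinom(n = 4, size = c(20, 20, 10, 10), prob = 0.1))
nrow(unique(t(outcomes)))  # far fewer unique outcome vectors than 1e4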
The bias and mean squared error (MSE) per cohort, as well as some posterior quantiles, are estimated with getEstimates()
. The argument point_estimator = "mean"
ensures that the mean instead of the median will be used for calculating the biases and MSEs. In order to better assess and compare the results with the numbers presented by Neuenschwander et al. (2016), scaling and rounding may be applied with scaleRoundList()
:
estimates <- getEstimates(
  analyses_list   = analyses_list,
  point_estimator = "mean")
scaleRoundList(
list = estimates,
scale_param = 100,
round_digits = 2)
#> $exnex
#> $exnex$scenario_1
#> Mean SD 2.5% 50% 97.5% Bias MSE
#> p_1 10.76 6.01 2.46 9.65 25.26 0.76 0.34
#> p_2 10.61 5.97 2.39 9.50 25.05 0.61 0.33
#> p_3 11.54 8.06 1.66 9.64 31.86 1.54 0.56
#> p_4 11.62 8.10 1.68 9.73 32.02 1.62 0.56
#>
#> $exnex$scenario_3
#> Mean SD 2.5% 50% 97.5% Bias MSE
#> p_1 11.19 6.21 2.54 10.07 26.07 1.19 0.37
#> p_2 11.34 6.25 2.60 10.21 26.30 1.34 0.38
#> p_3 28.75 12.22 9.11 27.42 55.60 -1.25 1.65
#> p_4 28.82 12.23 9.14 27.50 55.70 -1.18 1.64
#>
#> $exnex$scenario_4
#> Mean SD 2.5% 50% 97.5% Bias MSE
#> p_1 11.15 6.21 2.54 10.01 26.10 1.15 0.37
#> p_2 11.07 6.18 2.51 9.93 25.97 1.07 0.37
#> p_3 12.21 8.42 1.80 10.26 33.30 2.21 0.66
#> p_4 46.73 14.01 20.69 46.49 74.06 -3.27 2.32
Comparing the biases and MSEs with the respective numbers provided in Table V of Neuenschwander et al. (2016), one can see that the results are rather similar.
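As a quick sanity check on how these summaries relate to each other: the Bias column equals the Mean column minus the true response rate on the percentage scale, which can be verified directly from the output above, e.g. for p_4 in Scenario 4 with a true response rate of 50%:
# Check (illustration): average posterior mean minus true response rate
46.73 - 50  # -3.27, matching the Bias reported for p_4 in scenario 4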
The cohort-wise go decisions are derived with the function getGoDecisions()
, which applies the specified decision rules to each simulated basket trial for each scenario. The estimated go probabilities for each scenario can then be derived with the function getGoProbabilities()
.
The setup of the decision rules in getGoDecisions() warrants further explanation. Generally, a simple decision rule for a go in a single cohort \(j\) can be written as \[P(p_j|\text{data} > \bar{p}_{j}) > \gamma_j,\] where \(p_j|\text{data}\) is the posterior response rate, \(\bar{p}_{j}\) is the boundary response rate, and \(\gamma_j\) is the posterior evidence level for a go decision. Thus, the decision rule consists of three parts: the posterior response rate, the boundary response rate, and the posterior evidence level. The arguments for setting up the decision rules correspond to these three parts: cohort_names
picks the posterior response rates, boundary_rules
specifies the boundary response rates and type of comparison, and evidence_levels
provides the posterior evidence levels. For example, the decision rule \(P(p_1|\text{data} > 0.1) > 0.9\) would then be implemented as
= "p_1"
cohort_names = quote(c(x[1] > 0.1))
boundary_rules = 0.9 evidence_levels
It is possible to specify different boundary rules and evidence levels for different analysis methods with list()
. In this case, only the ExNex method is applied and no list is needed. Note that only evidence levels specified in performAnalyses()
can be utilized.
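As a purely hypothetical sketch of such a specification (the exact list structure is an assumption here; see ?getGoDecisions for the authoritative description), the per-method arguments might look as follows, with one list element per entry of method_names:
# Hypothetical sketch: per-method boundary rules and evidence levels via list();
# matching the elements to method_names by order is an assumption here.
boundary_rules  = list(quote(c(x[1] > 0.1)),   # e.g. for "exnex"
                       quote(c(x[1] > 0.15)))  # e.g. for a second method
evidence_levels = list(0.9,
                       0.8)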
It is possible to implement combined decision criteria, such as the one used by Neuenschwander et al. (2016), who require in Section 3.1 for a go decision in the first cohort that \[P(p_1|\text{data} > 0.1) > 0.9 \quad\land\quad \text{Mean}(p_1|\text{data}) > 0.2\] holds true. This combined decision rule would then be implemented as
= c("p_1", "p_1")
cohort_names = quote(c(x[1] > 0.1 & x[2] > 0.2))
boundary_rules = c(0.9, "mean") evidence_levels
Thus, the go probabilities for the Scenarios 1, 3, and 4 can be estimated by:
decisions_list <- getGoDecisions(
  analyses_list   = analyses_list,
  cohort_names    = c("p_1", "p_1",
                      "p_2", "p_2",
                      "p_3", "p_3",
                      "p_4", "p_4"),
  evidence_levels = c(0.9, "mean",
                      0.9, "mean",
                      0.8, "mean",
                      0.8, "mean"),
  boundary_rules  = quote(c(x[1] > 0.1 & x[2] > 0.2,
                            x[3] > 0.1 & x[4] > 0.2,
                            x[5] > 0.1 & x[6] > 0.2,
                            x[7] > 0.1 & x[8] > 0.2)))

go_probabilities <- getGoProbabilities(decisions_list)
scaleRoundList(
list = go_probabilities,
scale_param = 100,
round_digits = 0)
#> $exnex
#> $exnex$scenario_1
#> overall decision_1 decision_2 decision_3 decision_4
#> Go 21 5 5 8 8
#>
#> $exnex$scenario_3
#> overall decision_1 decision_2 decision_3 decision_4
#> Go 87 10 10 69 68
#>
#> $exnex$scenario_4
#> overall decision_1 decision_2 decision_3 decision_4
#> Go 95 7 7 15 95
By comparing these go probabilities to the values provided in Table V of Neuenschwander et al. (2016) one can see that the results differ at most by 1 percentage point.
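Since the go probabilities are returned in the nested list structure shown above (method, then scenario), single entries can also be inspected directly, for example:
# Access the go probabilities of one scenario from the nested list
# (structure as printed above: method -> scenario).
go_probabilities$exnex$scenario_4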