library(BayesianMCPMod)
library(clinDR)
#> Loading required package: rstan
#> Loading required package: StanHeaders
#>
#> rstan version 2.32.3 (Stan version 2.26.1)
#> For execution on a local, multicore CPU with excess RAM we recommend calling
#> options(mc.cores = parallel::detectCores()).
#> To avoid recompilation of unchanged Stan programs, we recommend calling
#> rstan_options(auto_write = TRUE)
#> For within-chain threading using `reduce_sum()` or `map_rect()` Stan functions,
#> change `threads_per_chain` option:
#> rstan_options(threads_per_chain = 1)
#> Loading required package: shiny
#> Warning: package 'shiny' was built under R version 4.2.3
library(dplyr)
#> Warning: package 'dplyr' was built under R version 4.2.3
#>
#> Attaching package: 'dplyr'
#> The following objects are masked from 'package:stats':
#>
#> filter, lag
#> The following objects are masked from 'package:base':
#>
#> intersect, setdiff, setequal, union
set.seed(7015)
In this vignette, we show how the BayesianMCPMod package can be used for trial planning with continuously distributed data. As in the analysis example vignette, we focus on the indication major depressive disorder (MDD) and make use of historical data included in the clinDR package. More specifically, trial results for BRINTELLIX are utilized to establish an informative prior for the control group.
In a first step, a meta-analytic predictive (MAP) prior is calculated using historical data from five trials whose primary endpoint was change from baseline in MADRS score after 8 weeks. Note that only information from the control groups is integrated, leading to an informative mixture prior for the control group, while a non-informative prior is specified for the active groups.
data("metaData")
testdata <- as.data.frame(metaData)
dataset <- filter(testdata, bname == "BRINTELLIX")
histcontrol <- filter(dataset, dose == 0, primtime == 8, indication == "MAJOR DEPRESSIVE DISORDER")
hist_data <- data.frame(
  trial = histcontrol$nctno,
  est   = histcontrol$rslt,
  se    = histcontrol$se,
  sd    = histcontrol$sd,
  n     = histcontrol$sampsize)

# Pooled standard deviation across the historical trials, weighted by sample size
sd_tot <- with(hist_data, sum(sd * n) / sum(n))
We will make use of the same getPriorList() function as in the analysis example vignette to create a MAP prior.
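The helper is defined in the analysis example vignette rather than exported by the package, so it is not repeated here. The following is a minimal sketch of such a function, built on RBesT, assuming the control prior is derived from the historical estimates via a MAP approach and each active group receives a unit-information (essentially non-informative) prior; the hyperparameters and the robust_weight argument are illustrative assumptions, not the package's definitive implementation.
getPriorList <- function (hist_data, dose_levels, robust_weight = 0.5) {
  sd_tot <- with(hist_data, sum(sd * n) / sum(n))
  # MAP model for the historical control arms (hyperparameters are assumptions)
  gmap <- RBesT::gMAP(
    formula    = cbind(est, se) ~ 1 | trial,
    weights    = hist_data$n,
    data       = hist_data,
    family     = gaussian,
    beta.prior = cbind(0, 100 * sd_tot),
    tau.dist   = "HalfNormal",
    tau.prior  = cbind(0, sd_tot / 4))
  # Approximate the MAP prior by a normal mixture and add a robust component
  prior_ctr <- suppressMessages(RBesT::robustify(
    priormix = RBesT::automixfit(gmap),
    weight   = robust_weight,
    sigma    = sd_tot))
  # Unit-information prior (n = 1), centered at the control mean, for each active group
  prior_trt <- RBesT::mixnorm(
    comp1 = c(w = 1, m = summary(prior_ctr)[1], n = 1),
    sigma = sd_tot,
    param = "mn")
  prior_list <- c(list(prior_ctr),
                  rep(list(prior_trt), times = length(dose_levels) - 1))
  names(prior_list) <- c("Ctr", paste0("DG_", seq_len(length(dose_levels) - 1)))
  prior_list
}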
For the hypothetical new trial, we plan with 4 active dose levels and
specify a broad set of potential dose-response relationships, including
a linear, an exponential, an Emax, and two sigEmax models. Furthermore,
we assume a maximum effect of -3 on top of control (i.e., together with
the assumed placebo effect of -12.8, active treatment can reduce the
MADRS score after 8 weeks by up to 15.8 points) and plan a trial with
80 patients in each active group and 60 patients in the control group.
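The dose levels and the call that creates prior_list are not shown in this excerpt; to keep the example runnable we assume a control arm plus four active doses consistent with the guesstimates below (the exact values are an assumption).
dose_levels <- c(0, 2.5, 5, 10, 20) # assumed dose levels: control plus 4 active doses
prior_list <- getPriorList(
  hist_data   = hist_data,
  dose_levels = dose_levels)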
# Guesstimates for the candidate model parameters:
# d = dose(s) at which the expected fraction p of the maximum effect is reached
exp <- DoseFinding::guesst(
  d     = 5,
  p     = c(0.2),
  model = "exponential",
  Maxd  = max(dose_levels))
emax <- DoseFinding::guesst(
  d     = 2.5,
  p     = c(0.9),
  model = "emax")
sigemax <- DoseFinding::guesst(
  d     = c(2.5, 5),
  p     = c(0.1, 0.6),
  model = "sigEmax")
sigemax2 <- DoseFinding::guesst(
  d     = c(2, 4),
  p     = c(0.3, 0.8),
  model = "sigEmax")
# Candidate model set with assumed placebo effect and maximum treatment effect
mods <- DoseFinding::Mods(
  linear      = NULL,
  emax        = emax,
  exponential = exp,
  sigEmax     = rbind(sigemax, sigemax2),
  doses       = dose_levels,
  maxEff      = -3,
  placEff     = -12.8)

n_patients <- c(60, 80, 80, 80, 80) # control, then 4 active dose groups
To calculate success probabilities for the different assumed dose-response models and the specified trial design, we apply the assessDesign() function. For illustration purposes, the number of simulated trials is reduced to 100 in this example.
success_probabilities <- assessDesign(
  n_patients = n_patients,
  mods       = mods,
  prior_list = prior_list,
  sd         = sd_tot,
  n_sim      = 100) # reduced to speed up example run-time
success_probabilities
#> $linear
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.68
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.68 0.59 0.21 0.59
#>
#> $emax
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.82
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.82 0.26 0.78 0.23
#>
#> $exponential
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.63
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.63 0.61 0.16 0.60
#>
#> $sigEmax1
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.82
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.82 0.63 0.52 0.58
#>
#> $sigEmax2
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.84
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.84 0.43 0.73 0.37
#>
#> attr(,"avgSuccessRate")
#> [1] 0.758
#> attr(,"placEff")
#> [1] -12.8
#> attr(,"maxEff")
#> [1] -3
#> attr(,"sampleSize")
#> [1] 60 80 80 80 80
#> attr(,"priorESS")
#> Ctr DG_1 DG_2 DG_3 DG_4
#> 20.6 1.0 1.0 1.0 1.0
As an alternative, we evaluate a design with the same overall sample size but with more patients allocated to the highest dose group and to control.
success_probabilities_uneq <- assessDesign(
  n_patients = c(80, 60, 60, 60, 120),
  mods       = mods,
  prior_list = prior_list,
  sd         = sd_tot,
  n_sim      = 100) # reduced to speed up example run-time
success_probabilities_uneq
#> $linear
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.8
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.80 0.76 0.50 0.75
#>
#> $emax
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.84
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.84 0.39 0.83 0.34
#>
#> $exponential
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.81
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.81 0.76 0.48 0.76
#>
#> $sigEmax1
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.87
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.87 0.73 0.66 0.71
#>
#> $sigEmax2
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.86
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.86 0.58 0.79 0.52
#>
#> attr(,"avgSuccessRate")
#> [1] 0.836
#> attr(,"placEff")
#> [1] -12.8
#> attr(,"maxEff")
#> [1] -3
#> attr(,"sampleSize")
#> [1] 80 60 60 60 120
#> attr(,"priorESS")
#> Ctr DG_1 DG_2 DG_3 DG_4
#> 20.6 1.0 1.0 1.0 1.0
For this specific trial setting, the adapted allocation leads to increased success probabilities under all assumed dose-response relationships.
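The gain is also visible in the average success rate across scenarios, which can be read directly from the avgSuccessRate attribute of the returned objects:
attr(success_probabilities, "avgSuccessRate")      # 0.758 with equal allocation
attr(success_probabilities_uneq, "avgSuccessRate") # 0.836 with adapted allocation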
Instead of specifying the assumed effects via the candidate models, it is also possible to specify the effects for the individual dose levels directly via the dr_means argument. This allows, for example, the simulation of scenarios with a prior-data conflict.
success_probabilities <- assessDesign(
  n_patients = c(60, 80, 80, 80, 80),
  mods       = mods,
  prior_list = prior_list,
  sd         = sd_tot,
  dr_means   = c(-12, -14, -15, -16, -17),
  n_sim      = 100) # reduced to speed up example run-time
success_probabilities
#> $dose_resp
#> Bayesian Multiple Comparison Procedure
#> Estimated Success Rate: 0.99
#> N Simulations: 100
#> Model Significance Frequencies
#> linear emax exponential sigEmax
#> 0.99 0.90 0.89 0.88
#>
#> attr(,"avgSuccessRate")
#> [1] 0.99
#> attr(,"placEff")
#> [1] -12
#> attr(,"maxEff")
#> [1] 5
#> attr(,"sampleSize")
#> [1] 60 80 80 80 80
#> attr(,"priorESS")
#> Ctr DG_1 DG_2 DG_3 DG_4
#> 20.6 1.0 1.0 1.0 1.0