In this vignette, we describe how to run a Bayesian network meta-analysis model using this package. First, we’ll need to load the package.
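Assuming the package being described is bnma (the network.data and network.run functions used throughout this vignette belong to it), loading it is a single call:
library(bnma)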
It is essential to specify the input data in the correct format. We have chosen to use arm-level data with the following input variable names: Outcomes, N or SE, Study, and Treat. Outcomes contains the trial results. N is the number of respondents, used for the binary or multinomial model. SE is the standard error, used for the normal model. Study is the study indicator for the meta-analysis. Lastly, Treat is the treatment indicator for each arm. We will use the parkinsons dataset for illustration.
parkinsons
#> $Outcomes
#> [1] -1.22 -1.53 -0.70 -2.40 -0.30 -2.60 -1.20 -0.24 -0.59 -0.73 -0.18 -2.20
#> [13] -2.50 -1.80 -2.10
#>
#> $SE
#> [1] 0.504 0.439 0.282 0.258 0.505 0.510 0.478 0.265 0.354 0.335 0.442 0.197
#> [13] 0.190 0.200 0.250
#>
#> $Treat
#> [1] "Placebo" "Ropinirole" "Placebo" "Pramipexole"
#> [5] "Placebo" "Pramipexole" "Bromocriptine" "Ropinirole"
#> [9] "Bromocriptine" "Ropinirole" "Bromocriptine" "Bromocriptine"
#> [13] "Cabergoline" "Bromocriptine" "Cabergoline"
#>
#> $Study
#> [1] 1 1 2 2 3 3 3 4 4 5 5 6 6 7 7
#>
#> $Treat.order
#> [1] "Placebo" "Pramipexole" "Ropinirole" "Bromocriptine"
#> [5] "Cabergoline"
In order to run a network meta-analysis in JAGS, we need to relabel the study names as a numeric sequence (i.e. 1 to the total number of studies) and relabel the treatments as a numeric sequence according to the specified treatment order. If the treatment order is not specified, the default is alphabetical order. In the example below, we set Placebo as the baseline treatment, followed by Pramipexole, Ropinirole, Bromocriptine, and Cabergoline.
network <- with(parkinsons, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, SE = SE, response = "normal", Treat.order = Treat.order))
network$Treat.order
#> 1 2 3 4 5
#> "Placebo" "Pramipexole" "Ropinirole" "Bromocriptine" "Cabergoline"
network$Study.order
#> 1 2 3 4 5 6 7
#> 1 2 3 4 5 6 7
Another important preprocessing step performed by the network.data function is converting the arm-level data into study-level data. The study-level Outcomes are stored as r, Treat as t, and N or SE as n or se. Below we can see how Outcomes has been reshaped into a study-level matrix (i.e. one row per study). If the Outcomes are multinomial, the data become a 3-dimensional array.
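As a quick check, and assuming the network object built above, printing the stored element (named r, as described) displays the study-by-arm outcome matrix:
network$r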
Here are all the datasets available in the package for testing.
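A minimal way to list them, assuming the package is named bnma, is the standard utils call:
data(package = "bnma")  # lists the datasets bundled with the package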
Priors can be set in the network.data function. If left unspecified, default values are used. For the heterogeneity parameter of the random effects model, we follow the format used by gemtc, a similar Bayesian network meta-analysis R package: a list of length 3 whose first element is the distribution (one of dunif, dgamma, dhnorm, dwish) and whose next two elements are the parameters of that distribution. Here is an example of assigning a half-normal distribution with mean 0 and standard deviation 5.
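A minimal sketch of such a specification, assuming the prior is passed through a hy.prior argument of network.data (the argument name, and whether the third element is read as a standard deviation or a precision, should be verified against ?network.data):
network <- with(parkinsons, network.data(Outcomes = Outcomes, Study = Study, Treat = Treat, SE = SE, response = "normal", hy.prior = list("dhnorm", 0, 5)))  # half-normal heterogeneity prior; argument name assumed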
Now, to run the model, we use the function network.run. The most important parameter is n.run, which determines the number of final samples the user wants. The Gelman-Rubin statistic is checked automatically every setsize iterations, and once the series have converged we store the last half of the sequence. If the number of stored iterations is less than the requested number of final samples (n.run), the model samples more to fill the requirement. One of the nice features of this package is that it checks for convergence automatically and will give an error if the sequence has not converged. The parameters tested for convergence are the relative treatment effects, the baseline effect, and the heterogeneity parameter. The numbers printed while the model runs are the point estimates of the Gelman-Rubin statistic used to assess convergence.
result <- network.run(network, n.run = 30000)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 50
#> Unobserved stochastic nodes: 54
#> Total graph size: 1129
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.001863
#> [1] 1.001619
The package includes many summary tools that can be used. One useful summary is the forest plot.
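A minimal sketch, assuming the package exposes a network.forest.plot helper named in the same pattern as the plotting functions listed below:
network.forest.plot(result)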
# relative.effects.table(result)
# draw.network.graph(network)
# network.gelman.plot(result)
# network.autocorr.diag(result)
# network.autocorr.plot(result)
# network.rank.tx.plot(result)
# network.cumrank.tx.plot(result)
# sucra(result)
# network.deviance.plot(result)
# network.leverage.plot(result)
Another nice feature of this package is that multinomial outcome datasets can be analyzed. Here is an example.
network <- with(cardiovascular, network.data(Outcomes, Study, Treat, N, response = "multinomial"))
result <- network.run(network)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 34
#> Unobserved stochastic nodes: 37
#> Total graph size: 1301
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.006457
#> [1] 1.003343
summary(result)
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> d[1,1] 0.000000 0.00000 0.000e+00 0.0000000
#> d[2,1] -0.104248 0.15175 3.918e-04 0.0011237
#> d[3,1] -0.046387 0.11261 2.907e-04 0.0007094
#> d[1,2] 0.000000 0.00000 0.000e+00 0.0000000
#> d[2,2] -0.189461 0.15442 3.987e-04 0.0011464
#> d[3,2] -0.270722 0.11253 2.906e-04 0.0006726
#> sigma[1,1] 0.114772 0.05080 1.312e-04 0.0002429
#> sigma[2,1] 0.004776 0.03448 8.902e-05 0.0001648
#> sigma[1,2] 0.004776 0.03448 8.902e-05 0.0001648
#> sigma[2,2] 0.114508 0.05041 1.302e-04 0.0002286
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> d[1,1] 0.00000 0.00000 0.000000 0.000000 0.00000
#> d[2,1] -0.40148 -0.20456 -0.105231 -0.005112 0.19893
#> d[3,1] -0.27005 -0.11951 -0.045789 0.027002 0.17573
#> d[1,2] 0.00000 0.00000 0.000000 0.000000 0.00000
#> d[2,2] -0.49659 -0.29013 -0.188343 -0.087681 0.11316
#> d[3,2] -0.49494 -0.34364 -0.270097 -0.197130 -0.05043
#> sigma[1,1] 0.05206 0.08060 0.103768 0.135785 0.24257
#> sigma[2,1] -0.06258 -0.01420 0.003979 0.022916 0.07647
#> sigma[1,2] -0.06258 -0.01420 0.003979 0.022916 0.07647
#> sigma[2,2] 0.05199 0.08058 0.103526 0.135752 0.24110
#>
#>
#> $Treat.order
#> 1 2 3
#> 1 2 3
#>
#> $deviance
#> Dbar pD DIC
#> 31.17665 60.50613 91.68278
#>
#> $total_n
#> [1] 34
#>
#> attr(,"class")
#> [1] "summary.network.result"
We can add continuous or discrete covariates to fit a network meta-regression. If the covariate is continuous, it is centered. Discrete variables need to be in 0-1 dummy format. There are three different assumptions for the covariate effect: “common”, “independent”, and “exchangeable”.
network <- with(statins, network.data(Outcomes, Study, Treat, N=N, response = "binomial", Treat.order = c("Placebo", "Statin"), covariate = covariate, covariate.type = "discrete", covariate.model = "common"))
result <- network.run(network)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 38
#> Unobserved stochastic nodes: 41
#> Total graph size: 877
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.029976
#> [1] 1.012461
summary(result)
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> beta1[1] 0.00000 0.0000 0.0000000 0.000000
#> beta1[2] -0.29376 0.2580 0.0006663 0.003547
#> d[1] 0.00000 0.0000 0.0000000 0.000000
#> d[2] -0.06897 0.2014 0.0005201 0.002552
#> sd 0.24174 0.2025 0.0005228 0.005636
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> beta1[1] 0.000000 0.00000 0.00000 0.00000 0.0000
#> beta1[2] -0.863431 -0.43041 -0.27380 -0.14129 0.1830
#> d[1] 0.000000 0.00000 0.00000 0.00000 0.0000
#> d[2] -0.469746 -0.18039 -0.07379 0.03746 0.3564
#> sd 0.009191 0.09005 0.19119 0.33908 0.7590
#>
#>
#> $Treat.order
#> 1 2
#> "Placebo" "Statin"
#>
#> $deviance
#> Dbar pD DIC
#> 42.67063 25.12401 67.79463
#>
#> $total_n
#> [1] 38
#>
#> attr(,"class")
#> [1] "summary.network.result"
The covariate plot shows how the relative treatment effect changes as the covariate varies.
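A minimal sketch, assuming a network.covariate.plot helper that takes base.treatment and comparison.treatment arguments (names assumed; check the package documentation):
network.covariate.plot(result, base.treatment = "Placebo", comparison.treatment = "Statin")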
Another useful feature of this package is the ability to model baseline risk. We can place a “common”, “independent”, or “exchangeable” assumption on the baseline slopes and an “independent” or “exchangeable” assumption on the baseline risk. Here we demonstrate a common baseline slope and independent baseline risk model.
Note that the Abe 2006 study is problematic in the certolizumab dataset because it has 0 events. Specifying an informative prior on the baseline risk (i.e. prec.Eta) helps with the convergence of the model.
network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", Treat.order = Treat.order, baseline = "common", baseline.risk = "independent", prec.Eta = 0.1))
result <- network.run(network)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 24
#> Unobserved stochastic nodes: 32
#> Total graph size: 665
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.199315
#> [1] 1.050586
#> [1] 1.012954
#> [1] 1.081792
summary(result)
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> b_bl[1] 0.0000 0.0000 0.0000000 0.000000
#> b_bl[2] -0.8779 0.1563 0.0004036 0.002494
#> b_bl[3] -0.8779 0.1563 0.0004036 0.002494
#> b_bl[4] -0.8779 0.1563 0.0004036 0.002494
#> b_bl[5] -0.8779 0.1563 0.0004036 0.002494
#> b_bl[6] -0.8779 0.1563 0.0004036 0.002494
#> b_bl[7] -0.8779 0.1563 0.0004036 0.002494
#> d[1] 0.0000 0.0000 0.0000000 0.000000
#> d[2] 1.8600 0.2245 0.0005797 0.001764
#> d[3] 2.1555 0.2003 0.0005172 0.001798
#> d[4] 2.0847 0.4447 0.0011481 0.006025
#> d[5] 1.6761 0.1918 0.0004953 0.001483
#> d[6] 0.3605 0.5702 0.0014723 0.008478
#> d[7] 2.1932 0.2592 0.0006692 0.002250
#> sd 0.1969 0.1697 0.0004382 0.003172
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> b_bl[1] 0.000000 0.000000 0.0000 0.0000 0.0000
#> b_bl[2] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> b_bl[3] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> b_bl[4] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> b_bl[5] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> b_bl[6] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> b_bl[7] -1.167097 -0.969222 -0.8867 -0.7944 -0.5498
#> d[1] 0.000000 0.000000 0.0000 0.0000 0.0000
#> d[2] 1.402485 1.750183 1.8625 1.9719 2.3010
#> d[3] 1.787842 2.038587 2.1471 2.2615 2.5804
#> d[4] 1.234362 1.818059 2.0774 2.3418 2.9847
#> d[5] 1.311985 1.572872 1.6718 1.7735 2.0706
#> d[6] -0.823448 0.003589 0.3821 0.7437 1.4088
#> d[7] 1.701676 2.049865 2.1880 2.3290 2.7305
#> sd 0.008989 0.079593 0.1579 0.2647 0.6292
#>
#>
#> $Treat.order
#> 1 2 3 4 5
#> "Placebo" "CZP" "Adalimumab" "Etanercept" "Infliximab"
#> 6 7
#> "Rituximab" "Tocilizumab"
#>
#> $deviance
#> Dbar pD DIC
#> 24.61794 20.37798 44.99592
#>
#> $total_n
#> [1] 24
#>
#> attr(,"class")
#> [1] "summary.network.result"
We have also included a node-splitting model that can be used to test the consistency assumption. Here we specify a treatment pair at which to split the node and test for inconsistency. For instance, if we specify pair = c(3, 9), we estimate the difference between the direct and indirect evidence for treatments 3 and 9. The inconsistency estimate and the corresponding p-value are reported in the summary. If the p-value is small, we can reject the null hypothesis that the direct and indirect evidence agree. We can repeat this for all pairs in the network and identify pairs that might be inconsistent. Refer to Dias et al. 2010 (“Checking consistency in mixed treatment comparison meta-analysis”) for more details.
network <- with(thrombolytic, nodesplit.network.data(Outcomes, Study, Treat, N, response = "binomial", pair = c(3,9), type = "fixed"))
result <- nodesplit.network.run(network)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 102
#> Unobserved stochastic nodes: 59
#> Total graph size: 2263
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.002357
#> [1] 1.000529
summary(result)
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> d[1] 0.000000 0.00000 0.000e+00 0.000e+00
#> d[2] -0.002001 0.03026 7.813e-05 1.872e-04
#> d[3] -0.160732 0.04338 1.120e-04 3.533e-04
#> d[4] -0.044209 0.04643 1.199e-04 2.362e-04
#> d[5] -0.113157 0.05985 1.545e-04 4.635e-04
#> d[6] -0.155725 0.07745 2.000e-04 5.718e-04
#> d[7] -0.465295 0.10092 2.606e-04 5.658e-04
#> d[8] -0.198404 0.22013 5.684e-04 1.272e-03
#> d[9] 0.003585 0.03693 9.535e-05 2.175e-04
#> diff 1.240187 0.41824 1.080e-03 3.964e-03
#> direct 1.404504 0.41410 1.069e-03 3.941e-03
#> oneminusprob 0.000620 0.02489 6.427e-05 9.519e-05
#> prob 0.999380 0.02489 6.427e-05 9.519e-05
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> d[1] 0.00000 0.00000 0.000000 0.00000 0.000000
#> d[2] -0.06122 -0.02255 -0.001991 0.01843 0.057266
#> d[3] -0.24576 -0.19003 -0.160631 -0.13139 -0.075876
#> d[4] -0.13524 -0.07564 -0.044089 -0.01278 0.046572
#> d[5] -0.22973 -0.15380 -0.113177 -0.07283 0.004391
#> d[6] -0.30753 -0.20808 -0.155666 -0.10317 -0.004746
#> d[7] -0.66375 -0.53323 -0.465266 -0.39717 -0.268178
#> d[8] -0.62984 -0.34681 -0.198168 -0.05035 0.234758
#> d[9] -0.06881 -0.02125 0.003661 0.02845 0.075941
#> diff 0.45789 0.95274 1.226529 1.51039 2.100896
#> direct 0.63354 1.12038 1.390744 1.67192 2.257658
#> oneminusprob 0.00000 0.00000 0.000000 0.00000 0.000000
#> prob 1.00000 1.00000 1.000000 1.00000 1.000000
#>
#>
#> $deviance
#> NULL
#>
#> $`Inconsistency estimate`
#> [1] 1.240187
#>
#> $p_value
#> [1] 0.00124
#>
#> attr(,"class")
#> [1] "summary.nodesplit.network.result"
The default summary measure when analyzing binary outcomes is the odds ratio. We have added an option to calculate the risk difference, relative risk, and number needed to treat by incorporating an external baseline risk for treatment A (i.e. placebo).
# Using the metaprop function from the meta package, we meta-analyze the placebo event proportion.
#library(meta)
#placebo_index <- which(certolizumab$Treat == "Placebo")
#meta.pla <- metaprop(event = certolizumab$Outcomes[placebo_index], n = certolizumab$N[placebo_index], method = "GLMM", sm = "PLOGIT")
#mean.A = meta.pla$TE.random; prec.A = 1/meta.pla$tau^2
network <- with(certolizumab, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial", mean.A = -2.27, prec.A = 2.53))
result <- network.run(network)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 24
#> Unobserved stochastic nodes: 32
#> Total graph size: 698
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.004711
#> [1] 1.001694
#summary(result, extra.pars = c("RD", "RR", "NNT"))
summary(result, extra.pars = c("RR"))
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> RR[2,1] 1.7501 1.712e+00 4.420e-03 1.038e-02
#> RR[3,1] 3.9992 4.015e+00 1.037e-02 3.031e-02
#> RR[4,1] 1.1152 1.460e+00 3.769e-03 1.238e-02
#> RR[5,1] 0.1903 2.337e-01 6.035e-04 1.838e-03
#> RR[6,1] 0.3813 1.023e+00 2.642e-03 6.297e-03
#> RR[7,1] 0.9177 1.172e+00 3.027e-03 7.412e-03
#> RR[3,2] 7.7501 2.987e+02 7.713e-01 7.953e-01
#> RR[4,2] 3.3880 3.827e+02 9.882e-01 1.001e+00
#> RR[5,2] 0.2192 5.543e+00 1.431e-02 1.630e-02
#> RR[6,2] 0.9261 5.443e+01 1.405e-01 1.508e-01
#> RR[7,2] 1.6301 8.071e+01 2.084e-01 2.156e-01
#> RR[4,3] 23.8139 3.521e+03 9.090e+00 9.846e+00
#> RR[5,3] 0.6237 7.067e+01 1.825e-01 1.867e-01
#> RR[6,3] 18.4641 4.292e+03 1.108e+01 1.115e+01
#> RR[7,3] 5.2248 6.679e+02 1.725e+00 1.734e+00
#> RR[5,4] 0.2450 5.489e-01 1.417e-03 1.717e-03
#> RR[6,4] 0.6881 1.442e+01 3.724e-02 3.989e-02
#> RR[7,4] 1.8316 1.197e+02 3.090e-01 3.098e-01
#> RR[6,5] 2.5623 4.800e+01 1.239e-01 1.282e-01
#> RR[7,5] 5.4775 1.079e+01 2.786e-02 4.171e-02
#> RR[7,6] 432.5594 1.141e+05 2.947e+02 2.953e+02
#> d[1] 0.0000 0.000e+00 0.000e+00 0.000e+00
#> d[2] 0.3452 1.095e+00 2.826e-03 6.422e-03
#> d[3] 1.4561 1.863e+00 4.809e-03 1.575e-02
#> d[4] -0.2461 1.027e+00 2.652e-03 7.898e-03
#> d[5] -1.9986 6.893e-01 1.780e-03 5.228e-03
#> d[6] -1.9954 1.509e+00 3.896e-03 9.137e-03
#> d[7] -0.4995 1.085e+00 2.802e-03 6.672e-03
#> sd 0.9310 6.357e-01 1.641e-03 9.587e-03
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> RR[2,1] 0.173624 0.83420 1.33070 2.08185 6.0489
#> RR[3,1] 0.156316 1.30740 2.84797 5.37417 14.4968
#> RR[4,1] 0.144080 0.47458 0.72863 1.19793 4.7053
#> RR[5,1] 0.036740 0.10673 0.15120 0.21165 0.5676
#> RR[6,1] 0.007509 0.06693 0.15108 0.33557 2.2393
#> RR[7,1] 0.070372 0.38730 0.64967 1.04052 3.6035
#> RR[3,2] 0.126892 0.95690 2.01134 4.12227 20.7921
#> RR[4,2] 0.114627 0.35860 0.55700 0.92562 5.0873
#> RR[5,2] 0.030264 0.08122 0.11662 0.16681 0.5326
#> RR[6,2] 0.006211 0.05185 0.11722 0.26061 1.8610
#> RR[7,2] 0.058447 0.29465 0.49840 0.80685 3.5177
#> RR[4,3] 0.033915 0.13911 0.28955 0.62796 5.5267
#> RR[5,3] 0.008345 0.02971 0.05788 0.11699 0.6950
#> RR[6,3] 0.002360 0.02181 0.05901 0.16221 1.7478
#> RR[7,3] 0.020040 0.11414 0.24554 0.53576 3.7857
#> RR[5,4] 0.043863 0.14376 0.21138 0.28512 0.5994
#> RR[6,4] 0.008670 0.09126 0.20788 0.44831 2.4110
#> RR[7,4] 0.080028 0.51019 0.89500 1.40882 4.2577
#> RR[6,5] 0.072291 0.48389 0.99986 2.04827 11.0268
#> RR[7,5] 0.797265 2.89135 4.24725 6.05223 16.6194
#> RR[7,6] 0.230098 1.85118 4.20058 9.37857 77.8810
#> d[1] 0.000000 0.00000 0.00000 0.00000 0.0000
#> d[2] -1.848015 -0.20190 0.32976 0.87792 2.6090
#> d[3] -1.953563 0.30772 1.33551 2.50566 5.4437
#> d[4] -2.030770 -0.80717 -0.34971 0.20529 2.1263
#> d[5] -3.405192 -2.33764 -1.98871 -1.64960 -0.6188
#> d[6] -5.002935 -2.81173 -1.99071 -1.17321 0.9746
#> d[7] -2.759574 -1.02106 -0.47398 0.04456 1.6526
#> sd 0.146865 0.51144 0.78204 1.17696 2.6443
#>
#>
#> $Treat.order
#> 1 2 3 4 5
#> "Adalimumab" "CZP" "Etanercept" "Infliximab" "Placebo"
#> 6 7
#> "Rituximab" "Tocilizumab"
#>
#> $deviance
#> Dbar pD DIC
#> 27.03099 22.66527 49.69626
#>
#> $total_n
#> [1] 24
#>
#> attr(,"class")
#> [1] "summary.network.result"
Generating reproducible results requires setting two sets of seed values. First, the set.seed() function is used so that bnma generates reproducible initial values. Second, the JAGS RNG seed needs to be set. Setting the JAGS RNG seed is not strictly necessary, as bnma assigns default JAGS RNG seeds, but users can specify their own seeds if needed.
set.seed(1234) # seed for generating reproducible initial values
network <- with(blocker, network.data(Outcomes = Outcomes, Treat = Treat, Study = Study, N = N, response = "binomial"))
# JAGS RNG list of initial values
jags_inits <- list(
list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 94387),
list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 24507),
list(".RNG.name"="base::Wichmann-Hill", ".RNG.seed" = 39483)
)
result <- network.run(network, n.chains=3, RNG.inits=jags_inits)
#> Compiling model graph
#> Resolving undeclared variables
#> Allocating nodes
#> Graph information:
#> Observed stochastic nodes: 44
#> Unobserved stochastic nodes: 46
#> Total graph size: 971
#>
#> Initializing model
#>
#> NOTE: Stopping adaptation
#>
#>
#> [1] 1.016791
#> [1] 1.018379
# bnma initial values now contain initial values for the parameters and the JAGS RNG initial values
str(result$inits)
#> List of 3
#> $ :List of 6
#> ..$ Eta : num [1:22] -3.02 -1.88 -1.63 -2.61 -2.43 ...
#> ..$ d : num [1:2] NA -0.126
#> ..$ sd : num 0.271
#> ..$ delta : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#> ..$ .RNG.name: chr "base::Wichmann-Hill"
#> ..$ .RNG.seed: num 94387
#> $ :List of 6
#> ..$ Eta : num [1:22] -2.59 -1.83 -2.19 -2.53 -2.4 ...
#> ..$ d : num [1:2] NA -0.103
#> ..$ sd : num 0.283
#> ..$ delta : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#> ..$ .RNG.name: chr "base::Wichmann-Hill"
#> ..$ .RNG.seed: num 24507
#> $ :List of 6
#> ..$ Eta : num [1:22] -2.9 -2.23 -2.32 -2.51 -2.61 ...
#> ..$ d : num [1:2] NA -0.208
#> ..$ sd : num 0.297
#> ..$ delta : num [1:22, 1:2] NA NA NA NA NA NA NA NA NA NA ...
#> ..$ .RNG.name: chr "base::Wichmann-Hill"
#> ..$ .RNG.seed: num 39483
# reproducible results
summary(result)
#> $summary.samples
#>
#> Iterations = 1:50000
#> Thinning interval = 1
#> Number of chains = 3
#> Sample size per chain = 50000
#>
#> 1. Empirical mean and standard deviation for each variable,
#> plus standard error of the mean:
#>
#> Mean SD Naive SE Time-series SE
#> d[1] 0.0000 0.00000 0.0000000 0.0000000
#> d[2] -0.2489 0.06537 0.0001688 0.0007305
#> sd 0.1353 0.08160 0.0002107 0.0017319
#>
#> 2. Quantiles for each variable:
#>
#> 2.5% 25% 50% 75% 97.5%
#> d[1] 0.000000 0.00000 0.0000 0.0000 0.0000
#> d[2] -0.373586 -0.29239 -0.2502 -0.2068 -0.1169
#> sd 0.008369 0.07367 0.1283 0.1864 0.3149
#>
#>
#> $Treat.order
#> 1 2
#> 1 2
#>
#> $deviance
#> Dbar pD DIC
#> 41.75576 28.16711 69.92287
#>
#> $total_n
#> [1] 44
#>
#> attr(,"class")
#> [1] "summary.network.result"