Example: Smoking cessation

library(multinma)
options(mc.cores = parallel::detectCores())
#> For execution on a local, multicore CPU with excess RAM we recommend calling
#> options(mc.cores = parallel::detectCores())
#> 
#> Attaching package: 'multinma'
#> The following objects are masked from 'package:stats':
#> 
#>     dgamma, pgamma, qgamma

This vignette describes the analysis of smoking cessation data (Hasselblad 1998), replicating the analysis in NICE Technical Support Document 4 (Dias et al. 2011). The data are available in this package as smoking:

head(smoking)
#>   studyn trtn                   trtc  r   n
#> 1      1    1        No intervention  9 140
#> 2      1    3 Individual counselling 23 140
#> 3      1    4      Group counselling 10 138
#> 4      2    2              Self-help 11  78
#> 5      2    3 Individual counselling 12  85
#> 6      2    4      Group counselling 29 170

Setting up the network

We begin by setting up the network. We have arm-level count data giving the number quitting smoking (r) out of the total (n) in each arm, so we use the function set_agd_arm(). Treatment “No intervention” is set as the network reference treatment.

smknet <- set_agd_arm(smoking, 
                      study = studyn,
                      trt = trtc,
                      r = r, 
                      n = n,
                      trt_ref = "No intervention")
smknet
#> A network with 24 AgD studies (arm-based).
#> 
#> ------------------------------------------------------- AgD studies (arm-based) ---- 
#>  Study Treatment arms                                                 
#>  1     3: No intervention | Group counselling | Individual counselling
#>  2     3: Group counselling | Individual counselling | Self-help      
#>  3     2: No intervention | Individual counselling                    
#>  4     2: No intervention | Individual counselling                    
#>  5     2: No intervention | Individual counselling                    
#>  6     2: No intervention | Individual counselling                    
#>  7     2: No intervention | Individual counselling                    
#>  8     2: No intervention | Individual counselling                    
#>  9     2: No intervention | Individual counselling                    
#>  10    2: No intervention | Self-help                                 
#>  ... plus 14 more studies
#> 
#>  Outcome type: count
#> ------------------------------------------------------------------------------------
#> Total number of treatments: 4
#> Total number of studies: 24
#> Reference treatment is: No intervention
#> Network is connected

Plot the network structure.

plot(smknet, weight_edges = TRUE, weight_nodes = TRUE)
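
The network plot is built with ggplot2, so further ggplot2 layers can usually be added to adjust its appearance. The following is a small illustrative sketch (the theme adjustment is our own choice, not part of the original analysis):

# Illustrative only: move the legend below the network plot
# (assumes the plot method returns a ggplot object, as the other plot
#  methods used later in this vignette do)
plot(smknet, weight_edges = TRUE, weight_nodes = TRUE) +
  ggplot2::theme(legend.position = "bottom")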

Random effects NMA

Following TSD 4, we fit a random effects NMA model, using the nma() function with trt_effects = "random". We use \(\mathrm{N}(0, 100^2)\) prior distributions for the treatment effects \(d_k\) and study-specific intercepts \(\mu_j\), and a \(\textrm{half-N}(5^2)\) prior distribution for the between-study heterogeneity standard deviation \(\tau\). We can examine the range of parameter values implied by these prior distributions with the summary() method:

summary(normal(scale = 100))
#> A Normal prior distribution: location = 0, scale = 100.
#> 50% of the prior density lies between -67.45 and 67.45.
#> 95% of the prior density lies between -196 and 196.
summary(half_normal(scale = 5))
#> A half-Normal prior distribution: location = 0, scale = 5.
#> 50% of the prior density lies between 0 and 3.37.
#> 95% of the prior density lies between 0 and 9.8.
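
Because the treatment effects \(d_k\) are log odds ratios, these prior intervals can be translated to the odds ratio scale by exponentiating. A quick base-R check (not part of the original analysis) makes the vagueness of the \(\mathrm{N}(0, 100^2)\) prior explicit:

# Exponentiate the 50% prior interval for d_k onto the odds ratio scale;
# the result spans roughly 5e-30 to 2e29, i.e. essentially unconstrained
exp(c(-67.45, 67.45))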

The model is fitted using the nma() function. By default, this will use a Binomial likelihood and a logit link function, auto-detected from the data.

smkfit <- nma(smknet, 
              trt_effects = "random",
              prior_intercept = normal(scale = 100),
              prior_trt = normal(scale = 100),
              prior_het = normal(scale = 5))
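
Additional arguments to nma() are passed on to the Stan sampler, so options such as the number of chains, iterations, and the random seed can be set explicitly for reproducibility. The call below is a sketch only (iter, chains, and seed are standard rstan sampling arguments; the fit above simply uses the defaults):

# Not run: the same model with sampler options set explicitly
nma(smknet,
    trt_effects = "random",
    prior_intercept = normal(scale = 100),
    prior_trt = normal(scale = 100),
    prior_het = normal(scale = 5),
    iter = 2000, chains = 4, seed = 1234)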

Basic parameter summaries are given by the print() method:

smkfit
#> A random effects NMA with a binomial likelihood (logit link).
#> Inference for Stan model: binomial_1par.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>                               mean se_mean   sd     2.5%      25%      50%      75%    97.5% n_eff
#> d[Group counselling]          1.10    0.01 0.44     0.28     0.81     1.08     1.39     2.00  1944
#> d[Individual counselling]     0.84    0.01 0.24     0.40     0.68     0.83     0.99     1.33  1278
#> d[Self-help]                  0.50    0.01 0.41    -0.30     0.22     0.49     0.76     1.34  1663
#> lp__                      -5767.98    0.22 6.67 -5781.78 -5772.22 -5767.64 -5763.33 -5755.83   959
#> tau                           0.85    0.01 0.19     0.55     0.71     0.82     0.95     1.32   994
#>                           Rhat
#> d[Group counselling]         1
#> d[Individual counselling]    1
#> d[Self-help]                 1
#> lp__                         1
#> tau                          1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:31:48 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).

By default, summaries of the study-specific intercepts \(\mu_j\) and study-specific relative effects \(\delta_{jk}\) are hidden, but could be examined by changing the pars argument:

# Not run
print(smkfit, pars = c("d", "tau", "mu", "delta"))

The prior and posterior distributions can be compared visually using the plot_prior_posterior() function:

plot_prior_posterior(smkfit)

By default, this displays all model parameters given prior distributions (in this case \(d_k\), \(\mu_j\), and \(\tau\)), but this may be changed using the prior argument:

plot_prior_posterior(smkfit, prior = "het")

Model fit can be checked using the dic() function

(dic_consistency <- dic(smkfit))
#> Residual deviance: 54 (on 50 data points)
#>                pD: 43.8
#>               DIC: 97.8

and the residual deviance contributions examined with the corresponding plot() method

plot(dic_consistency)

Overall model fit seems to be adequate, with almost all points showing good fit (residual deviance contributions close to the expected value of 1). The only two points with higher residual deviance (i.e. worse fit) correspond to the two zero counts in the data:

smoking[smoking$r == 0, ]
#>    studyn trtn            trtc r  n
#> 13      6    1 No intervention 0 33
#> 31     15    1 No intervention 0 20

Checking for inconsistency

Note: The results of the inconsistency models here are slightly different to those of Dias et al. (2010, 2011), although the overall conclusions are the same. This is due to the presence of multi-arm trials and a different ordering of treatments, meaning that inconsistency is parameterised differently within the multi-arm trials. The same results as Dias et al. are obtained if the network is instead set up with trtn as the treatment variable.
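
For reference, that alternative setup is sketched below, using the numeric treatment codes trtn (where code 1 corresponds to "No intervention"); this is illustrative only and is not used in the remainder of this vignette:

# Not run: network set up with numeric treatment codes, reproducing the
# Dias et al. parameterisation of the inconsistency models
smknet_trtn <- set_agd_arm(smoking,
                           study = studyn,
                           trt = trtn,
                           r = r,
                           n = n,
                           trt_ref = 1)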

Unrelated mean effects

We first fit an unrelated mean effects (UME) model (Dias et al. 2011) to assess the consistency assumption. Again, we use the function nma(), but now with the argument consistency = "ume".

smkfit_ume <- nma(smknet, 
                  consistency = "ume",
                  trt_effects = "random",
                  prior_intercept = normal(scale = 100),
                  prior_trt = normal(scale = 100),
                  prior_het = normal(scale = 5))
smkfit_ume
#> A random effects NMA with a binomial likelihood (logit link).
#> An inconsistency model ('ume') was fitted.
#> Inference for Stan model: binomial_1par.
#> 4 chains, each with iter=2000; warmup=1000; thin=1; 
#> post-warmup draws per chain=1000, total post-warmup draws=4000.
#> 
#>                                                     mean se_mean   sd     2.5%      25%      50%
#> d[Group counselling vs. No intervention]            1.10    0.02 0.81    -0.35     0.56     1.05
#> d[Individual counselling vs. No intervention]       0.90    0.01 0.27     0.39     0.73     0.90
#> d[Self-help vs. No intervention]                    0.34    0.01 0.57    -0.76    -0.03     0.33
#> d[Individual counselling vs. Group counselling]    -0.28    0.01 0.60    -1.46    -0.67    -0.28
#> d[Self-help vs. Group counselling]                 -0.61    0.02 0.72    -2.04    -1.08    -0.61
#> d[Self-help vs. Individual counselling]             0.19    0.02 1.01    -1.82    -0.48     0.18
#> lp__                                            -5765.34    0.21 6.37 -5778.48 -5769.46 -5765.06
#> tau                                                 0.93    0.01 0.23     0.59     0.77     0.90
#>                                                      75%    97.5% n_eff Rhat
#> d[Group counselling vs. No intervention]            1.61     2.84  2538    1
#> d[Individual counselling vs. No intervention]       1.06     1.46  1165    1
#> d[Self-help vs. No intervention]                    0.71     1.51  2078    1
#> d[Individual counselling vs. Group counselling]     0.11     0.93  2562    1
#> d[Self-help vs. Group counselling]                 -0.13     0.77  2165    1
#> d[Self-help vs. Individual counselling]             0.85     2.17  3796    1
#> lp__                                            -5760.91 -5753.80   908    1
#> tau                                                 1.06     1.47  1069    1
#> 
#> Samples were drawn using NUTS(diag_e) at Mon Apr 29 16:32:08 2024.
#> For each parameter, n_eff is a crude measure of effective sample size,
#> and Rhat is the potential scale reduction factor on split chains (at 
#> convergence, Rhat=1).

Comparing the model fit statistics

dic_consistency
#> Residual deviance: 54 (on 50 data points)
#>                pD: 43.8
#>               DIC: 97.8
(dic_ume <- dic(smkfit_ume))
#> Residual deviance: 53.9 (on 50 data points)
#>                pD: 45.2
#>               DIC: 99.1

We see that there is little to choose between the two models. However, it is also important to examine the individual contributions to model fit of each data point under the two models (a so-called “dev-dev” plot). Passing two nma_dic objects produced by the dic() function to the plot() method produces this dev-dev plot:

plot(dic_consistency, dic_ume, point_alpha = 0.5, interval_alpha = 0.2)

All points lie roughly on the line of equality, so there is no evidence for inconsistency here.

Node-splitting

Another method for assessing inconsistency is node-splitting (Dias et al. 2011, 2010). Whereas the UME model assesses inconsistency globally, node-splitting assesses inconsistency locally for each potentially inconsistent comparison (those with both direct and indirect evidence) in turn.

Node-splitting can be performed using the nma() function with the argument consistency = "nodesplit". By default, all possible comparisons will be split (as determined by the get_nodesplits() function). Alternatively, a specific comparison or comparisons to split can be provided to the nodesplit argument.
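
For example, the comparisons that would be split by default can be listed with get_nodesplits(), and a single comparison can be requested via the nodesplit argument. The calls below are a sketch only; here we run the default analysis, which splits every possible comparison:

# Not run: list the comparisons with both direct and indirect evidence
get_nodesplits(smknet)

# Not run: split only the Self-help vs. Individual counselling comparison
nma(smknet,
    consistency = "nodesplit",
    nodesplit = c("Self-help", "Individual counselling"),
    trt_effects = "random",
    prior_intercept = normal(scale = 100),
    prior_trt = normal(scale = 100),
    prior_het = normal(scale = 5))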

smk_nodesplit <- nma(smknet, 
                     consistency = "nodesplit",
                     trt_effects = "random",
                     prior_intercept = normal(scale = 100),
                     prior_trt = normal(scale = 100),
                     prior_het = normal(scale = 5))
#> Fitting model 1 of 7, node-split: Group counselling vs. No intervention
#> Fitting model 2 of 7, node-split: Individual counselling vs. No intervention
#> Fitting model 3 of 7, node-split: Self-help vs. No intervention
#> Fitting model 4 of 7, node-split: Individual counselling vs. Group counselling
#> Fitting model 5 of 7, node-split: Self-help vs. Group counselling
#> Fitting model 6 of 7, node-split: Self-help vs. Individual counselling
#> Fitting model 7 of 7, consistency model

The summary() method summarises the node-splitting results, displaying the direct and indirect estimates \(d_\mathrm{dir}\) and \(d_\mathrm{ind}\) from each node-split model, the network estimate \(d_\mathrm{net}\) from the consistency model, the inconsistency factor \(\omega = d_\mathrm{dir} - d_\mathrm{ind}\), and a Bayesian \(p\)-value for inconsistency on each comparison. Since random effects models are fitted, the heterogeneity standard deviation \(\tau\) under each node-split model and under the consistency model is also displayed. The DIC model fit statistics are also provided.

summary(smk_nodesplit)
#> Node-splitting models fitted for 6 comparisons.
#> 
#> ------------------------------ Node-split Group counselling vs. No intervention ---- 
#> 
#>                  mean   sd  2.5%   25%   50%  75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net            1.11 0.45  0.24  0.80  1.10 1.40  2.01     1720     2238    1
#> d_dir            1.09 0.76 -0.34  0.57  1.05 1.57  2.64     2751     2709    1
#> d_ind            1.14 0.54  0.06  0.80  1.14 1.49  2.21     1638     2222    1
#> omega           -0.05 0.89 -1.81 -0.65 -0.06 0.52  1.76     2053     2604    1
#> tau              0.86 0.19  0.56  0.73  0.84 0.98  1.30     1020     1740    1
#> tau_consistency  0.84 0.19  0.55  0.70  0.82 0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 54 (on 50 data points)
#>                pD: 44.1
#>               DIC: 98
#> 
#> Bayesian p-value: 0.94
#> 
#> ------------------------- Node-split Individual counselling vs. No intervention ---- 
#> 
#>                 mean   sd  2.5%   25%  50%  75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net           0.86 0.24  0.39  0.69 0.85 1.01  1.37     1165     1736    1
#> d_dir           0.88 0.25  0.41  0.72 0.87 1.04  1.38     1970     2120    1
#> d_ind           0.57 0.67 -0.69  0.13 0.54 0.99  1.92     1547     1935    1
#> omega           0.31 0.69 -1.07 -0.11 0.33 0.78  1.60     1672     2021    1
#> tau             0.86 0.19  0.56  0.72 0.83 0.96  1.30      934     1616    1
#> tau_consistency 0.84 0.19  0.55  0.70 0.82 0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 54.4 (on 50 data points)
#>                pD: 44.4
#>               DIC: 98.8
#> 
#> Bayesian p-value: 0.62
#> 
#> -------------------------------------- Node-split Self-help vs. No intervention ---- 
#> 
#>                  mean   sd  2.5%   25%   50%  75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net            0.49 0.42 -0.32  0.23  0.49 0.76  1.33     1743     2093    1
#> d_dir            0.34 0.55 -0.71 -0.02  0.34 0.69  1.41     2975     2630    1
#> d_ind            0.69 0.62 -0.53  0.29  0.68 1.08  1.95     1958     1847    1
#> omega           -0.35 0.81 -2.00 -0.88 -0.36 0.17  1.25     1990     2630    1
#> tau              0.86 0.19  0.55  0.73  0.84 0.97  1.31     1215     1764    1
#> tau_consistency  0.84 0.19  0.55  0.70  0.82 0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 53.9 (on 50 data points)
#>                pD: 44.3
#>               DIC: 98.2
#> 
#> Bayesian p-value: 0.64
#> 
#> ----------------------- Node-split Individual counselling vs. Group counselling ---- 
#> 
#>                  mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net           -0.25 0.43 -1.11 -0.52 -0.24  0.03  0.58     2411     2460    1
#> d_dir           -0.12 0.48 -1.08 -0.43 -0.11  0.19  0.82     3901     3244    1
#> d_ind           -0.54 0.62 -1.76 -0.94 -0.54 -0.16  0.69     1448     1837    1
#> omega            0.42 0.69 -0.98 -0.02  0.45  0.88  1.72     1412     1907    1
#> tau              0.86 0.19  0.56  0.73  0.84  0.98  1.30     1415     1789    1
#> tau_consistency  0.84 0.19  0.55  0.70  0.82  0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 53.7 (on 50 data points)
#>                pD: 44.2
#>               DIC: 97.8
#> 
#> Bayesian p-value: 0.53
#> 
#> ------------------------------------ Node-split Self-help vs. Group counselling ---- 
#> 
#>                  mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net           -0.61 0.50 -1.63 -0.91 -0.61 -0.29  0.34     2714     2478    1
#> d_dir           -0.62 0.66 -1.92 -1.05 -0.62 -0.18  0.67     3690     3017    1
#> d_ind           -0.64 0.68 -1.97 -1.08 -0.64 -0.19  0.67     1972     2270    1
#> omega            0.02 0.88 -1.74 -0.56  0.02  0.60  1.76     2153     2322    1
#> tau              0.87 0.20  0.57  0.73  0.84  0.97  1.35      948     1578    1
#> tau_consistency  0.84 0.19  0.55  0.70  0.82  0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 54.1 (on 50 data points)
#>                pD: 44.4
#>               DIC: 98.5
#> 
#> Bayesian p-value: 0.98
#> 
#> ------------------------------- Node-split Self-help vs. Individual counselling ---- 
#> 
#>                  mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS Tail_ESS Rhat
#> d_net           -0.36 0.42 -1.19 -0.62 -0.35 -0.10  0.45     2205     2293    1
#> d_dir            0.07 0.66 -1.22 -0.37  0.08  0.49  1.38     3112     3065    1
#> d_ind           -0.61 0.53 -1.67 -0.94 -0.61 -0.28  0.45     1935     2127    1
#> omega            0.68 0.83 -0.95  0.15  0.68  1.19  2.36     2171     2323    1
#> tau              0.85 0.20  0.55  0.71  0.82  0.96  1.32     1108     1567    1
#> tau_consistency  0.84 0.19  0.55  0.70  0.82  0.95  1.29     1177     1625    1
#> 
#> Residual deviance: 54.3 (on 50 data points)
#>                pD: 44.5
#>               DIC: 98.8
#> 
#> Bayesian p-value: 0.39

The DIC of each inconsistency model is unchanged from the consistency model, none of the node-splits results in a reduced between-study heterogeneity standard deviation \(\tau\) compared to the consistency model, and the Bayesian \(p\)-values are all large. There is no evidence of inconsistency.

We can visually compare the posterior distributions of the direct, indirect, and network estimates using the plot() method. These are all in agreement; the posterior densities of the direct and indirect estimates overlap. Notice that there is not much indirect information for the Individual counselling vs. No intervention comparison, so the network (consistency) estimate is very similar to the direct estimate for this comparison.

plot(smk_nodesplit) +
  ggplot2::theme(legend.position = "bottom", legend.direction = "horizontal")

Further results

Pairwise relative effects for all contrasts are produced using relative_effects() with all_contrasts = TRUE.

(smk_releff <- relative_effects(smkfit, all_contrasts = TRUE))
#>                                                  mean   sd  2.5%   25%   50%   75% 97.5% Bulk_ESS
#> d[Group counselling vs. No intervention]         1.10 0.44  0.28  0.81  1.08  1.39  2.00     1974
#> d[Individual counselling vs. No intervention]    0.84 0.24  0.40  0.68  0.83  0.99  1.33     1283
#> d[Self-help vs. No intervention]                 0.50 0.41 -0.30  0.22  0.49  0.76  1.34     1717
#> d[Individual counselling vs. Group counselling] -0.26 0.42 -1.12 -0.53 -0.25  0.01  0.53     2436
#> d[Self-help vs. Group counselling]              -0.61 0.49 -1.60 -0.93 -0.59 -0.29  0.33     2521
#> d[Self-help vs. Individual counselling]         -0.34 0.42 -1.19 -0.62 -0.34 -0.07  0.49     1853
#>                                                 Tail_ESS Rhat
#> d[Group counselling vs. No intervention]            2268    1
#> d[Individual counselling vs. No intervention]       1868    1
#> d[Self-help vs. No intervention]                    1844    1
#> d[Individual counselling vs. Group counselling]     2263    1
#> d[Self-help vs. Group counselling]                  2436    1
#> d[Self-help vs. Individual counselling]             2413    1
plot(smk_releff, ref_line = 0)

Treatment rankings, rank probabilities, and cumulative rank probabilities are produced using posterior_ranks() and posterior_rank_probs(). We set lower_better = FALSE, since a higher log odds of cessation is better (the outcome is positive).

(smk_ranks <- posterior_ranks(smkfit, lower_better = FALSE))
#>                              mean   sd 2.5% 25% 50% 75% 97.5% Bulk_ESS Tail_ESS Rhat
#> rank[No intervention]        3.89 0.32    3   4   4   4     4     2145       NA    1
#> rank[Group counselling]      1.36 0.61    1   1   1   2     3     2953     2956    1
#> rank[Individual counselling] 1.94 0.64    1   2   2   2     3     2584     1339    1
#> rank[Self-help]              2.80 0.70    1   3   3   3     4     2411       NA    1
plot(smk_ranks)

(smk_rankprobs <- posterior_rank_probs(smkfit, lower_better = FALSE))
#>                           p_rank[1] p_rank[2] p_rank[3] p_rank[4]
#> d[No intervention]             0.00      0.00      0.10      0.89
#> d[Group counselling]           0.71      0.23      0.06      0.00
#> d[Individual counselling]      0.23      0.59      0.17      0.00
#> d[Self-help]                   0.06      0.17      0.66      0.10
plot(smk_rankprobs)

(smk_cumrankprobs <- posterior_rank_probs(smkfit, lower_better = FALSE, cumulative = TRUE))
#>                           p_rank[1] p_rank[2] p_rank[3] p_rank[4]
#> d[No intervention]             0.00      0.00      0.11         1
#> d[Group counselling]           0.71      0.94      1.00         1
#> d[Individual counselling]      0.23      0.83      1.00         1
#> d[Self-help]                   0.06      0.23      0.90         1
plot(smk_cumrankprobs)

References

Dias, S., N. J. Welton, D. M. Caldwell, and A. E. Ades. 2010. “Checking Consistency in Mixed Treatment Comparison Meta-Analysis.” Statistics in Medicine 29 (7-8): 932–44. https://doi.org/10.1002/sim.3767.
Dias, S., N. J. Welton, A. J. Sutton, D. M. Caldwell, G. Lu, and A. E. Ades. 2011. “NICE DSU Technical Support Document 4: Inconsistency in Networks of Evidence Based on Randomised Controlled Trials.” National Institute for Health and Care Excellence. https://www.sheffield.ac.uk/nice-dsu.
Hasselblad, V. 1998. “Meta-Analysis of Multitreatment Studies.” Medical Decision Making 18 (1): 37–43. https://doi.org/10.1177/0272989x9801800110.
