This R package boot.heterogeneity provides functions for testing the between-study heterogeneity in meta-analysis of standardized mean differences (d), Fisher-transformed Pearson’s correlations (r), and log odds ratio (OR).
In the following three examples, we describe how to use our package boot.heterogeneity to test the between-study heterogeneity for each of the three effect sizes (d, r, OR). Datasets, R code, and output are provided so that applied researchers can easily replicate each example or modify the code for their own datasets.
The three example datasets are internal to our package, and researchers can load them using boot.heterogeneity:::[dataset_name]. In each example dataset, the rows correspond to studies in the meta-analysis, and the columns correspond to the required input for each study, which includes, but is not limited to, effect sizes, sample size(s), and moderators.
The example R code adopts the default values for some of the arguments (e.g., the default nominal alpha level is 0.05). To change the defaults, use help() or ? to access the documentation page of each function (e.g., help(boot.fcor)).
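For instance, either of the following opens the documentation page of boot.d(), where the available arguments and their default values are listed, before running the first example:
?boot.d
help(boot.d)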
The output is formatted to have the same layout across the examples.
Inclusion of moderators is an option for researchers who are interested in using factors to explain the systematic between-study heterogeneity. To see how we include moderators, please go to section 1.2.
The heterogeneity magnitude test allows researchers to compare the magnitude of the between-study heterogeneity against a specific level, denoted as lambda in the alternative hypothesis. To see how we test a specific lambda in the alternative hypothesis, please go to section 2.2.
Parallel implementation of the bootstrapping process can save a considerable amount of computing time, especially when the number of bootstrap replications is large. To see how we accelerate the bootstrapping process with parallel implementation and computing nodes, please go to section 3.2.
In the main text of the article, an “Empirical Illustration” section is included to discuss the three examples in more detail.
For the most recent updates, researchers are highly recommended to install the development version of this package from GitHub using the following syntax:
# install.packages("devtools")
library(devtools)
devtools::install_github("gabriellajg/boot.heterogeneity",
force = TRUE,
build_vignettes = TRUE,
dependencies = TRUE)
library(boot.heterogeneity)
The newest version of this package will also be available on CRAN shortly.
Note that you’ll need the following packages to install this package successfully:
library(metafor) # for Q-test
library(pbmcapply) # optional - for parallel implementation of bootstrapping
library(HSAUR3) # for an example dataset in the tutorial
library(knitr) # for knitting the tutorial
library(rmarkdown) # for knitting the tutorial
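If any of these packages are missing, they can first be installed from CRAN in the usual way (a one-off step):
# install the prerequisite packages from CRAN if they are not already available
install.packages(c("metafor", "pbmcapply", "HSAUR3", "knitr", "rmarkdown"))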
boot.d() is the function to test the between-study heterogeneity in meta-analysis of standardized mean differences (d).
Load the example dataset selfconcept first:
selfconcept <- boot.heterogeneity:::selfconcept
selfconcept consists of 18 studies in which the effect of open versus traditional education on students’ self-concept was studied (Hedges et al., 1981). The columns of selfconcept are: sample sizes of the two groups (n1 and n2), Hedges’s g, Cohen’s d, and a moderator X (X is not used in the current example).
head(selfconcept, 3)
#> n1 n2 g d X
#> 1 100 180 0.100 0.09972997 0.100
#> 2 131 138 -0.162 -0.16154452 -0.162
#> 3 40 40 -0.090 -0.08913183 -0.091
Extract the required arguments from selfconcept:
# n1 and n2 are vectors of sample sizes in the two groups
n1 <- selfconcept$n1
n2 <- selfconcept$n2
# g is a vector of effect sizes
g <- selfconcept$g
If g is a vector of biased estimates of standardized mean differences in the meta-analytical study, a small-sample adjustment must be applied:
cm <- (1-3/(4*(n1+n2-2)-1)) # correction factor to compensate for small-sample bias (Hedges, 1981)
d <- cm*g
Run the heterogeneity test using the function boot.d() and the adjusted effect size d:
boot.run <- boot.d(n1, n2, est = d, model = 'random', p_cut = 0.05)
Alternatively, such an adjustment can be performed on the unadjusted effect size g by specifying adjust = TRUE:
boot.run2 <- boot.d(n1, n2, est = g, model = 'random', adjust = TRUE, p_cut = 0.05)
boot.run and boot.run2 will return the same results:
boot.run
#> stat p_value Heterogeneity
#> Qtest 23.391659 0.136929 n.s
#> boot.REML 2.037578 0.053100 n.s
boot.run2
#> stat p_value Heterogeneity
#> Qtest 23.391659 0.136929 n.s
#> boot.REML 2.037578 0.053100 n.s
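Beyond the printed summary, individual entries can be pulled out of the returned object for further use. The indexing below is a minimal sketch that assumes the result keeps the row and column names shown in the printout; it is not part of the package documentation:
# extract the bootstrap-based REML-LR p-value (assumes the printed row/column names)
boot.run["boot.REML", "p_value"]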
Load a hypothetical dataset hypo_moder first:
hypo_moder <- boot.heterogeneity:::hypo_moder
Three moderators (cov.z1, cov.z2, cov.z3) are included:
head(hypo_moder)
#> n1 n2 d cov.z1 cov.z2 cov.z3
#> 1 59 65 0.8131324 -0.005767173 0.80418951 1.2383041
#> 2 166 165 1.0243732 2.404653389 -0.05710677 -0.2793463
#> 3 68 68 1.5954236 0.763593461 0.50360797 1.7579031
#> 4 44 31 0.6809888 -0.799009249 1.08576936 0.5607461
#> 5 98 95 -1.3017946 -1.147657009 -0.69095384 -0.4527840
#> 6 44 31 -1.9398508 -0.289461574 -1.28459935 -0.8320433
Again, run the heterogeneity test using boot.d() with all moderators included in a matrix mods and the model type specified as model = 'mixed':
boot.run3 <- boot.d(n1 = hypo_moder$n1,
n2 = hypo_moder$n2,
est = hypo_moder$d,
model = 'mixed',
mods = cbind(hypo_moder$cov.z1, hypo_moder$cov.z2, hypo_moder$cov.z3),
p_cut = 0.05)
The results in boot.run3 will be in the same format as boot.run and boot.run2:
boot.run3
#> stat p_value Heterogeneity
#> Qtest 31.849952 0.000806 sig
#> boot.REML 9.283428 0.000400 sig
In the presence of moderators, the function above tests whether the variability in the true standardized mean differences after accounting for the moderators included in the model is larger than sampling variability alone (Viechtbauer, 2010).
In the first line, the Q-statistic is Q(df = 11) = 31.85 and the associated p-value is 0.0008. This statistic is significant (sig) at an alpha level of 0.05, meaning that the true effect sizes after accounting for the moderators are heterogeneous.
In the second line, the B-REML-LR statistic is 9.28 and the bootstrap-based p-value is 0.0004. This means that the true effect sizes after accounting for the moderators are heterogeneous at an alpha level of 0.05.
boot.fcor() is the function to test the between-study heterogeneity in meta-analysis of Fisher-transformed Pearson’s correlations (r).
Load the example dataset sensation first:
sensation <- boot.heterogeneity:::sensation
Extract the required arguments from sensation:
# n is a vector of sample sizes
n <- sensation$n
# Pearson's correlation
r <- sensation$r
# Fisher's Transformation
z <- 1/2*log((1+r)/(1-r))
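# note: R's built-in atanh() computes the same Fisher transformation,
# so z <- atanh(r) would give identical values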
Run the heterogeneity test using boot.fcor():
boot.run.cor <- boot.fcor(n, z, model = 'random', p_cut = 0.05)
The test of between-study heterogeneity has the following results:
boot.run.cor
#> stat p_value Heterogeneity
#> Qtest 29.060970 0.00385868 sig
#> boot.REML 6.133111 0.00400882 sig
In the first line, the Q-statistic is Q(df = 12) = 29.06 and the associated p-value is 0.004. This statistic is significant (sig) at an alpha level of 0.05, meaning that the true effect sizes are heterogeneous.
In the second line, the B-REML-LR statistic is 6.13 and the bootstrap-based p-value is 0.004. This means that the true effect sizes are heterogeneous at an alpha level of 0.05.
To test the magnitude of the between-study heterogeneity against a specific level (here lambda = 0.08), run boot.fcor() with the lambda argument:
boot.run.cor2 <- boot.fcor(n, z, lambda=0.08, model = 'random', p_cut = 0.05)
The magnitude test of between-study heterogeneity has the following results:
boot.run.cor2
#> stat p_value Heterogeneity
#> boot.REML 2.42325 0.04607372 sig
boot.lnOR() is the function to test the between-study heterogeneity in meta-analysis of natural-logarithm-transformed odds ratios (OR).
Load the example dataset smoking from the R package HSAUR3:
library(HSAUR3)
#> Loading required package: tools
data(smoking)
Extract the required arguments from smoking:
# Y1: receive treatment; Y2: stop smoking
n_00 <- smoking$tc - smoking$qc # did not receive treatment and did not stop smoking
n_01 <- smoking$qc # did not receive treatment but stopped smoking
n_10 <- smoking$tt - smoking$qt # received treatment but did not stop smoking
n_11 <- smoking$qt # received treatment and stopped smoking
The log odds ratios can be computed, but they are not needed by boot.lnOR():
lnOR <- log(n_11*n_00/n_01/n_10)
lnOR
#> [1] 0.6151856 -0.0235305 0.5658078 0.4274440 1.0814445 0.9109288
#> [7] 0.9647431 0.7103890 1.0375520 -0.1407277 0.7747272 1.7924180
#> [13] 1.2021192 0.3607987 0.2876821 0.2110139 1.2591392 0.1549774
#> [19] 1.3411739 0.2963470 0.6116721 0.3786539 0.5389965 0.7532417
#> [25] 0.5653138 0.3786539
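As a sanity check (not required by the package), the same log odds ratios can be reproduced with metafor's escalc(). This is a minimal sketch that assumes no zero cell counts, since escalc() would otherwise apply a continuity correction by default:
library(metafor)
# yi from escalc(measure = "OR") is the log odds ratio for each study
lnOR_metafor <- escalc(measure = "OR", ai = n_11, bi = n_10, ci = n_01, di = n_00)$yi
all.equal(as.numeric(lnOR_metafor), lnOR)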
Run the heterogeneity test using boot.lnOR():
boot.run.lnOR <- boot.lnOR(n_00, n_01, n_10, n_11, model = 'random', p_cut = 0.05)
The test of between-study heterogeneity has the following results:
boot.run.lnOR
#> stat p_value Heterogeneity
#> Qtest 34.873957 0.09050857 n.s
#> boot.REML 3.071329 0.03706729 sig
In the first line, the Q-statistic is Q(df = 25) = 34.87 and the associated p-value is 0.091. This statistic is not significant (n.s) at an alpha level of 0.05, meaning that the assumption of homogeneity cannot be rejected.
In the second line, the B-REML-LR statistic is 3.07 and the bootstrap-based p-value is 0.037. This means that the assumption of homogeneity is rejected and the true effect sizes are heterogeneous at an alpha level of 0.05.
Run the heterogeneity test using boot.lnOR() with parallel computing and 4 cores:
boot.run.lnOR2 <- boot.lnOR(n_00, n_01, n_10, n_11, model = 'random', p_cut = 0.05,
parallel = TRUE, cores = 4)
The test of between-study heterogeneity has the same results as those in 3.1:
boot.run.lnOR2
#|=====================================================| 100%, Elapsed 00:41
#> stat p_value Heterogeneity
#> Qtest 34.873957 0.09050857 n.s
#> boot.REML 3.071329 0.03706729 sig
sessionInfo()
#> R version 4.1.1 (2021-08-10)
#> Platform: x86_64-apple-darwin17.0 (64-bit)
#> Running under: macOS Big Sur 10.16
#>
#> Matrix products: default
#> BLAS: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRblas.0.dylib
#> LAPACK: /Library/Frameworks/R.framework/Versions/4.1/Resources/lib/libRlapack.dylib
#>
#> locale:
#> [1] C/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> attached base packages:
#> [1] tools stats graphics grDevices utils datasets methods
#> [8] base
#>
#> other attached packages:
#> [1] HSAUR3_1.0-11
#>
#> loaded via a namespace (and not attached):
#> [1] knitr_1.36 mathjaxr_1.4-0 magrittr_2.0.1
#> [4] lattice_0.20-44 R6_2.5.1 rlang_0.4.11
#> [7] fastmap_1.1.0 stringr_1.4.0 parallel_4.1.1
#> [10] grid_4.1.1 nlme_3.1-153 xfun_0.26
#> [13] jquerylib_0.1.4 metafor_3.1-23 htmltools_0.5.2
#> [16] yaml_2.2.1 digest_0.6.28 Matrix_1.3-4
#> [19] pbmcapply_1.5.0 sass_0.4.0 evaluate_0.14
#> [22] rmarkdown_2.11 boot.heterogeneity_1.1.5 stringi_1.7.5
#> [25] compiler_4.1.1 bslib_0.3.1 metadat_1.0-0
#> [28] jsonlite_1.7.2