Let \(Y\) be a binary response, \(A\) a categorical treatment, and \(W\) a vector of confounders. Assume that we have observed \(n\) i.i.d. observations \((Y_i, W_i, A_i) \sim P,\ i=1,\ldots,n\). In the following we are interested in estimating the target parameter \(\psi(P) = E[Y(a)]\), where \(Y(a)\) is the potential outcome we would have observed if treatment \(a\) had been administered, possibly contrary to the actual treatment that was observed, i.e., \(Y = Y(A)\).
Under the assumptions of consistency (\(Y = Y(A)\)), no unmeasured confounders (\(Y(a) \perp\!\!\!\perp A \mid W\)), and positivity (\(P(A=a\mid W) > 0\) almost surely), the target parameter can be identified from the observed data distribution as \[E(E[Y|W,A=a]) = E(E[Y(a)|W]) = E[Y(a)]\] or \[E[Y I(A=a)/P(A=a|W)] = E[Y(a)].\]
This suggests estimation via either outcome regression (OR, g-computation) or inverse probability weighting (IPW). The two approaches can also be combined into a doubly-robust augmented inverse probability weighted (AIPW) estimator.
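For reference, a common form of the AIPW estimator of \(E[Y(a)]\), given a fitted outcome regression \(\hat{m}_a(W) = \hat{E}[Y\mid W, A=a]\) and a fitted propensity score \(\hat{\pi}_a(W) = \hat{P}(A=a\mid W)\), is \[\hat{\psi}_a = \frac{1}{n}\sum_{i=1}^{n}\left\{\frac{I(A_i=a)}{\hat{\pi}_a(W_i)}\bigl(Y_i - \hat{m}_a(W_i)\bigr) + \hat{m}_a(W_i)\right\}.\] Setting \(\hat{m}_a \equiv 0\) recovers the IPW estimator, while dropping the weighted residual term recovers the OR estimator.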
As an illustration we simulate from the following model \[Y \sim Bernoulli(\operatorname{expit}\{A+X+Z\})\] \[A \sim Bernoulli(\operatorname{expit}\{X+Z\})\] \[Z \sim \mathcal{N}(X,1)\]
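The code defining the model object m and the data set d used below is not shown in this extract. A minimal sketch using the lava package could look as follows (the sample size and seed are assumptions, so the numbers below will not be reproduced exactly; with lava's default parameters, intercepts 0, regression coefficients 1, and residual variances 1, the specification matches the model above, and X is standard normal by default):
library(lava)
library(targeted)
# structural equations for the simulation model
m <- lvm(Y ~ A + X + Z,   # Y ~ Bernoulli(expit(A + X + Z))
         A ~ X + Z,       # A ~ Bernoulli(expit(X + Z))
         Z ~ X)           # Z ~ N(X, 1); X is N(0, 1) by default
distribution(m, ~A) <- binomial.lvm("logit")
distribution(m, ~Y) <- binomial.lvm("logit")
set.seed(1)               # seed is an assumption
d <- sim(m, 1e3)          # sample size is an assumption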
The target parameter, \(E[Y(a)]\), can be estimated with the targeted::ate function:
args(ate)
#> function (formula, data = parent.frame(), weights, offset, family = stats::gaussian(identity),
#> nuisance = NULL, propensity = nuisance, all, labels = NULL,
#> ...)
#> NULL
The formula should be specified as formula = response ~ treatment, the outcome regression as nuisance = ~ covariates, and the propensity model as propensity = ~ covariates. Alternatively, the formula can be specified with the notation formula = response ~ treatment | OR-covariates | propensity-covariates.
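For example, the following two calls should specify the same estimator, adjusting for X and Z in both the outcome regression and the propensity model (shown for illustration; output omitted):
ate(Y ~ A, nuisance = ~ X + Z, propensity = ~ X + Z, data = d)
ate(Y ~ A | X + Z | X + Z, data = d)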
Parametric models are assumed for both the outcome regression and the propensity model, which default to logistic regression models (as in the simulation). A linear model can be used for the outcome regression part by setting binary=FALSE.
To illustrate the syntax, we can estimate the (non-causal) marginal mean of the response within each treatment group:
ate(Y ~ A, data=d, nuisance=~1, propensity=~1)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.2937 0.02029 0.2539 0.3334 1.741e-47
#> A=1 0.8367 0.01660 0.8042 0.8692 0.000e+00
or equivalently
ate(Y ~ A | 1 | 1, data=d)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.2937 0.02029 0.2539 0.3334 1.741e-47
#> A=1 0.8367 0.01660 0.8042 0.8692 0.000e+00
In this simulation we can compare the estimates to the true expected potential outcomes, which can be approximated by Monte Carlo integration by simulating from the model in which we intervene on \(A\) (i.e., break its dependence on \(X\) and \(Z\)):
mean(sim(intervention(m, "A", 0), 2e5)$Y)
#> [1] 0.50023
mean(sim(intervention(m, "A", 1), 2e5)$Y)
#> [1] 0.637795
The IPW estimate is then obtained by
ate(Y ~ A | 1 | X+Z, data=d)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.4793 0.02678 0.4268 0.5318 1.278e-71
#> A=1 0.6669 0.03263 0.6029 0.7308 7.528e-93
Similarly, the OR estimator is obtained by
ate(Y ~ A | A*(X+Z) | 1, data=d)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.4858 0.02796 0.4310 0.5406 1.243e-67
#> A=1 0.7213 0.02600 0.6703 0.7723 2.248e-169
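For intuition, the corresponding textbook estimators of, e.g., \(E[Y(1)]\) can be computed directly with glm (a sketch, not the package's internal implementation; it assumes the simulated data set d from above):
# outcome regression (g-computation): average the predicted outcome
# under A = 1 over the empirical covariate distribution
or.fit <- glm(Y ~ A * (X + Z), family = binomial, data = d)
mean(predict(or.fit, newdata = transform(d, A = 1), type = "response"))
# inverse probability weighting: weight the observed outcomes in the
# A = 1 group by the inverse of the estimated propensity score
ps.fit <- glm(A ~ X + Z, family = binomial, data = d)
with(d, mean(Y * (A == 1) / predict(ps.fit, type = "response")))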
Both estimators are in this case consistent, though the OR estimate is clearly more efficient than the IPW estimate. However, both approaches rely on correct specification of their respective models; omitting the confounder \(Z\), for example, yields biased estimates:
ate(Y ~ A | 1 | X, data=d)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.4208 0.02706 0.3678 0.4739 1.543e-54
#> A=1 0.7072 0.03366 0.6412 0.7731 5.204e-98
ate(Y ~ A | A*X | 1, data=d)
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.4240 0.02617 0.3727 0.4752 4.794e-59
#> A=1 0.7474 0.02411 0.7001 0.7946 5.297e-211
In contrast, the doubly-robust AIPW estimator remains consistent as long as at least one of the two working models, the propensity model or the outcome regression model, is correctly specified:
a <- ate(Y ~ A | A*X | X+Z, data=d)
summary(a)
#>
#> Augmented Inverse Probability Weighting estimator
#> Response Y (Outcome model: gaussian):
#> Y ~ A * X
#> Exposure A (Propensity model: logistic regression):
#> A ~ X + Z
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> A=0 0.492322 0.02609 0.4412 0.54346 2.028e-79
#> A=1 0.663735 0.03068 0.6036 0.72386 8.429e-104
#> Outcome model:
#> (Intercept) 0.426675 0.02498 0.3777 0.47563 2.048e-65
#> A 0.322545 0.03399 0.2559 0.38916 2.303e-21
#> X 0.232605 0.01825 0.1968 0.26837 3.266e-37
#> A:X -0.075738 0.02609 -0.1269 -0.02461 3.693e-03
#> Propensity model:
#> (Intercept) -0.006421 0.08337 -0.1698 0.15699 9.386e-01
#> X -0.863004 0.12123 -1.1006 -0.62539 1.091e-12
#> Z -1.005211 0.09080 -1.1832 -0.82725 1.742e-28
#>
#> Average Treatment Effect (constrast: 'A=1' - 'A=0'):
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> ATE 0.1714 0.03661 0.09967 0.2432 2.831e-06
From the summary output we also get the estimates of the Average Treatment Effect expressed as a causal relative risk (RR), causal odds ratio (OR), or causal risk difference (RD), including confidence limits.
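For intuition, the AIPW estimate of, e.g., \(E[Y(1)]\) based on the same two nuisance models can also be computed by hand (a sketch, not the package's implementation; it assumes the data set d from above):
# AIPW for E[Y(1)]: augment the outcome-regression prediction with an
# inverse-probability-weighted residual term
or.fit <- glm(Y ~ A * X, family = binomial, data = d)   # outcome model as in 'a'
ps.fit <- glm(A ~ X + Z, family = binomial, data = d)   # propensity model as in 'a'
m1  <- predict(or.fit, newdata = transform(d, A = 1), type = "response")
pr1 <- predict(ps.fit, type = "response")
with(d, mean((A == 1) / pr1 * (Y - m1) + m1))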
From the model object a we can extract the estimated coefficients (expected potential outcomes) and the corresponding asymptotic variance matrix with the coef and vcov methods. The estimated influence function can be extracted with the iid method:
coef(a)
#> A=0 A=1
#> 0.4923220 0.6637346
vcov(a)
#> A=0 A=1
#> A=0 0.0006807275 0.0001409887
#> A=1 0.0001409887 0.0009411920
head(iid(a))
#> A=0 A=1
#> 1 -0.0012182867 -0.0002821609
#> 2 0.0002438822 0.0002898771
#> 3 -0.0004583641 -0.0002555994
#> 4 0.0003753253 0.0002567340
#> 5 0.0011971267 0.0001267515
#> 6 -0.0004585485 -0.0001710518
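The iid decomposition appears to be scaled so that its cross-product approximates the variance of the estimator (an assumption about the scaling convention, which can be checked against vcov(a)):
crossprod(iid(a))   # should approximately reproduce vcov(a) if the scaling holds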
As an example with multiple treatment levels, we simulate from a new model where the outcome is continuous and the treatment follows a proportional odds model.
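The simulation code for this second example is not shown in this extract; a minimal sketch of one such data-generating mechanism (the coefficients, cut-points, sample size, seed, and the coding of a as a factor are all assumptions and will not reproduce the output below):
set.seed(1)
n <- 1e3
x <- rnorm(n)
# treatment with four levels from a proportional odds model in x:
# logit P(a <= k | x) = alpha_k - x with increasing cut-points alpha_k
alpha <- c(-1, 0, 1)
u <- runif(n)
a <- rowSums(u > sapply(alpha, function(ak) plogis(ak - x)))  # values 0,...,3
# continuous outcome with a separate mean level for each treatment value
y <- c(-1.5, -0.9, -0.4, 0.7)[a + 1] + 0.5 * x + rnorm(n)
d <- data.frame(y = y, a = factor(a), x = x)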
The AIPW estimator is obtained by fitting a logistic regression model for each treatment level (versus all the others) in the propensity model (here a correct model is specified for both the OR and the IPW part):
summary(a2 <- ate(y ~ a | a*x | x, data=d, binary=FALSE))
#>
#> Augmented Inverse Probability Weighting estimator
#> Response y (Outcome model: gaussian):
#> y ~ a * x
#> Exposure a (Propensity model: logistic regression):
#> a ~ x
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> a=0 -1.5836 0.04754 -1.6767 -1.4904 2.510e-243
#> a=1 -0.8580 0.04183 -0.9400 -0.7760 1.634e-93
#> a=2 -0.3993 0.03513 -0.4681 -0.3304 6.175e-30
#> a=3 0.7297 0.02693 0.6770 0.7825 1.043e-161
#>
#> Average Treatment Effect (constrast: 'a=1' - 'a=0'):
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> ATE 0.7256 0.06138 0.6053 0.8459 3.069e-32
Choosing a different contrast for the association measures:
summary(a2, contrast=c(2,4))
#>
#> Augmented Inverse Probability Weighting estimator
#> Response y (Outcome model: gaussian):
#> y ~ a * x
#> Exposure a (Propensity model: logistic regression):
#> a ~ x
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> a=0 -1.5836 0.04754 -1.6767 -1.4904 2.510e-243
#> a=1 -0.8580 0.04183 -0.9400 -0.7760 1.634e-93
#> a=2 -0.3993 0.03513 -0.4681 -0.3304 6.175e-30
#> a=3 0.7297 0.02693 0.6770 0.7825 1.043e-161
#>
#> Average Treatment Effect (constrast: 'a=1' - 'a=3'):
#>
#> Estimate Std.Err 2.5% 97.5% P-value
#> ATE -1.588 0.04665 -1.679 -1.496 6.079e-254
head(iid(a2))
#> a=0 a=1 a=2 a=3
#> 1 -3.166428e-05 -5.457237e-06 -6.359929e-06 -6.789360e-06
#> 2 1.092943e-04 8.474978e-05 9.422265e-05 1.094304e-04
#> 3 1.301396e-04 9.795983e-05 1.089817e-04 1.052614e-04
#> 4 -9.113452e-05 7.659446e-05 -7.048630e-05 -8.723959e-05
#> 5 1.215406e-04 9.249439e-05 1.028746e-04 1.340444e-05
#> 6 2.515487e-04 1.778471e-04 1.983432e-04 2.724858e-04
estimate(a2, function(x) x[2]-x[4])
#> Estimate Std.Err 2.5% 97.5% P-value
#> a=1 -1.588 0.04665 -1.679 -1.496 6.079e-254
sessionInfo()
#> R version 4.3.2 (2023-10-31)
#> Platform: aarch64-apple-darwin22.6.0 (64-bit)
#> Running under: macOS Sonoma 14.3.1
#>
#> Matrix products: default
#> BLAS: /Users/kkzh/.asdf/installs/R/4.3.2/lib/R/lib/libRblas.dylib
#> LAPACK: /Users/kkzh/.asdf/installs/R/4.3.2/lib/R/lib/libRlapack.dylib; LAPACK version 3.11.0
#>
#> locale:
#> [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
#>
#> time zone: Europe/Copenhagen
#> tzcode source: internal
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] targeted_0.5 lava_1.7.4
#>
#> loaded via a namespace (and not attached):
#> [1] Matrix_1.6-5 future.apply_1.11.1 jsonlite_1.8.8
#> [4] futile.logger_1.4.3 compiler_4.3.2 Rcpp_1.0.12
#> [7] parallel_4.3.2 jquerylib_0.1.4 globals_0.16.2
#> [10] splines_4.3.2 yaml_2.3.7 fastmap_1.1.1
#> [13] lattice_0.22-5 R6_2.5.1 knitr_1.45
#> [16] future_1.33.1 nloptr_2.0.3 bslib_0.5.1
#> [19] rlang_1.1.3 cachem_1.0.8 xfun_0.41
#> [22] sass_0.4.7 cli_3.6.2 formatR_1.14
#> [25] futile.options_1.0.1 digest_0.6.34 grid_4.3.2
#> [28] mvtnorm_1.2-4 timereg_2.0.5 evaluate_0.23
#> [31] pracma_2.4.4 data.table_1.15.0 lambda.r_1.2.4
#> [34] numDeriv_2016.8-1.1 listenv_0.9.1 codetools_0.2-19
#> [37] survival_3.5-7 optimx_2023-10.21 parallelly_1.37.0
#> [40] rmarkdown_2.25 tools_4.3.2 htmltools_0.5.6.1
#> [43] mets_1.3.4