WeightIt is a one-stop package to generate balancing weights
for point and longitudinal treatments in observational studies. Support
is included for binary, multi-category, and continuous treatments, a
variety of estimands including the ATE, ATT, ATC, ATO, and others, and
support for a wide variety of weighting methods, including those that
rely on parametric modeling, machine learning, or optimization.
WeightIt also provides functionality for fitting regression
models in weighted samples that account for estimation of the weights in
quantifying uncertainty. WeightIt uses a familiar formula
interface and is meant to complement MatchIt
as a package
that provides a unified interface to basic and advanced weighting
methods.
For a complete vignette, see the website for WeightIt or vignette("WeightIt").
To install and load WeightIt, use the code below:

# CRAN version
pak::pkg_install("WeightIt")

# Development version
pak::pkg_install("ngreifer/WeightIt")

library("WeightIt")
The workhorse function of WeightIt is weightit(), which generates weights from a given formula and data input according to the methods and other parameters specified by the user. Below is an example of the use of weightit() to generate propensity score weights for estimating the ATT:
data("lalonde", package = "cobalt")

W <- weightit(treat ~ age + educ + nodegree +
                married + race + re74 + re75,
              data = lalonde, method = "glm",
              estimand = "ATT")
W
#> A weightit object
#> - method: "glm" (propensity score weighting with GLM)
#> - number of obs.: 614
#> - sampling weights: none
#> - treatment: 2-category
#> - estimand: ATT (focal: 1)
#> - covariates: age, educ, nodegree, married, race, re74, re75
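For intuition, propensity score weights for the ATT weight each unit by the odds of treatment: treated units receive a weight of 1, and control units receive p/(1 - p), where p is the propensity score. A minimal sketch of this arithmetic in Python (a hypothetical helper, not WeightIt's implementation):

```python
def att_weights(ps, treat):
    # ATT weighting by the odds: treated units get weight 1;
    # control units get p / (1 - p), where p is the propensity score
    return [1.0 if t == 1 else p / (1.0 - p)
            for p, t in zip(ps, treat)]

att_weights([0.5, 0.75, 0.2], [1, 0, 0])  # [1.0, 3.0, 0.25]
```

Control units that resemble treated units (high p) are up-weighted, so the weighted control group mimics the covariate distribution of the treated group.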
Evaluating weights has two components: evaluating the covariate balance produced by the weights, and evaluating whether the weights will allow for sufficient precision in the eventual effect estimate. For the first goal, functions in the cobalt package, which are fully compatible with WeightIt, can be used, as demonstrated below:
library("cobalt")
bal.tab(W, un = TRUE)
#> Balance Measures
#> Type Diff.Un Diff.Adj
#> prop.score Distance 1.7941 -0.0205
#> age Contin. -0.3094 0.1188
#> educ Contin. 0.0550 -0.0284
#> nodegree Binary 0.1114 0.0184
#> married Binary -0.3236 0.0186
#> race_black Binary 0.6404 -0.0022
#> race_hispan Binary -0.0827 0.0002
#> race_white Binary -0.5577 0.0021
#> re74 Contin. -0.7211 -0.0021
#> re75 Contin. -0.2903 0.0110
#>
#> Effective sample sizes
#> Control Treated
#> Unadjusted 429. 185
#> Adjusted 99.82 185
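For continuous covariates, the Diff.Adj column is a weighted standardized mean difference. A sketch of that computation in Python (hypothetical helpers, not cobalt's code; cobalt standardizes by the treated-group standard deviation by default for the ATT):

```python
def weighted_mean(x, w):
    # weighted average: sum(w_i * x_i) / sum(w_i)
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def weighted_smd(x, w, treat, sd):
    # difference in weighted group means, divided by a supplied
    # standardization factor sd (e.g., the treated-group sd for the ATT)
    xt = [xi for xi, t in zip(x, treat) if t == 1]
    wt = [wi for wi, t in zip(w, treat) if t == 1]
    xc = [xi for xi, t in zip(x, treat) if t == 0]
    wc = [wi for wi, t in zip(w, treat) if t == 0]
    return (weighted_mean(xt, wt) - weighted_mean(xc, wc)) / sd
```

Values near 0 after weighting (as in the Diff.Adj column above) indicate that the weighted groups have similar covariate means.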
For the second goal, qualities of the distributions of weights can be assessed using summary(), as demonstrated below.
summary(W)
#> Summary of weights
#>
#> - Weight ranges:
#>
#> Min Max
#> treated 1.0000 || 1.0000
#> control 0.0092 |---------------------------| 3.7432
#>
#> - Units with the 5 most extreme weights by group:
#>
#> 5 4 3 2 1
#> treated 1 1 1 1 1
#> 597 573 381 411 303
#> control 3.0301 3.0592 3.2397 3.5231 3.7432
#>
#> - Weight statistics:
#>
#> Coef of Var MAD Entropy # Zeros
#> treated 0.000 0.000 0.000 0
#> control 1.818 1.289 1.098 0
#>
#> - Effective Sample Sizes:
#>
#> Control Treated
#> Unweighted 429. 185
#> Weighted 99.82 185
Desirable qualities include coefficients of variation close to 0 and large effective sample sizes.
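The effective sample sizes reported by summary() follow the Kish approximation, (Σw)² / Σw², which shrinks as the weights become more variable. A quick sketch (hypothetical helper mirroring the formula, not WeightIt's code):

```python
def kish_ess(w):
    # Kish effective sample size: (sum of weights)^2 / (sum of squared weights)
    return sum(w) ** 2 / sum(wi * wi for wi in w)

kish_ess([1.0] * 10)  # uniform weights: ESS equals n, here 10.0
kish_ess([3.0, 1.0])  # unequal weights shrink the ESS below n: 1.6
```

This is why the weighted control group above has an ESS of about 100 despite containing 429 units: a few controls carry much of the weight.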
Finally, we can estimate the effect of the treatment using a weighted outcome model, accounting for estimation of the weights in the standard error of the effect estimate:
fit <- glm_weightit(re78 ~ treat, data = lalonde,
                    weightit = W)
summary(fit, ci = TRUE)
#>
#> Call:
#> glm_weightit(formula = re78 ~ treat, data = lalonde, weightit = W)
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|) 2.5 % 97.5 %
#> (Intercept) 5135.1 583.8 8.797 <1e-06 3990.9 6279.2 ***
#> treat 1214.1 798.2 1.521 0.128 -350.3 2778.4
#> Standard error: HC0 robust (adjusted for estimation of weights)
The tables below contain the available methods in WeightIt for estimating weights for binary, multi-category, and continuous treatments. Many of these methods do not require any other package; see vignette("installing-packages") for information on how to install the packages that are used.
For binary treatments:

Method | method |
---|---|
Binary regression PS | "glm" |
Generalized boosted modeling PS | "gbm" |
Covariate balancing PS | "cbps" |
Non-parametric covariate balancing PS | "npcbps" |
Entropy balancing | "ebal" |
Inverse probability tilting | "ipt" |
Stable balancing weights | "optweight" |
SuperLearner PS | "super" |
Bayesian additive regression trees PS | "bart" |
Energy balancing | "energy" |
For multi-category treatments:

Method | method |
---|---|
Multinomial regression PS | "glm" |
Generalized boosted modeling PS | "gbm" |
Covariate balancing PS | "cbps" |
Non-parametric covariate balancing PS | "npcbps" |
Entropy balancing | "ebal" |
Inverse probability tilting | "ipt" |
Stable balancing weights | "optweight" |
SuperLearner PS | "super" |
Bayesian additive regression trees PS | "bart" |
Energy balancing | "energy" |
For continuous treatments:

Method | method |
---|---|
Generalized linear model GPS | "glm" |
Generalized boosted modeling GPS | "gbm" |
Covariate balancing GPS | "cbps" |
Non-parametric covariate balancing GPS | "npcbps" |
Entropy balancing | "ebal" |
Stable balancing weights | "optweight" |
SuperLearner GPS | "super" |
Bayesian additive regression trees GPS | "bart" |
Distance covariance optimal weighting | "energy" |
In addition, WeightIt implements the subgroup balancing propensity score using the function sbps(). Several other tools and utilities are available, including trim() to trim or truncate weights, calibrate() to calibrate propensity scores, and get_w_from_ps() to compute weights from propensity scores.
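As intuition for what trimming does, trimming at a quantile caps (truncates) the largest weights at that quantile's value, trading a little bias for reduced variance. A toy Python sketch using a simple nearest-rank quantile (a hypothetical stand-in; WeightIt's trim() offers more options):

```python
def trim_weights(w, q=0.99):
    # cap weights above the q-th quantile at that quantile's value
    # (nearest-rank quantile for simplicity; not WeightIt's trim())
    ws = sorted(w)
    cap = ws[int(q * (len(ws) - 1))]
    return [min(wi, cap) for wi in w]

trim_weights([1.0, 2.0, 3.0, 100.0], q=0.75)  # [1.0, 2.0, 3.0, 3.0]
```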
WeightIt provides functions to fit weighted models that
account for the uncertainty in estimating the weights. These include
glm_weightit()
for fitting generalized linear models,
ordinal_weightit()
for ordinal regression models,
multinom_weightit()
for multinomial regression models, and
coxph_weightit()
for Cox proportional hazards models.
Several methods are available for computing the parameter variances, including asymptotically correct M-estimation-based variances, robust variances that treat the weights as fixed, and traditional and fractional weighted bootstrap variances. Clustered variances are also supported. See vignette("estimating-effects") for information on how to use these after weighting to estimate treatment effects.
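As intuition for the bootstrap variants, a plain nonparametric bootstrap resamples units with replacement and recomputes the statistic in each resample; in a full weighting analysis, the weights themselves would be re-estimated in every resample. A toy Python sketch for the standard error of a weighted mean (not WeightIt's implementation):

```python
import random

def weighted_mean(x, w):
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)

def bootstrap_se(x, w, n_boot=1000, seed=0):
    # naive nonparametric bootstrap SE: resample (x_i, w_i) pairs with
    # replacement and take the sd of the resampled weighted means
    rng = random.Random(seed)
    n = len(x)
    stats = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        stats.append(weighted_mean([x[i] for i in idx],
                                   [w[i] for i in idx]))
    m = sum(stats) / n_boot
    return (sum((s - m) ** 2 for s in stats) / (n_boot - 1)) ** 0.5
```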
Please submit bug reports, questions, comments, or other issues to https://github.com/ngreifer/WeightIt/issues. If you would like to see your package or method integrated into WeightIt, please contact the author. Fan mail is greatly appreciated.