Cross-validation with a single algorithm

When users choose to estimate and evaluate an ITR under cross-validation, the package implements Algorithm 1 from Imai and Li (2023). For more information about Algorithm 1, please refer to this page.
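
Roughly speaking, Algorithm 1 estimates the ITR on K - 1 folds, evaluates it on the held-out fold, and repeats this for every fold before aggregating the results. Purely as an illustration of those fold mechanics (not the package's internal code, and not specific to ITRs), a generic K-fold loop in base R, with lm() and the built-in mtcars data standing in for the estimator and the evaluation metric, looks like this:

# Generic K-fold cross-validation sketch (stand-in model and metric, NOT
# evalITR's Algorithm 1): fit on K - 1 folds, evaluate on the held-out fold,
# then aggregate the fold-level metrics.
set.seed(2021)
K <- 3
folds <- sample(rep(seq_len(K), length.out = nrow(mtcars)))
fold_mse <- vapply(seq_len(K), function(k) {
  train <- mtcars[folds != k, ]                      # folds used for fitting
  test  <- mtcars[folds == k, ]                      # held-out fold
  fit   <- lm(mpg ~ wt + hp, data = train)           # stand-in estimator
  mean((test$mpg - predict(fit, newdata = test))^2)  # stand-in evaluation metric
}, numeric(1))
mean(fold_mse)  # metric averaged across the K folds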

Instead of specifying the split_ratio argument, we choose the number of folds (n_folds). We present an example of estimating an ITR with 3-fold cross-validation. In practice, we recommend using 10 folds to obtain more stable model performance.

| Input | R package input | Description |
|---|---|---|
| Data \(\mathbf{Z}=\left\{\mathbf{X}_i, T_i, Y_i\right\}_{i=1}^n\) | treatment = treatment, form = user_formula, data = star_data | treatment is a character string specifying the treatment variable in the data; form is a formula specifying the outcome and covariates; data is a data frame containing the variables |
| Machine learning algorithm \(F\) | algorithms = c("causal_forest") | a character vector specifying the ML algorithms to be used |
| Evaluation metric \(\tau_f\) | PAPE, PAPD, AUPEC, GATE | computed by default |
| Number of folds \(K\) | n_folds = 3 | a numeric value indicating the number of folds used for cross-validation |
| Budget constraint | budget = 0.2 | a numeric value specifying the maximum proportion of the population that can be treated under the budget constraint |
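
The call below assumes that treatment, user_formula, and star_data were already created in the vignette's data-preparation step, which is not shown in this section. A hypothetical stand-in setup with placeholder variable names could look like the following; the actual analysis uses the STAR experiment data rather than synthetic values.

# Hypothetical placeholders for objects built in the data-preparation step
# (synthetic values and made-up variable names, NOT the real STAR data).
set.seed(2021)
star_data <- data.frame(
  outcome_var = rnorm(500),            # placeholder outcome
  treatment   = rbinom(500, 1, 0.5),   # binary treatment indicator
  covariate1  = rnorm(500),            # placeholder covariates
  covariate2  = rnorm(500)
)
treatment    <- "treatment"                             # treatment column name
user_formula <- outcome_var ~ covariate1 + covariate2   # outcome ~ covariates
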
library(evalITR)
# estimate ITR 
set.seed(2021)
fit_cv <- estimate_itr(
               treatment = treatment,
               form = user_formula,
               data = star_data,
               algorithms = c("causal_forest"),
               budget = 0.2,
               n_folds = 3)
#> Evaluate ITR with cross-validation ...

The output will be an object that includes the estimated evaluation metrics \(\hat{\tau}_F\) and their estimated variances for the different metrics (PAPE, PAPD, AUPEC, GATE). Note that PAPD compares the ITRs produced by two different algorithms, so it cannot be computed when only a single algorithm is specified.

# evaluate ITR 
est_cv <- evaluate_itr(fit_cv)
#> Cannot compute PAPDp
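
# A quick, generic look at the top-level structure of the returned object can
# be taken with base R (output not shown here):
# str(est_cv, max.level = 1)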

# summarize estimates
summary(est_cv)
#> -- PAPE ------------------------------------------------------------------------
#>   estimate std.deviation     algorithm statistic p.value
#> 1     0.49          0.65 causal_forest      0.76    0.45
#> 
#> -- PAPEp -----------------------------------------------------------------------
#>   estimate std.deviation     algorithm statistic p.value
#> 1        3          0.77 causal_forest       3.9 8.8e-05
#> 
#> -- PAPDp -----------------------------------------------------------------------
#> Cannot compute PAPDp
#> 
#> -- AUPEC -----------------------------------------------------------------------
#>   estimate std.deviation     algorithm statistic p.value
#> 1      1.3           1.6 causal_forest       0.8    0.42
#> 
#> -- GATE ------------------------------------------------------------------------
#>   estimate std.deviation     algorithm group statistic p.value upper lower
#> 1      -56            59 causal_forest     1     -0.95    0.34    59  -171
#> 2       32            67 causal_forest     2      0.48    0.63   163   -99
#> 3       16            59 causal_forest     3      0.27    0.79   131   -99
#> 4       10            76 causal_forest     4      0.14    0.89   159  -138
#> 5       16            98 causal_forest     5      0.16    0.87   209  -177
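
As a quick sanity check on how the printed columns relate (an illustration only, not a description of the package internals), each statistic is consistent with estimate / std.deviation, with a two-sided p-value from a normal approximation. For the PAPE row above:

# Illustration: reproduce the PAPE row's statistic and p.value from the
# printed estimate and std.deviation under a two-sided normal approximation.
est  <- 0.49                    # PAPE estimate printed above
sdev <- 0.65                    # PAPE std.deviation printed above
z    <- est / sdev              # ~0.75, close to the printed statistic (0.76)
p    <- 2 * pnorm(-abs(z))      # ~0.45, matching the printed p.value
c(statistic = z, p.value = p)   # small differences reflect rounding in the print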
