This package provides functions for conducting frequentist inference on adaptively generated data. These functions produce point estimates and confidence intervals using the methods proposed in Zhan, Ruohan, et al. (2021) and Hadad, Vitor, et al. (2021). The code in this package is directly adapted from the original Python code for those publications, documented at:

- github.com/gsbDBI/adaptive-confidence-intervals
- github.com/gsbDBI/contextual_bandits_evaluation
For illustration, several functions for simulating non-contextual and contextual adaptive experiments using Thompson sampling are also supplied.
The latest release of the package can be installed through CRAN:
install.packages("banditsCI")
The current development version can be installed from source using devtools:
devtools::install_github("Uchicago-pol-methods/banditsCI")
library(banditsCI)
set.seed(60637)
# Generate synthetic data.
data <- generate_bandit_data(xs = as.matrix(iris[,1:4]),
                             y = as.numeric(iris[,5]))
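# Note (inferred from how the object is used below, not from package
# documentation): data$data holds the outcomes ys, the number of arms K, and
# the number of observations A.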
# Run a simulated (non-contextual) experiment.
results <- run_experiment(ys = data$data$ys,
                          floor_start = 1/data$data$K,
                          floor_decay = 0.9,
                          batch_sizes = c(50, 50, 50))
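# Note (again inferred from usage below): results records the simulated run,
# including the arm assignments (ws), observed outcomes (yobs), assignment
# probabilities (probs), and the outcome matrix (ys).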
# Evaluate mean response under treatment arms.
## Balancing weights
balwts <- calculate_balwts(results$ws, results$probs)

## ipw scores
aipw_scores <- aw_scores(
  ws = results$ws,
  yobs = results$yobs,
  K = ncol(results$ys),
  balwts = balwts)
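## For reference (a sketch following Hadad et al. 2021, not a description of
## the package internals): the score for arm w at time t takes the form
##   Gamma_t(w) = (1{W_t = w} / e_t(w)) * (Y_t - muhat_t(w)) + muhat_t(w),
## where e_t(w) is the assignment probability (balwts stores its inverse) and
## muhat_t(w) is an optional plug-in estimate of the arm-w mean response; when
## no such estimate is supplied, as here, the score reduces to the plain
## inverse-probability-weighted outcome.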
## The policies we're evaluating
policy1 <- lapply(1:data$data$K, function(x) {
  pol_mat <- matrix(0, nrow = data$data$A, ncol = data$data$K)
  pol_mat[,x] <- 1
  pol_mat
}
)
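## Each element of policy1 is an A x K matrix that puts probability 1 on a
## single arm for every observation, i.e. the "always assign arm x" policy.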
## Estimation
out_full <- output_estimates(
  policy1 = policy1,
  gammahat = aipw_scores,
  probs_array = results$probs,
  floor_decay = 0.9)
out_full
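The estimates can be turned into Wald-type 95% confidence intervals. The snippet below is a minimal sketch that assumes out_full is a list of matrices, one per weighting scheme, each with estimate and std.error columns; the actual layout may differ, so inspect str(out_full) first.

# Hypothetical post-processing: column names "estimate" and "std.error" are
# assumptions about the structure of out_full, not documented behavior.
str(out_full)

ci_table <- lapply(out_full, function(est) {
  cbind(est,
        lower = est[, "estimate"] - 1.96 * est[, "std.error"],
        upper = est[, "estimate"] + 1.96 * est[, "std.error"])
})
ci_table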
For a more detailed description of how to use the package functions, see the vignette.
We produce estimates under different adaptive weighting schemes in the output_estimates() function. Weighting schemes include:

- uniform = TRUE.
- non_contextual_minvar = TRUE. MinVar function \(\phi(v) = \sqrt{1/v}\).
- contextual_minvar = TRUE. MinVar function \(\phi(v) = \sqrt{1/v}\).
- non_contextual_stablevar = TRUE. StableVar function \(\phi(v) = 1/v\).
- contextual_stablevar = TRUE. StableVar function \(\phi(v) = 1/v\).
- non_contextual_twopoint = TRUE.
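For example, a call restricted to particular schemes might look like the sketch below, which assumes these flags are logical arguments to output_estimates() that switch each scheme on or off, with the remaining arguments as in the example above.

# A sketch, assuming the weighting-scheme flags listed above are logical
# arguments to output_estimates().
out_subset <- output_estimates(
  policy1 = policy1,
  gammahat = aipw_scores,
  probs_array = results$probs,
  floor_decay = 0.9,
  non_contextual_minvar = TRUE,
  non_contextual_stablevar = TRUE)
out_subset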
We illustrate precise replication of simulated experimental results using code from the original papers (we modify the original Python notebooks for the purpose of illustration). The code for the replication is available in the following repositories: