As a brief introduction we show how to use mcboost in only 6 lines of code. For our example, we use the data from the sonar binary classification task. We instantiate a MCBoost instance by specifying an auditor_fitter. This auditor_fitter defines the splits into groups in each boosting iteration, based on the obtained residuals. In this example, we choose a tree-based model. Afterwards, we run the $multicalibrate() method on our data to start multi-calibration. We only use the first 200 samples of the sonar data set to train our multi-calibrated model.
library(mlr3)
library(mcboost)
tsk = tsk("sonar")
d = tsk$data(cols = tsk$feature_names)
l = tsk$data(cols = tsk$target_names)[[1]]
mc = MCBoost$new(auditor_fitter = "TreeAuditorFitter")
mc$multicalibrate(d[1:200,], l[1:200])
After the calibration, we use the model to predict on the left-out data (8 observations).
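Using MCBoost's predict_probs() method:
ps = mc$predict_probs(d[201:208, ])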
Internally, mcboost runs the following procedure max_iter times:
1. Predict on the data using the model from the previous iteration (init_predictor in the first iteration).
2. Compute the residuals res = y - y_hat.
3. Split the predictions into num_buckets according to y_hat.
4. Fit the auditor (auditor_fitter, here called c(x)) on the data in each bucket with target variable res.
5. Compute misscal = mean(c(x) * res(x)).
6. If misscal > alpha: for the bucket with the highest misscal, update the model using the prediction c(x). Else: stop the procedure.
A lot more details can be found either in the code or in the corresponding publications.
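To make step 6 concrete, here is a minimal, hedged R sketch of the per-bucket update; the function and argument names are illustrative stand-ins, not the package internals:
# Sketch: the update applied to the predictions of one bucket
update_bucket = function(y_hat, c_x, res, alpha = 1e-4, eta = 1) {
  misscal = mean(c_x * res)   # how strongly the auditor correlates with the residuals
  if (misscal > alpha) {
    y_hat + eta * c_x         # shift the bucket's predictions towards the auditor
  } else {
    y_hat                     # mis-calibration below alpha: leave unchanged / stop
  }
}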
First we download the data and create an mlr3
classification task:
library(data.table)
library(mlr3)
adult_train = fread(
"https://raw.githubusercontent.com/Yorko/mlcourse.ai/master/data/adult_train.csv",
stringsAsFactors = TRUE
)
adult_train$Country = NULL
adult_train$fnlwgt = NULL
train_tsk = TaskClassif$new("adult_train", adult_train, target = "Target")
We removed the features Country and fnlwgt since we expect them to have no predictive power. fnlwgt means "final weight" and aims to allocate similar weights to people with similar demographic characteristics, while Country has 42 distinct levels but 89% of the observations are from the United States.
Then we do basic preprocessing:
library(mlr3pipelines)
pipe = po("collapsefactors", no_collapse_above_prevalence = 0.0006) %>>%
po("fixfactors") %>>%
po("encode") %>>%
po("imputehist")
prep_task = pipe$train(train_tsk)[[1]]
In order to simulate settings where a sensitive feature is not
available, we remove the (dummy encoded) feature Race
from
the training task.
prep_task$set_col_roles(
  c("Race.Amer.Indian.Eskimo", "Race.Asian.Pac.Islander", "Race.Black",
    "Race.Other", "Race.White"),
  remove_from = "feature")
Now we fit a random forest.
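A sketch using mlr3's ranger learner (the num.trees value is an illustrative choice):
library(mlr3learners)
l = lrn("classif.ranger", num.trees = 10L, predict_type = "prob")
l$train(prep_task)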
A simple way to use the predictions from any model in mcboost is to wrap the predict function and provide it as an initial predictor. This can be done with any model from any library. Note that we have to make sure that our init_predictor returns a numeric vector of predictions.
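For example, wrapping the trained learner above (a sketch; which prob column holds the positive class depends on the target's factor level order):
init_predictor = function(data) {
  l$predict_newdata(data)$prob[, 2]
}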
As mcboost requires the data to be provided in
X, y
format (a data.table
or
data.frame
of features and a vector of labels), we create
those two objects.
data = prep_task$data(cols = prep_task$feature_names)
labels = 1 - one_hot(prep_task$data(cols = prep_task$target_names)[[1]])
We use a ridge regularized linear regression model as the auditor.
mc = MCBoost$new(auditor_fitter = "RidgeAuditorFitter", init_predictor = init_predictor)
mc$multicalibrate(data, labels)
The print
method additionally lists the average auditor
values in the different buckets in each iteration:
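mc  # printing the object shows the per-bucket auditor values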
adult_test = fread(
"https://raw.githubusercontent.com/Yorko/mlcourse.ai/master/data/adult_test.csv",
stringsAsFactors = TRUE
)
adult_test$Country = NULL
adult_test$fnlwgt = NULL
# The first row seems to have an error
adult_test = adult_test[Target != "",]
adult_test$Target = droplevels(adult_test$Target)
# Note that we have to convert columns from numeric to integer here:
sdc = train_tsk$feature_types[type == "integer", id]
adult_test[, (sdc) := lapply(.SD, as.integer), .SDcols = sdc]
test_tsk = TaskClassif$new("adult_test", adult_test, target = "Target")
prep_test = pipe$predict(test_tsk)[[1]]
Now, we can again extract X, y
.
test_data = prep_test$data(cols = prep_test$feature_names)
test_labels = 1 - one_hot(prep_test$data(cols = prep_test$target_names)[[1]])
and predict.
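We store the multi-calibrated predictions in prs using predict_probs(); they are needed for the bias analysis below:
prs = mc$predict_probs(test_data)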
The accuracy of the multi-calibrated model is similar to that of the non-calibrated model. But if we look at the bias for the different subpopulations of the feature Race, we can see that the predictions got more calibrated. Note that we did not explicitly give either the initial model or the auditor access to the feature Race.
# Get bias per subgroup for multi-calibrated predictor
adult_test$biasmc = (prs - test_labels)
adult_test[, .(abs(mean(biasmc)), .N), by = .(Race)]
# Get bias per subgroup for initial predictor
adult_test$biasinit = (init_predictor(test_data) - test_labels)
adult_test[, .(abs(mean(biasinit)), .N), by = .(Race)]
We can also obtain the auditor effect after multicalibration. This indicates “how much” each observation has been affected by multi-calibration (on average across iterations).
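The per-observation effect is obtained with auditor_effect():
ae = mc$auditor_effect(test_data)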
We can see that there are a few instances with more pronounced effects, while most observations have only a small effect.
In order to get more insight, we compute quantiles of the more and less affected populations (median effect as cut-point) and analyze the differences.
effect = apply(test_data[ae >= median(ae[ae > 0]),], 2, quantile)
no_effect = apply(test_data[ae < median(ae[ae>0]),], 2, quantile)
difference = apply((effect-no_effect), 2, mean)
difference[difference > 0.1]
There seems to be a difference in some variables like
Education
and Marital_Status
.
We can further analyze the individuals:
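For instance, one illustrative way to inspect the most affected rows:
# The ten observations with the largest auditor effect (illustrative cut-off)
test_data[order(-ae)[1:10]]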
mcboost does not require your model to be an mlr3 model. As an input, mcboost expects a function init_predictor that takes data as input and returns a prediction.
tsk = tsk("sonar")
data = tsk$data()[, Class := as.integer(Class) - 1L]  # encode the target as 0/1
mod = glm(data = data, formula = Class ~ .)           # a simple linear probability model
The init_predictor
could then use the glm
model:
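# A sketch: predict() on the fitted glm returns a numeric vector
init_predictor = function(data) {
  predict(mod, data)
}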
… and we can calibrate this predictor.
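A sketch of the calibration step, splitting data back into features and labels:
d = data[, !"Class"]  # features only
l = data$Class        # 0/1 labels
mc = MCBoost$new(auditor_fitter = "TreeAuditorFitter", init_predictor = init_predictor)
mc$multicalibrate(d, l)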
MCBoost's calibration is often very aggressive and tends to overfit. This section introduces a method to regularize against this overfitting. We use a cross-validated learner that predicts on held-out data during the training phase. This idea is based on Wolpert (1992)'s Stacked Generalization. Other, simpler methods include choosing a smaller step size eta or reducing the number of iterations (max_iter).
As an init_predictor
we again use a ranger
model from mlr3 and construct an init predictor using the convenience
function provided by mcboost
.
learner = lrn("classif.ranger", predict_type = "prob")
learner$train(tsk)
init_predictor = mlr3_init_predictor(learner)
… and we can calibrate this predictor. This time, we use a CVTreeAuditorFitter instead of a TreeAuditorFitter. This allows us to avoid overfitting, similar to the technique called stacked generalization, first described by Wolpert in 1992. Note that this can sometimes take a little longer, since each learner is cross-validated using 3 folds (the default).
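A sketch, reusing the sonar features and labels (d, l) from above:
mc = MCBoost$new(auditor_fitter = "CVTreeAuditorFitter", init_predictor = init_predictor)
mc$multicalibrate(d, l)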
We can also use a fresh chunk of the validation data in each
iteration. mcboost
implements two strategies,
"bootstrap"
and "split"
. While
"split"
simply splits up the data, "bootstrap"
draws a new bootstrap sample of the data in each iteration.
Again, we use a ranger
mlr3 model as our initial
predictor:
learner = lrn("classif.ranger", predict_type = "prob")
learner$train(tsk)
init_predictor = mlr3_init_predictor(learner)
and we can now calibrate:
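# Sketch: select the "bootstrap" strategy via the iter_sampling constructor argument
mc = MCBoost$new(auditor_fitter = "TreeAuditorFitter",
  init_predictor = init_predictor, iter_sampling = "bootstrap")
mc$multicalibrate(d, l)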
For this example, we use the sonar dataset once again:
tsk = tsk("sonar")
data = tsk$data(cols = tsk$feature_names)
labels = tsk$data(cols = tsk$target_names)[[1]]
The Subpop-fitter can be easily adjusted by constructing it from a LearnerAuditorFitter. This allows for using any mlr3 learner; see mlr3's list of available learners.
rf = LearnerAuditorFitter$new(lrn("regr.rpart", minsplit = 10L))
mc = MCBoost$new(auditor_fitter = rf)
mc$multicalibrate(data, labels)
The TreeAuditorFitter and RidgeAuditorFitter are two instantiations of this fitter with pre-defined learners. By providing their character strings, the fitter can be constructed automatically.
On some occasions, instead of using a Learner, we might want to use a fixed set of subgroups. Those can either be defined from the data itself or provided from the outside.
Splitting via the dataset
In order to split the data into groups according to a set of columns, we use a SubpopAuditorFitter together with a list of subpops. Those define the group splits to multi-calibrate on. Each split can either be a character string referencing a binary variable in the data, or a function that, when evaluated on the data, returns a binary vector. In order to showcase both options, we add a binary variable to our data:
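# Add a random binary indicator (purely illustrative)
data[, Bin := sample(c(1, 0), nrow(data), replace = TRUE)]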
rf = SubpopAuditorFitter$new(list(
"Bin",
function(data) {data[["V1"]] > 0.2},
function(data) {data[["V1"]] > 0.2 | data[["V3"]] < 0.29}
))
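Fitting then proceeds as before:
mc = MCBoost$new(auditor_fitter = rf)
mc$multicalibrate(data, labels)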
And we can again apply it to predict on new data:
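# For brevity we predict on the training data itself
mc$predict_probs(data)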
Manually defined masks
If we want to add the splitting from the outside by supplying binary masks for the rows of the data, we can provide manually defined masks. Note that the masks have to correspond to the number of rows in the dataset.
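A sketch using a SubgroupAuditorFitter with two hand-crafted masks matching sonar's 208 rows (the mask patterns are illustrative):
masks = list(
  rep(c(0L, 1L), 104),          # every second row
  rep(c(1L, 1L, 1L, 0L), 52)    # three out of every four rows
)
rf2 = SubgroupAuditorFitter$new(masks)
mc = MCBoost$new(auditor_fitter = rf2)
mc$multicalibrate(data, labels)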
During prediction, we now have to supply a set of masks for the prediction data.
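A sketch, assuming the masks are passed to predict_probs() via a subgroup_masks argument:
test_masks = list(
  rep(c(0L, 1L), 104),
  rep(c(1L, 1L, 1L, 0L), 52)
)
mc$predict_probs(data, subgroup_masks = test_masks)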
When data has missing values or other non-standard columns, we often have to pre-process the data in order to be able to fit models. Those preprocessing steps can be embedded into the initial model and the auditor by using an mlr3pipelines Pipeline. The following code shows a brief example:
tsk = tsk("penguins")
# first we convert to a binary task
row_ids = tsk$data(cols = c("species", "..row_id"))[species %in% c("Adelie", "Gentoo")][["..row_id"]]
tsk$filter(row_ids)$droplevels()
tsk
library("mlr3pipelines")
library("mlr3learners")
# Convert task to X, y
X = tsk$data(cols = tsk$feature_names)
y = tsk$data(cols = tsk$target_names)[[1]]
# Our initial model is a pipeline that imputes missings and encodes categoricals
init_model = as_learner(po("encode") %>>% po("imputehist") %>>%
lrn("classif.glmnet", predict_type = "prob"))
# And we fit it on a subset of the data in order to simulate a poorly performing model.
init_model$train(tsk$clone()$filter(row_ids[c(1:9, 160:170)]))
init_model$predict(tsk)$score()
# We define a pipeline that imputes missings and encodes categoricals
auditor = as_learner(po("encode") %>>% po("imputehist") %>>% lrn("regr.rpart"))
mc = MCBoost$new(auditor_fitter = auditor, init_predictor = init_model)
mc$multicalibrate(X, y)
and we can observe where it improved:
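# Hedged sketch: overall accuracy of the multi-calibrated predictions.
# The 0/1 encoding of the positive class is an assumption and may need flipping.
prs = mc$predict_probs(X)
mean(round(prs) == one_hot(y))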
We abuse the Communities & Crime
dataset in order to
showcase how mcboost
can be used in a regression
setting.
First we download the data and create an mlr3
regression
task:
library(data.table)
library(mlr3oml)
oml = OMLData$new(42730)
data = oml$data
tsk = TaskRegr$new("communities_crime", data, target = "ViolentCrimesPerPop")
Currently, mcboost only works with targets between 0 and 1. Luckily, our target variable's values are already in that range, but if they were not, we could simply scale them to [0, 1] before the analysis.
We again split our task into train and test sets. We do this in mlr3 by creating a 2/3 - 1/3 split using mlr3::partition() and assigning the train ids to the row role "use".
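For example (the split object is reused for the test set below):
split = partition(tsk, ratio = 2 / 3)
tsk$row_roles$use = split$train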
Then we do basic preprocessing. Since we do not have any categorical variables, we only impute NAs using a histogram approach.
library(mlr3pipelines)
pipe = po("imputehist")
prep_task = pipe$train(list(tsk))[[1]]
prep_task$set_col_roles(
  c("racepctblack", "racePctWhite", "racePctAsian", "racePctHisp", "community"),
  remove_from = "feature")
Now we fit our first Learner: a random forest.
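A sketch using mlr3's ranger regression learner (num.trees is an illustrative choice):
library(mlr3learners)
l = lrn("regr.ranger", num.trees = 10L)
l$train(prep_task)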
A simple way to use the predictions from any model in mcboost is to wrap the predict function and provide it as an initial predictor. This can be done with any model from any library. Note that we have to make sure that our init_predictor returns a numeric vector of predictions.
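Wrapping the regression learner's predict_newdata() (a sketch):
init_predictor = function(data) {
  l$predict_newdata(data)$response
}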
As mcboost requires the data to be provided in
X, y
format (a data.table
or
data.frame
of features and a vector of labels), we create
those two objects.
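A sketch, including the multi-calibration step itself so that mc is available for the comparison below:
data = prep_task$data(cols = prep_task$feature_names)
labels = prep_task$data(cols = prep_task$target_names)[[1]]
mc = MCBoost$new(init_predictor = init_predictor)
mc$multicalibrate(data, labels)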
We first create the test task by assigning the test ids to the row
role "use"
, and then use our preprocessing
pipe's
predict function to also impute missing values for
the validation data. Then we again extract features X
and
target y
.
test_task = tsk$clone()
test_task$row_roles$use = split$test
test_task = pipe$predict(list(test_task))[[1]]
test_data = test_task$data(cols = tsk$feature_names)
test_labels = test_task$data(cols = tsk$target_names)[[1]]
and predict.
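Again via predict_probs():
prs = mc$predict_probs(test_data)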
Now we can compute the MSE of the multi-calibrated model
and compare to the non-calibrated version:
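mean((prs - test_labels)^2)                         # MSE: multi-calibrated model
mean((init_predictor(test_data) - test_labels)^2)   # MSE: initial random forest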
But looking at sub-populations, we can see that the predictions got more calibrated. Since we cannot show all subpopulations, we only show the MSE for the feature racepctblack.
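A hedged sketch of such a subgroup comparison (the median split on racepctblack is our illustrative choice):
grp = test_data$racepctblack > median(test_data$racepctblack)
c(multicalibrated = mean((prs[grp] - test_labels[grp])^2),
  initial = mean((init_predictor(test_data)[grp] - test_labels[grp])^2))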