Holdout validation and K-fold cross-validation of Stan programs with the loo package

Bruno Nicenboim

2024-07-03

Introduction

This vignette demonstrates how to do holdout validation and K-fold cross-validation with loo for a Stan program.

Example: Eradication of Roaches using a holdout validation approach

This vignette uses the same example as in the vignettes Using the loo package (version >= 2.0.0) and Avoiding model refits in leave-one-out cross-validation with moment matching.

Coding the Stan model

Here is the Stan code for fitting a Poisson regression model:

# Note: some syntax used in this Stan program requires RStan >= 2.26 (or CmdStanR)
# To use an older version of RStan change the line declaring `y` to: int y[N];
stancode <- "
data {
  int<lower=1> K;
  int<lower=1> N;
  matrix[N,K] x;
  array[N] int y;
  vector[N] offset;

  real beta_prior_scale;
  real alpha_prior_scale;
}
parameters {
  vector[K] beta;
  real intercept;
}
model {
  y ~ poisson(exp(x * beta + intercept + offset));
  beta ~ normal(0,beta_prior_scale);
  intercept ~ normal(0,alpha_prior_scale);
}
generated quantities {
  vector[N] log_lik;
  for (n in 1:N)
    log_lik[n] = poisson_lpmf(y[n] | exp(x[n] * beta + intercept + offset[n]));
}
"

Following the usual approach recommended in Writing Stan programs for use with the loo package, we compute the log-likelihood for each observation in the generated quantities block of the Stan program.

Setup

In addition to loo, we load the rstan package for fitting the model. We will also need the rstanarm package for the data.

library("rstan")
library("loo")
seed <- 9547
set.seed(seed)
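
Because the K-fold approach below refits the model ten times, it can be convenient to run the Markov chains in parallel. This is optional and uses rstan's standard mechanism:

# Optional: run the MCMC chains in parallel across the available cores
options(mc.cores = parallel::detectCores())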

Holdout validation

For this approach, the model is first fit to the “train” data and then evaluated on the held-out “test” data.

Splitting the data between train and test

The data is split into a training set (80% of the data) and a test set (20%):

# Prepare data
data(roaches, package = "rstanarm")
roaches$roach1 <- sqrt(roaches$roach1)
roaches$offset <- log(roaches[,"exposure2"])
# Randomly assign 20% of the observations to the test set:
roaches$test <- 0
roaches$test[sample(nrow(roaches), floor(0.2 * nrow(roaches)))] <- 1
# data to "train" the model
data_train <- list(y = roaches$y[roaches$test == 0],
                   x = as.matrix(roaches[roaches$test == 0,
                                         c("roach1", "treatment", "senior")]),
                   N = nrow(roaches[roaches$test == 0,]),
                   K = 3,
                   offset = roaches$offset[roaches$test == 0],
                   beta_prior_scale = 2.5,
                   alpha_prior_scale = 5.0
                   )
# data to "test" the model
data_test <- list(y = roaches$y[roaches$test == 1],
                   x = as.matrix(roaches[roaches$test == 1,
                                         c("roach1", "treatment", "senior")]),
                   N = nrow(roaches[roaches$test == 1,]),
                   K = 3,
                   offset = roaches$offset[roaches$test == 1],
                   beta_prior_scale = 2.5,
                   alpha_prior_scale = 5.0
                   )

Fitting the model with RStan

Next we fit the model to the “train” data in Stan using the rstan package:

# Compile
stanmodel <- stan_model(model_code = stancode)
# Fit model
fit <- sampling(stanmodel, data = data_train, seed = seed, refresh = 0)

We recompute the generated quantities using the posterior draws conditional on the training data, but we now pass in the held-out data to get the log predictive densities for the test data. Because we are using independent data, the log predictive density coincides with the log likelihood of the test data.

gen_test <- gqs(stanmodel, draws = as.matrix(fit), data = data_test)
log_pd <- extract_log_lik(gen_test)
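
The resulting matrix has one row per posterior draw and one column per held-out observation, which we can verify:

# rows: posterior draws (4000 by default); columns: test observations
dim(log_pd)

[1] 4000   52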

Computing holdout elpd:

Now we evaluate the predictive performance of the model on the test data using elpd().

(elpd_holdout <- elpd(log_pd))

Computed from 4000 by 52 log-likelihood matrix using the generic elpd function

     Estimate    SE
elpd  -1736.7 288.9
ic     3473.5 577.9

When one wants to compare different models, the function loo_compare() can be used to assess the difference in performance.
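For example, if a second model (say, a negative binomial regression) were fit to the same training data and its log predictive densities computed in the same way, the comparison would look as follows; log_pd_2 and elpd_holdout_2 are hypothetical names used for illustration:

# Sketch: log_pd_2 is the log predictive density matrix of a second model,
# computed on the same held-out test data as log_pd
elpd_holdout_2 <- elpd(log_pd_2)
# loo_compare() reports the difference in elpd and its standard error
loo_compare(elpd_holdout, elpd_holdout_2)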

K-fold cross-validation

For this approach, the data is divided into folds; each fold is held out in turn for testing while the rest of the data is used to fit the model (see Vehtari et al., 2017).

Splitting the data into folds

We use the already pre-processed data and divide it into 10 random folds using kfold_split_random():

# Prepare data
roaches$fold <- kfold_split_random(K = 10, N = nrow(roaches))
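
A quick sanity check confirms that the folds are roughly equal in size:

# Each fold should contain about nrow(roaches) / 10 observations
table(roaches$fold)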

Fitting and extracting the log pointwise predictive densities for each fold

We now loop over the 10 folds. In each fold we do the following. First, we fit the model to all the observations except the ones belonging to the left-out fold. Second, we compute the log pointwise predictive densities for the left-out fold. Last, we store the predictive density for the observations of the left-out fold in a matrix. The output of this loop is a matrix of the log pointwise predictive densities of all the observations.

# Prepare a matrix with the number of post-warmup draws (4000 with rstan's
# default of 4 chains and 1000 post-warmup iterations each)
# by the number of observations:
log_pd_kfold <- matrix(nrow = 4000, ncol = nrow(roaches))
# Loop over the folds
for(k in 1:10){
  data_train <- list(y = roaches$y[roaches$fold != k],
                   x = as.matrix(roaches[roaches$fold != k,
                                         c("roach1", "treatment", "senior")]),
                   N = nrow(roaches[roaches$fold != k,]),
                   K = 3,
                   offset = roaches$offset[roaches$fold != k],
                   beta_prior_scale = 2.5,
                   alpha_prior_scale = 5.0
                   )
  data_test <- list(y = roaches$y[roaches$fold == k],
                   x = as.matrix(roaches[roaches$fold == k,
                                         c("roach1", "treatment", "senior")]),
                   N = nrow(roaches[roaches$fold == k,]),
                   K = 3,
                   offset = roaches$offset[roaches$fold == k],
                   beta_prior_scale = 2.5,
                   alpha_prior_scale = 5.0
                   )
  fit <- sampling(stanmodel, data = data_train, seed = seed, refresh = 0)
  gen_test <- gqs(stanmodel, draws = as.matrix(fit), data = data_test)
  log_pd_kfold[, roaches$fold == k] <- extract_log_lik(gen_test)
}
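
As an aside, the duplicated construction of data_train and data_test in the loop could be factored into a small helper function; make_standata below is a hypothetical name used as a sketch:

# Hypothetical helper that builds the Stan data list for a subset of rows
make_standata <- function(d) {
  list(y = d$y,
       x = as.matrix(d[, c("roach1", "treatment", "senior")]),
       N = nrow(d),
       K = 3,
       offset = d$offset,
       beta_prior_scale = 2.5,
       alpha_prior_scale = 5.0)
}
# Inside the loop one would then write:
# data_train <- make_standata(roaches[roaches$fold != k, ])
# data_test  <- make_standata(roaches[roaches$fold == k, ])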

Computing K-fold elpd:

Now we evaluate the predictive performance of the model on the 10 folds using elpd().

(elpd_kfold <- elpd(log_pd_kfold))

Computed from 4000 by 262 log-likelihood matrix using the generic elpd function

     Estimate     SE
elpd  -5552.7  727.9
ic    11105.5 1455.8

If one wants to compare several models (with loo_compare), one should use the same folds for all the different models.
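For instance, a second model could be run through the same loop above, reusing the fold assignment stored in roaches$fold; log_pd_kfold_2 and elpd_kfold_2 are hypothetical names:

# Sketch: log_pd_kfold_2 is filled by the same loop for a second model,
# reusing the identical roaches$fold assignment
elpd_kfold_2 <- elpd(log_pd_kfold_2)
loo_compare(elpd_kfold, elpd_kfold_2)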

References

Gelman, A., and Hill, J. (2007). Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press.

Stan Development Team (2020). RStan: the R interface to Stan. R package version 2.21.1. https://mc-stan.org

Vehtari, A., Gelman, A., and Gabry, J. (2017). Practical Bayesian model evaluation using leave-one-out cross-validation and WAIC. Statistics and Computing, 27(5), 1413–1432. doi:10.1007/s11222-016-9696-4.
