
A sparse-group boosting Tutorial in R

Introduction

sgboost is an implementation of sparse-group boosting [1], to be used in conjunction with the R package mboost. In the fitting process, a formula object defining both group base-learners and individual base-learners is used. Regularization is based on the degrees of freedom of the individual base-learners, \(df(\lambda)\), and of the group base-learners, \(df(\lambda^{(g)})\), such that \(df(\lambda) = \alpha\) and \(df(\lambda^{(g)}) = 1 - \alpha\), where \(\alpha \in [0,1]\) is the mixing parameter. Sparse-group boosting serves as an alternative to the sparse-group lasso, employing boosted Ridge regression.
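For example, with a group of two variables \(x_1, x_2\) and \(\alpha = 0.4\), the resulting formula combines Ridge-penalized bols base-learners whose degrees of freedom carry the regularization. A minimal sketch of the structure that create_formula (introduced below) generates:

# Individual base-learners receive df = alpha,
# the group base-learner receives df = 1 - alpha
y ~ bols(x1, intercept = FALSE, df = 0.4) +
  bols(x2, intercept = FALSE, df = 0.4) +
  bols(x1, x2, intercept = FALSE, df = 0.6)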

Installation

You can install the released version of sgboost from CRAN with:

install.packages("sgboost")

You can install the development version of sgboost from GitHub with:

# install.packages("devtools")
devtools::install_github("FabianObster/sgboost")

Vignettes are not included in the package by default. If you want to include vignettes, then use this modified command:

remotes::install_github(
  "FabianObster/sgboost",
  build_vignettes = TRUE, dependencies = TRUE
)
library(mboost)
library(sgboost)
library(dplyr)
library(ggplot2)

Data setup

We use the same simulated data and model structure as in the vignette of sparsegl (McDonald, Liang, Heinsfeld, et al., 2022) [2]. We randomly generate an \(n \times p\) design matrix \(X\). For the real-valued vector \(y\), the following two settings are used:

\[
y = X\beta^* + \epsilon \quad \text{(linear regression)},
\]
\[
y_i \sim \mathrm{Bernoulli}\!\left(\frac{1}{1 + \exp(-x_i^\top \beta^*)}\right) \quad \text{(logistic regression)},
\]

where the coefficient vector \(\beta^*\) is specified as below, and the white noise \(\epsilon\) follows a standard normal distribution.

set.seed(10)
n <- 100
p <- 200
X <- matrix(data = rnorm(n * p, mean = 0, sd = 1), nrow = n, ncol = p)
beta_star <- c(
  rep(5, 5), c(5, -5, 2, 0, 0), rep(-5, 5),
  c(2, -3, 8, 0, 0), rep(0, (p - 20))
)
groups <- rep(1:(p / 5), each = 5)

# Linear regression model
eps <- rnorm(n, mean = 0, sd = 1)
y <- X %*% beta_star + eps

# Logistic regression model
pr <- 1 / (1 + exp(-X %*% beta_star))
y_binary <- as.factor(rbinom(n, 1, pr))

# Input data.frames
df <- X %>%
  as.data.frame() %>%
  mutate_all(function(x) {
    as.numeric(scale(x))
  }) %>%
  mutate(y = as.numeric(y), y_binary = y_binary)
group_df <- data.frame(
  group_name = groups,
  variable_name = head(colnames(df), -2)
)
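As a quick structural check (not part of the original setup), df should contain the 200 standardized predictors plus the two outcomes, and group_df should assign five variables to each of the 40 groups:

# Expect 100 rows and 202 columns (200 predictors plus y and y_binary)
dim(df)
# Expect each of the 40 group names to occur five times
table(table(group_df$group_name))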

Optimization Problem

In each boosting iteration, all base-learners are fit to the current negative gradient of the loss (squared error for linear regression, logistic loss for logistic regression) using Ridge-penalized least squares. Writing \(u\) for the negative gradient, the candidate fits are

\[
\hat{\beta}^{(g)} = \operatorname*{arg\,min}_{\beta^{(g)}} \; \lVert u - X^{(g)}\beta^{(g)} \rVert_2^2 + \lambda^{(g)} \lVert \beta^{(g)} \rVert_2^2 \quad \text{with } df(\lambda^{(g)}) = 1 - \alpha
\]

for each group \(g\), and

\[
\hat{\beta}_j = \operatorname*{arg\,min}_{\beta_j} \; \lVert u - x_j \beta_j \rVert_2^2 + \lambda_j \beta_j^2 \quad \text{with } df(\lambda_j) = \alpha
\]

for each individual variable \(j\). The base-learner with the lowest residual sum of squares is selected, and its contribution is added to the model.

Workflow

To estimate, tune, and interpret a sparse-group boosting model, the following workflow is advised:

1. Define the model with create_formula.
2. Fit the model with mboost and tune the number of boosting iterations with cvrisk.
3. Interpret and visualize the model with get_varimp, get_coef, plot_varimp, plot_effects, and plot_path.

Define model

We start by creating the formula that describes the sparse-group boosting optimization problem stated above. Three central parameters are passed to create_formula: the mixing parameter alpha, the group structure group_df, and the outcome_name.

sgb_formula_linear <- create_formula(
  alpha = 0.4, group_df = group_df, outcome_name = "y", intercept = FALSE,
  group_name = "group_name", var_name = "variable_name"
)
#> Warning in create_formula(alpha = 0.4, group_df = group_df, outcome_name = "y", : there is a group containing only one variable.
#>             It will be treated as individual variable and as group
sgb_formula_binary <- create_formula(
  alpha = 0.4, group_df = group_df, outcome_name = "y_binary", intercept = FALSE,
  group_name = "group_name", var_name = "variable_name"
)
#> Warning in create_formula(alpha = 0.4, group_df = group_df, outcome_name = "y_binary", : there is a group containing only one variable.
#>             It will be treated as individual variable and as group
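The returned object is an ordinary mboost formula; its right-hand side contains one bols term per individual variable and one per group. A minimal check (not part of the original workflow) to inspect the first generated base-learners:

# Show the first two base-learner terms of the generated formula
head(labels(terms(sgb_formula_linear)), 2)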

Model fitting and tuning

Pass the formula to mboost and set the arguments as appropriate. The main hyperparameters are nu and mstop. For model tuning, the mboost function cvrisk can be used and its result plotted. cvrisk can be slow to run; it can be run in parallel to speed it up (a sketch follows below).

sgb_model_linear <- mboost(
  formula = sgb_formula_linear, data = df,
  control = boost_control(nu = 1, mstop = 600)
)
# cv_sgb_model_linear <- cvrisk(sgb_model_linear,
#                               folds = cv(model.weights(sgb_model_linear),
#                                          type = 'kfold', B = 10))
sgb_model_binary <- mboost(
  formula = sgb_formula_binary, data = df, family = Binomial(),
  control = boost_control(nu = 1, mstop = 600)
)
# cv_sgb_model_binary <- cvrisk(sgb_model_binary,
#                               folds = cv(model.weights(sgb_model_binary),
#                                          type = 'kfold', B = 10))
# mstop(cv_sgb_model_linear)
# mstop(cv_sgb_model_binary)
# plot(cv_sgb_model_linear)
# plot(cv_sgb_model_binary)
sgb_model_linear <- sgb_model_linear[320]
sgb_model_binary <- sgb_model_binary[540]

In this example, the lowest out-of-sample risk is obtained at 320 (linear) and 540 (logistic) boosting iterations.
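As a hedged sketch of the parallel option mentioned above: cvrisk takes a papply argument (a parallel apply function, mclapply by default), so the cross-validation folds can be distributed over several cores. The wrapper and the core count here are illustrative choices, not package defaults:

# Distribute the 10 CV folds over 4 cores (mclapply forks; on Windows it
# falls back to serial execution, so use a cluster-based apply there instead)
cv_sgb_model_linear <- cvrisk(
  sgb_model_linear,
  folds = cv(model.weights(sgb_model_linear), type = "kfold", B = 10),
  papply = function(X, FUN, ...) parallel::mclapply(X, FUN, ..., mc.cores = 4)
)
mstop(cv_sgb_model_linear)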

Interpreting and plotting a sparse-group boosting model

sgboost provides functions to help understand sparse-group boosting models. They reflect the fact that the final estimate for a specific variable in the dataset can be attributed to group base-learners as well as individual base-learners, depending on the boosting iterations in which the variable was updated.

Variable importance

A good starting point for understanding a sparse-group boosting model is the variable importance. In the context of boosting, the variable importance can be defined as the relative contribution of each predictor to the overall reduction of the loss function (negative log-likelihood). get_varimp returns the variable importance of each base-learner (predictor) selected throughout the boosting process.

Note that if both a group and an individual variable from that group - call it \(x_1\) out of \(x_1, x_2, \ldots, x_p\) - are selected, both base-learners have an associated variable importance as defined above. This allows us to differentiate between the individual contribution of \(x_1\) as its own variable and the contribution of the group that \(x_1\) is part of. It is not possible to compute an aggregated variable importance for \(x_1\), as it is unclear how much \(x_1\) contributes to the group. However, the aggregated coefficients can be computed with get_coef. The aggregated importance of all groups versus all individual variables is also returned in a separate data.frame.

With plot_varimp one can visualize the variable importance as a barplot. The maximum number of predictors to be plotted can be set through n_predictors, or a minimum variable importance that a predictor must have through prop; both parameters reduce the number of printed entries. Note that in this case the relative importance of groups shown in the legend is based only on the plotted variables, not on the removed ones. To add information about the direction of effect sizes, one could add arrows behind the bars [3]; for groups, the aggregated coefficients from get_coef can be used (see the sketch after the output below).

get_varimp(sgb_model = sgb_model_linear)$varimp %>% slice(1:10)
#> # A tibble: 10 × 6
#>    reduction blearner                predictor selfreq type  relative_importance
#>        <dbl> <chr>                   <chr>       <dbl> <chr>               <dbl>
#>  1    137.   bols(V1, V2, V3, V4, V… V1, V2, … 0.162   group             0.300  
#>  2    132.   bols(V18, intercept = … V18       0.0125  indi…             0.291  
#>  3    106.   bols(V11, V12, V13, V1… V11, V12… 0.178   group             0.232  
#>  4     19.1  bols(V7, intercept = F… V7        0.0625  indi…             0.0419 
#>  5     18.1  bols(V6, intercept = F… V6        0.0656  indi…             0.0397 
#>  6     16.1  bols(V14, intercept = … V14       0.00312 indi…             0.0354 
#>  7      6.87 bols(V17, intercept = … V17       0.05    indi…             0.0151 
#>  8      6.27 bols(V16, intercept = … V16       0.025   indi…             0.0138 
#>  9      4.27 bols(V149, intercept =… V149      0.025   indi…             0.00939
#> 10      3.54 bols(V8, intercept = F… V8        0.0312  indi…             0.00779
get_varimp(sgb_model = sgb_model_linear)$group_importance
#> # A tibble: 2 × 2
#>   type       importance
#>   <chr>           <dbl>
#> 1 group           0.533
#> 2 individual      0.467
plot_varimp(sgb_model = sgb_model_linear, n_predictors = 15)
#> The number characters of some predictors were reduced.
#>             Adjust with max_char_length
#> 50 predictors were removed. Use prop or n_predictors to change
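To recover the direction information for such annotations, the sign of the aggregated coefficients from get_coef (described in the next section) can be joined to the importance table. A minimal sketch using only the columns shown above:

# Direction of the aggregated effect per variable, e.g. to annotate the bars
get_coef(sgb_model = sgb_model_linear)$aggregate %>%
  mutate(direction = ifelse(effect > 0, "positive", "negative")) %>%
  select(variable, effect, direction) %>%
  slice(1:5)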

Resulting model coefficients

The resulting coefficients can be retrieved with get_coef. In a sparse-group boosting model, a variable in the dataset can be selected as an individual variable or as part of a group, so there can be two associated effect sizes for the same variable. get_coef aggregates both and returns them in a data.frame sorted by the size of the effect ('effect'); the raw, per-base-learner coefficients are returned as well.

get_coef(sgb_model = sgb_model_linear)$aggregate %>% slice(1:10)
#> # A tibble: 10 × 4
#>    variable effect blearner                                            predictor
#>    <chr>     <dbl> <chr>                                               <chr>    
#>  1 V18        7.80 bols(V18, intercept = FALSE, df = 0.4)              V18      
#>  2 V15       -5.87 bols(V11, V12, V13, V14, V15, intercept = FALSE, d… V11, V12…
#>  3 V5         5.42 bols(V1, V2, V3, V4, V5, intercept = FALSE, df = 0… V1, V2, …
#>  4 V4         5.15 bols(V1, V2, V3, V4, V5, intercept = FALSE, df = 0… V1, V2, …
#>  5 V11       -4.99 bols(V11, V12, V13, V14, V15, intercept = FALSE, d… V11, V12…
#>  6 V13       -4.97 bols(V11, V12, V13, V14, V15, intercept = FALSE, d… V11, V12…
#>  7 V14       -4.93 bols(V11, V12, V13, V14, V15, intercept = FALSE, d… V11, V12…
#>  8 V6         4.92 bols(V6, intercept = FALSE, df = 0.4); bols(V6, V7… V6; V6, …
#>  9 V7        -4.89 bols(V7, intercept = FALSE, df = 0.4); bols(V6, V7… V7; V6, …
#> 10 V2         4.87 bols(V1, V2, V3, V4, V5, intercept = FALSE, df = 0… V1, V2, …
get_coef(sgb_model = sgb_model_linear)$raw %>% slice(1:10)
#> # A tibble: 10 × 5
#>    variable effect blearner                                      predictor type 
#>    <chr>     <dbl> <chr>                                         <chr>     <chr>
#>  1 V18        7.80 bols(V18, intercept = FALSE, df = 0.4)        V18       indi…
#>  2 V5         5.42 bols(V1, V2, V3, V4, V5, intercept = FALSE, … V1, V2, … group
#>  3 V15       -5.39 bols(V11, V12, V13, V14, V15, intercept = FA… V11, V12… group
#>  4 V4         5.08 bols(V1, V2, V3, V4, V5, intercept = FALSE, … V1, V2, … group
#>  5 V11       -4.94 bols(V11, V12, V13, V14, V15, intercept = FA… V11, V12… group
#>  6 V6         4.92 bols(V6, intercept = FALSE, df = 0.4)         V6        indi…
#>  7 V7        -4.88 bols(V7, intercept = FALSE, df = 0.4)         V7        indi…
#>  8 V2         4.84 bols(V1, V2, V3, V4, V5, intercept = FALSE, … V1, V2, … group
#>  9 V3         4.49 bols(V1, V2, V3, V4, V5, intercept = FALSE, … V1, V2, … group
#> 10 V13       -4.48 bols(V11, V12, V13, V14, V15, intercept = FA… V11, V12… group
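Variables selected both individually and as part of their group, such as V6 above, have two raw entries, whose base-learners are concatenated with ';' in the aggregate table. Filtering both data.frames makes this explicit (an illustrative check):

# V6 appears once in the aggregate table, once per base-learner in the raw table
get_coef(sgb_model = sgb_model_linear)$aggregate %>% filter(variable == "V6")
get_coef(sgb_model = sgb_model_linear)$raw %>% filter(variable == "V6")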

Plotting effect sizes and importance

With plot_effects we can plot the effect sizes of the sparse-group boosting model in relation to the relative importance, to get an overall picture of the model. Through the parameter 'plot_type' one can choose the type of visualization: 'radar' refers to a radar plot using polar coordinates, where the angle is relative to the cumulative relative importance of the predictors and the radius is proportional to the effect size; 'clock' does the same as 'radar' but uses clock coordinates instead of polar coordinates; 'scatter' uses the effect size as y-coordinate and the cumulative relative importance as x-coordinate in a classical scatter plot.

plot_effects(sgb_model = sgb_model_binary, n_predictors = 10, base_size = 10)
#> 31 predictors were removed. Use prop or n_predictors to change

plot_effects(sgb_model = sgb_model_binary, n_predictors = 10, plot_type = "clock", base_size = 10)
#> 31 predictors were removed. Use prop or n_predictors to change

plot_effects(sgb_model = sgb_model_binary, n_predictors = 10, plot_type = "scatter", base_size = 10)
#> 31 predictors were removed. Use prop or n_predictors to change

Coefficient path

plot_path calls get_coef_path to retrieve the aggregated coefficients from an mboost object for each boosting iteration and plots them, indicating whether a coefficient was updated by an individual or a group base-learner.

plot_path(sgb_model = sgb_model_linear[100])

plot_path(sgb_model = sgb_model_binary[100])


  [1] Obster, F. & Heumann, C. (2024). Sparse-group boosting – Unbiased group and variable selection. https://doi.org/10.48550/arXiv.2206.06344

  [2] McDonald, D., Liang, X., Heinsfeld, A., Cohen, A., Yang, Y., Zou, H., Friedman, J., Hastie, T., Tibshirani, R., Narasimhan, B., Tay, K., Simon, N., Qian, J. & Yang, J. (2022). Getting started with sparsegl. https://CRAN.R-project.org/package=sparsegl

  [3] Obster, F., Bohle, H. & Pechan, P.M. (2024). The financial well-being of fruit farmers in Chile and Tunisia depends more on social and geographical factors than on climate change. Communications Earth & Environment 5, 16. https://doi.org/10.1038/s43247-023-01128-2
