
stackgbm: Stacked Gradient Boosting Machines

A minimalist implementation of model stacking by Wolpert (1992) <doi:10.1016/S0893-6080(05)80023-1> for boosted tree models. A classic two-layer stacking model is implemented: the first layer generates features using gradient boosting trees, and the second layer fits a logistic regression model that takes these features as inputs. Utilities for training the base models and tuning their parameters are provided, so users can easily experiment with different ensemble configurations. The aim is a simple and efficient way to combine multiple gradient boosting models to improve predictive performance and robustness.
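To make the two-layer idea concrete, the following is a minimal sketch of stacking with a single boosted-tree base learner (xgboost) and a logistic regression meta-learner. It illustrates the general technique described above, not the stackgbm API: the data, fold construction, and variable names (x, y, k_folds, oof_pred) are hypothetical, and the package itself combines several boosting implementations rather than one.

library(xgboost)

set.seed(42)

# Simulated binary classification data (illustrative only)
n <- 500
p <- 10
x <- matrix(rnorm(n * p), nrow = n)
y <- rbinom(n, 1, plogis(x[, 1] - x[, 2]))

# Assign observations to cross-validation folds
k_folds <- 5
folds <- sample(rep(seq_len(k_folds), length.out = n))

# First layer: out-of-fold predictions from a gradient boosting model,
# used as a feature for the second layer
oof_pred <- numeric(n)
for (k in seq_len(k_folds)) {
  train_idx <- which(folds != k)
  test_idx <- which(folds == k)
  dtrain <- xgb.DMatrix(data = x[train_idx, ], label = y[train_idx])
  fit <- xgb.train(
    params = list(objective = "binary:logistic", max_depth = 3, eta = 0.1),
    data = dtrain,
    nrounds = 100
  )
  oof_pred[test_idx] <- predict(fit, xgb.DMatrix(x[test_idx, ]))
}

# Second layer: logistic regression on the first-layer feature(s)
meta <- glm(y ~ oof_pred, family = binomial())
summary(meta)

With several base learners, each would contribute its own column of out-of-fold predictions, and the logistic regression would take all of those columns as inputs.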

Version: 0.1.0
Depends: R (≥ 3.5.0)
Imports: pROC, progress, rlang
Suggests: knitr, lightgbm, msaenet, rmarkdown, xgboost
Published: 2024-04-30
Author: Nan Xiao [aut, cre, cph]
Maintainer: Nan Xiao <me at nanx.me>
BugReports: https://github.com/nanxstats/stackgbm/issues
License: MIT + file LICENSE
URL: https://nanx.me/stackgbm/, https://github.com/nanxstats/stackgbm
NeedsCompilation: no
Materials: README NEWS
CRAN checks: stackgbm results

Documentation:

Reference manual: stackgbm.pdf
Vignettes: Model stacking for boosted trees

Downloads:

Package source: stackgbm_0.1.0.tar.gz
Windows binaries: r-devel: stackgbm_0.1.0.zip, r-release: not available, r-oldrel: stackgbm_0.1.0.zip
macOS binaries: r-release (arm64): stackgbm_0.1.0.tgz, r-oldrel (arm64): stackgbm_0.1.0.tgz, r-release (x86_64): stackgbm_0.1.0.tgz, r-oldrel (x86_64): stackgbm_0.1.0.tgz
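For most users, installing the released version directly from CRAN is simpler than downloading these artifacts by hand. A standard snippet (generic R tooling, not specific to this page) is:

# Install the released version from CRAN and load it
install.packages("stackgbm")
library(stackgbm)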

Linking:

Please use the canonical form https://CRAN.R-project.org/package=stackgbm to link to this page.
