
Oblique Decision Random Forest for Classification and Regression

ODRF


ODRF implements the well-known Oblique Decision Tree (ODT) and ODT-based Random Forest (ODRF), which uses linear combinations of predictors as partitioning variables for both traditional CART and Random Forest. A number of modifications have been adopted in the implementation; some new functions are also provided.
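To make "linear combinations of predictors as partitioning variables" concrete, the following standalone base-R sketch (independent of the package; the `gini_split` helper is ours, not part of ODRF) compares an axis-aligned split on a single predictor with an oblique split on a linear combination:

```r
# Two classes separated by a diagonal boundary: no single predictor
# splits them cleanly, but the linear combination x1 + x2 does.
set.seed(1)
x <- matrix(rnorm(200), ncol = 2)
y <- ifelse(x[, 1] + x[, 2] > 0, "A", "B")

# Gini impurity of a binary partition given by the logical vector 'left'
gini_split <- function(left, y) {
  gini <- function(lab) 1 - sum((table(lab) / length(lab))^2)
  n <- length(y)
  (sum(left) / n) * gini(y[left]) + (sum(!left) / n) * gini(y[!left])
}

# Axis-aligned split: threshold the first predictor alone
g_axis <- gini_split(x[, 1] <= 0, y)

# Oblique split: threshold the linear combination with weights c(1, 1)
g_oblique <- gini_split(as.vector(x %*% c(1, 1)) <= 0, y)

g_oblique < g_axis  # TRUE: the oblique split separates the classes perfectly
```

Here the oblique split attains zero impurity because the class boundary is itself a linear combination of the predictors, which is exactly the situation where oblique trees outperform axis-aligned CART.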

Overview

The ODRF R package provides the following main functions, all demonstrated below: ODT() and ODRF() for fitting oblique trees and forests, predict() for prediction, online() for updating a fitted model with new batches of data, prune() for pruning, and plot() for visualizing a fitted tree.

ODRF also allows users to define their own functions to find the projections at each node, which is essential to the performance of the forests. A comprehensive comparison with other oblique decision trees and forests, along with further details, is available in vignette("ODRF").
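The exact interface that ODRF expects for a user-defined projection function is documented in vignette("ODRF"); as a purely illustrative sketch (the function name and signature below are hypothetical, and only base R is used), such a function takes the predictors reaching a node and returns a matrix whose columns are candidate projection directions:

```r
# Hypothetical sketch: given the predictor matrix at a node, return a
# weight matrix whose columns are candidate linear combinations to split
# on. Here we simply take the leading principal components; the real
# interface required by ODRF is described in vignette("ODRF").
make_projections <- function(X, k = 2) {
  pc <- prcomp(X, center = TRUE, scale. = FALSE)
  pc$rotation[, seq_len(min(k, ncol(pc$rotation))), drop = FALSE]
}

set.seed(42)
X <- matrix(rnorm(50 * 4), 50, 4)
W <- make_projections(X, k = 2)  # 4 predictors x 2 candidate directions
Z <- X %*% W                     # projected partitioning variables, 50 x 2
```

Each column of Z would then be treated like an ordinary predictor when searching for the best split, which is how an oblique tree reduces the node-level problem back to one-dimensional thresholding.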

Installation

You can install the development version of ODRF from GitHub with:

# install.packages("devtools")
devtools::install_github("liuyu-star/ODRF")

Usage

We show how to use the ODRF package with examples.

Classification and regression using ODT and ODRF

Examples of classification and regression using ODRF and ODT are as follows.

library(ODRF)
#> Loading required package: partykit
#> Warning: package 'partykit' was built under R version 4.2.3
#> Loading required package: grid
#> Loading required package: libcoin
#> Loading required package: mvtnorm
data(seeds, package = "ODRF")
set.seed(12)
train <- sample(1:209, 150)
seeds_train <- data.frame(seeds[train, ])
seeds_test <- data.frame(seeds[-train, ])
forest <- ODRF(varieties_of_wheat ~ ., seeds_train, split = "gini", 
  parallel = FALSE)
pred <- predict(forest, seeds_test[, -8])
(e.forest <- mean(pred != seeds_test[, 8]))
#> [1] 0.01694915
data(body_fat, package = "ODRF")
train <- sample(1:252, 200)
bodyfat_train <- data.frame(body_fat[train, ])
bodyfat_test <- data.frame(body_fat[-train, ])
tree <- ODT(Density ~ ., bodyfat_train, split = "mse")
pred <- predict(tree, bodyfat_test[, -1])
(e.tree <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 4.248171e-05

In the following example, suppose the training data arrive in two batches. The first batch is used to train an ODT and an ODRF, and the second batch is then used to update the fitted models with online(). As the forest example shows, the error after the online update is smaller than that of the model trained on the first batch alone.

Update an existing ODT and ODRF with online()

set.seed(17)
index <- sample(nrow(seeds_train), floor(nrow(seeds_train) / 2))
forest1 <- ODRF(varieties_of_wheat ~ ., seeds_train[index, ],
  split = "gini", parallel = FALSE)
pred <- predict(forest1, seeds_test[, -8])
(e.forest.1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.05084746
forest2 <- online(forest1, seeds_train[-index, -8], seeds_train[-index, 8])
pred <- predict(forest2, seeds_test[, -8])
(e.forest.online <- mean(pred != seeds_test[, 8]))
#> [1] 0.03389831
index <- seq(floor(nrow(bodyfat_train) / 2))
tree1 <- ODT(Density ~ ., bodyfat_train[index, ], split = "mse")
pred <- predict(tree1, bodyfat_test[, -1])
(e.tree.1 <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 5.563611e-05
tree2 <- online(tree1, bodyfat_train[-index, -1], bodyfat_train[-index, 1])
pred <- predict(tree2, bodyfat_test[, -1])
(e.tree.online <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 5.567618e-05

Starting from the terminal nodes, prune judges whether removing a split reduces the error on new data, and removes the split if it does. For ODRF, if the argument useOOB = TRUE, the out-of-bag observations are used for pruning instead of a separate data set. Examples are as follows.

set.seed(4)
bodyfat_train <- rbind(as.matrix(bodyfat_train), matrix(rnorm(3000 * 5), 5 * 200, 15))
seeds_train <- rbind(as.matrix(seeds_train), matrix(rnorm(1200 * 5), 5 * 150, 8))
bodyfat_train[-seq(200), 1] <- sample(bodyfat_train[seq(200), 1], 5 * 200,
  replace = TRUE)
seeds_train[-seq(150), 8] <- sample(seeds_train[seq(150), 8], 5 * 150,
  replace = TRUE)
index <- sample(nrow(seeds_train), floor(nrow(seeds_train) / 2))
forest1 <- ODRF(seeds_train[index, -8], seeds_train[index, 8],
  split = "gini", parallel = FALSE)
pred <- predict(forest1, seeds_test[, -8])
(e.forest.1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.08474576
forest2 <- prune(forest1, seeds_train[-index, -8], seeds_train[-index, 8], 
  useOOB = FALSE)
pred <- predict(forest2, seeds_test[, -8])
(e.forest.prune1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.06779661
forest3 <- prune(forest1, seeds_train[index, -8], seeds_train[index, 8])
pred <- predict(forest3, seeds_test[, -8])
(e.forest.prune2 <- mean(pred != seeds_test[, 8]))
#> [1] 0.06779661
index <- sample(nrow(bodyfat_train), floor(nrow(bodyfat_train) / 2))
tree1 <- ODT(bodyfat_train[index, -1], bodyfat_train[index, 1], split = "mse")
pred <- predict(tree1, bodyfat_test[, -1])
(e.tree.1 <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 9.44763e-05
tree2 <- prune(tree1, bodyfat_train[-index, -1], bodyfat_train[-index, 1])
pred <- predict(tree2, bodyfat_test[, -1])
(e.tree.prune <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 7.59766e-05

Note that prune does not always improve performance: when the training set contains too few observations, the fitted tree is already simple and there is little to prune away. This is why the training sets above are enlarged with random noise, which makes the effect of prune visible.
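The bottom-up pruning criterion described above can be illustrated in isolation with a standalone base-R sketch (not the package's implementation): a split is kept only if it lowers the validation error relative to collapsing its two children into a single leaf.

```r
# Toy validation data reaching one internal node of a classification tree.
val_y    <- c("A", "A", "B", "A", "B")          # true labels
val_left <- c(TRUE, TRUE, FALSE, TRUE, FALSE)   # side each case falls on
leaf_pred <- c(left = "A", right = "B")         # children's majority labels

# Error if the split is kept: predict with the child leaves.
err_split <- mean(ifelse(val_left, leaf_pred["left"], leaf_pred["right"]) != val_y)

# Error if the split is collapsed: predict the node's overall majority label.
err_collapse <- mean(names(which.max(table(val_y))) != val_y)

keep_split <- err_split < err_collapse
keep_split  # TRUE: the split reduces validation error, so it is retained
```

Applying this test from the terminal nodes upward is what lets pruning remove splits that only fit noise while keeping those that generalize.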

Plot the tree structure of ODT

plot(tree)

Getting help

If you encounter a clear bug, please file an issue with a minimal reproducible example on GitHub.


Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
