ODRF implements the well-known Oblique Decision Tree (ODT) and the ODT-based Random Forest (ODRF), which use linear combinations of predictors as partitioning variables, extending traditional CART and Random Forest. A number of modifications have been adopted in the implementation, and some new functions are provided.
The ODRF R package consists of the following main functions:

ODT(): classification and regression using an ODT in which each node is split by a linear combination of predictors.
ODRF(): classification and regression implemented by the ODRF. It is an extension of random forest based on ODT() and includes random forest as a special case.
online(): online training to update an existing ODT or ODRF with new data sets.
prune(): prune an ODT from bottom to top with validation data, based on prediction error.
print(), predict() and plot(): base R functions applied to objects of class ODT and ODRF.

ODRF allows users to define their own functions to find the projections at each node, which is essential to the performance of the forests. We also provide a complete comparison with and analysis of other ODT and ODRF implementations; more details are available in vignette("ODRF").
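For instance, an alternative projection generator can be selected through the NodeRotateFun argument (the argument name appears in the package's warnings below; the option "RotMatRand" is an assumption taken from the package documentation, so check ?ODT for the accepted values):

library(ODRF)
data(iris, package = "datasets")
# Hypothetical: grow an ODT whose node projections come from random rotations.
# "RotMatRand" is assumed to be a built-in option; see ?ODT for the exact list.
tree_rand <- ODT(Species ~ ., data = iris, NodeRotateFun = "RotMatRand")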
You can install the development version of ODRF from GitHub with:
# install.packages("devtools")
devtools::install_github("liuyu-star/ODRF")
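Alternatively, if a released version is available on CRAN (an assumption; check CRAN first), the standard installation should work:

install.packages("ODRF")  # assuming a CRAN release exists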
The following examples show how to use the ODRF package for classification and regression with ODRF() and ODT().
library(ODRF)
#> Loading required package: partykit
#> Warning: package 'partykit' was built under R version 4.2.3
#> Loading required package: grid
#> Loading required package: libcoin
#> Loading required package: mvtnorm
data(seeds, package = "ODRF")
set.seed(12)
train <- sample(1:209, 150)
seeds_train <- data.frame(seeds[train, ])
seeds_test <- data.frame(seeds[-train, ])
forest <- ODRF(varieties_of_wheat ~ ., seeds_train, split = "gini",
  parallel = FALSE)
pred <- predict(forest, seeds_test[, -8])
(e.forest <- mean(pred != seeds_test[, 8]))
#> [1] 0.01694915
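To see which classes the forest confuses, a cross-tabulation of predictions against the observed labels can be added (our addition; table() is base R):

# Cross-tabulate predicted versus observed wheat varieties on the test set
table(predicted = pred, observed = seeds_test[, 8])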
data(body_fat, package = "ODRF")
train <- sample(1:252, 200)
bodyfat_train <- data.frame(body_fat[train, ])
bodyfat_test <- data.frame(body_fat[-train, ])
tree <- ODT(Density ~ ., bodyfat_train, split = "mse")
pred <- predict(tree, bodyfat_test[, -1])
(e.tree <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 4.248171e-05
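As a quick sanity check (our addition, base R only), the tree's test MSE can be compared with a trivial baseline that always predicts the mean training Density:

# MSE of a constant prediction: the mean Density of the training data
baseline <- mean((mean(bodyfat_train[, 1]) - bodyfat_test[, 1])^2)
c(tree = e.tree, baseline = baseline)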
In the following example, suppose the training data arrive in two batches. The first batch is used to train the ODT and ODRF, and the second batch is used to update the models with online(). The updated forest's error is noticeably smaller than that of the forest trained on the first batch alone.
Update the existing ODT and ODRF with online().
set.seed(17)
index <- sample(nrow(seeds_train), floor(nrow(seeds_train) / 2))
forest1 <- ODRF(varieties_of_wheat ~ ., seeds_train[index, ],
  split = "gini", parallel = FALSE)
pred <- predict(forest1, seeds_test[, -8])
(e.forest.1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.05084746
forest2 <- online(forest1, seeds_train[-index, -8], seeds_train[-index, 8])
pred <- predict(forest2, seeds_test[, -8])
(e.forest.online <- mean(pred != seeds_test[, 8]))
#> [1] 0.03389831
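online() is not limited to a single update. The sketch below feeds the second batch in two smaller chunks; it assumes (our assumption, not documented here) that the object returned by online() can itself be updated again:

# Hypothetical streaming loop: update with the held-out rows in two chunks
rest <- setdiff(seq_len(nrow(seeds_train)), index)
first_half <- rest[seq_len(floor(length(rest) / 2))]
second_half <- setdiff(rest, first_half)
forest_s <- online(forest1, seeds_train[first_half, -8], seeds_train[first_half, 8])
forest_s <- online(forest_s, seeds_train[second_half, -8], seeds_train[second_half, 8])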
index <- seq(floor(nrow(bodyfat_train) / 2))
tree1 <- ODT(Density ~ ., bodyfat_train[index, ], split = "mse")
pred <- predict(tree1, bodyfat_test[, -1])
(e.tree.1 <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 5.563611e-05
tree2 <- online(tree1, bodyfat_train[-index, -1], bodyfat_train[-index, 1])
pred <- predict(tree2, bodyfat_test[, -1])
(e.tree.online <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 5.567618e-05
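For a side-by-side look at the two regression errors (our addition, base R only):

# Test-set MSE before and after the online update
data.frame(model = c("tree1", "tree2"), mse = c(e.tree.1, e.tree.online))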
prune() works from the bottom up: starting from the last leaf nodes, it removes a split only if doing so reduces the prediction error on new data. For ODRF, if the argument useOOB = TRUE, the 'out-of-bag' samples are used for pruning. Examples are as follows.
set.seed(4)
bodyfat_train <- rbind(as.matrix(bodyfat_train), matrix(rnorm(3000 * 5), 5 * 200, 15))
seeds_train <- rbind(as.matrix(seeds_train), matrix(rnorm(1200 * 5), 5 * 150, 8))
bodyfat_train[-seq(200), 1] <- sample(bodyfat_train[seq(200), 1], 5 * 200,
  replace = TRUE)
seeds_train[-seq(150), 8] <- sample(seeds_train[seq(150), 8], 5 * 150,
  replace = TRUE)
index <- sample(nrow(seeds_train), floor(nrow(seeds_train) / 2))
forest1 <- ODRF(seeds_train[index, -8], seeds_train[index, 8],
  split = "gini", parallel = FALSE)
pred <- predict(forest1, seeds_test[, -8])
(e.forest.1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.08474576
forest2 <- prune(forest1, seeds_train[-index, -8], seeds_train[-index, 8],
  useOOB = FALSE)
pred <- predict(forest2, seeds_test[, -8])
(e.forest.prune1 <- mean(pred != seeds_test[, 8]))
#> [1] 0.06779661
forest3 <- prune(forest1, seeds_train[index, -8], seeds_train[index, 8])
pred <- predict(forest3, seeds_test[, -8])
(e.forest.prune2 <- mean(pred != seeds_test[, 8]))
#> [1] 0.06779661
index <- sample(nrow(bodyfat_train), floor(nrow(bodyfat_train) / 2))
tree1 <- ODT(bodyfat_train[index, -1], bodyfat_train[index, 1], split = "mse")
pred <- predict(tree1, bodyfat_test[, -1])
(e.tree.1 <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 9.44763e-05
tree2 <- prune(tree1, bodyfat_train[-index, -1], bodyfat_train[-index, 1])
pred <- predict(tree2, bodyfat_test[, -1])
(e.tree.prune <- mean((pred - bodyfat_test[, 1])^2))
#> [1] 7.59766e-05
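To compare the three forests directly (our addition; the labels assume prune()'s default is useOOB = TRUE, as described above):

# Test-set error of the unpruned forest and the two pruned variants
data.frame(
  model = c("unpruned", "pruned (validation data)", "pruned (OOB)"),
  error = c(e.forest.1, e.forest.prune1, e.forest.prune2)
)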
Note that prune() does not always improve performance when the number of observations in the training set is too small to support more than a simple tree structure. Therefore, we expanded the training set with random numbers above to make prune() effective.
data(iris, package = "datasets")
tree <- ODT(Species ~ ., data = iris)
#> Warning in ODT_compute(formula, Call, varName, X, y, split, lambda,
#> NodeRotateFun, : You are creating a tree for classification
print(tree)
#>
#> =============================================================
#> Oblique Classification Tree structure
#> =============================================================
#>
#> 1) root
#> node2)# proj1*X < 0.25 -> (leaf1 = setosa)
#> node3) proj1*X >= 0.25
#> node4)# proj2*X < 0.87 -> (leaf2 = versicolor)
#> node5)# proj2*X >= 0.87 -> (leaf3 = virginica)
party.tree <- as.party(tree, data = iris)
print(party.tree)
#>
#> Model formula:
#> Species ~ Sepal.Length + Sepal.Width + Petal.Length + Petal.Width
#>
#> Fitted party:
#> [1] root
#> | [2] proj1*X >= 0.24576
#> | | [3] proj2*X >= 0.86619: virginica (n = 54, err = 7.4%)
#> | | [4] proj2*X < 0.86619: versicolor (n = 46, err = 0.0%)
#> | [5] proj1*X < 0.24576: setosa (n = 50, err = 0.0%)
#>
#> Number of inner nodes: 2
#> Number of terminal nodes: 3
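After conversion, partykit's own methods apply to party.tree; for example, its plot method draws the tree with the oblique split rule at each inner node:

# Draw the converted tree with partykit's plotting method
plot(party.tree)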
forest <- ODRF(Species ~ ., data = iris, parallel = FALSE)
#> Warning in ODRF_compute(formula, Call, varName, X, y, split, lambda,
#> NodeRotateFun, : You are creating a forest for classification
print(forest)
#>
#> Call:
#> ODRF.formula(formula = Species ~ ., data = data, parallel = FALSE)
#> Type of oblique decision random forest: classification
#> Number of trees: 100
#> OOB estimate of error rate: 4.67%
#> Confusion matrix:
#> setosa versicolor virginica class_error
#> setosa 50 0 0 0.00000000
#> versicolor 0 47 4 0.07843122
#> virginica 0 3 46 0.06122436
plot(tree)
If you encounter a clear bug, please file an issue with a minimal reproducible example on GitHub.
Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.