For this package, we have written methods to estimate regression trees and random forests that minimize the spectral objective:
\[\hat{f} = \text{argmin}_{f' \in \mathcal{F}} \frac{||Q(\mathbf{Y} - f'(\mathbf{X}))||_2^2}{n}\]
The package is currently written entirely in R (R Core Team 2024), and it gets quite slow for larger sample sizes. There might be a faster C++ version in the future, but for now, there are a few ways to speed up the computations if you apply the methods to larger data sets.
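The examples below assume a covariate matrix X and a response Y are available. A minimal setup sketch (the package name SDModels and the toy data below are our assumptions, not part of the original text):

# minimal, self-contained setup for the examples below
# (SDModels is assumed to be the package exporting SDForest() and SDTree())
library(SDModels)

set.seed(42)
n <- 500
p <- 10
X <- matrix(rnorm(n * p), nrow = n)  # toy covariates
Y <- X[, 1] + rnorm(n)               # toy response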
Some speedup can be achieved by taking advantage of modern hardware.
When estimating an SDForest, the most obvious way to speed up the computation is to fit the individual trees on different cores in parallel. Parallel computing is supported on both Unix and Windows. Depending on how your system is set up, some linear algebra libraries might already run in parallel; in this case, the speed improvement from choosing more than one core might not be large. Be aware of potential RAM overflows.
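A sketch of parallel fitting, assuming SDForest exposes an mc.cores argument for the number of worker cores (check ?SDForest for the exact interface):

# fit the individual trees of the forest on 4 cores in parallel
# (mc.cores is an assumed argument name)
fit <- SDForest(x = X, y = Y, mc.cores = 4)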
Especially if we have many observations, it might be reasonable to perform the matrix multiplications on a GPU. We can evaluate many potential splits simultaneously by multiplying an \(n \times n\) matrix with an \(n \times \text{(number of potential splits)}\) matrix on the GPU. We use GPUmatrix (Lobato-Fernandez, Ferrer-Bonsoms, and Rubio 2024) to do the calculations on the GPU and refer to their website for how to set up your GPU properly. The number of splits that can be evaluated in parallel in this way depends highly on your GPU memory and can be controlled using the mem_size parameter. The default value of 1e+7 should not result in a memory overflow on a GPU with 24 GB of VRAM. For us, this worked well on a GeForce RTX 3090.
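A sketch of running the split evaluations on the GPU; the gpu flag below is our assumption about the interface next to mem_size (check ?SDForest and the GPUmatrix setup guide):

# evaluate splits on the GPU; mem_size limits how many splits are
# evaluated per matrix multiplication (gpu is an assumed argument name)
fit <- SDForest(x = X, y = Y, gpu = TRUE, mem_size = 1e+7)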
In a few places, approximations perform almost as well as running the whole procedure. Reasonable split points to partition \(\mathbb{R}^p\) are, in principle, all values between the observed ones. In practice, with many observations, the number of potential splits grows too large. We therefore evaluate at most max_candidates of the potential splits and choose them according to the quantiles of the potential ones.
# approximation of candidate splits
fit <- SDForest(x = X, y = Y, max_candidates = 100)
tree <- SDTree(x = X, y = Y, max_candidates = 50)
If we have many observations, we can reduce computing time by sampling only max_size observations from the data for each tree instead of a full bootstrap sample of size \(n\). This can dramatically reduce computing time but could also decrease performance.
# draws maximal 500 samples from the data for each tree
fit <- SDForest(x = X, y = Y, max_size = 500)
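The speedups can also be combined. A sketch with illustrative values (mc.cores is the assumed parallelization argument from above):

# combine parallel fitting with both approximations for a large data set
fit <- SDForest(x = X, y = Y, mc.cores = 4, max_candidates = 100, max_size = 500)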