
RMTL: Regularized Multi-Task Learning

Efficient solvers for 10 regularized multi-task learning algorithms applicable to regression, classification, joint feature selection, task clustering, low-rank learning, sparse learning and network incorporation. Based on the accelerated gradient descent method, the algorithms feature a state-of-the-art convergence rate of O(1/k^2). Sparse model structure is induced by solving the proximal operator. The details of the package are described in the paper by Han Cao and Emanuel Schwarz (2018) <doi:10.1093/bioinformatics/bty831>.
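For orientation, below is a minimal sketch of the typical workflow, following the package vignette: simulate a multi-task problem, tune the regularization parameter by cross-validation, fit the model, and evaluate on the held-out test data. It assumes the Create_simulated_data(), cvMTL(), MTL() and calcError() interfaces as documented in the vignette; consult the reference manual for the authoritative signatures.

library(RMTL)

## Simulate a multi-task regression problem with a joint feature
## selection (L21) structure: 5 tasks, 50 features, 20 samples per task.
data <- Create_simulated_data(t = 5, p = 50, n = 20,
                              type = "Regression", Regularization = "L21")

## Select the sparsity parameter Lambda1 by k-fold cross-validation.
cvfit <- cvMTL(data$X, data$Y, type = "Regression",
               Regularization = "L21", nfolds = 5)

## Fit the final model at the selected Lambda1.
fit <- MTL(data$X, data$Y, type = "Regression",
           Regularization = "L21", Lambda1 = cvfit$Lam1.min)

## Evaluate prediction error on the simulated test set.
calcError(fit, newX = data$tX, newY = data$tY)

The other nine algorithms follow the same pattern; they differ only in the Regularization argument (e.g. "Lasso", "Trace", "Graph") and, for some, additional inputs such as a task-network matrix.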

Version: 0.9.9
Depends: R (≥ 3.5.0)
Imports: MASS (≥ 7.3-50), psych (≥ 1.8.4), corpcor (≥ 1.6.9), doParallel (≥ 1.0.14), foreach (≥ 1.4.4)
Suggests: knitr, rmarkdown
Published: 2022-05-02
DOI: 10.32614/CRAN.package.RMTL
Author: Han Cao [cre, aut, cph], Emanuel Schwarz [aut]
Maintainer: Han Cao <hank9cao at gmail.com>
BugReports: https://github.com/transbioZI/RMTL/issues/
License: GPL-3
URL: https://github.com/transbioZI/RMTL/
NeedsCompilation: no
Materials: README NEWS
CRAN checks: RMTL results

Documentation:

Reference manual: RMTL.pdf
Vignettes: A Tutorial for Regularized Multi-task Learning using the package RMTL

Downloads:

Package source: RMTL_0.9.9.tar.gz
Windows binaries: r-devel: RMTL_0.9.9.zip, r-release: RMTL_0.9.9.zip, r-oldrel: RMTL_0.9.9.zip
macOS binaries: r-release (arm64): RMTL_0.9.9.tgz, r-oldrel (arm64): RMTL_0.9.9.tgz, r-release (x86_64): RMTL_0.9.9.tgz, r-oldrel (x86_64): RMTL_0.9.9.tgz
Old sources: RMTL archive

Reverse dependencies:

Reverse suggests: joinet

Linking:

Please use the canonical form https://CRAN.R-project.org/package=RMTL to link to this page.
