MADGRAD is a Momentumized, Adaptive, Dual Averaged Gradient Method for Stochastic Optimization: a 'best-of-both-worlds' optimizer that combines the generalization performance of stochastic gradient descent with convergence at least as fast as Adam's, and often faster. A drop-in optim_madgrad() implementation is provided, based on Defazio and Jelassi (2021) <doi:10.48550/arXiv.2101.11075>.
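A minimal usage sketch, fitting a one-parameter linear model with torch. The optim_madgrad(params, lr = ...) call below follows the signature given in the reference manual; the learning rate and the toy problem are illustrative choices, and further arguments such as momentum and weight_decay are documented in ?optim_madgrad.

    library(torch)
    library(madgrad)

    # Toy regression: learn w such that x * w approximates y = 2 * x.
    x <- torch_randn(100, 1)
    y <- 2 * x

    w <- torch_zeros(1, requires_grad = TRUE)
    opt <- optim_madgrad(list(w), lr = 0.1)  # drop-in replacement for e.g. optim_sgd()

    for (i in 1:200) {
      opt$zero_grad()                # clear gradients from the previous step
      loss <- mean((x * w - y)^2)    # mean squared error
      loss$backward()                # backpropagate
      opt$step()                     # MADGRAD update
    }

    w  # should end up close to 2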
Version: 0.1.0
Imports: torch (≥ 0.3.0), rlang
Suggests: testthat (≥ 3.0.0)
Published: 2021-05-10
DOI: 10.32614/CRAN.package.madgrad
Author: Daniel Falbel [aut, cre, cph], RStudio [cph], MADGRAD original implementation authors [cph]
Maintainer: Daniel Falbel <daniel at rstudio.com>
License: MIT + file LICENSE
NeedsCompilation: no
Materials: README
CRAN checks: madgrad results
Reference manual: madgrad.pdf
Package source: madgrad_0.1.0.tar.gz
Windows binaries: r-devel: madgrad_0.1.0.zip, r-release: madgrad_0.1.0.zip, r-oldrel: madgrad_0.1.0.zip
macOS binaries: r-release (arm64): madgrad_0.1.0.tgz, r-oldrel (arm64): madgrad_0.1.0.tgz, r-release (x86_64): madgrad_0.1.0.tgz, r-oldrel (x86_64): madgrad_0.1.0.tgz
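The released version can be installed from CRAN in the usual way:

    install.packages("madgrad")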
Please use the canonical form https://CRAN.R-project.org/package=madgrad to link to this page.