Catalyst is a PyTorch framework for Deep Learning research and development. It focuses on reproducibility, rapid experimentation, and codebase reuse so you can create something new rather than write another regular train loop. Break the cycle - use the Catalyst!
This example trains a fastai Learner on the DataLoaders taken from a Catalyst loaders dict. Specify the loaders from the Catalyst dict:
library(fastai)
library(magrittr)
# Catalyst-style dict of train/valid PyTorch DataLoaders
loaders = loaders()
data = Data_Loaders(loaders['train'], loaders['valid'])$cuda()

# Plain PyTorch model: flatten each 28 x 28 image, then map the 784 features to 10 classes
nn = nn()
model = nn$Sequential() +
  nn$Flatten() +
  nn$Linear(28L * 28L, 10L)
Output:

Sequential(
  (0): Flatten()
  (1): Linear(in_features=784, out_features=10, bias=True)
)
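The chained + calls are fastai-for-R sugar for composing layers. As a minimal sketch, the same model can be built in a single call, assuming nn$Sequential() forwards positional arguments to torch.nn.Sequential as it does in Python:

# Equivalent one-call construction (sketch, not from the original vignette)
model = nn$Sequential(
  nn$Flatten(),
  nn$Linear(28L * 28L, 10L)
)

Here nn$Flatten() turns each 28 x 28 MNIST image into a 784-length vector (28 * 28 = 784, matching in_features above), and nn$Linear() maps it to 10 class scores.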
# Track accuracy and top-k accuracy on the validation set during training
metrics = list(accuracy, top_k_accuracy)

learn = Learner(data, model, loss_func = nn$functional$cross_entropy,
                opt_func = Adam, metrics = metrics)

learn %>% fit_one_cycle(1, 0.02)
epoch   train_loss   valid_loss   accuracy   top_k_accuracy   time
0       0.269411     0.336529     0.910200   0.993700         00:08
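Each row reports one epoch: the losses on the training and validation loaders, the two requested metrics computed on the validation set, and the wall-clock time. Training can be resumed with further fit_one_cycle calls; for example, with an illustrative lower peak learning rate (not from the original vignette):

# One more epoch at a lower peak learning rate (illustrative values)
learn %>% fit_one_cycle(1, 0.002)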