- `drop_last=TRUE` is now the default for training dataloaders created by luz (e.g. when you pass a list or a torch dataset as data input) (#117)
- New `luz_callback_autoresume()` callback, allowing you to easily resume training runs that might have crashed. (#107)
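
  A minimal sketch of wiring this callback into a run. The toy module and data are illustrative, the `path` argument name is an assumption, and the objects `net`, `x`, `y` and `modnn` defined here are reused by the later sketches.

  ```r
  library(torch)
  library(luz)

  # Toy module and synthetic data, reused in the sketches further down.
  net <- nn_module(
    initialize = function() {
      self$fc <- nn_linear(10, 1)
    },
    forward = function(x) {
      self$fc(x)
    }
  )
  x <- torch_randn(100, 10)
  y <- torch_randn(100, 1)

  modnn <- setup(net, loss = nnf_mse_loss, optimizer = optim_adam)

  # Resumable training run; the `path` argument name is assumed.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    callbacks = list(luz_callback_autoresume(path = "state.rds"))
  )
  ```
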
- New `luz_callback_resume_from_checkpoint()` callback, allowing one to resume a training run from a checkpoint file. (#107)
- New `luz_metric_set()` for combining multiple metrics. See its documentation for more information. (#112)
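
  A hedged sketch, assuming `luz_metric_set()` takes a list of metrics through a `metrics` argument; `net`, `x` and `y` come from the first sketch above.

  ```r
  # Combine several built-in metrics into a set and attach it in setup().
  # The `metrics` argument name of luz_metric_set() is an assumption.
  metric_set <- luz_metric_set(
    metrics = list(luz_metric_mae(), luz_metric_rmse())
  )
  modnn_metrics <- setup(
    net, loss = nnf_mse_loss, optimizer = optim_adam, metrics = metric_set
  )
  fitted <- fit(modnn_metrics, list(x, y), epochs = 10)
  ```
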
- `loss_fn` is now a field of the context, thus callbacks can override it when needed. (#112)
- `luz_callback_mixup` now supports the `run_valid` and `auto_loss` arguments. (#112)
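
  A sketch using the two new arguments; `alpha` is an assumed additional argument, and `modnn`, `x`, `y` come from the first sketch above.

  ```r
  # run_valid and auto_loss are the arguments named in this entry.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    callbacks = list(
      luz_callback_mixup(alpha = 0.4, run_valid = FALSE, auto_loss = TRUE)
    )
  )
  ```
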
- `ctx` now aliases to the default `opt` and `opt_name` when a single optimizer is specified (i.e. most cases) (#114)
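
  A sketch of a custom callback relying on the new alias; reading the learning rate through `param_groups` follows torch's optimizer structure and is only one possible use of `ctx$opt`.

  ```r
  # Log the learning rate of the (single) default optimizer each epoch.
  log_lr <- luz_callback(
    "log_lr",
    on_epoch_end = function() {
      lr <- ctx$opt$param_groups[[1]]$lr
      cat(sprintf("epoch %d: lr = %g\n", ctx$epoch, lr))
    }
  )
  fitted <- fit(modnn, list(x, y), epochs = 5, callbacks = list(log_lr()))
  ```
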
- New `tfevents` callback for logging the loss and getting weight histograms. (#118)
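
  A hedged sketch; the constructor name `luz_callback_tfevents()` and its `logdir` / `histograms` arguments are assumptions pieced together from this entry.

  ```r
  # Write the loss and weight histograms to a tfevents log directory.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    callbacks = list(luz_callback_tfevents(logdir = "logs", histograms = TRUE))
  )
  ```
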
- Changes to `evaluate`. (#123)
- `accelerator`'s `cpu` argument is always respected. (#119)
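
  For instance, forcing CPU execution (the `cpu` argument is the one named in this entry):

  ```r
  # Run on the CPU even when a GPU is available.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    accelerator = accelerator(cpu = TRUE)
  )
  ```
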
- Fixes for `rlang` and `ggplot2` deprecations. (#120)
- `lr_finder()` now by default divides the range between `start_lr` and `end_lr` into log-spaced intervals, following the fast.ai implementation. Cf. Sylvain Gugger's post: https://sgugger.github.io/how-do-you-find-a-good-learning-rate.html. The previous behavior can be achieved by passing `log_spaced_intervals=FALSE` to the function. (#82, @skeydan)
- `plot.lr_records()` now in addition plots an exponentially weighted moving average of the loss (again, see Sylvain Gugger's post), with a weighting coefficient of 0.9 (which seems a reasonable value for the default setting of 100 learning-rate-incrementing intervals). (#82, @skeydan)
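
  A sketch combining this entry with the `lr_finder()` entry above; `start_lr`, `end_lr` and `log_spaced_intervals` are named in the entries, the other values are illustrative, and `modnn`, `x`, `y` come from the first sketch.

  ```r
  # Sweep learning rates over log-spaced intervals, then plot the records
  # together with the exponentially weighted moving average of the loss.
  dl <- dataloader(tensor_dataset(x, y), batch_size = 32)
  records <- lr_finder(
    modnn, dl,
    start_lr = 1e-6, end_lr = 1,
    log_spaced_intervals = TRUE
  )
  plot(records)
  ```
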
- Added `luz_callback_gradient_clip`, inspired by FastAI's implementation. (#90)
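
  A sketch; `max_norm` is an assumed argument name for the clipping threshold.

  ```r
  # Clip gradient norms before each optimizer step.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    callbacks = list(luz_callback_gradient_clip(max_norm = 1))
  )
  ```
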
- Added a `backward` argument to `setup` allowing one to customize how `backward` is called for the loss scalar value. (#93)
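
  A sketch of a custom `backward`; the assumption is that the function receives the loss tensor and is responsible for calling `$backward()` on it.

  ```r
  # Customize how backpropagation is triggered on the loss scalar.
  modnn_custom <- setup(
    net,
    loss = nnf_mse_loss,
    optimizer = optim_adam,
    backward = function(loss) {
      loss$backward()
    }
  )
  fitted <- fit(modnn_custom, list(x, y), epochs = 10)
  ```
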
- Added `luz_callback_keep_best_model()` to reload the weights from the best model after training is finished. (#95)
- Changes to `fit.luz_module_generator()`: removed `ctx$epochs` from the context object and replaced it with `ctx$min_epochs` and `ctx$max_epochs` (#53, @mattwarkentin).
- Added a `cuda_index` argument to `accelerator` to allow selecting a specific GPU when multiple are present (#58, @cmcmaster1).
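
  For example, on a machine with at least two GPUs:

  ```r
  # Train on the second GPU.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    accelerator = accelerator(cuda_index = 2)
  )
  ```
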
- Added `lr_finder` (#59, @cmcmaster1).
- Data can now be passed to `fit` using the `as_dataloader()` method (#66).
- `valid_data` can now be a scalar value indicating the proportion of the data that will be used for validation. This only works if `data` is a torch dataset or a list. (#69)
- Added `dataloader_options` to `fit` to pass additional information to `as_dataloader()`. (#71)
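
  A sketch combining this entry with the `valid_data` entry above; the option names inside `dataloader_options` are assumed to mirror `torch::dataloader()`.

  ```r
  # Hold out 20% of the data for validation and tweak the dataloaders.
  fitted <- fit(
    modnn, list(x, y), epochs = 10,
    valid_data = 0.2,
    dataloader_options = list(batch_size = 32, shuffle = TRUE)
  )
  ```
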
- Added the `evaluate` function allowing users to get metrics from a model on a new dataset. (#73)
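
  A sketch; `get_metrics()` is assumed to be the accessor for the computed metrics, and `fitted` comes from an earlier sketch.

  ```r
  # Evaluate the fitted model on new data and retrieve the metrics.
  x_new <- torch_randn(20, 10)
  y_new <- torch_randn(20, 1)
  evaluation <- evaluate(fitted, list(x_new, y_new))
  get_metrics(evaluation)
  ```
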
- Fixed bugs occurring when `patience = 1` and when early-stopping callbacks are specified before other logging callbacks. (#76)
- `ctx$data` now refers to the data currently in use instead of always referring to `ctx$train_data`. (#54)
- Refactored the `ctx` object to make it safer and to avoid returning it in the output. (#73)
- Added a `NEWS.md` file to track changes to the package.