Keras provides default training and evaluation loops, fit() and evaluate(). Their usage is covered in the guide Training & evaluation with the built-in methods.
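For reference, here is a hedged sketch of that built-in workflow (assuming a compiled-ready model and arrays like the x_train / y_train defined later in this guide):

model |> compile(
  optimizer = optimizer_adam(learning_rate = 1e-3),
  loss = loss_sparse_categorical_crossentropy(from_logits = TRUE),
  metrics = metric_sparse_categorical_accuracy()
)
model |> fit(x_train, y_train, epochs = 2, validation_split = 0.1)
model |> evaluate(x_test, y_test)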
If you want to customize the learning algorithm of your model while still leveraging the convenience of fit() (for instance, to train a GAN using fit()), you can subclass the Model class and implement your own train_step() method, which is called repeatedly during fit(). This is covered in the guide Customizing what happens in fit().
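For orientation, here is a minimal sketch of that approach (abbreviated and hedged; see the Customizing what happens in fit() guide for the full treatment, including metric handling):

CustomModel <- Model(
  "CustomModel",
  train_step = function(data) {
    c(x, y) %<-% data
    with(tf$GradientTape() %as% tape, {
      y_pred <- self(x, training = TRUE)
      loss <- self$compute_loss(y = y, y_pred = y_pred)
    })
    gradients <- tape$gradient(loss, self$trainable_weights)
    self$optimizer$apply(gradients, self$trainable_weights)
    list(loss = loss)  # fit() displays whatever metrics you return here
  }
)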
Now, if you want very low-level control over training & evaluation, you should write your own training & evaluation loops from scratch. This is what this guide is about.
Using the GradientTape: a first end-to-end example

Calling a model inside a GradientTape scope enables you to retrieve the gradients of the trainable weights of the layer with respect to a loss value. Using an optimizer instance, you can use these gradients to update these variables (which you can retrieve using model$trainable_weights).
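To see just this mechanic in isolation, here is a toy sketch (standalone values, unrelated to the model below):

v <- tf$Variable(3)
with(tf$GradientTape() %as% tape, {
  loss <- v * v
})
tape$gradient(loss, v)  # d(v^2)/dv = 2 * v = 6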
Let’s consider a simple MNIST model:
library(tensorflow)
library(tfdatasets)
library(keras3)

get_model <- function() {
  inputs <- keras_input(shape = 784, name = "digits")
  outputs <- inputs |>
    layer_dense(units = 64, activation = "relu") |>
    layer_dense(units = 64, activation = "relu") |>
    layer_dense(units = 10, name = "predictions")
  keras_model(inputs = inputs, outputs = outputs)
}
model <- get_model()
Let’s train it using mini-batch gradient descent with a custom training loop.
First, we’re going to need an optimizer, a loss function, and a dataset:
# Instantiate an optimizer.
optimizer <- optimizer_adam(learning_rate = 1e-3)

# Instantiate a loss function.
loss_fn <- loss_sparse_categorical_crossentropy(from_logits = TRUE)

# Prepare the training dataset.
batch_size <- 64
c(c(x_train, y_train), c(x_test, y_test)) %<-% dataset_mnist()
x_train <- array_reshape(x_train, c(-1, 784))
x_test <- array_reshape(x_test, c(-1, 784))

# Reserve 10,000 samples for validation.
val_i <- sample.int(nrow(x_train), 10000)
x_val <- x_train[val_i, ]
y_val <- y_train[val_i]
x_train <- x_train[-val_i, ]
y_train <- y_train[-val_i]

# Prepare the training dataset.
train_dataset <- list(x_train, y_train) |>
  tensor_slices_dataset() |>
  dataset_shuffle(buffer_size = 1024) |>
  dataset_batch(batch_size)

# Prepare the validation dataset.
val_dataset <- list(x_val, y_val) |>
  tensor_slices_dataset() |>
  dataset_batch(batch_size)
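As a quick sanity check (a hedged aside, not part of the original flow), you can pull a single batch with the same iterator pattern the training loop below relies on:

batch <- train_dataset |> as_iterator() |> iter_next()
str(batch)  # a list of 2 tensors: x with shape (64, 784), y with shape (64)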
Here’s our training loop:

- We open a for loop that iterates over epochs.
- For each epoch, we iterate over the batches of the dataset.
- For each batch, we open a GradientTape() scope.
- Inside this scope, we call the model (the forward pass) and compute the loss.
- Outside the scope, we retrieve the gradients of the weights of the model with regard to the loss.
- Finally, we use the optimizer to update the weights of the model based on the gradients.

epochs <- 3
for (epoch in seq_len(epochs)) {
  cat("Start of epoch ", epoch, "\n")

  # Iterate over the batches of the dataset.
  step <- 0
  iterator <- as_iterator(train_dataset)
  while (!is.null(batch <- iter_next(iterator))) {
    c(x_batch_train, y_batch_train) %<-% batch
    step <- step + 1

    # Open a GradientTape to record the operations run
    # during the forward pass, which enables auto-differentiation.
    with(tf$GradientTape() %as% tape, {
      # Run the forward pass of the layer.
      # The operations that the layer applies
      # to its inputs are going to be recorded
      # on the GradientTape.
      logits <- model(x_batch_train, training = TRUE)  # Logits for this minibatch

      # Compute the loss value for this minibatch.
      loss_value <- loss_fn(y_batch_train, logits)
    })

    # Use the gradient tape to automatically retrieve
    # the gradients of the trainable variables with respect to the loss.
    gradients <- tape$gradient(loss_value, model$trainable_weights)

    # Run one step of gradient descent by updating
    # the value of the variables to minimize the loss.
    optimizer$apply(gradients, model$trainable_weights)

    # Log every 200 batches.
    if (step %% 200 == 0) {
      cat(sprintf("Training loss (for one batch) at step %d: %.4f\n",
                  step, loss_value))
      cat(sprintf("Seen so far: %d samples\n", (step * batch_size)))
    }
  }
}
## Start of epoch 1
## Training loss (for one batch) at step 200: 1.1675
## Seen so far: 12800 samples
## Training loss (for one batch) at step 400: 1.6450
## Seen so far: 25600 samples
## Training loss (for one batch) at step 600: 0.6477
## Seen so far: 38400 samples
## Start of epoch 2
## Training loss (for one batch) at step 200: 0.6421
## Seen so far: 12800 samples
## Training loss (for one batch) at step 400: 0.1827
## Seen so far: 25600 samples
## Training loss (for one batch) at step 600: 0.4249
## Seen so far: 38400 samples
## Start of epoch 3
## Training loss (for one batch) at step 200: 0.2715
## Seen so far: 12800 samples
## Training loss (for one batch) at step 400: 0.3106
## Seen so far: 25600 samples
## Training loss (for one batch) at step 600: 0.2905
## Seen so far: 38400 samples
Low-level handling of metrics

Let’s add metrics monitoring to this basic loop.
You can readily reuse the built-in metrics (or custom ones you wrote) in such training loops written from scratch. Here’s the flow:

- Instantiate the metric at the start of the loop.
- Call metric$update_state() after each batch.
- Call metric$result() when you need to display the current value of the metric.
- Call metric$reset_state() when you need to clear the state of the metric (typically at the end of an epoch).
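Here is that lifecycle in isolation (a toy sketch with hand-fed values, separate from the training loop below):

metric <- metric_sparse_categorical_accuracy()
metric$update_state(c(0, 1), rbind(c(0.9, 0.1), c(0.2, 0.8)))  # one "batch"
metric$result()       # 1.0: both argmax predictions match the labels
metric$reset_state()  # clear the state, e.g. at the end of an epoch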
Let’s use this knowledge to compute SparseCategoricalAccuracy on validation data at the end of each epoch:
# Get a fresh model
model <- get_model()
# Instantiate an optimizer to train the model.
optimizer <- optimizer_adam(learning_rate = 1e-3)
# Instantiate a loss function.
loss_fn <- loss_sparse_categorical_crossentropy(from_logits = TRUE)
# Prepare the metrics.
train_acc_metric <- metric_sparse_categorical_accuracy()
val_acc_metric <- metric_sparse_categorical_accuracy()
Here’s our training & evaluation loop:
epochs <- 2
time <- Sys.time()
for (epoch in seq_len(epochs)) {
  cat("Start of epoch ", epoch, "\n")

  # Iterate over the batches of the dataset.
  step <- 0
  train_dataset_iterator <- as_iterator(train_dataset)
  while (!is.null(train_batch <- iter_next(train_dataset_iterator))) {
    c(x_batch_train, y_batch_train) %<-% train_batch
    step <- step + 1

    with(tf$GradientTape() %as% tape, {
      logits <- model(x_batch_train, training = TRUE)
      loss_value <- loss_fn(y_batch_train, logits)
    })
    gradients <- tape$gradient(loss_value, model$trainable_weights)
    optimizer$apply(gradients, model$trainable_weights)

    # Update training metric.
    train_acc_metric$update_state(y_batch_train, logits)

    if (step %% 200 == 0) {
      cat(sprintf(
        "Training loss (for one batch) at step %d: %.4f\n", step, loss_value))
      cat(sprintf("Seen so far: %d samples \n", (step * batch_size)))
    }
  }

  # Display metrics at the end of each epoch.
  train_acc <- train_acc_metric$result()
  cat(sprintf("Training acc over epoch: %.4f\n", train_acc))

  # Reset training metrics at the end of each epoch
  train_acc_metric$reset_state()

  # Run a validation loop at the end of each epoch.
  iterate(val_dataset, function(val_batch) {
    c(x_batch_val, y_batch_val) %<-% val_batch
    val_logits <- model(x_batch_val, training = FALSE)
    val_acc_metric$update_state(y_batch_val, val_logits)
  })

  val_acc <- val_acc_metric$result()
  val_acc_metric$reset_state()
  cat(sprintf("Validation acc: %.4f\n", val_acc))
}
Sys.time() - time
## Start of epoch 1
## Training loss (for one batch) at step 200: 1.6268
## Seen so far: 12800 samples
## Training loss (for one batch) at step 400: 1.2241
## Seen so far: 25600 samples
## Training loss (for one batch) at step 600: 0.4987
## Seen so far: 38400 samples
## Training acc over epoch: 0.7844
## Validation acc: 0.8680
## Start of epoch 2
## Training loss (for one batch) at step 200: 0.4626
## Seen so far: 12800 samples
## Training loss (for one batch) at step 400: 0.4654
## Seen so far: 25600 samples
## Training loss (for one batch) at step 600: 0.5022
## Seen so far: 38400 samples
## Training acc over epoch: 0.8837
## Validation acc: 0.9031
## Time difference of 39.66021 secs
Speeding up your training step with tf_function()
The default runtime in TensorFlow 2 is eager execution. As such, our training loop above executes eagerly.
This is great for debugging, but graph compilation has a definite performance advantage. Describing your computation as a static graph enables the framework to apply global performance optimizations. This is impossible when the framework is constrained to greedily execute one operation after another, with no knowledge of what comes next.
You can compile into a static graph any function that takes tensors as input. Just wrap it in tf_function(), like this:
train_step <- tf_function(function(x, y) {
  with(tf$GradientTape() %as% tape, {
    logits <- model(x, training = TRUE)
    loss_value <- loss_fn(y, logits)
  })
  gradients <- tape$gradient(loss_value, model$trainable_weights)
  optimizer$apply(gradients, model$trainable_weights)
  train_acc_metric$update_state(y, logits)
  loss_value  # return the loss so the training loop below can log it
})
Let’s do the same with the evaluation step:
test_step <- tf_function(function(x, y) {
  val_logits <- model(x, training = FALSE)
  val_acc_metric$update_state(y, val_logits)
  invisible(NULL)  # return nothing
})
Now, let’s re-run our training loop with this compiled training step:
epochs <- 2
time <- Sys.time()
for (epoch in seq_len(epochs)) {
  cat("Start of epoch ", epoch, "\n")

  # Iterate over the batches of the dataset.
  step <- 0
  iterator <- as_iterator(train_dataset)
  while (!is.null(batch <- iter_next(iterator))) {
    c(x_batch_train, y_batch_train) %<-% batch
    step <- step + 1
    loss_value <- train_step(x_batch_train, y_batch_train)

    if (step %% 200 == 0) {
      cat(sprintf(
        "Training loss (for one batch) at step %d: %.4f\n", step, loss_value
      ))
      cat(sprintf("Seen so far: %d samples \n", (step * batch_size)))
    }
  }

  # Display metrics at the end of each epoch.
  train_acc <- train_acc_metric$result()
  cat(sprintf("Training acc over epoch: %.4f\n", train_acc))

  # Reset training metrics at the end of each epoch
  train_acc_metric$reset_state()

  # Run a validation loop at the end of each epoch.
  iterate(val_dataset, function(val_batch) {
    c(x_batch_val, y_batch_val) %<-% val_batch
    test_step(x_batch_val, y_batch_val)
  })

  val_acc <- val_acc_metric$result()
  val_acc_metric$reset_state()
  cat(sprintf("Validation acc: %.4f\n", val_acc))
}
Sys.time() - time
## Start of epoch 1
## Training acc over epoch: 0.0000
## Validation acc: 0.9031
## Start of epoch 2
## Training acc over epoch: 0.0000
## Validation acc: 0.9031
## Time difference of 0.3845379 secs
Much faster, isn’t it?
Low-level handling of losses tracked by the model

Layers and models recursively track any losses created during the forward pass by layers that call self$add_loss(value). The resulting list of scalar loss values is available via the property model$losses at the end of the forward pass.

If you want to use these loss components, you should sum them and add them to the main loss in your training step.
Consider this layer, which creates an activity regularization loss:
layer_activity_regularization <- Layer(
  "ActivityRegularizationLayer",
  call = function(inputs) {
    self$add_loss(0.1 * op_mean(inputs))
    inputs
  }
)
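As a quick check (a hedged aside), you can call an instance of this layer directly and see the loss it tracks:

reg_layer <- layer_activity_regularization()
out <- reg_layer(op_ones(c(2, 4)))
reg_layer$losses  # list of one scalar tensor: 0.1 * mean(ones) == 0.1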
Let’s build a really simple model that uses it:
inputs <- keras_input(shape = 784, name = "digits")
outputs <- inputs |>
  layer_dense(units = 64, activation = "relu") |>
  layer_activity_regularization() |>
  layer_dense(units = 64, activation = "relu") |>
  layer_dense(units = 10, name = "predictions")
model <- keras_model(inputs = inputs, outputs = outputs)
Here’s what our training step should look like now:
train_step <- tf_function(function(x, y) {
  with(tf$GradientTape() %as% tape, {
    logits <- model(x, training = TRUE)
    loss_value <- Reduce(`+`, c(loss_fn(y, logits),
                                model$losses))
  })
  gradients <- tape$gradient(loss_value, model$trainable_weights)
  optimizer$apply(gradients, model$trainable_weights)
  train_acc_metric$update_state(y, logits)
  invisible(NULL)
})
Summary

Now you know everything there is to know about using built-in training loops and writing your own from scratch.
To conclude, here’s a simple end-to-end example that ties together everything you’ve learned in this guide: a DCGAN trained on MNIST digits.
End-to-end example: a GAN training loop from scratch

You may be familiar with Generative Adversarial Networks (GANs). GANs can generate new images that look almost real by learning the latent distribution of a training dataset of images (the "latent space" of the images).
A GAN is made of two parts: a "generator" model that maps points in the latent space to points in image space, and a "discriminator" model, a classifier that can tell the difference between real images (from the training dataset) and fake images (the output of the generator network).
A GAN training loop looks like this:

1) Train the discriminator:
- Sample a batch of random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Get a batch of real images and combine them with the generated images.
- Train the "discriminator" model to classify generated vs. real images.

2) Train the generator:
- Sample random points in the latent space.
- Turn the points into fake images via the "generator" model.
- Train the "generator" model to "fool" the discriminator and classify the fake images as real.

For a much more detailed overview of how GANs work, see Deep Learning with Python.
Let’s implement this training loop. First, create the discriminator meant to classify fake vs real digits:
# Create the discriminator
discriminator <-
  keras_model_sequential(name = "discriminator",
                         input_shape = c(28, 28, 1)) |>
  layer_conv_2d(filters = 64, kernel_size = c(3, 3),
                strides = c(2, 2), padding = "same") |>
  layer_activation_leaky_relu(negative_slope = 0.2) |>
  layer_conv_2d(filters = 128, kernel_size = c(3, 3),
                strides = c(2, 2), padding = "same") |>
  layer_activation_leaky_relu(negative_slope = 0.2) |>
  layer_global_max_pooling_2d() |>
  layer_dense(units = 1)
summary(discriminator)
## Model: "discriminator"
## ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
## ┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
## ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
## │ conv2d (Conv2D)                 │ (None, 14, 14, 64)     │           640 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ leaky_re_lu (LeakyReLU)         │ (None, 14, 14, 64)     │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ conv2d_1 (Conv2D)               │ (None, 7, 7, 128)      │        73,856 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ leaky_re_lu_1 (LeakyReLU)       │ (None, 7, 7, 128)      │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ global_max_pooling2d            │ (None, 128)            │             0 │
## │ (GlobalMaxPooling2D)            │                        │               │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ dense_6 (Dense)                 │ (None, 1)              │           129 │
## └─────────────────────────────────┴────────────────────────┴───────────────┘
##  Total params: 74,625 (291.50 KB)
##  Trainable params: 74,625 (291.50 KB)
##  Non-trainable params: 0 (0.00 B)
Then let’s create a generator network that turns latent vectors into outputs of shape (28, 28, 1) (representing MNIST digits):
latent_dim <- 128L

generator <-
  keras_model_sequential(name = "generator",
                         input_shape = latent_dim) |>
  layer_dense(7 * 7 * 128) |>
  layer_activation_leaky_relu(negative_slope = 0.2) |>
  layer_reshape(target_shape = c(7, 7, 128)) |>
  layer_conv_2d_transpose(filters = 128, kernel_size = c(4, 4),
                          strides = c(2, 2), padding = "same") |>
  layer_activation_leaky_relu(negative_slope = 0.2) |>
  layer_conv_2d_transpose(filters = 128, kernel_size = c(4, 4),
                          strides = c(2, 2), padding = "same") |>
  layer_activation_leaky_relu(negative_slope = 0.2) |>
  layer_conv_2d(filters = 1, kernel_size = c(7, 7), padding = "same",
                activation = "sigmoid")
summary(generator)
## Model: "generator"
## ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━┓
## ┃ Layer (type)                    ┃ Output Shape           ┃       Param # ┃
## ┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━┩
## │ dense_7 (Dense)                 │ (None, 6272)           │       809,088 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ leaky_re_lu_2 (LeakyReLU)       │ (None, 6272)           │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ reshape (Reshape)               │ (None, 7, 7, 128)      │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ conv2d_transpose                │ (None, 14, 14, 128)    │       262,272 │
## │ (Conv2DTranspose)               │                        │               │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ leaky_re_lu_3 (LeakyReLU)       │ (None, 14, 14, 128)    │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ conv2d_transpose_1              │ (None, 28, 28, 128)    │       262,272 │
## │ (Conv2DTranspose)               │                        │               │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ leaky_re_lu_4 (LeakyReLU)       │ (None, 28, 28, 128)    │             0 │
## ├─────────────────────────────────┼────────────────────────┼───────────────┤
## │ conv2d_2 (Conv2D)               │ (None, 28, 28, 1)      │         6,273 │
## └─────────────────────────────────┴────────────────────────┴───────────────┘
##  Total params: 1,339,905 (5.11 MB)
##  Trainable params: 1,339,905 (5.11 MB)
##  Non-trainable params: 0 (0.00 B)
Here’s the key bit: the training loop. As you can see, it is quite straightforward; the training step function is remarkably short.
# Instantiate one optimizer for the discriminator and another for the generator.
d_optimizer <- optimizer_adam(learning_rate = 0.0003)
g_optimizer <- optimizer_adam(learning_rate = 0.0004)

# Instantiate a loss function.
loss_fn <- loss_binary_crossentropy(from_logits = TRUE)

train_step <- tf_function(function(real_images) {
  # Sample random points in the latent space
  c(batch_size, ...) %<-% op_shape(real_images)
  random_latent_vectors <- tf$random$normal(shape(batch_size, latent_dim))

  # Decode them to fake images
  generated_images <- generator(random_latent_vectors)

  # Combine them with real images
  combined_images <- tf$concat(list(generated_images, real_images),
                               axis = 0L)

  # Assemble labels discriminating real from fake images
  labels <- tf$concat(list(tf$ones(shape(batch_size, 1)),
                           tf$zeros(shape(batch_size, 1))),
                      axis = 0L)

  # Add random noise to the labels - important trick!
  labels <- labels + tf$random$uniform(tf$shape(labels), maxval = 0.05)

  # Train the discriminator
  with(tf$GradientTape() %as% tape, {
    predictions <- discriminator(combined_images)
    d_loss <- loss_fn(labels, predictions)
  })
  grads <- tape$gradient(d_loss, discriminator$trainable_weights)
  d_optimizer$apply(grads, discriminator$trainable_weights)

  # Sample random points in the latent space
  random_latent_vectors <- tf$random$normal(shape(batch_size, latent_dim))

  # Assemble labels that say "all real images"
  misleading_labels <- tf$zeros(shape(batch_size, 1))

  # Train the generator (note that we should *not* update the weights
  # of the discriminator)!
  with(tf$GradientTape() %as% tape, {
    predictions <- discriminator(generator(random_latent_vectors))
    g_loss <- loss_fn(misleading_labels, predictions)
  })
  grads <- tape$gradient(g_loss, generator$trainable_weights)
  g_optimizer$apply(grads, generator$trainable_weights)

  list(d_loss, g_loss, generated_images)
})
Let’s train our GAN by repeatedly calling train_step() on batches of images.
Since our discriminator and generator are convnets, you’re going to want to run this code on a GPU.
# Prepare the dataset. We use both the training & test MNIST digits.
batch_size <- 64
c(c(x_train, .), c(x_test, .)) %<-% dataset_mnist()
all_digits <- op_concatenate(list(x_train, x_test))
all_digits <- op_reshape(all_digits, c(-1, 28, 28, 1))
dataset <- all_digits |>
  tensor_slices_dataset() |>
  dataset_map(\(x) op_cast(x, "float32") / 255) |>
  dataset_shuffle(buffer_size = 1024) |>
  dataset_batch(batch_size = batch_size)

epochs <- 1  # In practice you need at least 20 epochs to generate nice digits.
save_dir <- "./"

for (epoch in seq_len(epochs)) {
  cat("Start epoch: ", epoch, "\n")

  step <- 0
  train_iterator <- as_iterator(dataset)
  while (!is.null(real_images <- iter_next(train_iterator))) {
    step <- step + 1

    # Train the discriminator & generator on one batch of real images.
    c(d_loss, g_loss, generated_images) %<-% train_step(real_images)

    # Logging.
    if (step %% 200 == 0) {
      # Print metrics
      cat(sprintf("discriminator loss at step %d: %.2f\n", step, d_loss))
      cat(sprintf("adversarial loss at step %d: %.2f\n", step, g_loss))
    }

    # To limit execution time we stop after 10 steps.
    # Remove the lines below to actually train the model!
    if (step > 10)
      break
  }
}
## Start epoch: 1
That’s it! You’ll get nice-looking fake MNIST digits after just ~30s of training on a GPU.
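Note that save_dir above is not used by the loop as written. Here is a hedged sketch (base R graphics; the file name and grid layout are my own choices, not from the guide) of how you might sample the trained generator and save a grid of digits there:

random_latent_vectors <- tf$random$normal(shape(9, latent_dim))
generated_images <- generator(random_latent_vectors) |> as.array()

png(file.path(save_dir, "generated_digits.png"), width = 280, height = 280)
par(mfrow = c(3, 3), mar = c(0, 0, 0, 0))
for (i in 1:9)
  plot(as.raster(generated_images[i, , , 1]))  # values in [0, 1] -> grayscale
dev.off()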