
SDMtune - hyperparameter tuning

Sergio Vignali, Arnaud Barras & Veronika Braunisch

Training, validation and testing split

When you tune the model hyperparameters you iteratively adjust them while monitoring the change in the evaluation metric computed on the testing dataset. In this process the information contained in the testing dataset leaks into the model, so at the end of the process the testing dataset no longer represents an independent set with which to evaluate the model (Chollet and Allaire 2018). A better strategy than splitting the observation locations into training and testing is to split them into training, validation and testing datasets. The training dataset is used to train the model, the validation dataset drives the hyperparameter tuning and the testing dataset is used to evaluate the tuned model. The function trainValTest splits the data into three folds containing the given percentages of data. For illustration, let’s split the presence locations into training (60%), validation (20%) and testing (20%) datasets. The following steps are described in the basic-use vignette; refer to it if the code below is not clear:

library(SDMtune)
library(zeallot)

# Prepare data
files <- list.files(path = file.path(system.file(package = "dismo"), "ex"),
                    pattern = "grd", 
                    full.names = TRUE)

predictors <- terra::rast(files)
data <- prepareSWD(species = "Virtual species", 
                   p = virtualSp$presence,
                   a = virtualSp$background, 
                   env = predictors,
                   categorical = "biome")

# Split data in training, validation and testing datasets
c(train, val, test) %<-% trainValTest(data, 
                                      val = 0.2, 
                                      test = 0.2,
                                      only_presence = TRUE, 
                                      seed = 61516)

cat("# Training  : ", nrow(train@data))
cat("\n# Validation: ", nrow(val@data))
cat("\n# Testing   : ", nrow(test@data))

# Train Maxnet model with default settings
model <- train("Maxnet", 
               data = train)
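
Before tuning anything, it can be useful to record the validation AUC of this default model as a baseline against which to compare the experiments below (an extra check, not strictly required):

# Baseline performance of the default model on the validation dataset
auc(model, test = val)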

Check the effect of varying one hyperparameter

To see the effect of varying one hyperparameter on the model performance we can use the function gridSearch. The function iterates through a set of predefined hyperparameter values, trains the models and displays the evaluation metric in real time in the RStudio viewer pane (hover over the points to get a tooltip with extra information). Let’s see how the AUC changes when varying the regularization multiplier. First we define the values of the hyperparameter that we want to test, in a named list that is then passed as an argument to gridSearch:

# Define the values for the regularization multiplier
h <- list(reg = seq(0.2, 1, 0.1))

# Call the gridSearch function
exp_1 <- gridSearch(model, 
                    hypers = h, 
                    metric = "auc", 
                    test = val)

As you noticed, we used the validation dataset as the test argument. The output of the function is an object of class SDMtune. Let’s print it:

exp_1

The printed output contains the model configurations that have been used during the execution of the function. In our case only the regularization multiplier reg has multiple values. You can plot the SDMtune object:

plot(exp_1, 
     title = "Experiment 1")

and you can also recreate the interactive chart using:

plot(exp_1, 
     title = "Experiment 1", 
     interactive = TRUE)

The SDMtune object stores the results in the slot @results:

exp_1@results

You can order them with:

exp_1@results[order(-exp_1@results$test_AUC), ]

In the next example we check how the TSS changes when varying the regularization multiplier from 1 to 4:

# Define the values for reg
h <- list(reg = 1:4)

# Call the gridSearch function
exp_2 <- gridSearch(model, 
                    hypers = h, 
                    metric = "tss", 
                    test = val)

and how the AUC changes when varying the feature class combinations, using the following values: l, lq, lh, lqp, lqph and lqpht:

# Define the values for fc
h <- list(fc = c("l", "lq", "lh", "lqp", "lqph", "lqpht"))

# Call the gridSearch function
exp_3 <- gridSearch(model, 
                    hypers = h, 
                    metric = "auc", 
                    test = val)
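
The results are again stored in the @results slot; for example, the feature class combination with the highest validation AUC can be extracted with (using the same test_AUC column shown above):

# Feature class combination with the highest validation AUC
exp_3@results[which.max(exp_3@results$test_AUC), ]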

As an exercise, train a Maxent model and see how the AUC changes when varying the number of iterations from 300 to 1100 with increments of 200; one possible solution follows:

maxent_model <- train("Maxent", 
                      data = train)

# Define the values for iter
h <- list(iter = seq(300, 1100, 200))

# Call the gridSearch function
exp_4 <- gridSearch(maxent_model, 
                    hypers = h, 
                    metric = "auc", 
                    test = val)

To see which hyperparameters can be tuned in a given model use the function getTunableArgs. For example:

getTunableArgs(model)
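
For the Maxnet model used here this returns the feature classes fc and the regularization multiplier reg; a Maxent model should additionally expose the number of iterations iter, as the comparison below illustrates:

# A Maxent model also exposes the number of iterations
getTunableArgs(maxent_model)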

Tune hyperparameters

To tune the model hyperparameters you should test all the possible combinations of hyperparameter values. Here is an example using combinations of regularization multiplier and feature classes:

h <- list(reg = seq(0.2, 2, 0.2),
          fc = c("l", "lq", "lh", "lqp", "lqph", "lqpht"))

exp_5 <- gridSearch(model, 
                    hypers = h, 
                    metric = "auc", 
                    test = val)

This code already takes quite a long time to run, since it has to train 60 models. Imagine checking more values for the regularization multiplier, and perhaps also the number of iterations (in the case of a Maxent model): the number of models to be trained, and with it the execution time, grows multiplicatively with each additional hyperparameter. The next two sections present two alternatives to the gridSearch function.
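
Random search

Testing every combination quickly becomes unfeasible. The function randomSearch instead trains only a random subset of all the possible combinations, whose size is given by the pop argument. The sketch below creates the exp_6 object used in the following sections; the values used for pop and seed are illustrative:

# Train a random subset of the possible combinations of reg and fc
exp_6 <- randomSearch(model,
                      hypers = h,
                      metric = "auc",
                      test = val,
                      pop = 10,
                      seed = 65466)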

Optimize Model

The randomSearch function doesn’t learn anything from the models it trains; it simply evaluates random combinations of hyperparameters. The function optimizeModel instead uses a genetic algorithm to find an optimal or near-optimal solution. Check the function documentation to understand how it works; here we provide the code to execute it:

exp_7 <- optimizeModel(model,
                       hypers = h,
                       metric = "auc",
                       test = val,
                       pop = 15,
                       gen = 2,
                       seed = 798)
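
The output is again an SDMtune object, so the tested combinations can be inspected and ordered as before (using the same test_AUC column shown earlier):

# Combinations found by the genetic algorithm, ordered by validation AUC
head(exp_7@results[order(-exp_7@results$test_AUC), ])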

Evaluate final model

Let’s say we want to use the best tuned model found by the randomSearch function. Before evaluating the model on the testing dataset, we can merge the training and validation datasets to increase the number of locations, and train a new model with the merged observations and the tuned configuration. At this point we may have removed variables using the varSel or reduceVar function. If so, we cannot directly merge the initial datasets, which contain all the environmental variables. Instead, we extract the training dataset with the selected variables from the output of the experiment and merge it with the validation dataset using the function mergeSWD:

# Index of the best model in the experiment
index <- which.max(exp_6@results$test_AUC)

# New train dataset containing only the selected variables
new_train <- exp_6@models[[index]]@data 

# Merge only presence data
merged_data <- mergeSWD(new_train,
                        val,
                        only_presence = TRUE)

The val dataset contains all the initial environmental variables, but the mergeSWD function merges only the variables that are present in both datasets (in case you have performed variable selection).

Then we take the model configuration from experiment 6 and train the final model:

final_model <- train("Maxnet",
                     data = merged_data,
                     fc = exp_6@results[index, 1],
                     reg = exp_6@results[index, 2])

Now we can evaluate the final model using the held-apart testing dataset:

auc(final_model, test = test)
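
If you also want a threshold-dependent metric, the same model can be evaluated with the TSS (an optional extra check):

tss(final_model, test = test)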

Hyperparameter tuning with cross validation

Another approach is to split the data into two partitions, training and testing, use cross validation on the training dataset to tune the model hyperparameters, and evaluate the tuned model on the unseen, held-apart testing dataset. Here we reuse the train and test datasets created at the beginning:

# Create the folds from the training dataset
folds <- randomFolds(train,
                     k = 4,
                     only_presence = TRUE,
                     seed = 25)

# Train the model
cv_model <- train("Maxent",
                  data = train,
                  folds = folds)
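
For an SDMmodelCV object the evaluation metrics are averaged across the k folds; for example, the mean training and testing AUC can be computed as follows (assuming test = TRUE selects the testing folds, as described in the basic-use vignette):

# Mean training and testing AUC across the 4 folds
auc(cv_model)
auc(cv_model, test = TRUE)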

All the previous examples can also be applied to cross validation; here is an example with randomSearch (note that in this case the testing dataset is not provided, as it is taken from the folds stored in the SDMmodelCV object):

h <- list(reg = seq(0.2, 5, 0.2),
          fc = c("l", "lq", "lh", "lp", "lqp", "lqph"))

exp_8 <- randomSearch(cv_model,
                      hypers = h,
                      metric = "auc",
                      pop = 10,
                      seed = 65466)

The function randomSearch orders the models according to the value of the chosen metric on the testing dataset. In this case we can use the combineCV function to obtain, from the best cross-validated model, a single model without cross validation:

final_model <- combineCV(exp_8@models[[1]])

auc(final_model, test = test)

Chollet, François, and J. J. Allaire. 2018. Deep Learning with R. 1st ed. Manning Publications Co.
Müller, Andreas C., and Sarah Guido. 2016. Introduction to Machine Learning with Python: A Guide for Data Scientists. O’Reilly Media.
