A new R6 and much more modular implementation for single- and multi-objective Bayesian Optimization.

The best entry point to get familiar with mlr3mbo is provided via the Bayesian Optimization chapter in the mlr3book.
mlr3mbo is built modularly, relying on the following R6 classes:

- Surrogate: Surrogate Model
- AcqFunction: Acquisition Function
- AcqOptimizer: Acquisition Function Optimizer

Based on these, Bayesian Optimization (BO) loops can be written; see, e.g., bayesopt_ego for sequential single-objective BO.
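To make these roles concrete, here is a minimal sketch of the three building blocks, constructed via the same sugar functions used in the full example further below; the concrete learner, acquisition function, and inner optimizer are illustrative choices, not prescribed defaults:

library(bbotk)
library(mlr3mbo)
library(mlr3learners)

# Surrogate: wraps an mlr3 regression learner (here a Gaussian process)
surrogate = srlrn(lrn("regr.km"))

# AcqFunction: Expected Improvement
acq_function = acqf("ei")

# AcqOptimizer: a bbotk Optimizer plus a Terminator for the inner optimization
acq_optimizer = acqo(
  opt("random_search"),
  terminator = trm("evals", n_evals = 1000)
)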
mlr3mbo also provides an OptimizerMbo class behaving like any other Optimizer from the bbotk package, as well as a TunerMbo class behaving like any other Tuner from the mlr3tuning package.
mlr3mbo uses sensible defaults for the Surrogate, AcqFunction, AcqOptimizer, and even the loop_function. See ?mbo_defaults for more details.
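For illustration, a minimal sketch of these convenience classes; any building block not specified explicitly is assumed to be filled in with the defaults described in ?mbo_defaults:

library(mlr3mbo)
library(mlr3tuning)

# OptimizerMbo via bbotk's sugar function; unspecified building blocks
# (surrogate, acquisition function, acquisition optimizer, loop_function)
# fall back to the defaults documented in ?mbo_defaults
optimizer = opt("mbo")

# TunerMbo via mlr3tuning's sugar function, usable like any other Tuner
tuner = tnr("mbo")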
Minimize the two-dimensional Branin function via sequential BO, using a GP as surrogate and EI as acquisition function optimized via local search:
library(bbotk)
library(data.table)
library(mlr3mbo)
library(mlr3learners)
set.seed(1)

# objective function: evaluates the Branin function on a data.table of points
fun = function(xdt) {
  y = branin(xdt[["x1"]], xdt[["x2"]])
  data.table(y = y)
}

# search space
domain = ps(
  x1 = p_dbl(-5, 10),
  x2 = p_dbl(0, 15)
)

# codomain: a single objective to be minimized
codomain = ps(
  y = p_dbl(tags = "minimize")
)

objective = ObjectiveRFunDt$new(
  fun = fun,
  domain = domain,
  codomain = codomain
)

instance = oi(
  objective = objective,
  terminator = trm("evals", n_evals = 25)
)

# Gaussian process surrogate
surrogate = srlrn(lrn("regr.km", control = list(trace = FALSE)))

# Expected Improvement acquisition function
acq_function = acqf("ei")

# local search for optimizing the acquisition function
acq_optimizer = acqo(
  opt("local_search", n_initial_points = 10, initial_random_sample_size = 1000, neighbors_per_point = 10),
  terminator = trm("evals", n_evals = 2000)
)

optimizer = opt("mbo",
  loop_function = bayesopt_ego,
  surrogate = surrogate,
  acq_function = acq_function,
  acq_optimizer = acq_optimizer
)

optimizer$optimize(instance)
## x1 x2 x_domain y
## <num> <num> <list> <num>
## 1: 3.104516 2.396279 <list[2]> 0.412985
We can quickly visualize the contours of the objective function (on log scale) as well as the sampling behavior of our BO run, with lighter blue colours indicating points that were evaluated in later stages of the optimization process (the first batch is given by the initial design).
library(ggplot2)

grid = generate_design_grid(instance$search_space, resolution = 1000L)$data
grid[, y := branin(x1 = x1, x2 = x2)]

ggplot(aes(x = x1, y = x2, z = log(y)), data = grid) +
  geom_contour(colour = "black") +
  geom_point(aes(x = x1, y = x2, colour = batch_nr), data = instance$archive$data) +
  labs(x = expression(x[1]), y = expression(x[2])) +
  theme_minimal() +
  theme(legend.position = "bottom")
Note that you can also use bb_optimize as a shorthand instead of constructing an optimization instance.
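For example, a minimal sketch, assuming bbotk's bb_optimize() interface (an objective taking a list of parameter values, box constraints passed via lower and upper, a result list containing the best parameters and value) and that method = "mbo" becomes available once mlr3mbo is loaded:

library(bbotk)
library(mlr3mbo)
library(mlr3learners)
set.seed(1)

# shorthand: no explicit optimization instance is constructed by hand
result = bb_optimize(
  function(xs) branin(xs[["x1"]], xs[["x2"]]),
  method = "mbo",
  lower = c(x1 = -5, x2 = 0),
  upper = c(x1 = 10, x2 = 15),
  max_evals = 25
)
result$par    # best configuration found (assumed result field)
result$value  # corresponding objective value (assumed result field)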
mlr3mbo can also be used for hyperparameter optimization, e.g., tuning the cp parameter of a classification tree on the pima task via TunerMbo using defaults:

library(mlr3)
library(mlr3learners)
library(mlr3tuning)
library(mlr3mbo)
set.seed(1)

task = tsk("pima")

# tune cp on a log scale
learner = lrn("classif.rpart", cp = to_tune(lower = 1e-04, upper = 1, logscale = TRUE))

instance = tune(
  tuner = tnr("mbo"),
  task = task,
  learner = learner,
  resampling = rsmp("holdout"),
  measure = msr("classif.ce"),
  term_evals = 10
)

instance$result
## cp learner_param_vals x_domain classif.ce
## <num> <list> <list> <num>
## 1: -6.188733 <list[2]> <list[1]> 0.2382812
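As a follow-up, a sketch of the usual mlr3tuning workflow (not specific to mlr3mbo): write the tuned value back into the learner and train a final model.

# standard mlr3tuning follow-up: set tuned hyperparameters, then retrain
learner$param_set$values = instance$result_learner_param_vals
learner$train(task)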