
mirai - Torch Integration

Custom serialization functions may be registered to handle reference objects, such as those backed by external pointers, which base R serialization cannot otherwise preserve.

This allows tensors from the torch package to be used seamlessly in ‘mirai’ computations.

Setup Steps

  1. Set up daemons.

  2. Create the serialization configuration, specifying ‘class’ as ‘torch_tensor’ and ‘vec’ as TRUE.

  3. Use everywhere(), supplying the configuration to the ‘.serial’ argument, and (optionally) making the torch package available on all daemons for convenience.

library(mirai)
library(torch)

daemons(1)
#> [1] 1

cfg <- serial_config(
  class = "torch_tensor",
  sfunc = torch::torch_serialize,
  ufunc = torch::torch_load,
  vec = TRUE
)

everywhere(library(torch), .serial = cfg)
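With the configuration registered on all daemons, tensors should round-trip transparently. As an illustrative sketch (not part of the original vignette), a quick check might look like the following, where a tensor is created on the host, doubled in a daemon, and returned intact rather than as a null external pointer:

```r
# Hypothetical sanity check: send a tensor to a daemon, compute on it
# there, and receive a valid torch_tensor back.
x <- torch_tensor(c(1, 2, 3))
m <- mirai(x * 2, x = x)
m[]  # a torch_tensor holding 2, 4, 6
```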

Example Usage

The example below creates a convolutional neural network using torch::nn_module().

A set of model parameters is also specified.

The model specification and parameters are then passed to and initialized within a ‘mirai’.

model <- nn_module(
  initialize = function(in_size, out_size) {
    self$conv1 <- nn_conv2d(in_size, out_size, 5)
    self$conv2 <- nn_conv2d(in_size, out_size, 5)
  },
  forward = function(x) {
    x <- self$conv1(x)
    x <- nnf_relu(x)
    x <- self$conv2(x)
    x <- nnf_relu(x)
    x
  }
)

params <- list(in_size = 1, out_size = 20)

m <- mirai(do.call(model, params), model = model, params = params)

m[]
#> An `nn_module` containing 1,040 parameters.
#> 
#> ── Modules ────────────────────────────────────────────────────────────────────────────────────────────────────────────
#> • conv1: <nn_conv2d> #520 parameters
#> • conv2: <nn_conv2d> #520 parameters

The returned model is an object containing many tensor elements.

m$data$parameters$conv1.weight
#> torch_tensor
#> (1,1,.,.) = 
#>   0.1090  0.0691 -0.0591 -0.0461 -0.0532
#>  -0.0153  0.1228  0.1182 -0.0665  0.0505
#>  -0.0303  0.0163 -0.0647 -0.1798 -0.0441
#>  -0.0588  0.0846  0.0857  0.0327  0.1972
#>   0.1442  0.0955 -0.1682 -0.0183  0.0960
#> 
#> (2,1,.,.) = 
#>  -0.0650  0.0633  0.0921 -0.0372  0.1392
#>  -0.0493 -0.0742 -0.1552 -0.0638 -0.0708
#>  -0.0113 -0.1114 -0.0013 -0.0260 -0.0838
#>  -0.0292  0.0165 -0.1340 -0.0556  0.0925
#>  -0.0394  0.0905  0.1140 -0.1017  0.0363
#> 
#> (3,1,.,.) = 
#>  -0.1762  0.0509 -0.1795  0.1617 -0.1282
#>   0.1735 -0.1951 -0.1044  0.1623 -0.1978
#>   0.1982 -0.1127 -0.1133 -0.0947 -0.0160
#>   0.1135  0.0198 -0.0254  0.0281 -0.0520
#>   0.0794 -0.0114 -0.1520  0.0267 -0.1980
#> 
#> (4,1,.,.) = 
#>   0.0412  0.0476  0.1843 -0.0444  0.0996
#>   0.0813  0.1186  0.1490 -0.1211 -0.0169
#>   0.0239  0.0793 -0.0484  0.0478  0.1343
#>   0.1461  0.1949  0.0382  0.0634  0.1292
#>   0.0958  0.0273  0.1933  0.0691  0.0905
#> 
#> (5,1,.,.) = 
#>  -0.0724  0.1578 -0.0650 -0.0328  0.1338
#> ... [the output was truncated (use n=-1 to disable)]
#> [ CPUFloatType{20,1,5,5} ][ requires_grad = TRUE ]

Model parameters are then typically passed to an optimiser.

This can also be initialized within a ‘mirai’ process.

optim <- mirai(optim_rmsprop(params = params), params = m$data$parameters)

optim[]
#> <optim_rmsprop>
#>   Inherits from: <torch_optimizer>
#>   Public:
#>     add_param_group: function (param_group) 
#>     clone: function (deep = FALSE) 
#>     defaults: list
#>     initialize: function (params, lr = 0.01, alpha = 0.99, eps = 1e-08, weight_decay = 0, 
#>     load_state_dict: function (state_dict, ..., .refer_to_state_dict = FALSE) 
#>     param_groups: list
#>     state: State, R6
#>     state_dict: function () 
#>     step: function (closure = NULL) 
#>     zero_grad: function () 
#>   Private:
#>     step_helper: function (closure, loop_fun)

daemons(0)
#> [1] 0

Above, tensors and complex objects containing tensors were passed seamlessly between host and daemon processes, in the same way as any other R object.

The custom serialization in mirai leverages R’s own native ‘refhook’ mechanism to allow such completely transparent usage. It is designed to be fast and efficient: data copies are minimised, and the ‘official’ serialization methods from the torch package are used directly.
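The mechanism is not specific to torch: any reference class backed by external pointers can be registered in the same way, provided suitable serialization and unserialization functions exist. As an illustrative sketch, assuming the arrow package (whose write_to_raw() and read_ipc_stream() functions play the equivalent roles for Arrow data):

```r
# Sketch: register serialization for Arrow Tables and RecordBatches,
# which share the "ArrowTabular" class and are also reference objects
# backed by external pointers.
cfg_arrow <- serial_config(
  class = "ArrowTabular",
  sfunc = arrow::write_to_raw,
  ufunc = arrow::read_ipc_stream
)
```

The configuration would then be supplied to everywhere() in the same way as the torch configuration above.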
