Closed jemus42 closed 2 years ago
Seems like using `set.seed()` and `torch_manual_seed()` in combination with `num_threads = 1` behaves as expected: results (checked by predictions) are identical between two separate runs.
See https://github.com/mlr-org/mlr3torch/blob/main/attic/threading-repro.R
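A minimal sketch of that setup (not the linked script itself; the toy `nn_linear` model and seed value are illustrative, assuming the {torch} package is installed):

```r
library(torch)

torch_set_num_threads(1)  # single-threaded, as described above

run_once <- function() {
  # seed both RNGs; as noted below, torch_manual_seed() may be redundant here
  set.seed(42)
  torch_manual_seed(42)
  net <- nn_linear(10, 1)
  as.numeric(net(torch_randn(5, 10)))
}

# two separate runs produce identical predictions
stopifnot(identical(run_once(), run_once()))
```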
Addendum: It appears `torch::torch_manual_seed()` is not required in this scenario. Neat.
This also affects parallelization within {batchtools}, but apparently using the SSH cluster functions with `localhost` works fine, as this (if I understand correctly) works similarly to `future::plan("multisession")`, i.e. it spawns fresh R processes instead of forking.
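A hedged sketch of that SSH-on-localhost workaround (the exact `Worker` constructor arguments are an assumption and may differ between {batchtools} versions):

```r
library(batchtools)

# temporary registry for illustration
reg <- makeRegistry(file.dir = NA)

# run jobs via SSH to localhost, which launches new R sessions
# rather than forking the current one
reg$cluster.functions <- makeClusterFunctionsSSH(
  workers = list(Worker$new("localhost", ncpus = 2))
)
```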
See https://github.com/pytorch/pytorch/wiki/Autograd-and-Fork
I think this disqualifies `future::plan("multicore")`: when I tried a CV with this plan, I got an error. Testing `future::plan("multisession")` seems to run at least without error. At the very least I should keep an eye on this and make sure it's documented (e.g. a vignette on resampling with
{mlr3torch}
in general)
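For reference, the multisession setup that seems to work could look like this (a sketch using a plain {mlr3} learner, since no specific {mlr3torch} learner is named in this issue):

```r
library(mlr3)

# process-based parallelism: avoids forking, which torch/autograd
# does not support (see the pytorch wiki link above)
future::plan("multisession")

task    <- tsk("iris")
learner <- lrn("classif.rpart")
rr      <- resample(task, learner, rsmp("cv", folds = 3))
rr$aggregate()
```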