In `tuneLearn`, each time a worker is called it creates a bootstrap dataset and uses it for all values of sigma, so each core creates only one copy of the data.
In `tuneLearnFast`, the bootstrap datasets are created centrally and the whole list is passed to each node. So, if we have N cores and N bootstrap datasets, the memory requirement is roughly N times larger in `tuneLearnFast` than in `tuneLearn`.
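
For illustration only, here is a minimal sketch of that pattern with the `parallel` package; the 4-node cluster, the `bootList` object and the toy statistic are made up and are not qgam code:

```r
library(parallel)

cl <- makeCluster(4)                      # hypothetical 4-node cluster

## Hypothetical stand-in for the bootstrap datasets built centrally.
bootList <- lapply(1:4, function(i) data.frame(y = rnorm(1e5)))

## Exporting the whole list means every node holds all N datasets,
## i.e. roughly N times the memory of a single bootstrap copy.
clusterExport(cl, "bootList")
res <- parLapply(cl, 1:4, function(i) mean(bootList[[i]]$y))

stopCluster(cl)
```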
It would be better to export each bootstrap dataset only to the specific core that will handle it. I am not sure `clusterExport` can be used to export objects to a single core of a cluster.
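
One way to get behaviour like that, sketched below with the same made-up names (again not qgam code), is `parallel::clusterApply()`, which sends element i of the list only to the node that evaluates it, so no node ever receives the whole list:

```r
library(parallel)

cl <- makeCluster(4)
bootList <- lapply(1:4, function(i) data.frame(y = rnorm(1e5)))

## clusterApply() ships bootList[[i]] only to the node that works on it,
## so each node holds a single bootstrap dataset rather than the full list.
res <- clusterApply(cl, bootList, function(d) mean(d$y))

stopCluster(cl)
```

Alternatively, it may be worth checking whether passing a one-node subset of the cluster (e.g. `cl[1]`) to `clusterExport` places an object on just that node; I have not verified that this works.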