Closed — mcschmitz closed this issue 5 years ago
No, you can only choose one level. Complex arrangements like the one you describe are not supported, and there are no plans to add such behavior in the future.
For multilevel parallelization please have a look at mlr3 and future.
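For reference, the future package supports nested parallelization via a list of plans, which is the mechanism mlr3 builds on. A minimal sketch, with illustrative worker counts (2 outer × 4 inner, matching roughly 8 cores):

```r
library(future)

# Nested topology: the first element applies to the outer loop,
# the second to loops nested inside each outer worker.
# Worker counts here are assumptions for illustration only.
plan(list(
  tweak(multisession, workers = 2),  # outer resampling loop
  tweak(multisession, workers = 4)   # inner resampling loop
))
```

With such a plan, mlr3's resampling can use both levels instead of saturating only one.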
Is there any possibility to skip a parallelization level?
For example, suppose one wants to tune a learner via nested resampling with an inner 10-fold CV and an outer 3-fold CV. With 8 cores available, parallelizing the outer CV would distribute only 3 elements across 8 cores, which wastes a lot of computing power. Of course, you could set the parallelization level to "mlr.tuneParams" to make better use of the cores, but not if the tuning is done via MBO. So is there any way to tell parallelMap to use the "mlr.resample" level but to skip the first (outer) resample task and parallelize the inner CV instead?
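To make the setup concrete, here is a sketch of how a single level is selected in parallelMap (level identifiers as named in the question; note this selects a whole level and, per the answer above, cannot skip the outer resample):

```r
library(parallelMap)

# Restrict parallelization to one named level. With "mlr.resample",
# parallelMap parallelizes the first resample loop it encounters
# (the outer 3-fold CV), leaving the inner 10-fold CV sequential.
# Core count (8) is taken from the example in the question.
parallelStartSocket(cpus = 8, level = "mlr.resample")

# ... run the nested resampling here ...

parallelStop()
```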
Thanks for your help :)