Closed annette987 closed 5 years ago
1) Please really check the submitted MRE directly before posting: the first code block is not syntactically valid
```r
parallelStart(mode = "multicore", cpus = 12, level = "mlr.resampe"
```
So I am guessing you edited your code before posting, without running it afterwards.
2) The process does not seem to crash but rather to block (infinitely?).
3) I could reproduce this exactly once on my machine, and that was before I updated all of my packages from CRAN. I cannot help if this is not reproducible.
I can keep this open for a few days, but see above.
@annette987 does the error really still occur if you update all packages from CRAN?
@mllg @pat-s can you please check on your side?
@annette987 also, as documented by randomForestSRC, the package uses internal parallelization via OpenMP. That might result in "hiccups" if parallelization is "on" in both parallelMap and rfSRC.
a) Use another simple (but tuned) learner in your example. Does the problem still occur? b) Turn off parallelization in rfSRC (you should do this anyway if you want to run a benchmark as above).
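For suggestion b), randomForestSRC reads its thread counts from R options; a sketch of disabling its internal parallelization before the benchmark (set these before the package does any work):

```r
# Restrict randomForestSRC's internal OpenMP / forking parallelization
# to a single core, so only parallelMap parallelizes the benchmark.
options(rf.cores = 1)  # OpenMP threads used by randomForestSRC
options(mc.cores = 1)  # cores used by its mclapply-based code paths
```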
Thank you very much for all your suggestions. After updating all packages from CRAN the problem is no longer occurring. Sorry I did not think to do this before contacting you.
I am using mlr to run a benchmark of several different learners and want to implement parallelization. If I set the level in parallelStart() to either mlr.resample or mlr.tuneParams, then pass to benchmark() a learner with only one level of resampling, followed by a learner using nested resampling, such as via makeTuneWrapper(), then the program silently crashes after starting to benchmark the second learner.
If I pass to benchmark() the learner using nested resampling first, followed by the learner without nesting, all is fine. If I pass it just the learner using nested resampling on its own, or just the other learner on its own, all is fine.
Here is a minimal example that causes a crash:
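A minimal sketch of the kind of setup described above (the task, learner, and tuning choices here are illustrative assumptions, not the original code):

```r
library(mlr)
library(parallelMap)

# Learner with a single level of resampling (learner choice is illustrative)
lrn_plain <- makeLearner("classif.rpart")

# Tuned learner, i.e. nested resampling via makeTuneWrapper()
ps <- makeParamSet(makeDiscreteParam("cp", values = c(0.01, 0.05, 0.1)))
lrn_tuned <- makeTuneWrapper(
  makeLearner("classif.rpart"),
  resampling = makeResampleDesc("CV", iters = 3),
  par.set    = ps,
  control    = makeTuneControlGrid()
)

parallelStart(mode = "multicore", cpus = 12, level = "mlr.resample")

# Plain learner first, tuned learner second:
# the reported hang occurs while benchmarking the second learner.
benchmark(
  learners    = list(lrn_plain, lrn_tuned),
  tasks       = iris.task,
  resamplings = makeResampleDesc("CV", iters = 5)
)

parallelStop()
```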
and here is the output:
If I simply swap the order in which the learners are passed to benchmark(), the program completes and I get the following output:
EDIT: I just tried passing two tuned learners to benchmark() and it runs the first one but crashes on the second. So perhaps it is not about switching levels of resampling after all.