mschubert / clustermq

R package to send function calls as jobs on LSF, SGE, Slurm, PBS/Torque, or each via SSH
https://mschubert.github.io/clustermq/
Apache License 2.0

register_dopar_cmq's n_jobs vs template cores #225

Closed · liutiming closed this issue 3 years ago

liutiming commented 3 years ago

Thanks for this helpful package again.

I am not sure how foreach jobs will be submitted when both n_jobs and cores are given. Does one override the other?

mschubert commented 3 years ago

n_jobs is the number of jobs submitted; cores is the number of cores requested per job.

So, for instance, you can use mclapply within each job, which then makes use of those cores. But generally, I would recommend parallelizing across jobs with one core each.
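
A minimal sketch of the cores-per-job pattern, assuming a scheduler template that exposes a `cores` field (as in the issue title); the numbers 4 and 8 are arbitrary example values, not recommendations:

```r
library(foreach)
library(clustermq)

# 4 jobs, each requesting 8 cores via the scheduler template
register_dopar_cmq(n_jobs = 4, template = list(cores = 8))

res = foreach(x = 1:4) %dopar% {
    # each foreach iteration runs as one clustermq job; mclapply spreads
    # the work across the cores requested for that job
    parallel::mclapply(1:8, function(i) sqrt(x * i), mc.cores = 8)
}
```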

Hope that helps :smile:

liutiming commented 3 years ago

Thanks that does help a lot!

So if I submit n_jobs = 32 and cores = 32, it will actually request 32*32 cores, right? So, as you said, the wise thing to do is n_jobs = 32 and cores = 1, since whatever is inside the foreach loop may not parallelize well...

mschubert commented 3 years ago

Yes, exactly!

This is also true when using Q directly, not only for foreach.
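
For reference, a minimal sketch of the recommended setup with Q() directly; `fx` and the values are hypothetical, and `template = list(cores = 1)` assumes your scheduler template has a `cores` field (the shipped templates typically default to one core per job anyway):

```r
library(clustermq)

fx = function(x) sqrt(x)

# 32 single-core jobs: parallelism comes from the number of jobs,
# not from cores within a job
res = Q(fx, x = 1:1000, n_jobs = 32, template = list(cores = 1))
```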