Closed bentsherman closed 1 year ago
Another downside of this situation is that when you specify cpus through `clusterOptions` instead of the `cpus` directive, `task.cpus` does not accurately reflect the actual number of CPUs, so you can't do multiprocessing if you have to provide `task.cpus` as a command-line argument in the task script.
To my surprise, my sys admins actually removed the interconnect requirement at my request, since "any" is a sensible default, so this issue is no longer urgent for me. That being said, I think it's still an interesting issue to consider so I'll leave it open.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Bump
I have a feature request and some discussion around the pbspro executor and how resource settings are determined from `cpus`, `memory`, and `clusterOptions`.

So we have a pipeline with the standard sort of resource settings:
And in the spirit of nf-core we are developing an institutional config file with settings for our PBS Pro scheduler. Now a wonky thing about our scheduler is that we must specify the interconnect, even if we don't care and just say "any". Our admins have been pretty adamant about this rule. So we put that here:
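A hedged sketch of what such an institutional config could look like (the exact `interconnect=any` resource name is an assumption based on the description above):

```groovy
// Hypothetical institutional config fragment: request an interconnect
// via clusterOptions, since there is no dedicated directive for it.
process {
    clusterOptions = '-l select=1:interconnect=any'
}
```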
These settings in combination produce the following kind of pbs headers:
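For example, headers along these lines (illustrative values; note the two separate select lines, one generated from the directives and one from `clusterOptions`):

```
#PBS -l select=1:ncpus=4:mem=16gb
#PBS -l select=1:interconnect=any
#PBS -l walltime=08:00:00
```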
In this situation, PBS seems to ignore the second select line. It gets the right cpus, memory, and walltime, but it never includes the interconnect setting and so the job is rejected. (Actually the job is still accepted if ncpus=1, but I think that is also an artifact of our particular cluster.)
So now I have to null the `cpus` and `memory` directives and instead specify them in `clusterOptions`:
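A sketch of that workaround (illustrative values; the whole select statement, interconnect included, is written by hand):

```groovy
// Hypothetical workaround config: disable the resource directives and
// pack cpus, memory, and interconnect into a single clusterOptions string.
process {
    cpus   = null
    memory = null
    clusterOptions = '-l select=1:ncpus=4:mem=16gb:interconnect=any'
}
```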
But at this point things get really hairy because I end up having to do the same kind of thing for any withLabel or withName rules in the pipeline config. The pipeline in question is working towards nf-core compatibility so they aren't going to maintain platform-specific profiles, and I don't think the sys admins are going to budge on this weird interconnect rule.
So I'm wondering if we can solve my problem in Nextflow? It seems to me that if `cpus` and/or `memory` are defined in addition to `clusterOptions`, then the resulting select lines should be merged into a single select statement. Headers merged that way would work for me. The walltime directive can be left alone, because the walltime setting is not tied to the select. This is all just for the `pbspro` executor, although I imagine we could have this same discussion for `pbs` and really any other executor that supports `clusterOptions`.
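A sketch of the merged headers proposed above (illustrative values; the chunk resources derived from `cpus`/`memory` and those given in `clusterOptions` are combined into one select line):

```
#PBS -l select=1:ncpus=4:mem=16gb:interconnect=any
#PBS -l walltime=08:00:00
```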