Closed dasmoth closed 2 years ago
I was planning to add this feature in the release along with array job support.
I've never used Slurm apart from reading the manual, so correct me if I'm wrong. Would it be sufficient to simply append extra arguments to the `sbatch` command, as in the Grid Engine runner? The first argument to `subprocess.run` would be `["sbatch", "--output=stdout", "--error=stderr", "--parsable", *sbatch_args]`.
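As a sketch of that idea (the `build_sbatch_command` and `submit` helper names are hypothetical, not the project's actual API), the user-supplied extras could be spliced in between the fixed options and the script path:

```python
import subprocess

def build_sbatch_command(script_path, sbatch_args=()):
    # Fixed options first, then any user-supplied extras, then the batch script.
    return ["sbatch", "--output=stdout", "--error=stderr", "--parsable",
            *sbatch_args, script_path]

def submit(script_path, sbatch_args=()):
    # --parsable makes sbatch print just the job id (plus cluster name, if any),
    # so the id can be recovered from stdout without scraping a sentence.
    result = subprocess.run(build_sbatch_command(script_path, sbatch_args),
                            capture_output=True, text=True, check=True)
    return result.stdout.strip().split(";")[0]
```

Keeping the extras before the script path matters: `sbatch` treats everything after the script as arguments to the script itself, not as submission options.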
Good to hear that this is coming. And yes, `sbatch` is quite a lot like `qsub`; providing a way to append an array of extra args should work fine.
I added an `sbatchargs` parameter that can be either an array of arguments or a shell-like string of arguments that will be appended to the `sbatch` command, just like for Grid Engine.
You can define it in the config file using an array:

```yaml
type: SlurmRunner
parameters:
  sbatchargs: [<arg>, <arg>, <arg>]
```

or using a string:

```yaml
type: SlurmRunner
parameters:
  sbatchargs: "<arg> <arg> <arg>"
```
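A minimal sketch of how the two forms might be normalized to a single argument list (the `normalize_sbatchargs` helper is hypothetical; the project may parse its config differently, but `shlex.split` is the standard way to honor shell-like quoting):

```python
import shlex

def normalize_sbatchargs(value):
    # A string is split with shell quoting rules, so quoted arguments
    # containing spaces survive as a single token.
    if isinstance(value, str):
        return shlex.split(value)
    # An array from the config is used as-is.
    return list(value)
```

Either way, the resulting list can be unpacked straight into the `sbatch` command line.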
I haven't tested it, but it was a simple fix and should work just fine.
Thanks very much. I've successfully added a resource requirement (`--mem=...`) to a `SlurmRunner` using this mechanism, and everything seems to be working.
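For reference, assuming the config format shown earlier in the thread, a setup along these lines would look something like the following (the `4G` value is purely illustrative):

```yaml
type: SlurmRunner
parameters:
  sbatchargs: ["--mem=4G"]
```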
Excellent
When using the `slurm` branch, there doesn't appear to be any mechanism for providing batch-job submission options (partition names, resource requirements, etc.). The similarly-architected `GridEngineRunner` allows a `qargs` parameter to be provided, which is inserted into the `qsub` command line. Could we have the same thing for `sbatch`?

I realize the Slurm runner is still under development, but this seems like it could be a worthwhile addition.