Closed: Fred-White94 closed this issue 3 years ago
Hi - are you talking about the node for the "main thread" that runs snakemake and starts all the jobs, or do you mean the node for each job that is submitted? -A
Short answer: currently neither is implemented, but I can tell you what to change for either option, and I can put it into the next release (due next week).
I actually wanted to know for both. I guess forcing everything onto a single node happens with bigCores: 1, but I'm not sure where to specify the exact node.
okay, so, since this is not currently implemented, this is a bit of a hack:
You can change the "partition" variable in config/slurm.config or config/slurm_simple.config (or whichever file you are using: xxx.config, where xxx is the value of the SCHEDULER variable in VARIABLE_CONFIG) to the argument and value you would usually give to your slurm setup (e.g. "--nodelist=favNode"). This will hand "--nodelist=favNode" to every submit command except the "main thread".
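A minimal sketch of what that edit might look like (favNode is a placeholder for your node's hostname, and the exact key layout depends on your dadasnake version):

```yaml
# config/slurm.config (or slurm_simple.config) -- hypothetical excerpt.
# The value of "partition" is appended to every per-rule submit command,
# so an extra sbatch option can be smuggled in here:
partition: "--nodelist=favNode"
```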
For the main thread, you would have to add "--nodelist=favNode" to the SUBMIT_COMMAND variable in VARIABLE_CONFIG. I am assuming it currently says sbatch for you; you would change it to sbatch --nodelist=favNode.
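Based on that description, the VARIABLE_CONFIG entry might then look something like this (the exact layout of VARIABLE_CONFIG depends on your dadasnake version; favNode is again a placeholder):

```
SUBMIT_COMMAND    sbatch --nodelist=favNode
```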
ok great! thanks for your help.
Cheers, Fred.
Oh, I just realized you could also use that second trick in config/slurm.config: replace call: "sbatch" with call: "sbatch --nodelist=favNode" (that way you can keep "partition" flexible for different rules).
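Putting the node pin on "call" instead of "partition" might look like this (a hypothetical excerpt; the partition value shown is only an example of something you might still want to vary per rule):

```yaml
# config/slurm.config -- hypothetical excerpt
call: "sbatch --nodelist=favNode"   # node is pinned for every submission...
partition: "-p myPartition"         # ...while the partition stays adjustable
```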
let me know if it works.
Your edits did run, but I hit an error with rule filter_numbers. I saw that there was a recent commit to this rule and updated to the latest commit; now I get an error immediately at line 2 of the Snakefile...
I'll have a look into it..
Sorry about that. Can you check out the last release?
Alternatively, you can check out the branch "runningNow", which runs at my end. Sorry for a non-working version in master.
Okay, forget the last 2 comments. Actually, the config files are not backwards compatible. I guess you have a config file with something like filtering: trunc_qual: 2, but it now needs separate forward and reverse values: filtering: trunc_qual: fwd: 2 rvs: 2. (I'll allow both options in the next version to avoid this problem; for now you have to change the config file.)
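Spelled out as YAML, the change described above would look like this (the value 2 is just the example from this thread):

```yaml
# old format (no longer accepted):
# filtering:
#   trunc_qual: 2

# new format -- separate forward/reverse truncation qualities:
filtering:
  trunc_qual:
    fwd: 2
    rvs: 2
```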
I see, thanks for the info. I am quite a few releases behind, so I'm now reverting and recreating the environment. Will get back to you.
Okay, so I've just pushed the latest version. At least it will now warn you if the config file has a bad mismatch. It also has some new functionality. I have not actually added a new parameter to select nodes; I think including it right after the call is the easiest option.
So: the wrapper now supports submission to a specific node. I'll close this issue.
Hi, I just wanted to know whether it is possible, with the ./dadasnake command, to choose which worker node the pipeline runs on. Perhaps in VARIABLE_CONFIG?
Thanks in advance, Fred.