Closed AxelVaillant closed 1 year ago
Hi @AxelVaillant,
the error you are getting is still an out-of-memory problem, so the only two solutions are to either increase the memory even more, or to try another mapping tool that grenepipe offers.
To what value have you increased your memory? I have worked with datasets where 25GB (or more, I can't quite remember) was needed. If you think that the new memory setting is somehow not being used by the pipeline, you can also share your cluster_config.yaml here, so we can see if everything is all right with that file.
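For reference, raising the memory for the mapping step in cluster_config.yaml might look like the following sketch. The rule name map_reads follows the config shown later in this thread; the 50G value is an arbitrary example to adapt to your data and your cluster's limits:

```yaml
# Sketch of a cluster_config.yaml fragment; values are examples only.
__default__:
    mem: 10G          # default for all rules
map_reads:
    mem: 50G          # give the bwa mem jobs more memory than the default
    cpus-per-task: 4
```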
Cheers and so long Lucas
Ok, this time I increased the memory limit to 25G, and I still get the exact same error, but without the last line "slurmstepd error oom-kill event..".
Here is the content of my cluster_config.yaml:
```yaml
__default__:
    time: 600    # Default time (minutes). A time limit of zero requests that no time limit be imposed.
    mem: 25G     # Default memory. A memory size specification of zero grants the job access to all of the memory on each node.
    cpus-per-task: 1
    nodes: 1
    ntasks: 1
    account: arabreed
    partition: tests
trim_reads_se:
    mem: 25G
    cpus-per-task: 4
trim_reads_pe:
    mem: 25G
    cpus-per-task: 4
map_reads:
    meme: 25G
    cpus-per-task: 4
call_variants:
    time: 1-0
    cpus-per-task: 4
```
Okay, that file looks all right. If you don't get the out-of-memory error any more, it might be something else (or it is still out of memory, but that last line somehow does not get printed). Have you checked the log file produced by the mapping itself, logs/bwa-mem/ARP-28-c20_S343-1.log?
Edit: See also the troubleshooting page for other things you can check out. If tracking down the log files does not reveal the error, you can also try to run bwa mem directly with the file that is causing trouble, and see if that works - that would at least tell you whether the problem is with bwa mem and/or your files, or with grenepipe.
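Running the mapper by hand could look like the following sketch. All file names here are placeholders, so substitute your reference genome and the fastq files of the failing sample (the name of the .log file tells you which one that is), and activate the conda environment that contains bwa first:

```shell
#!/usr/bin/env bash
# Sketch: run bwa mem directly, outside of grenepipe, on the failing sample.
# All file names are placeholders -- substitute your own files.
REF=genome.fa
R1=sample_R1.fastq.gz
R2=sample_R2.fastq.gz

# -t 4 mirrors the cpus-per-task setting in the cluster config above.
CMD="bwa mem -t 4 $REF $R1 $R2"
echo "$CMD > sample.sam"

# Only attempt the real run if bwa is available on the PATH.
if command -v bwa >/dev/null 2>&1; then
    $CMD > sample.sam
fi
```

If this standalone run also crashes or gets killed, the problem lies with bwa mem or the input files themselves, rather than with grenepipe.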
This can all be quite tricky, but as said on the troubleshooting page, it's a necessary evil that comes from stringing together many different tools, each with its own little problems, which in combination can cause a lot of different issues... :-(
Hi @AxelVaillant, any update on this?
Hi, unfortunately I didn't manage to solve my problems, so I gave up on running the pipeline on a cluster. Anyway, thank you for your help!
Hi @AxelVaillant,
I am sorry to hear! If you have a moment, I'd be interested in a bit of feedback, in order to improve grenepipe: Was this still due to the errors above? Have you tried running the tool causing the error on its own (outside of grenepipe) to check if that works? From what I can see above, it was just an out-of-memory issue, so hopefully fixable (unless your cluster does not offer enough memory, but that seems unlikely).
If you have any suggestions on what needs to be fixed in grenepipe to get this to work for you (if this is due to grenepipe), I'd be grateful!
Cheers, thank you, and so long Lucas
Hello, I'm trying to run the pipeline on a cluster, but I keep getting an error in the mapping job. It seems that the error is related to the wrapper "0.80.0/bio/bwa/mem" in mapping-bwa-mem.smk. I tried to increase the memory in cluster_config.yaml, but it didn't work.
I am launching the pipeline with these options: snakemake --conda-frontend mamba --conda-prefix ~/scratch/conda-envs --profile profiles/slurm/ --directory ../OutputGrenepipe
The error message is the following:
Thank you!