Open · max-hence opened 2 weeks ago
It looks like the particular bwa_mem job whose log you posted is failing because there is an existing set of temp files from the samtools sort command, likely left over from a previous run that crashed before cleanup could finish. I would start by deleting the "SRR12460375.bam.tmp.*.bam" files and rerunning.
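For example, something like this (a minimal sketch; the results/ path is an assumption, so point it at wherever your bwa_map outputs actually live):

```bash
# Find and delete the leftover samtools sort temp files from the crashed run.
# The results/ path is an assumption -- adjust to your snpArcher working directory.
find results/ -name 'SRR12460375.bam.tmp.*.bam' -print -delete
```

Snakemake should then rerun the incomplete bwa_map job when you resume the workflow.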
This doesn't look like a Slurm/resources error, although I'm not entirely sure why the error from the command is not being propagated to Slurm.
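If you want to double-check what Slurm actually recorded for that job, sacct is the place to look (the job ID here is taken from the slurm_logs path in your post):

```bash
# Query Slurm's accounting records for the job's state and exit code.
# 13562883 is the job ID from the .snakemake/slurm_logs path below.
sacct -j 13562883 --format=JobID,JobName,State,ExitCode,Elapsed,MaxRSS
```

A FAILED state or non-zero ExitCode there would mean the failure is reaching Slurm; a COMPLETED state would suggest the error is being swallowed inside the job script.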
Hi,
I managed to make snpArcher work on datasets with medium-size genomes (400 Mb), but I get errors for bigger genomes (2 Gb), where jobs take too much time and too many resources. I think I set slurm/config.yaml properly to request large resources, and the cluster I am using is supposed to handle such settings, but I still get this kind of error, for instance at the bwa_map rule:
And in the .snakemake/slurm_logs/rule_bwa_map/GCA_902167145.1_SAMN15515513_SRR12460375/13562883.log:
But still, when I look at that particular job on the Slurm cluster, I find no errors:
Do you have any clue about what could cause such an error? I have attached the slurm/config.yaml in case it's needed: config.yaml.txt
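To show what I mean by asking for big resources, this is roughly how I override them for the heavy rules (an illustrative sketch only: the mem/runtime values here are made up, and `--set-resources` assumes a reasonably recent Snakemake):

```bash
# Resume the workflow, overriding per-rule resources for bwa_map.
# The mem_mb and runtime (minutes) values are illustrative, not my real settings.
snakemake --profile slurm \
  --set-resources bwa_map:mem_mb=64000 bwa_map:runtime=2880
```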
Thank you very much,
Max Brault