Open jolespin opened 1 year ago
22:04:16.761
The timestamp is when the step started to run. Any particular reason why you are using only 1 CPU thread? Everything will be painfully slow given the size of your dataset. I believe it is a timeout issue, as both spades.py and the underlying tools were killed at the same time, hence no explicit error.
I was worried about memory issues and wasn't sure how the memory scales w/ threads.
I would suggest dealing with memory issues as they appear. Right now you are wasting a lot of computational time by using a single thread and ending up with timeouts as a result.
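Following that advice, a resubmission would raise the thread count while keeping the hard memory cap. A minimal sketch; the thread count, memory value, and shortened paths below are placeholders for illustration, not values from the original run:

```shell
# Placeholder values -- set these to match the cores/RAM actually
# requested from the scheduler (e.g. SLURM --cpus-per-task / --mem).
THREADS=16
MEMORY_GB=200

# Shortened, hypothetical paths for illustration.
metaspades.py \
  -o veba_output/assembly/SAMPLE/intermediate/1__assembly \
  -1 cleaned_1.fastq.gz \
  -2 cleaned_2.fastq.gz \
  --threads "$THREADS" \
  --memory "$MEMORY_GB"
```

Note that `--memory` is a single cap in GB for the whole run (SPAdes terminates itself if it is exceeded), so raising `--threads` does not multiply the memory limit.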
Description of bug
Here's the command:
Basically, we tried to pick up from a previous run and the job stopped without an error message. We requested 48 hours of run time, but the time in the log only got to
22:04:16.761
so I don't think it was a timeout issue. It looks like K55 started:
I'm trying to figure out whether I should try to continue the jobs or switch to MEGAHIT (which I'd rather not do, since many of the jobs got pretty far).
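On continuing: SPAdes can resume an interrupted run from its checkpoints in the same output directory, which is what the restart-from=last line in params.txt reflects. A sketch, with the output path shortened and the thread count a placeholder:

```shell
# --continue resumes with the original parameters unchanged
# (only -o may be specified):
metaspades.py --continue -o veba_output/assembly/SAMPLE/intermediate/1__assembly

# --restart-from last also resumes from the last checkpoint, but
# allows updated options, e.g. a higher thread count:
metaspades.py --restart-from last --threads 16 \
  -o veba_output/assembly/SAMPLE/intermediate/1__assembly
```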
spades.log
spades.log.zip
params.txt
(base) [jespinoz@exp-15-01 Commands]$ cat ../veba_output/assembly/LAM_Final_B5_1_sed_S5/intermediate/1__assembly/params.txt
Command line: /expanse/projects/jcl122/miniconda3/envs/VEBA-assembly_env/bin/metaspades.py -o /expanse/projects/jcl122/vesta/veba_output/assembly/LAM_Final_B5_1_sed_S5/intermediate/1__assembly -1 /expanse/projects/jcl122/vesta/veba_output/preprocess/LAM_Final_B5_1_sed_S5/output/cleaned_1.fastq.gz -2 /expanse/projects/jcl122/vesta/veba_output/preprocess/LAM_Final_B5_1_sed_S5/output/cleaned_2.fastq.gz --tmp-dir /expanse/projects/jcl122/vesta/veba_output/assembly/LAM_Final_B5_1_sed_S5/tmp/assembly --threads 1 --memory 128 --tmp-dir ../veba_output/assembly/LAM_Final_B5_1_sed_S5/tmp/assembly --threads 1 --memory 243 --tmp-dir ../veba_output/assembly/LAM_Final_B5_1_sed_S5/tmp/assembly --threads 1 --memory 200
Restart-from=last with updated parameters: --tmp-dir ../veba_output/assembly/LAM_Final_B5_1_sed_S5/tmp/assembly --threads 1 --memory 200
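Note that params.txt accumulates the flags from every restart onto one command line; for a repeated flag, the last occurrence is the one in effect (here --threads 1 and --memory 200). A small sketch to pull out the last value of a flag; `effective_flag` is a hypothetical helper for illustration, not part of SPAdes:

```shell
# Print the value following the LAST occurrence of a flag in a
# params.txt-style command line (one token per line after tr).
effective_flag() {
  flag="$1"; file="$2"
  tr ' ' '\n' < "$file" | grep -A1 -- "$flag" | tail -n1
}

effective_flag --memory params.txt    # with the params.txt above: 200
effective_flag --threads params.txt   # with the params.txt above: 1
```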
System information:
  SPAdes version: 3.15.5
  Python version: 3.9.15
  OS: Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28

Output dir: /expanse/projects/jcl122/vesta/veba_output/assembly/LAM_Final_B5_1_sed_S5/intermediate/1__assembly
Mode: read error correction and assembling
Debug mode is turned OFF

Dataset parameters:
  Metagenomic mode
  Reads:
    Library number: 1, library type: paired-end
      orientation: fr
      left reads: ['/expanse/projects/jcl122/vesta/veba_output/preprocess/LAM_Final_B5_1_sed_S5/output/cleaned_1.fastq.gz']
      right reads: ['/expanse/projects/jcl122/vesta/veba_output/preprocess/LAM_Final_B5_1_sed_S5/output/cleaned_2.fastq.gz']
      interlaced reads: not specified
      single reads: not specified
      merged reads: not specified
Read error correction parameters:
  Iterations: 1
  PHRED offset will be auto-detected
  Corrected reads will be compressed
Assembly parameters:
  k: [21, 33, 55]
  Repeat resolution is enabled
  Mismatch careful mode is turned OFF
  MismatchCorrector will be SKIPPED
  Coverage cutoff is turned OFF
Other parameters:
  Dir for temp files: /expanse/projects/jcl122/vesta/veba_output/assembly/LAM_Final_B5_1_sed_S5/tmp/assembly
  Threads: 1
  Memory limit (in Gb): 200
SPAdes version
SPAdes v3.15.5
Operating System
Linux-4.18.0-477.15.1.el8_8.x86_64-x86_64-with-glibc2.28
Python Version
3.9.15
Method of SPAdes installation
conda
No errors reported in spades.log