DerrickWood / kraken2

The second version of the Kraken taxonomic sequence classification system
MIT License

xargs: cat: terminated by signal 13 #507

Open kdbchau opened 3 years ago

kdbchau commented 3 years ago

Using kraken2 version 2.1.2.

My code:

kraken2-build --build --db /scratch/chauk/kraken2/kraken_nt/nt --threads 32

The error (even after trying different --threads values):

Sequence ID to taxonomy ID map already present, skipping map creation.
Estimating required capacity (step 2)...
xargs: cat: terminated by signal 13

I have seen this issue posted a few times before, but with no clear solution/fix.

lynngao commented 3 years ago

Have you tried allocating more memory to the job?

kdbchau commented 3 years ago

How would I do that? I don't see any parameters for kraken2-build that alter memory usage, aside from maybe --max-db-size.

lynngao commented 3 years ago

No, it's not a parameter of the kraken2-build command. You need to allocate memory to the job that runs the command, something like this:

#SBATCH --mem=60GB
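
For readers unfamiliar with Slurm, a minimal batch script along these lines would apply that suggestion; the job name, partition, memory value, and time limit below are placeholders rather than values from this thread, and the memory request should exceed the hash table estimate that kraken2-build prints (about 345 GB for nt later in this thread):

#!/bin/bash
#SBATCH --job-name=kraken2-build-nt
#SBATCH --cpus-per-task=32
#SBATCH --mem=400G                  # must exceed the "Estimated hash table requirement"
#SBATCH --time=7-00:00:00           # nt builds can take days
#SBATCH --partition=bigmem          # placeholder; use your cluster's large-memory partition

kraken2-build --build --db /scratch/chauk/kraken2/kraken_nt/nt --threads "$SLURM_CPUS_PER_TASK"

Submit it with sbatch (e.g. sbatch build_nt.sh) and the scheduler will reserve the requested memory for the kraken2-build process.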

bioscienceresearch commented 2 years ago

I have the same issue. Kraken version 2.1.1.

kraken2-build --build --db nt --threads 16

Estimated hash table requirement: 345259953004 bytes
Taxonomy parsed and converted.
xargs: cat: terminated by signal 13
~/apps/Kraken2/build_kraken2_db.sh: line 143:
  3284866 Done        list_sequence_files
  3284867 Exit 125    | xargs -0 cat
  3284868 Killed      | build_db -k $KRAKEN2_KMER_LEN -l $KRAKEN2_MINIMIZER_LEN -S $KRAKEN2_SEED_TEMPLATE $KRAKEN2XFLAG -H hash.k2d.tmp -t taxo.k2d.tmp -o opts.k2d.tmp -n taxonomy/ -m $seqid2taxid_map_file -c $required_capacity -p $KRAKEN2_THREAD_CT $max_db_flag -B $KRAKEN2_BLOCK_SIZE -b $KRAKEN2_SUBBLOCK_SIZE -r $KRAKEN2_MIN_TAXID_BITS $fast_build_flag

I am using a machine with 200 GB of spare RAM and 6 TB of disk space where the database is located (but it does run other intermittent processes). However, there is no setting in the build_kraken2_db.sh parameters (or in build_db.cc, as far as I noticed) that allows controlling RAM usage.

As signal 13 refers to "Broken pipe: write to pipe with no readers", any ideas on the most likely cause?

lynngao commented 2 years ago

(Quoting bioscienceresearch's comment above.)

It's not a parameter of the kraken command. When you run your job on a high-performance cluster, you need to allocate memory to the job.

bioscienceresearch commented 2 years ago

I note the 345 GB requirement and created a 200 GB swapfile in addition to RAM. This runs OK, but I had to kill the process after 5 days as I needed the machine for other work.
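
For anyone who wants to reproduce that workaround, a swapfile can usually be created like this on Linux; the size and path are placeholders, root privileges are assumed, and on some filesystems fallocate must be replaced with the dd alternative shown in the comment:

sudo fallocate -l 200G /swapfile    # or: sudo dd if=/dev/zero of=/swapfile bs=1G count=200
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
swapon --show                       # confirm the new swap space is active

Keep in mind that paging the hash table through swap is far slower than holding it in RAM, which is consistent with the multi-day runtime reported above.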

adRn-s commented 11 months ago

TIL signal 13 (SIGPIPE) means a process wrote to a pipe whose reader is gone. In this context, xargs -0 cat was still sending output, but its reader further down the pipeline (build_db) was not there anymore, for whatever reason.
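
To make that concrete, here is a minimal illustration (not taken from the thread) of a writer being terminated by signal 13 because its reader has exited:

# head exits after one line, so the writer (yes) gets SIGPIPE on its next write.
# A process killed by signal N exits with status 128 + N, hence 141 = 128 + 13.
yes | head -n 1 > /dev/null
echo "${PIPESTATUS[0]}"             # in bash this prints 141, i.e. terminated by signal 13

In the build log quoted above, the downstream build_db process shows as Killed, most likely by the kernel's out-of-memory killer, so the upstream xargs -0 cat loses its reader and reports the broken pipe.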

Scott-0208 commented 2 months ago

(Quoting bioscienceresearch's comment and lynngao's reply above.)

Hi, it's a very old problem and I'm facing the same problem right now. How do I allocate memory to run the command? I am not familiar with the Linux terminal. Do I have to modify some text file in Kraken, or do I have to write a bash file? Thank you.

ChillarAnand commented 2 months ago

If you don't have enough RAM, you can increase the swap. @Scott-0208

I wrote a detailed tutorial on this here:

https://avilpage.com/2024/07/mastering-kraken2-initial-runs.html
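
For readers who cannot provision enough RAM or swap to cover the estimated hash table size, the --max-db-size option mentioned earlier in this thread is the other lever: it caps the hash table at a given number of bytes, trading some classification sensitivity for a smaller build. A sketch, with the cap chosen purely as an example:

# Cap the hash table at roughly 100 GB (example value only); the reference library is
# downsampled to fit, which lowers memory needs at some cost in sensitivity.
kraken2-build --build --db nt --threads 16 --max-db-size 100000000000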