Hello everyone
I'm working on a large sequencing data set (about 15,300,000 reads, 8.2 GB) and I'm running into a clustering issue with the Swarm method, given that Usearch61 is memory-limited (32-bit build).
The command "pick_otus.py -i non_chimeric_seqs_R2345.fasta -m swarm -o picked_otu" stops after about 1 h 30 min, having produced about 3,500,000 OTUs from only about 6,300,000 of the sequences.
Why does the command not process all 15,300,000 sequences? Does swarm 2.1.13 have a memory limit on input files? I should mention that I have access to my research institute's computing cluster, and that I ran this command as a sequential (single-core) job with 20,000 MB of memory allocated and a 3-hour walltime.
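To narrow this down on my side, here is a rough sanity check I can run: compare the number of reads in the input FASTA with the total number of reads assigned in the OTU map, to see whether the job simply died partway (e.g. hit the walltime or memory cap) and left a partial map. The file names below are small stand-ins so the commands run as-is; in practice they would be non_chimeric_seqs_R2345.fasta and the *_otus.txt map inside picked_otu/.

```shell
# Tiny demo input so the commands are runnable as written:
printf '>r1\nACGT\n>r2\nACGA\n>r3\nTTTT\n' > demo.fasta
printf 'otu1\tr1\tr2\notu2\tr3\n' > demo_otus.txt

# Reads in the FASTA: one ">" header line per sequence
n_input=$(grep -c '^>' demo.fasta)

# Reads in the OTU map: each line is "OTU_id<TAB>read1<TAB>read2...",
# so sum (fields - 1) over all lines
n_clustered=$(awk -F'\t' '{ total += NF - 1 } END { print total }' demo_otus.txt)

echo "input=$n_input clustered=$n_clustered"
# If clustered < input, the clustering most likely stopped early
# (killed job, truncated input, etc.) rather than swarm discarding reads.
```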