Closed gaferguz closed 1 year ago
At this point, you are unlikely to benefit from lowering threads. You may want to try annotating your fasta separately and use the merger tool on the results. I will remove that language you pointed out from the doc, we should not make promises when the size of the fasta clearly changes the requirements. You may need to split some of the larger FASTAs.
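For splitting, a minimal sketch of a round-robin FASTA splitter (a hypothetical helper, not part of DRAM — chunk counts and file names are illustrative) that writes each input into N smaller files you can then annotate separately:

```python
# Assumed helper, not part of DRAM: split a FASTA into roughly
# equal chunks so each can be annotated separately and the results
# combined afterwards with DRAM's merger tool.
import os

def split_fasta(path, n_chunks, out_dir="chunks"):
    """Distribute records from `path` round-robin into `n_chunks` FASTA files."""
    os.makedirs(out_dir, exist_ok=True)
    outs = [open(os.path.join(out_dir, f"chunk_{i}.fa"), "w")
            for i in range(n_chunks)]
    try:
        idx = -1
        with open(path) as fh:
            for line in fh:
                if line.startswith(">"):
                    idx += 1  # new record: advance to the next output file
                outs[idx % n_chunks].write(line)
    finally:
        for f in outs:
            f.close()
```

Round-robin keeps chunk sizes balanced even when sequence counts are not known in advance; each `chunk_*.fa` can then be fed to a separate annotation run.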
Hi there, I've been trying to annotate contigs assembled from high-coverage fastq files for a while using different DRAM versions (1.3.5 and 1.4.0.rc3), trusting the memory requirements specified on the DRAM wiki page:
These are the numbers of input sequences for the samples that I'm trying to annotate locally (my system has 64 GB of RAM), with an average length of around 600-700 bp. The files range from 70 MB to 647 MB in size:
Plot11_CoAs.fa: 935196 sequences
Plot1617_CoAs.fa: 729800 sequences
Plot28_CoAs.fa: 127948 sequences
Plot31_CoAs.fa: 840509 sequences
Plot3637_CoAs.fa: 830278 sequences
After getting all hits, the run seems to get stuck at the ORF-merging step until the process is finally killed.
I would like to know the minimum memory requirements for a dataset like mine.