hzi-bifo / RiboDetector

Accurate and rapid RiboRNA sequences Detector based on deep learning
GNU General Public License v3.0

In CPU mode, smaller chunk_size does not reduce the memory use. #13

Closed dawnmy closed 2 years ago

dawnmy commented 2 years ago

Memory use appears to depend not on the chunk_size setting but on the total number of input sequence bases. We need to find a way to reduce memory use for large input files.

HongxiangXu commented 2 years ago

I seem to have hit a similar problem. I used 4 threads and a chunk size of 256, which should need relatively little memory, since 20 threads with a chunk size of 1024 is described as using only about 20 GB. However, my PC froze and the task manager showed nearly all of my 16 GB of memory being consumed. Please look into this problem. Thanks very much!

dawnmy commented 2 years ago


Thank you for your interest in RiboDetector. I am working on a new version; the next release will solve this issue. By the way, how large is your input FASTQ file (number of nucleotides)?
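
For reference, one way to get the total nucleotide count of a FASTQ file is with seqkit (a separate tool, not part of RiboDetector; file names here are placeholders):

```sh
# The sum_len column reports the total number of bases per file
seqkit stats reads.R1.fq.gz reads.R2.fq.gz
```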

dawnmy commented 2 years ago

@xhxlilium This issue has been solved in the latest version, v0.2.6, which can be installed with pip. When running on large input files, you can use fewer threads and a smaller chunk_size (e.g. --threads 10 --chunk_size 400, which will use about 10 GB of RAM but may peak at up to 20 GB). I suggest running RiboDetector on a machine with at least 32 GB of RAM. A sketch of the upgrade and a run with those reduced settings is shown below.
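
The example below is a minimal sketch of such a run; the read length and file names are placeholders, and the exact options of your installed version can be checked with `ribodetector_cpu -h`:

```sh
# Upgrade to the latest release (v0.2.6 or newer)
pip install --upgrade ribodetector

# Paired-end CPU run with reduced memory settings:
#   -t            number of threads
#   -l            read length (placeholder value here)
#   -i / -o       input FASTQ files and non-rRNA output files
#   --chunk_size  number of read chunks loaded per batch
ribodetector_cpu -t 10 \
  -l 100 \
  -i reads.R1.fq.gz reads.R2.fq.gz \
  -o reads.nonrrna.R1.fq reads.nonrrna.R2.fq \
  --chunk_size 400
```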