Hi @KGerhardt, thank you for the feedback.
A few questions here: are you running in single mode or meta mode, and which version do you have installed? I fixed a memory leak in v0.7.1 that affected the training step.

I was using single mode and I had version 0.6.4 installed.
I upgraded to 0.7.2 and I'm rerunning the test that caused the memory usage issue. It's looking like that fixed it for me.
Thank you for responding so quickly. I apologize for flagging an issue that you'd already fixed.
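For what it's worth, here's roughly how I'm watching memory while the test reruns (a minimal sketch, assuming `psutil` is installed; `rss_mb` is just an illustrative helper):

```python
# Minimal sketch for tracking memory growth across iterations,
# assuming psutil is available (pip install psutil).
import os
import psutil

process = psutil.Process(os.getpid())

def rss_mb():
    # Resident set size of the current process, in MiB.
    return process.memory_info().rss / (1024 * 1024)

print(f"baseline: {rss_mb():.1f} MiB")
# ... run one batch of gene prediction here, then sample again:
print(f"after batch: {rss_mb():.1f} MiB")
```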
No problem, that's why I was checking! I'd rather have it be that than a new memory leak I'm unaware of. 😁
Yes. Thank you very much for this project. I looked through your repositories and saw you've got pyFastANI - I'm a member of the Konstantinidis lab, and I'm using pyrodigal in a companion AAI tool to FastANI. Gene prediction is the rate-limiting step, and the combination of reduced I/O and faster operation in pyrodigal saves oodles of runtime. The tool also uses HMMER, so I'm looking into your pyHMMER repo, too. I can't overstate my appreciation for all of your work.
Thanks a lot @KGerhardt, I'm really happy to hear the work I have done turns out to be useful to other people. I'm actually experimenting with a way to accelerate FastANI at the moment; I'm not sure whether I'll get anywhere with it, but let's see :)
I've incorporated pyrodigal into a script I'm working on, and I noticed that when I ran it on a larger batch of genomes, the memory usage kept increasing over time. I went back and tested a few different approaches, all with the same result: even reusing a single OrfFinder object and retraining it for each genome, RAM use keeps climbing.
Is it possible to add a Python-level deallocation function to the OrfFinder class?
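For reference, this is roughly the pattern that shows the growth (a minimal sketch; `read_fasta` and `genome_paths` are hypothetical stand-ins for my actual inputs, while `OrfFinder`, `train`, and `find_genes` are the pyrodigal 0.x calls I'm using):

```python
import pyrodigal

def read_fasta(path):
    # Hypothetical single-record FASTA reader, just for illustration.
    with open(path) as handle:
        return "".join(
            line.strip() for line in handle if not line.startswith(">")
        )

# Placeholder paths standing in for the real batch of genomes.
genome_paths = ["genome1.fna", "genome2.fna"]

# One OrfFinder in single mode, reused across the whole batch.
orf_finder = pyrodigal.OrfFinder()

for path in genome_paths:
    sequence = read_fasta(path)
    orf_finder.train(sequence)               # retrain for each genome
    genes = orf_finder.find_genes(sequence)  # RAM grows a little every pass
```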