Zuricho / ParallelFold

Modified version of AlphaFold that separates the CPU part (MSA and template search) from the GPU part. This can accelerate AlphaFold when predicting multiple structures.
https://parafold.sjtu.edu.cn

Limit RAM usage #12

Closed — hrzolix closed this issue 2 years ago

hrzolix commented 2 years ago

I'm trying to run a FASTA file with a sequence 3643 residues in length. The MSA part finished, but the inference part tried to allocate 80 GB of VRAM on the GPU, which I don't have access to; my graphics cards are NVIDIA Tesla V100 16 GB. Now I'm trying to run inference on the CPU, which is very slow, and the job keeps using more and more RAM as time passes. Can I limit RAM usage somehow? Or can I run inference on multiple graphics cards, perhaps as a parallel process?

Zuricho commented 2 years ago

You can: use CUDA shared memory across multiple GPU cards.
You can't: limit the memory, and more GPUs cannot accelerate the prediction of a single structure through parallel processing.
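As a concrete sketch of the shared-memory route: AlphaFold's JAX inference can be allowed to spill GPU allocations into host RAM via two environment variables that DeepMind's AlphaFold README documents for long sequences. Whether ParallelFold's own launch scripts already set these is an assumption to verify against your install:

```shell
# Back CUDA allocations with host RAM (unified memory), so the model
# can exceed the 16 GB of a single V100.
export TF_FORCE_UNIFIED_MEMORY=1
# Allow the XLA client to request up to 4x the physical GPU memory;
# the excess spills to system RAM (slower, but it runs).
export XLA_PYTHON_CLIENT_MEM_FRACTION=4.0
```

This trades speed for capacity: spilled pages go over PCIe, so inference slows down, but it avoids the out-of-memory failure on long sequences.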

I should add that I tested AlphaFold on an NVIDIA Tesla V100 32GB, and it was still able to predict when I submitted a sequence of more than 3000 aa. Perhaps you don't need that much memory; it may also depend on the size of the MSA.
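To see why memory tracks sequence length, here is a rough back-of-the-envelope estimate for the Evoformer pair representation, which scales quadratically in sequence length. The pair-channel width c_z = 128 is AlphaFold's default; actual peak usage is several times this figure because of attention intermediates and the MSA representation, so treat it as a lower bound only:

```python
# Rough estimate of one copy of AlphaFold's pair representation:
# an N x N x c_z tensor of float32 values (c_z = 128 by default).
n = 3643            # sequence length from the issue above
c_z = 128           # pair-representation channels (AlphaFold default)
bytes_per_float = 4 # float32

pair_bytes = n * n * c_z * bytes_per_float
pair_gib = pair_bytes / 2**30
print(f"~{pair_gib:.1f} GiB per pair-representation copy")  # ~6.3 GiB
```

A single copy of this tensor already fills a large share of a 16 GB V100, and the triangle-attention layers materialize additional N x N intermediates, which is consistent with the 80 GB allocation attempt reported above.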