YoshitakaMo / localcolabfold

ColabFold on your local PC
MIT License

Question: Not Enough GPU memory #142

Open hypothalamus01 opened 1 year ago

hypothalamus01 commented 1 year ago

What is your question? I got a "Not enough GPU memory error" when using a 3440 a.a. sequence as input.

"2023-03-20 17:54:52,116 Could not predict N_Repeat_C. Not Enough GPU memory? RESOURCE_EXHAUSTED: Failed to allocate request for 5.66GiB (6075580416B) on device ordinal "

Computational environment

I tried to run it on the CPU, but it would take roughly 5-6 weeks to complete. Is there any way I can still run it on a 16GB GPU? Would native AlphaFold2 behave differently? Also, if a new GPU with larger memory is necessary, how much VRAM would it need for a 3440 a.a. sequence?

YoshitakaMo commented 1 year ago

It's difficult to predict a 3440 a.a. sequence because your GPU RAM is not sufficient. The required GPU memory is almost the same as for native AF2; I think it requires 24GB or more. One solution is to split the sequence into several parts according to the predicted protein domain boundaries. Another way is to split it into multiple sequences with overlapping portions, and then merge the partial structures with PyMOL (or other methods) after the prediction.
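The overlapping-fragments approach above can be sketched in a few lines of Python. The fragment length (1800) and overlap (200) below are illustrative assumptions only, not values from this thread; in practice the split points should follow predicted domain boundaries where possible.

```python
def split_with_overlap(seq: str, frag_len: int = 1800, overlap: int = 200) -> list:
    """Split a long sequence into overlapping fragments for separate
    prediction runs. The overlap region is later used to superpose and
    merge the partial models (e.g. in PyMOL)."""
    step = frag_len - overlap
    fragments = []
    start = 0
    while start < len(seq):
        fragments.append(seq[start:start + frag_len])
        if start + frag_len >= len(seq):
            break  # last fragment already reaches the end of the sequence
        start += step
    return fragments

# A 3440-residue sequence yields two full-length fragments plus a short
# tail, each pair sharing a 200-residue overlap for structural alignment.
fragments = split_with_overlap("A" * 3440)
print([len(f) for f in fragments])  # → [1800, 1800, 240]
```

Each fragment is then predicted independently (fitting within 16GB of VRAM), and the shared overlap regions serve as anchors for superposing the partial structures.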

hypothalamus01 commented 1 year ago

Thank you for your answer. I will try it on an RTX 4090.