Closed: martinez-zacharya closed this issue 1 year ago
Something probably went wrong in the Lightning and DeepSpeed layers. Can you provide the input going into the ESMFold forward call, specifically the line `pred = self.esmfold.infer_pdb(seqs[0])`? Perhaps `seqs[0]` does not contain what you want it to?
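One quick way to follow up on this suggestion is to sanity-check `seqs[0]` before it reaches `infer_pdb`. This is a hypothetical helper, not part of the reporter's code or the ESM API; it just verifies the value is a non-empty string over the 20 standard amino-acid letters:

```python
# Hypothetical sanity check for the value passed to infer_pdb.
VALID_AA = set("ACDEFGHIKLMNPQRSTVWY")

def check_sequence(seq):
    """Raise if seq is not a non-empty string of standard amino acids."""
    if not isinstance(seq, str):
        raise TypeError(f"expected str, got {type(seq).__name__}")
    if not seq:
        raise ValueError("empty sequence")
    bad = set(seq.upper()) - VALID_AA
    if bad:
        raise ValueError(f"non-standard residues: {sorted(bad)}")
    return True

# The example sequence from this thread passes:
assert check_sequence("CSVGVTGTAASEQYF")
```

If this check passes but `infer_pdb` still fails, the input itself is likely fine and the problem sits elsewhere in the stack.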
Thanks for the reply!
CSVGVTGTAASEQYF
This is an example input where the program fails. I confirmed that the input is indeed a string.
What do you get when you follow the README instructions for running `esmfold.infer_pdb(..)`?
I'm able to run the script in the README to infer the structure of the provided example sequence. It even works when I uncomment the `set_chunk_size` line.
Ok, great. That points to a problem in the Lightning / DeepSpeed layers. It's not really possible to debug that remotely; can you try to create a minimal working example (MWE) that reproduces the error? In building the MWE you may well find the problem yourself. Thanks!
Bug description: When using the `.set_chunk_size()` method on ESMFold, I receive an index error. I'm changing the chunk size to save VRAM; without setting it, I run out of memory before being able to infer the structure of even one sequence. Please let me know if I am misunderstanding how to use this method.
Reproduction steps: `model.set_chunk_size(64)`
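For context on why a chunk size helps with VRAM: chunking trades speed for peak memory by processing a long axis in fixed-size slices instead of all at once. This is only an illustrative sketch of the general idea, not ESMFold's actual implementation:

```python
def chunked_dot(xs, ys, chunk_size):
    """Compute sum(x*y) over two long lists in fixed-size slices,
    so only one chunk's worth of intermediates is alive at a time."""
    total = 0
    for start in range(0, len(xs), chunk_size):
        x_chunk = xs[start:start + chunk_size]
        y_chunk = ys[start:start + chunk_size]
        # Intermediate products exist only for this chunk.
        total += sum(x * y for x, y in zip(x_chunk, y_chunk))
    return total

xs = list(range(10))
ys = list(range(10))
# Result is identical for any chunk size; smaller chunks lower peak memory.
assert chunked_dot(xs, ys, 3) == sum(x * y for x, y in zip(xs, ys))
```

The result is independent of the chunk size, which is why a working model should produce the same structure with or without `set_chunk_size`; an index error suggests something in the wrapped layers, not the chunking concept itself.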
Logs: I am using PyTorch Lightning with DeepSpeed stage 3, hence the repetitive error logs.
Thank you in advance for any help.