hpcaitech / FastFold

Optimizing AlphaFold Training and Inference on GPU Clusters
Apache License 2.0

Optimize long sequence inference memory #69

Closed by oahzxl 2 years ago

oahzxl commented 2 years ago

Use chunking, code optimization, and heterogeneous computing to reduce the memory usage of long-sequence inference.
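The core idea behind chunking can be sketched as follows. This is a minimal, hypothetical illustration (not FastFold's actual implementation): instead of materializing the full L x L attention-score matrix for a sequence of length L, process query rows in fixed-size blocks so that only a (chunk_size, L) slice of the scores is live at any time, trading a little extra compute for a much smaller peak memory footprint.

```python
import numpy as np

def row_attention_full(q, k, v):
    # Naive version: materializes the full (L, L) score matrix at once,
    # so peak memory grows quadratically with sequence length L.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def row_attention_chunked(q, k, v, chunk_size=128):
    # Chunked version: iterates over query rows in blocks, so only a
    # (chunk_size, L) slice of the score matrix exists at a time.
    out = np.empty((q.shape[0], v.shape[1]))
    for start in range(0, q.shape[0], chunk_size):
        blk = q[start:start + chunk_size]
        scores = blk @ k.T / np.sqrt(q.shape[-1])
        w = np.exp(scores - scores.max(axis=-1, keepdims=True))
        w /= w.sum(axis=-1, keepdims=True)
        out[start:start + chunk_size] = w @ v
    return out
```

Both functions produce the same result; the chunked variant simply bounds the size of the largest intermediate, which is what makes very long sequences fit on a single GPU.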

oahzxl commented 2 years ago

Inference on sequences of length 5000 now fits on an 80 GB A100.

Shenggan commented 2 years ago

LGTM. Support for the multimer model will come in a new PR.