Open 17dacheng opened 2 years ago
Additionally, the config is as follows; our GPU is an A100 with 80 GB of memory:
prep_model self.opt is:
{'alpha': 2.0, 'cmap_cutoff': 10.0,
 'con': {'binary': False, 'cutoff': 14.0, 'num': 2, 'num_pos': inf, 'seqsep': 9},
 'dropout': True, 'fape_cutoff': 10.0,
 'fix_pos': array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23]),
 'hard': 0.0,
 'i_con': {'binary': False, 'cutoff': 21.6875, 'num': 1, 'num_pos': inf},
 'learning_rate': 0.1, 'norm_seq_grad': True, 'num_models': 1, 'num_recycles': 0,
 'pos': array([242, 244, 247, 249, 251, 267, 268, 270, 271, 981, 982, 985, 986, 988, 989, 1035, 1036, 1039, 1043, 1044, 1045, 1046, 1047, 1048]),
 'sample_models': True, 'soft': 0.0, 'temp': 1.0,
 'template': {'dropout': 0.0, 'rm_ic': False},
 'use_pssm': False,
 'weights': {'con': 1.0, 'dgram_cce': 1.0, 'exp_res': 0.0, 'fape': 0.0, 'helix': 0.0, 'pae': 0.0, 'plddt': 0.0, 'rmsd': 0.0, 'seq_ent': 0.0}}
As you can see, it tries to allocate 206 GB of GPU memory. Generally speaking, going beyond 600 AAs is not possible with gradient-based optimisation. You can try the semigreedy protocol instead; that should work.
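To see why 1505 residues blows past an 80 GB card, a rough back-of-envelope helps. The sketch below assumes activation memory grows roughly cubically with sequence length (a simplifying assumption motivated by AlphaFold's triangle operations; the exact exponent depends on the implementation), and scales the reported 206 GB figure down to the 600 AA regime:

```python
def scale_memory(mem_gb, length_from, length_to, exponent=3.0):
    """Extrapolate a measured activation-memory figure to another
    sequence length, assuming memory ~ length**exponent.

    This is a heuristic estimate, not an exact model of AlphaFold's
    memory footprint.
    """
    return mem_gb * (length_to / length_from) ** exponent

# The reported 206 GB at 1505 residues extrapolates to roughly:
print(scale_memory(206, 1505, 600))  # ~13 GB, which fits an 80 GB A100
```

Under this assumption, the same run at 600 residues would need on the order of 13 GB, which is consistent with gradient-based optimisation working up to roughly that length and failing well before 1505.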
I ran partial hallucination using the following code and hit an out-of-memory error. The complex is composed of two chains, ['A', 'C']; chain A has 1505 residues. I need to repair the side-chain structure using partial hallucination; the residues whose structure needs repairing are listed in pos (19 segments in total).
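For reference, the repair segments can be recovered from a flat pos array by grouping consecutive residue indices into runs. This is a small illustrative helper of my own (not part of the ColabDesign API):

```python
def contiguous_segments(indices):
    """Split a list of residue indices into runs of consecutive
    integers, e.g. to see the separate repair segments in `pos`."""
    segments = []
    for i in sorted(indices):
        if segments and i == segments[-1][-1] + 1:
            segments[-1].append(i)  # extend the current run
        else:
            segments.append([i])    # start a new run
    return segments

print(contiguous_segments([0, 1, 2, 5, 6, 9]))  # [[0, 1, 2], [5, 6], [9]]
```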
When I run the code, it reports the following error. The memory consumption is unexpectedly large; could you kindly help me check why?
The above exception was the direct cause of the following exception: