Graylab / IgFold

Fast, accurate antibody structure prediction from deep learning on a massive set of natural antibodies

RuntimeError: Error(s) in loading state_dict for IgFold #40

Closed · Samuel-gwb closed this 1 year ago

Samuel-gwb commented 1 year ago

Hi, please help me solve the following errors, thanks! I downloaded the weights and loaded them in IgFoldRunner, but errors appeared when executing inference:

```
#############################################
The code, data, and weights for this work are made available for non-commercial use (including at commercial entities) under the terms of the JHU Academic Software License Agreement. For commercial inquiries, please contact dmalon11[at]jhu.edu.
License: https://github.com/Graylab/IgFold/blob/main/LICENSE.md

Loading 4 IgFold models...
Using device: cuda:0
Loading /home/shanghai/RationalDesign/ToBeTest/IgFold/igfold/trained_models/IgFold/igfold_1.ckpt...
Traceback (most recent call last):
  File "Inference_IgFold.py", line 38, in <module>
    igfold = IgFoldRunner(num_models=num_models)
  File "/home/shanghai/RationalDesign/ToBeTest/IgFold/igfold/IgFoldRunner.py", line 71, in __init__
    IgFold.load_from_checkpoint(ckpt_file).eval().to(device))
  File "/home/shanghai/anaconda3/envs/IgFold/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 137, in load_from_checkpoint
    return _load_from_checkpoint(
  File "/home/shanghai/anaconda3/envs/IgFold/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 180, in _load_from_checkpoint
    return _load_state(cls, checkpoint, strict=strict, **kwargs)
  File "/home/shanghai/anaconda3/envs/IgFold/lib/python3.8/site-packages/pytorch_lightning/core/saving.py", line 238, in _load_state
    keys = obj.load_state_dict(checkpoint["state_dict"], strict=strict)
  File "/home/shanghai/anaconda3/envs/IgFold/lib/python3.8/site-packages/torch/nn/modules/module.py", line 2041, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for IgFold:
	Unexpected key(s) in state_dict: "bert_model.embeddings.position_ids", "bert_model.embeddings.word_embeddings.weight", "bert_model.embeddings.position_embeddings.weight", "bert_model.embeddings.token_type_embeddings.weight", "bert_model.embeddings.LayerNorm.weight", "bert_model.embeddings.LayerNorm.bias", "bert_model.encoder.layer.0.attention.self.query.weight", "bert_model.encoder.layer.0.attention.self.query.bias", "bert_model.encoder.layer.0.attention.self.key.weight", "bert_model.encoder.layer.0.attention.self.key.bias", "bert_model.encoder.layer.0.attention.self.value.weight", "bert_model.encoder.layer.0.attention.self.value.bias", "bert_model.encoder.layer.0.attention.output.dense.weight", "bert_model.encoder.layer.0.attention.output.dense.bias", "bert_model.encoder.layer.0.attention.output.LayerNorm.weight", "bert_model.encoder.layer.0.attention.output.LayerNorm.bias", "bert_model.encoder.layer.0.intermediate.dense.weight", "bert_model.encoder.layer.0.intermediate.dense.bias", "bert_model.encoder.layer.0.output.dense.weight", "bert_model.encoder.layer.0.output.dense.bias", "bert_model.encoder.layer.0.output.LayerNorm.weight", "bert_model.encoder.layer.0.output.LayerNorm.bias", [the same 16-key set repeats for encoder layers 1 through 7], "bert_model.pooler.dense.weight", "bert_model.pooler.dense.bias".
#############################################
```

jeffreyruffolo commented 1 year ago

Closing this issue; it should be resolved by earlier changes that offloaded AntiBERTy to an external dependency.
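In practice this means updating to a release where AntiBERTy ships as its own package, e.g. `pip install --upgrade igfold antiberty`. After upgrading, the language model is loaded from the `antiberty` package rather than from the IgFold checkpoints, so the `bert_model.*` keys no longer appear. A minimal sketch of the documented usage (the test sequence is an arbitrary heavy-chain fragment):

```python
# AntiBERTy now lives in its own package instead of the IgFold checkpoint.
from antiberty import AntiBERTyRunner
from igfold import IgFoldRunner

# Embed a sequence with the external AntiBERTy runner.
antiberty = AntiBERTyRunner()
embeddings = antiberty.embed(["EVQLVESGGGLVQPGG"])  # arbitrary test fragment
print(embeddings[0].shape)

# With the external dependency in place, all four checkpoints should load.
igfold = IgFoldRunner(num_models=4)
```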

ZheJiangLabCode commented 1 year ago

> Hi, Please help to solve following errors, thanks! [...] RuntimeError: Error(s) in loading state_dict for IgFold: Unexpected key(s) in state_dict: "bert_model.embeddings.position_ids", [...] "bert_model.pooler.dense.weight", "bert_model.pooler.dense.bias".

Hello, I have the same problem. Could you tell me how to solve it?

Samuel-gwb commented 1 year ago

Not yet ...