pyli0628 / MPG


Question about Pre-trained model generation #3

Open · xfy9 opened this issue 2 years ago

xfy9 commented 2 years ago

Sorry to ask this question. I followed the steps in the readme: I ran pretraining/loader.py to generate the .pt data files, ran pretraining/run_pretraining.sh to generate the .pt model files, and renamed the latest .pt file to MolGNet.pt. But when I run property/finetune.py, the process reports an error loading MolGNet.pt. This is the error:

```
Traceback (most recent call last):
  File "property/finetune.py", line 300, in <module>
    main()
  File "property/finetune.py", line 224, in main
    model.from_pretrained(args.input_model_file)
  File "/home/zsw/zamao_pycode/MPG-main/property/model.py", line 278, in from_pretrained
    self.gnn.load_state_dict(torch.load(model_file))
  File "/home/zsw/anaconda3/envs/zamao1/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MolGNet:
    Missing key(s) in state_dict: "x_embedding.weight", "x_seg_embed.weight",
    "edge_embedding.weight", "edge_seg_embed.weight",
    "gnns.0.attention.query.weight", "gnns.0.attention.query.bias",
    "gnns.0.attention.key.weight", "gnns.0.attention.key.bias",
    "gnns.0.attention.value.weight", "gnns.0.attention.value.bias",
    "gnns.0.att_out.dense.weight", "gnns.0.att_out.dense.bias",
    "gnns.0.att_out.LayerNorm.weight", "gnns.0.att_out.LayerNorm.bias",
    "gnns.0.intermediate.dense_act.weight", "gnns.0.intermediate.dense_act.bias",
    "gnns.0.output.dense.weight", "gnns.0.output.dense.bias",
    "gnns.0.output.LayerNorm.weight", "gnns.0.output.LayerNorm.bias",
    "gnns.0.gru.weight_ih_l0", "gnns.0.gru.weight_hh_l0",
    "gnns.0.gru.bias_ih_l0", "gnns.0.gru.bias_hh_l0",
    "gnns.0.LayerNorm.weight", "gnns.0.LayerNorm.bias",
    [the same 22 keys repeat for "gnns.1" through "gnns.4"].
    Unexpected key(s) in state_dict: "model", "gnn", "linear_atom",
    "optimizer", "master params", "files", "epoch", "data_loader".
```

Am I doing something wrong, or did I miss a step? I would be very grateful if you could answer my doubts.

pyli0628 commented 2 years ago

Maybe you can print the state dict of the pretrained MolGNet and compare it with the current model in finetune.py to find the difference.
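
For readers hitting the same error: the "Unexpected key(s)" in the traceback ("model", "gnn", "optimizer", "epoch", ...) suggest the pretraining script saves a full training checkpoint (weights plus optimizer state and bookkeeping) rather than the bare state dict that from_pretrained expects. Below is a minimal sketch of the inspection pyli0628 suggests; the sub-key to extract ("gnn" vs. "model") is an assumption based on the error output, so check the printed keys against the "Missing key(s)" list before re-saving.

```python
import torch

# Load the renamed checkpoint on CPU so no GPU is needed for inspection.
ckpt = torch.load("MolGNet.pt", map_location="cpu")

# Print the top-level keys. Per the traceback these are checkpoint
# sections ("model", "gnn", "linear_atom", "optimizer", ...), not the
# parameter names ("x_embedding.weight", "gnns.0. ...") MolGNet expects.
print(ckpt.keys())

# ASSUMPTION: the GNN weights live under "gnn"; verify by comparing the
# keys printed below with the "Missing key(s)" list in the traceback.
gnn_state = ckpt["gnn"]
print(gnn_state.keys())

# Re-save just the parameter tensors so that
# self.gnn.load_state_dict(torch.load(model_file)) in property/model.py
# receives the state dict it expects.
torch.save(gnn_state, "MolGNet_gnn_only.pt")
```

Then point finetune.py's input_model_file argument at the re-saved file instead of the raw checkpoint.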