microsoft / Graphormer

Graphormer is a general-purpose deep learning backbone for molecular modeling.

About reproducing PCBA result #22

Closed Noisyntrain closed 3 years ago

Noisyntrain commented 3 years ago

Hi authors, thanks for your great work. While trying to reproduce the No.1 result on the ogb-pcba leaderboard, I could not find the checkpoints mentioned in your paper that were pretrained for the PCBA task. Therefore, I turned to the checkpoint you provided for the PCQM task. But while loading the checkpoint, an error occurred even though I had set the hidden dimension and ffn dimension from 1024 to 768: `RuntimeError: Error(s) in loading state_dict for Graphormer:

    size mismatch for atom_encoder.weight: copying a param with shape torch.Size([4737, 768]) from checkpoint, the shape in current model is torch.Size([4609, 768]).

    size mismatch for edge_encoder.weight: copying a param with shape torch.Size([769, 32]) from checkpoint, the shape in current model is torch.Size([1537, 32]).`

Thus, may I ask two questions about the reproduction process:

  1. Can you provide the checkpoints needed to reproduce the PCBA result?
  2. Is there a reason why the code cannot load the previous PCQM checkpoint even after the ffn and hidden dimensions have been changed?

Looking forward to your reply. Thank you!

zhengsx commented 3 years ago

Considering that we plan to continuously upgrade Graphormer in this repository, it is quite possible that loading an old checkpoint with the latest version of the code will fail. We therefore encourage you to prepare the pre-trained model yourself by following our paper.
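
For anyone debugging such a failure, the snippet below is one way to see exactly which parameters disagree before calling `load_state_dict`. It is a minimal sketch, not code from this repository: `build_model_from_args` is a hypothetical stand-in for however your version of the code constructs the Graphormer model, and the `"model"` key assumes a fairseq-style checkpoint layout.

```python
import torch

# Load the checkpoint on CPU. Fairseq-style checkpoints keep the weights
# under the "model" key; fall back to the raw dict otherwise.
ckpt = torch.load("checkpoint_best.pt", map_location="cpu")
ckpt_state = ckpt.get("model", ckpt)

model = build_model_from_args(args)  # hypothetical: build the current model
model_state = model.state_dict()

# Report every parameter whose shape differs between checkpoint and model.
for name, tensor in ckpt_state.items():
    if name not in model_state:
        print(f"{name}: present in checkpoint only")
    elif tensor.shape != model_state[name].shape:
        print(f"{name}: checkpoint {tuple(tensor.shape)} "
              f"vs current model {tuple(model_state[name].shape)}")
```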

weihua916 commented 3 years ago

Hi Graphormer authors, please do make sure that your results are easily reproducible on the master branch. You can continuously update your code on a non-master branch and merge it once you have finished checking compatibility.

In short, please ensure easy reproducibility on the master branch as soon as possible. In the worst case, we will need to delete your leaderboard submissions (which I hope will not happen).

Weihua -- OGB Team

zhengsx commented 3 years ago

> Hi Graphormer authors, please do make sure that your results are easily reproducible on the master branch. You can continuously update your code on a non-master branch and merge it once you have finished checking compatibility.
>
> In short, please ensure easy reproducibility on the master branch as soon as possible. In the worst case, we will need to delete your leaderboard submissions (which I hope will not happen).
>
> Weihua -- OGB Team

Hi Weihua, we ensure that our results are easily reproducible on every branch at any time by simply following the instructions in this repository and our paper. We are definitely willing to offer help if anyone runs into trouble while following our reproduction instructions.

Noisyntrain commented 3 years ago

Hi authors, since it's been one week, may I know roughly when the checkpoints and code needed to reproduce the result will be ready? Thank you.

zhengsx commented 3 years ago

> Hi authors, since it's been one week, may I know roughly when the checkpoints and code needed to reproduce the result will be ready? Thank you.

Hi @Noisyntrain , all the code needed to reproduce all the results in our paper has been available on the main branch since our first release. Please let us know if you have any questions about any part of the code or scripts. Currently we have no plans to release the checkpoints.

zhengsx commented 3 years ago

Closing this issue due to long inactivity. Feel free to reopen it if the problem still exists.

zhengsx commented 2 years ago

Hi @Noisyntrain , we noticed some comments recently posted on Chinese social media discussing the reproduction issue stemming from this thread.

Therefore, we wonder whether your problem (a size mismatch caused by loading the wrong pre-trained model) still exists after loading the correct pre-trained model described in our paper.

Also, please feel free to reopen this issue and provide detailed information about your reproduction process if you successfully run the program but hit a reproduction problem, e.g., the test accuracy does not reach the number we report in our paper.

GangLii commented 2 years ago

Hi authors. I met a similar problem. I pretrained the model on the PCQM dataset, then tried to load the checkpoint to train on the PCBA task. However, I got this error: `RuntimeError: Error(s) in loading state_dict for Graphormer:

    size mismatch for downstream_out_proj.weight: copying a param with shape torch.Size([1, 1024]) from checkpoint, the shape in current model is torch.Size([128, 1024]).

    size mismatch for downstream_out_proj.bias: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([128]).`

I was using the v1 code. I wonder how your procedure gets past this. Could you help me with it?

zhengsx commented 2 years ago

> Hi authors. I met a similar problem. I pretrained the model on the PCQM dataset, then tried to load the checkpoint to train on the PCBA task. However, I got this error: `RuntimeError: Error(s) in loading state_dict for Graphormer: size mismatch for downstream_out_proj.weight: copying a param with shape torch.Size([1, 1024]) from checkpoint, the shape in current model is torch.Size([128, 1024]). size mismatch for downstream_out_proj.bias: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([128]).`
>
> I was using the v1 code. I wonder how your procedure gets past this. Could you help me with it?

Thanks for using Graphormer. When fine-tuning the pretrained Graphormer, the last layer (the output projection layer for the downstream task) should not be loaded, since the downstream task is not the same as the pre-training one (128 binary classification targets for PCBA vs. 1 regression target for PCQM). Therefore, a re-initialized last layer should be used.
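
Concretely, that means dropping the `downstream_out_proj.*` entries from the checkpoint before loading, so the model keeps its freshly initialized output layer. Below is a minimal sketch of this idea, not code from this repository: the parameter names are taken from the error message above, `model` is assumed to be the Graphormer instance already built for the PCBA task, and the `"model"` key assumes a fairseq-style checkpoint layout.

```python
import torch

# Load the pre-trained PCQM checkpoint on CPU.
ckpt = torch.load("pcqm_pretrained.pt", map_location="cpu")
ckpt_state = ckpt.get("model", ckpt)

# Drop the pre-training output head: its shape (1 regression target)
# cannot fit the 128 binary-classification head needed for PCBA.
filtered = {k: v for k, v in ckpt_state.items()
            if not k.startswith("downstream_out_proj")}

# strict=False tolerates the missing head, so the model keeps its own
# re-initialized downstream_out_proj weights.
missing, unexpected = model.load_state_dict(filtered, strict=False)
print("kept re-initialized:", missing)  # expect only downstream_out_proj.*
```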

Btw, v2 is recommended, since the pre-trained models are well prepared there.

ayushnoori commented 2 years ago

Hi @zhengsx, is there an argument to the fairseq-train command specified in the documentation here that can apply a re-initialized last layer? If not, could you please advise how to re-initialize the last layer and then pass it to the Graphormer training loop? Thanks!