Considering that we plan to continuously upgrade Graphormer in this repository, loading an old checkpoint with the latest version of the code may well fail. We therefore encourage you to prepare the pre-trained model yourself, following our paper.
Hi Graphormer authors, please do make sure that your results are easily reproducible on the master branch. You can continuously update your code on a non-master branch, and merge it in once you have finished checking compatibility.
In short, please ensure easy reproducibility on the master branch as soon as possible. In the worst case, we will need to delete your leaderboard submissions (which I hope will not happen).
Weihua -- OGB Team
Hi Weihua, we ensure that our results are easily reproducible on every branch at any time by simply following the instructions in this repository and in our paper. We are definitely willing to help if anyone runs into trouble while following our reproduction instructions.
Hi authors, since it's been one week, may I know roughly when the checkpoints and code to reproduce the result will be ready? Thank you.
Hi @Noisyntrain , all the code to reproduce all the results in our paper has been available on the main branch since our first release. Please let us know if you have any questions about any part of the code or scripts. Currently we have no plans to release the checkpoints.
Closing this issue due to long inactivity. Feel free to reopen it if the problem still exists.
Hi @Noisyntrain , we noticed some comments recently posted on Chinese social media discussing a reproduction issue derived from this one.
We therefore wonder whether your problem (a size mismatch caused by loading the wrong pre-trained model) still exists after you load the correct pre-trained model described in our paper.
Also, please feel free to reopen this issue and provide detailed information about the reproduction process if you successfully execute the program but encounter any reproduction problem, e.g., the test accuracy does not reach the number we report in our paper.
Hi authors. I met a similar problem. I pre-trained the model on the PCQM dataset and then tried to load the checkpoint to train on the PCBA task. However, I got this error:
```
RuntimeError: Error(s) in loading state_dict for Graphormer:
    size mismatch for downstream_out_proj.weight: copying a param with shape torch.Size([1, 1024]) from checkpoint, the shape in current model is torch.Size([128, 1024]).
    size mismatch for downstream_out_proj.bias: copying a param with shape torch.Size([1]) from checkpoint, the shape in current model is torch.Size([128]).
```
I was using the v1 code. I wonder how your procedure got past this. Could you help me with this?
Thanks for using Graphormer. When fine-tuning the pre-trained Graphormer, the last layer (the output projection layer for the downstream task) should not be loaded, since the downstream task is not the same as the pre-training one (128 binary classifications for PCBA vs. 1 regression target for PCQ). Instead, a re-initialized last layer should be used.
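For anyone hitting the same size mismatch, here is a minimal sketch of this loading pattern, not the authors' exact script: the checkpoint file name, the stand-in model class, and the assumption that weights may be nested under a "model" key are all hypothetical.

```python
import torch
import torch.nn as nn

# Minimal sketch, not the authors' exact script. The file name, the stand-in
# model below, and the nested "model" key are hypothetical; substitute your
# actual Graphormer model and checkpoint.
class TinyGraphormer(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        self.encoder = nn.Linear(16, 1024)                     # stand-in for the real encoder
        self.downstream_out_proj = nn.Linear(1024, num_tasks)  # task-specific head

# Checkpoint trained for 1-target PCQM regression.
ckpt = torch.load("pcqm_pretrained.pt", map_location="cpu")
state_dict = ckpt.get("model", ckpt)  # some training loops nest weights under "model"

# Drop the pre-training head: its [1, 1024] weights cannot fit a
# 128-target PCBA head, which is exactly the size mismatch above.
for key in ("downstream_out_proj.weight", "downstream_out_proj.bias"):
    state_dict.pop(key, None)

# Build the fine-tuning model with a fresh 128-way head, then load leniently;
# strict=False tolerates the missing head keys, which stay randomly initialized.
model = TinyGraphormer(num_tasks=128)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing:", missing)        # expect only the downstream_out_proj.* keys
print("unexpected:", unexpected)  # with the real model, the encoder weights should all match
```

Loading with `strict=False` leaves the freshly constructed 128-way head randomly initialized, which is exactly the "re-initialized last layer" described above.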
Btw, v2 is recommended, since the pre-trained models there come well prepared.
Hi @zhengsx, is there an argument to the fairseq-train command specified in the documentation here that can apply a re-initialized last layer? If not, could you please advise how to re-initialize the last layer and then pass it to the Graphormer training loop? Thanks!
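For reference, a hypothetical sketch of what I have in mind is below; `downstream_out_proj` as the layer name and the Xavier/zero initialization are my assumptions, not a confirmed Graphormer API.

```python
import torch.nn as nn

# Hypothetical sketch: re-initialize the task-specific output head in place
# after loading the rest of the checkpoint. Layer name and init scheme are
# assumptions about the v1 code, not a documented interface.
def reinit_head(model: nn.Module) -> None:
    head = model.downstream_out_proj   # the downstream output projection
    nn.init.xavier_uniform_(head.weight)
    nn.init.zeros_(head.bias)
```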
Hi authors, thanks for your great work. While trying to reproduce the No. 1 result on the OGB PCBA leaderboard, I could not find the checkpoints mentioned in your paper that were pre-trained for the PCBA task, so I instead used the PCQM checkpoint you provided for the PCQM task. But an error occurred while loading the checkpoint, even though I set the hidden dimension and FFN dimension from 1024 to 768: `RuntimeError: Error(s) in loading state_dict for Graphormer:`
Thus, may I ask two questions about the reproduction process:
Looking forward to your reply. Thank you!