Closed: StefanIsSmart closed this issue 1 week ago.
Do you use the checkpoints of the original models directly, or did you retrain them yourself?
We used the released checkpoints.
Different models have different dimensions; how do you make sure the predictor head doesn't have a big inference?
We utilized only their pretrained GNN module, as the pre-training frameworks did not include predictor-heads for downstream tasks. The predictor-heads are added only when the pretrained GNN modules are applied to downstream tasks.
I know that point, but the inputs (obtained from the pre-trained models) are different, so the predictor heads will be different. How do you make sure this point will have a small inference?
And could you upload the code you used to evaluate the other models? This would help readers reproduce the results in your paper. Thank you!
Sorry, could you clarify what you mean by 'small/big inference'? Are you referring to 'small/big difference'?
Following standard practices for performance evaluation of pre-trained models, we used the checkpoints released by the original papers. The varying dimensions of the pre-trained models' outputs lead to different input dimensions for the predictor heads. We haven't specifically explored the impact of this, but I believe the effect would be subtle.
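To make the dimension-matching point concrete, here is a minimal NumPy sketch (hypothetical names and dimensions, not the paper's actual code) of how a linear predictor head's input size is simply set to whichever embedding dimension the pretrained GNN produces, so the head adapts to each baseline without any further changes:

```python
import numpy as np

class PredictorHead:
    """Minimal linear head mapping a pretrained encoder's graph embedding
    to task outputs. Illustrative only; dims and names are assumptions."""
    def __init__(self, emb_dim, num_tasks, seed=0):
        rng = np.random.default_rng(seed)
        # Weight shape follows the encoder's output dimension.
        self.W = rng.normal(scale=emb_dim ** -0.5, size=(emb_dim, num_tasks))
        self.b = np.zeros(num_tasks)

    def __call__(self, h):
        # h: (batch, emb_dim) graph embeddings from the frozen/pretrained GNN
        return h @ self.W + self.b

# Two pretrained GNNs with different embedding sizes each get a head
# whose input dimension matches; the task output shape is unchanged.
for emb_dim in (256, 300):
    head = PredictorHead(emb_dim, num_tasks=12)
    h = np.zeros((8, emb_dim))   # batch of 8 graph embeddings
    print(head(h).shape)         # (8, 12) regardless of emb_dim
```

The head is always trained on the downstream task, so differing input widths change only the head's parameter count, not the evaluation protocol.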
To reproduce the results, please refer to the documentation in the repositories of the corresponding baselines.