Closed: W-rudder closed this issue 6 months ago
Thanks for your interest! We describe the details of the zero-shot settings for the GNNs at the beginning of Sec. 4.2, as follows:
We will also consider releasing the code for the baselines in both the supervised and zero-shot settings.
Thanks for your response!
This is a very interesting project. Could you please provide some pretraining code for the GNN-based baselines? Is the training process similar to common procedures? For example, does the GNN's last layer have output dimension equal to the number of categories, or does the GNN produce representations that are then used to train a separate logistic regression classifier?
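The two training procedures asked about above can be sketched as follows. This is a minimal NumPy illustration of the shapes involved, not the repo's actual implementation; the graph, dimensions, and weights are all hypothetical and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 5 nodes, feature dim 8, 3 classes (all sizes hypothetical).
num_nodes, feat_dim, hid_dim, num_classes = 5, 8, 16, 3
X = rng.normal(size=(num_nodes, feat_dim))      # node features
A = np.eye(num_nodes)                           # adjacency with self-loops
A[0, 1] = A[1, 0] = A[2, 3] = A[3, 2] = 1.0
A_hat = A / A.sum(axis=1, keepdims=True)        # row-normalized propagation

# One GCN-style layer: aggregate neighbors, then transform.
W1 = rng.normal(size=(feat_dim, hid_dim))
H = np.tanh(A_hat @ X @ W1)                     # node representations

# Option (a): a classification head whose output dimension equals the
# number of categories, trained end-to-end with the GNN.
W_out = rng.normal(size=(hid_dim, num_classes))
logits = A_hat @ H @ W_out                      # shape (num_nodes, num_classes)

# Option (b): freeze the representations H and fit a separate
# logistic-regression classifier on top of them.
W_lr = rng.normal(size=(hid_dim, num_classes))
probs = np.exp(H @ W_lr)
probs /= probs.sum(axis=1, keepdims=True)       # softmax over classes
```

Either way, the number of categories only enters through the final weight matrix, which is the part that cannot transfer unchanged across datasets with different label sets.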
Regarding the zero-shot process for the baseline, could you specify the exact configurations? For instance, ArXiv has 40 categories, and for Cora and PubMed, the number of categories is different. How should this discrepancy be handled? If possible, could you provide some example code?
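One common way to handle differing label counts in zero-shot node classification (not necessarily what this project does) is to score node embeddings against an embedding of each dataset's own label names and predict by similarity, so no fixed-size classification head is needed. A minimal sketch, with all embeddings randomly generated as stand-ins:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: node embeddings from a pretrained model, plus one
# embedding per label name. Label sets differ in size across datasets
# (e.g. 40 classes for ArXiv, 7 for Cora), so prediction is a
# similarity lookup rather than a fixed-width output layer.
emb_dim = 32
node_emb = rng.normal(size=(10, emb_dim))   # 10 test nodes

def zero_shot_predict(node_emb, label_emb):
    """Assign each node the label whose embedding is most cosine-similar."""
    n = node_emb / np.linalg.norm(node_emb, axis=1, keepdims=True)
    l = label_emb / np.linalg.norm(label_emb, axis=1, keepdims=True)
    return (n @ l.T).argmax(axis=1)

# The same pipeline works regardless of how many classes a dataset has.
preds_arxiv = zero_shot_predict(node_emb, rng.normal(size=(40, emb_dim)))
preds_cora = zero_shot_predict(node_emb, rng.normal(size=(7, emb_dim)))
```

The discrepancy in category counts is absorbed by the label-embedding matrix, which is rebuilt per dataset, while the node encoder stays fixed.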
Thank you for your response!