HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

About baseline codes #40

Closed W-rudder closed 6 months ago

W-rudder commented 6 months ago

This is a very interesting project. Could you please provide the pretraining code for the GNN-based baselines? Does the training follow the usual procedure? For example, does the last GNN layer output one logit per category, or does the GNN produce representations that are then used to train a separate logistic regression classifier?
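To make the two alternatives concrete, here is a rough sketch of what I mean (my own illustration, not code from this repo; I am assuming a PyTorch Geometric GCN, and all sizes, names, and the toy graph are made up):

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv
from sklearn.linear_model import LogisticRegression

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hid_dim)
        self.conv2 = GCNConv(hid_dim, out_dim)

    def forward(self, x, edge_index):
        return self.conv2(F.relu(self.conv1(x, edge_index)), edge_index)

# Toy graph standing in for a real dataset (sizes are illustrative).
num_nodes, feat_dim, num_classes = 100, 128, 40  # e.g. 40 ArXiv categories
x = torch.randn(num_nodes, feat_dim)
edge_index = torch.randint(0, num_nodes, (2, 400))
y = torch.randint(0, num_classes, (num_nodes,))

# Option A: end-to-end -- the final GNN layer emits num_classes logits.
model_a = GCN(feat_dim, 256, num_classes)
logits = model_a(x, edge_index)
loss = F.cross_entropy(logits, y)  # backprop through the whole GNN

# Option B: the GNN produces generic embeddings; a separate
# logistic regression classifier is fit on the frozen embeddings.
model_b = GCN(feat_dim, 256, 256)
with torch.no_grad():
    emb = model_b(x, edge_index)
clf = LogisticRegression(max_iter=1000).fit(emb.numpy(), y.numpy())
```

In Option B, I assume the encoder would be pretrained first (e.g., with some self-supervised objective) and then frozen, but that is exactly the part I am unsure about.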

Regarding the zero-shot process for the baselines, could you specify the exact configurations? For instance, ArXiv has 40 categories, while Cora and PubMed have different numbers of categories. How is this mismatch handled? If possible, could you provide some example code?
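For concreteness, here are the two handlings of the label-space mismatch I can imagine (both are guesses on my part, not taken from the paper; the mapping and sizes below are made up):

```python
import torch

# Logits for 5 target (e.g. PubMed) nodes from a head trained on the
# 40 ArXiv categories -- random numbers standing in for real outputs.
logits = torch.randn(5, 40)

# Guess 1: manually map a subset of source categories to target
# categories and argmax over those columns only (mapping is made up).
src_to_tgt = {3: 0, 17: 1, 29: 2}            # hypothetical ArXiv->PubMed map
cols = torch.tensor(list(src_to_tgt.keys()))
preds = logits[:, cols].argmax(dim=1)        # 0..2 index the target classes

# Guess 2: drop the source head entirely and classify by comparing node
# embeddings to target-class prototypes, which would require some
# target-side signal (e.g. class-name text embeddings).
```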

Thank you for your response!

tjb-tech commented 6 months ago

Thanks for your interest! We describe the details of the zero-shot settings for the GNNs at the beginning of Sec. 4.2, quoted below:

[screenshot of the zero-shot setting description from Sec. 4.2]

We will also consider releasing the baseline code for both the supervised and zero-shot settings.

W-rudder commented 6 months ago

Thanks for your response!