HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Why do you align GNN with a transformer training from scratch instead of LLaMA? #22

Closed yzhHoward closed 8 months ago

yzhHoward commented 9 months ago

Your work is very impressive. Aligning the GNN with a language model can help it fit with LLaMA. But I wonder why you align it with a transformer trained from scratch? Is this better than aligning it with LLaMA directly?

tjb-tech commented 8 months ago


Thank you for your interest in our GraphGPT. I apologize for the delayed response due to the academic workload at the end of the semester. Your question is great! The reason we align the GNN with a transformer trained from scratch in the early text-grounding stage is that it is more scalable, and therefore more efficient: the lightweight transformer lets us use much more text-graph data to give the GNN its initial semantic capabilities. Hope that my answer is helpful for you! Wishing you an early Merry Christmas!
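
For readers who want a concrete picture of what "aligning a GNN with a small transformer trained from scratch" could look like in this text-grounding stage, below is a minimal, self-contained sketch of a CLIP-style contrastive alignment between a toy GNN encoder and a tiny transformer text encoder. All module names, sizes, and the toy data are illustrative assumptions, not the repository's actual implementation; the point is only that every parameter here is cheap to update, unlike a 7B-parameter LLaMA.

```python
# Illustrative sketch only: contrastive text-graph alignment with a small,
# trainable-from-scratch text encoder instead of a frozen/finetuned LLaMA.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyTextEncoder(nn.Module):
    """Small transformer trained from scratch; far cheaper to update than LLaMA."""

    def __init__(self, vocab_size=30522, dim=256, n_layers=2, n_heads=4, max_len=64):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, dim)
        self.pos = nn.Embedding(max_len, dim)
        layer = nn.TransformerEncoderLayer(dim, n_heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, token_ids):                      # (B, T) token ids
        T = token_ids.size(1)
        pos = torch.arange(T, device=token_ids.device)
        h = self.encoder(self.tok(token_ids) + self.pos(pos))
        return h.mean(dim=1)                           # (B, dim) mean-pooled text embedding


class SimpleGNNEncoder(nn.Module):
    """Toy message-passing encoder standing in for the graph encoder."""

    def __init__(self, in_dim=128, dim=256):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, dim)
        self.lin2 = nn.Linear(dim, dim)

    def forward(self, x, adj):                         # x: (N, in_dim), adj: (N, N) row-normalized
        h = F.relu(self.lin1(adj @ x))                 # one round of neighbor aggregation
        return self.lin2(adj @ h)                      # (N, dim) node embeddings


def contrastive_alignment_loss(graph_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE: node i should match the text description of node i."""
    g = F.normalize(graph_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = g @ t.T / temperature                     # (N, N) similarity matrix
    labels = torch.arange(g.size(0), device=g.device)
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))


if __name__ == "__main__":
    N, in_dim, vocab, T = 8, 128, 30522, 16
    x = torch.randn(N, in_dim)                         # toy node features
    adj = torch.eye(N) + torch.rand(N, N) * 0.1        # toy adjacency
    adj = adj / adj.sum(dim=1, keepdim=True)           # row-normalize
    token_ids = torch.randint(0, vocab, (N, T))        # toy tokenized node descriptions

    gnn, txt = SimpleGNNEncoder(in_dim), TinyTextEncoder(vocab_size=vocab)
    loss = contrastive_alignment_loss(gnn(x, adj), txt(token_ids))
    loss.backward()                                    # both small encoders get gradients cheaply
    print(f"alignment loss: {loss.item():.4f}")
```

Because the text encoder here has only a few million parameters, the alignment stage can be run over far more text-graph pairs for the same compute budget than backpropagating through LLaMA would allow, which is the scalability argument made above.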

yzhHoward commented 8 months ago

Thanks! That addresses my concern.