HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Was the LLM not fine-tuned at either stage in the paper? #5

Closed: zhihui-shao closed this issue 10 months ago

zhihui-shao commented 10 months ago

From reading the paper, it seems that the parameters of the LLM are always frozen.

tjb-tech commented 10 months ago

Yes, your understanding is correct. We only tune the parameters of the projector in both stages. Thank you for your attention to our GraphGPT.
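For readers skimming the thread, here is a minimal PyTorch sketch of that training setup: the LLM backbone is frozen and only the graph-to-token projector receives gradients, in both stages. The class and attribute names (`GraphLLM`, `llm`, `graph_projector`) are illustrative placeholders, not GraphGPT's actual identifiers.

```python
import torch
import torch.nn as nn

class GraphLLM(nn.Module):
    """Toy stand-in: a frozen LLM backbone plus a trainable graph projector."""
    def __init__(self, graph_dim: int = 128, llm_dim: int = 512):
        super().__init__()
        self.llm = nn.Linear(llm_dim, llm_dim)  # placeholder for the pretrained LLM
        self.graph_projector = nn.Linear(graph_dim, llm_dim)  # maps graph features into LLM token space

    def forward(self, graph_feats: torch.Tensor) -> torch.Tensor:
        return self.llm(self.graph_projector(graph_feats))

model = GraphLLM()

# Freeze every LLM parameter; this holds in both tuning stages.
for p in model.llm.parameters():
    p.requires_grad = False

# Hand the optimizer only the projector's parameters.
optimizer = torch.optim.AdamW(model.graph_projector.parameters(), lr=2e-3)

# One dummy step to show gradients flow only through the projector.
x = torch.randn(4, 128)
loss = model(x).sum()
loss.backward()
optimizer.step()
assert all(p.grad is None for p in model.llm.parameters())  # LLM backbone untouched
```

Passing only `model.graph_projector.parameters()` to the optimizer, on top of setting `requires_grad = False` on the backbone, ensures the optimizer never updates (or allocates state for) the frozen LLM weights.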