HKUDS / GraphGPT

[SIGIR'2024] "GraphGPT: Graph Instruction Tuning for Large Language Models"
https://arxiv.org/abs/2310.13023
Apache License 2.0

Was the LLM not fine-tuned at either stage in the paper? #5

Closed · zhihui-shao closed this issue 1 year ago

zhihui-shao commented 1 year ago

Reading the paper, it seems that the parameters of the LLM are kept frozen throughout.

tjb-tech commented 1 year ago

Yes, your understanding is correct. We only tune the parameters of the projector in both stages. Thank you for your attention to our GraphGPT.
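
For readers wondering what "only tuning the projector" looks like in practice, here is a minimal sketch (not the actual GraphGPT code): it assumes a pretrained `llm`, a pretrained `graph_encoder`, and a hypothetical `Projector` module mapping graph embeddings into the LLM's token embedding space. The LLM and graph encoder are frozen, and only the projector's parameters are passed to the optimizer.

```python
import torch
import torch.nn as nn


class Projector(nn.Module):
    """Hypothetical linear projector from graph-embedding space to LLM hidden space."""

    def __init__(self, graph_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(graph_dim, llm_dim)

    def forward(self, graph_emb: torch.Tensor) -> torch.Tensor:
        return self.proj(graph_emb)


def build_trainable_params(llm: nn.Module, graph_encoder: nn.Module, projector: nn.Module):
    # Freeze the LLM and graph encoder in both tuning stages.
    for p in llm.parameters():
        p.requires_grad = False
    for p in graph_encoder.parameters():
        p.requires_grad = False
    # Only the projector's parameters receive gradient updates.
    return [p for p in projector.parameters() if p.requires_grad]


# Usage sketch (hypothetical dimensions and learning rate):
# projector = Projector(graph_dim=768, llm_dim=4096)
# optimizer = torch.optim.AdamW(
#     build_trainable_params(llm, graph_encoder, projector), lr=2e-3
# )
```

This keeps the trainable parameter count small (just the projector), which is the point of the two-stage instruction tuning described in the reply above.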