RManLuo / reasoning-on-graphs

Official Implementation of ICLR 2024 paper: "Reasoning on Graphs: Faithful and Interpretable Large Language Model Reasoning"
https://arxiv.org/abs/2310.01061
MIT License

Have the parameters of the LLM's input embedding been tuned? #4

Closed: LB0828 closed this issue 11 months ago

LB0828 commented 11 months ago

Thank you for providing the code. The paper mentions introducing new tokens, marked as `<PATH>` and `</PATH>`. I have a question about whether the LLM's input-embedding parameters are tuned for these tokens: in the training code, the `requires_grad` attribute of the parameters returned by `get_input_embeddings().parameters()` is never explicitly set to `True`. Could you please clarify whether this tuning is necessary, and how it happens?
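
For context, here is a minimal sketch of the pattern in question (the checkpoint name and token strings are assumptions for illustration, not taken from the repo): after adding new tokens and resizing the embedding matrix, one can inspect whether the input-embedding weights would receive gradients.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical checkpoint and token strings; the repo's actual base
# model and special tokens may differ.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Register the new path-marker tokens and grow the embedding matrix to match.
tokenizer.add_tokens(["<PATH>", "</PATH>"])
model.resize_token_embeddings(len(tokenizer))

# Inspect whether the input-embedding weights would receive gradients.
for param in model.get_input_embeddings().parameters():
    print(param.shape, param.requires_grad)  # True by default in PyTorch
```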

RManLuo commented 11 months ago

We do not use LoRA in our experiments. If I understand the trl package and the Hugging Face Trainer correctly, all of the model's parameters are trainable by default (`requires_grad=True`), so training with `model.train()` and the Trainer updates every parameter, including the input embeddings for the new tokens.
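
A quick sanity check of that behavior (a sketch with an assumed checkpoint, not the repo's training code): without LoRA, every parameter already defaults to `requires_grad=True`, and `model.train()` only toggles training-mode layers such as dropout rather than changing gradient flags.

```python
from transformers import AutoModelForCausalLM

# Hypothetical checkpoint; the repo's actual base model may differ.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Without LoRA, parameters are created with requires_grad=True, so the
# optimizer built by the Trainer updates all of them, embeddings included.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable} / {total}")

# model.train() only enables training-mode behavior (e.g., dropout);
# it does not alter requires_grad.
model.train()
assert all(p.requires_grad for p in model.get_input_embeddings().parameters())
```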