Outsider565 / LoRA-GA

The running command on Code-Feedback #6

Open lucasliunju opened 2 months ago

lucasliunju commented 2 months ago

Hi,

Thanks for your great work.

May I ask for the training command for LoRA-GA on Code-Feedback with LLaMA2 and LLaMA3?

Thank you very much in advance!

Best

Outsider565 commented 2 months ago

I have updated the code to incorporate the PEFT API. You can try out this new version, which should make it easier to adapt to new datasets and different models. I believe this update will streamline the process for your use case.
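For reference, a minimal sketch of what wrapping a model through the standard PEFT API typically looks like; the model name and LoRA hyperparameters below are illustrative assumptions, not the repo's exact configuration, and LoRA-GA's gradient-based initialization step is not shown:

```python
# Minimal sketch of wrapping a base model with the standard PEFT LoRA API.
# Model ID and hyperparameters are assumptions for illustration only.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(
    r=8,                                   # assumed LoRA rank
    lora_alpha=16,                         # assumed scaling factor
    target_modules=["q_proj", "v_proj"],   # assumed target layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```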

lucasliunju commented 2 months ago

Hi @Outsider565

Thanks for your reply.

In the updated code, I haven't found the running command for Code-Feedback. By the way, in your previous code, I noticed you use the LLaMA2 tokenizer to process the data in data.py. If I use another model, such as LLaMA3, do I still need to change the tokenizer to LLaMA3's?

Outsider565 commented 2 months ago

In the legacy code, you should use the tokenizer corresponding to your model in data.py to ensure the token limit (512 for math and 1024 for Code-Feedback) is measured correctly.
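
A minimal sketch of what swapping in the matching tokenizer might look like, assuming the public Hugging Face checkpoint name for LLaMA3; the example text is a placeholder and the max_length follows the limits mentioned above:

```python
# Sketch: load the tokenizer that matches the base model so the token
# limit (512 for math, 1024 for Code-Feedback) is counted in the right
# vocabulary. The checkpoint name is the public HF ID, assumed here.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-8B")

example_text = "def add(a, b):\n    return a + b"  # placeholder sample
encoded = tokenizer(
    example_text,
    truncation=True,
    max_length=1024,  # 1024 for Code-Feedback, 512 for math
)
print(len(encoded["input_ids"]))
```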