Have you guys tested finetuning the whole LLaMA decoder in the finetuning stage, instead of using LoRA? Curious what findings or insights y'all might have there, since I didn't see it included in the paper.
We haven't done that yet. You can use the script train_it_wo_lora.sh to finetune the whole LLM. We are currently working hard on the next version; once it is done, we will release the checkpoint. Stay tuned!
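For readers wondering what "finetune the whole LLM" means relative to the LoRA path, here is a minimal sketch of the distinction, assuming a HuggingFace Transformers + PEFT setup. This is illustrative only, not the repo's actual training code: the model id is a placeholder, and `use_lora = False` is just meant to mirror what the "wo_lora" script selects.

```python
# Minimal sketch (NOT this repo's training code) of LoRA vs. full finetuning,
# assuming a HuggingFace Transformers + PEFT setup.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder model id -- substitute the LLaMA checkpoint you actually use.
model = AutoModelForCausalLM.from_pretrained(
    "your-org/llama-7b", torch_dtype=torch.float16
)

use_lora = False  # False mirrors the "wo_lora" (full-finetuning) setting

if use_lora:
    # LoRA: base weights stay frozen; only low-rank adapter matrices train.
    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # typical choice for LLaMA attention
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, config)
else:
    # Full finetuning: every decoder parameter receives gradients, which costs
    # far more optimizer memory but is not limited to the adapters' capacity.
    for param in model.parameters():
        param.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable:,} / {total:,} parameters")
```

The main practical difference is memory: full finetuning keeps gradients and optimizer state for every decoder weight, while LoRA only does so for the small adapter matrices.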
Hi, we have released the finetuned checkpoint for your reference!
jihan-yin closed this issue 1 year ago.