Yanllan opened 9 months ago
We will release the training code within one month.
Hi @Yanllan, we have just released the training code. Feel free to tell us if you need any help.
First of all, congratulations on the CVPR acceptance! Second, due to GPU limitations, do you have a code reference for LoRA fine-tuning? I only have an A800.
We have implemented LoRA tuning for pure LLaMA at: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/model/LLM/llama_peft.py
You can 1. add LoRA layers to onellm.py, and 2. freeze the LLM and enable the LoRA layers in its __init__ function. A sketch of both steps follows.
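In case a concrete starting point helps, here is a minimal PyTorch sketch of both steps, following the general pattern of llama_peft.py. The names LoRALinear and mark_only_lora_trainable are illustrative, not from the repo; adapt them to the actual module names in onellm.py.

```python
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update: y = Wx + (alpha/r) * B(Ax)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # step 2a: freeze the pretrained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)  # B = 0, so training starts exactly at the base model
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))


def mark_only_lora_trainable(model: nn.Module):
    """Step 2b: freeze everything except parameters belonging to LoRA layers."""
    for name, param in model.named_parameters():
        param.requires_grad_("lora_" in name)
```

In the model's __init__ you would then wrap the linear projections you want to tune (e.g. the attention/MLP linears, whatever they are named in onellm.py) with LoRALinear, and call mark_only_lora_trainable(self) at the end so only the low-rank adapters receive gradients.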
Hello! Your work is excellent and I am also very interested. I wonder when you will open-source the training code or provide some examples. Thanks!