csuhan / OneLLM

[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language

Training code #15

Open Yanllan opened 9 months ago

Yanllan commented 9 months ago

Hello! Your work is excellent and I am very interested in it. When do you plan to open-source the training code, or could you provide some examples? Thanks!

csuhan commented 9 months ago

We will release the training code within one month.

csuhan commented 8 months ago

Hi @Yanllan , we have just released the training code. Feel free to tell us if you need any help.

Yanllan commented 8 months ago

First of all, congratulations on being accepted to CVPR! Secondly, due to GPU memory limitations, do you have a code reference for LoRA fine-tuning? I only have an A800.

csuhan commented 8 months ago

We have implemented LoRA tuning for pure LLaMA at: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/model/LLM/llama_peft.py

You can 1. add LoRA layers to onellm.py, and 2. freeze the LLM and enable only the LoRA layers in its __init__ function. A sketch of these two steps follows below.
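
A minimal sketch of those two steps in plain PyTorch, assuming LLaMA-style attention projection names (`wq`, `wk`, `wv`, `wo`); `LoRALinear` and `add_lora_and_freeze` are illustrative helpers, not OneLLM's actual API, so adjust the targeted layer names to match onellm.py:

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen nn.Linear with a trainable low-rank update:
    y = W x + scale * B(A(x))."""
    def __init__(self, base: nn.Linear, rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        # Zero-init B so the wrapped layer starts identical to the base layer.
        nn.init.zeros_(self.lora_b.weight)
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.lora_b(self.lora_a(x)) * self.scale


def add_lora_and_freeze(model: nn.Module, rank: int = 16) -> nn.Module:
    # Step 1: wrap the attention projections with LoRA.
    # The names below are an assumption; match them to onellm.py's layers.
    for module in model.modules():
        for child_name, child in module.named_children():
            if isinstance(child, nn.Linear) and child_name in ("wq", "wk", "wv", "wo"):
                setattr(module, child_name, LoRALinear(child, rank=rank))
    # Step 2: freeze everything, then re-enable only the LoRA parameters.
    for name, param in model.named_parameters():
        param.requires_grad = "lora_" in name
    return model
```

With only the LoRA parameters trainable, the optimizer state shrinks substantially, which should help training fit on a single A800.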