csuhan / OneLLM

[CVPR 2024] OneLLM: One Framework to Align All Modalities with Language

Training code #15

Open Yanllan opened 6 months ago

Yanllan commented 6 months ago

Hello! Your work is excellent and I am very interested in it. I wonder when you will open-source the training code or provide some examples. Thanks!

csuhan commented 6 months ago

We will release the training code within one month.

csuhan commented 6 months ago

Hi @Yanllan , we have just released the training code. Feel free to tell us if you need any help.

Yanllan commented 6 months ago

First of all, congratulations on being accepted by CVPR! Secondly, due to GPU memory limitations, do you have reference code for LoRA fine-tuning? I only have an A800.

csuhan commented 5 months ago

We have implemented LoRA tuning for pure LLaMA at: https://github.com/Alpha-VLLM/LLaMA2-Accessory/blob/main/accessory/model/LLM/llama_peft.py

You can: 1. add LoRA layers to onellm.py, and 2. freeze the LLM and enable the LoRA layers in its __init__ function, as sketched below.
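
A minimal sketch of those two steps, assuming a LLaMA-style model whose attention blocks expose `nn.Linear` projections named `wq`/`wv` (the naming used in LLaMA2-Accessory); `LoRALinear` and `add_lora` are illustrative helpers, not OneLLM's actual API:

```python
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wrap a frozen nn.Linear with a trainable low-rank update (LoRA)."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the original projection
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # zero init: no change at step 0
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

def add_lora(model: nn.Module, rank: int = 8) -> nn.Module:
    # Step 2: freeze the whole LLM first ...
    for p in model.parameters():
        p.requires_grad = False
    # Step 1: ... then swap the attention projections for LoRA-wrapped ones;
    # the fresh lora_a/lora_b factors are the only trainable parameters left.
    # `model.layers` and the `wq`/`wv` names assume a LLaMA-style layout.
    for layer in model.layers:
        layer.attention.wq = LoRALinear(layer.attention.wq, rank)
        layer.attention.wv = LoRALinear(layer.attention.wv, rank)
    return model
```

Because `lora_b` is zero-initialized, the wrapped model is numerically identical to the frozen pretrained model at the start of fine-tuning, so training begins from the original behavior.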