dvlab-research / LLaMA-VID

Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models
Apache License 2.0

LORA SUPPORTING #36

Closed Deaddawn closed 5 months ago

Deaddawn commented 6 months ago

Hi there. Will you be able to support LoRA?

yanwei-li commented 6 months ago

Hi, we will try to support LoRA. We did not find that LoRA gives a performance or efficiency gain during training, so we have not added it to the current code.

Deaddawn commented 6 months ago

> Hi, we will try to support LoRA. We did not find that LoRA gives a performance or efficiency gain during training, so we have not added it to the current code.

Thanks for the reply. It's just not feasible to train this on GPUs with less than 40 GB of memory without LoRA.
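For context on why LoRA cuts memory so sharply: instead of updating a full weight matrix W, LoRA freezes W and trains only a low-rank update B·A, so optimizer state is kept for far fewer parameters. Below is a minimal NumPy sketch of that idea, not the LLaMA-VID training code (in practice one would wrap the model with the `peft` library); the class name and shapes here are illustrative assumptions.

```python
import numpy as np

class LoRALinear:
    """Frozen base weight W plus a trainable low-rank update B @ A (LoRA sketch)."""

    def __init__(self, in_features, out_features, r=8, alpha=16, seed=0):
        rng = np.random.default_rng(seed)
        # Frozen pretrained weight: never updated during fine-tuning.
        self.W = rng.standard_normal((out_features, in_features)) * 0.02
        # Trainable low-rank factors: A starts random, B starts at zero,
        # so the adapted layer initially matches the base layer exactly.
        self.A = rng.standard_normal((r, in_features)) * 0.02
        self.B = np.zeros((out_features, r))
        self.scaling = alpha / r

    def __call__(self, x):
        # y = x W^T + s * (x A^T) B^T  -- only A and B would receive gradients.
        return x @ self.W.T + self.scaling * (x @ self.A.T) @ self.B.T

    def trainable_params(self):
        return self.A.size + self.B.size

layer = LoRALinear(in_features=4096, out_features=4096, r=8)
print(f"full fine-tune params per layer: {layer.W.size:,}")        # 16,777,216
print(f"LoRA (r=8) params per layer:     {layer.trainable_params():,}")  # 65,536

x = np.ones((1, 4096))
# With B initialized to zero, the adapted layer reproduces the frozen base.
assert np.allclose(layer(x), x @ layer.W.T)
```

With r=8 on a 4096x4096 projection, the trainable parameter count drops by roughly 256x, which is why LoRA makes sub-40 GB training plausible where full fine-tuning is not.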