dvlab-research / LLaMA-VID

Official Implementation for LLaMA-VID: An Image is Worth 2 Tokens in Large Language Models
Apache License 2.0

why not use LoRA for tuning Vicuna? #72

Closed dragen1860 closed 2 months ago

dragen1860 commented 3 months ago

Dear author: I noticed you fine-tune the whole LLM without using LoRA. I wonder whether you have run any experiments comparing training with and without LoRA? Thank you.

yanwei-li commented 3 months ago

Hi, we did not find that LoRA significantly saves memory or improves efficiency or performance once FlashAttention is added, so we keep full fine-tuning.
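
For anyone who wants to run this comparison themselves, below is a minimal sketch of attaching LoRA adapters to a Vicuna backbone with HuggingFace PEFT. This is not the repo's training code: the model id, rank, and target modules are illustrative assumptions.

```python
# Minimal sketch (not LLaMA-VID's code): wrapping a Vicuna backbone with
# LoRA adapters via HuggingFace PEFT. Hyperparameters are illustrative.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed backbone checkpoint; LLaMA-VID builds on Vicuna.
model = AutoModelForCausalLM.from_pretrained(
    "lmsys/vicuna-7b-v1.5",
    torch_dtype=torch.float16,
)

lora_config = LoraConfig(
    r=64,                 # illustrative adapter rank
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Freeze the base weights and inject trainable low-rank adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

Note that LoRA mainly cuts gradient and optimizer-state memory by shrinking the trainable parameter count, while FlashAttention reduces activation memory; how much LoRA helps therefore depends on which part dominates the memory budget in a given setup.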