NVlabs / DoRA

[ICML2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
https://nbasyl.github.io/DoRA-project-page/

Using LLaMA-3-8b in visual instruction tuning #3

Closed: enkaranfiles closed this issue 5 months ago

enkaranfiles commented 5 months ago

Hello everyone,

Great work!

Have you tested DoRA's performance using LLaMA-3-8B for visual instruction tuning, and how can I test it in my setting?

nbasyl commented 5 months ago

Hi, thanks for your interest in our work. Unfortunately, the projector weights for LLaMA-3 are currently not available in LLaVA, so using LLaMA-3 within the LLaVA framework is not possible at the moment unless you train the projector yourself. For the complete list of LLMs supported by LLaVA, please refer to the following link: LLaVA Model Zoo.
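
If you only want to try DoRA on LLaMA-3-8B itself (text-only, outside the LLaVA visual instruction tuning pipeline), one option is the Hugging Face PEFT library, which exposes DoRA via `use_dora=True` in `LoraConfig`. The sketch below is not the training recipe from this repository; the checkpoint name, target modules, and hyperparameters are illustrative assumptions:

```python
# Minimal sketch: wrapping a LLaMA-3-8B checkpoint with DoRA adapters via PEFT.
# Assumes peft >= 0.9.0 (DoRA support) and access to the gated Meta checkpoint.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_name = "meta-llama/Meta-Llama-3-8B"  # assumed checkpoint for illustration
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

config = LoraConfig(
    r=16,                      # illustrative rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_dora=True,             # enables weight-decomposed low-rank adaptation
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the DoRA/LoRA parameters are trainable
```

Note that this does not cover the multimodal projector, which is the missing piece for using LLaMA-3 inside LLaVA as discussed above.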