OpenGVLab / LLaMA-Adapter

[ICLR 2024] Fine-tuning LLaMA to follow Instructions within 1 Hour and 1.2M Parameters
GNU General Public License v3.0

training time #88


cissoidx commented 1 year ago

Hello, thanks for open-sourcing your great work! I tried to finetune llama_adapter_v2_multimodal on 1 A100, keeping all configurations unchanged and using the four datasets mentioned in the README. Finetuning takes about 100 hours to finish, but the description says it only needs 1 hour to train. Can you tell me whether this is normal?

Cheers, xu

csuhan commented 1 year ago

Hi @cissoidx, our LLaMA-Adapter v1 requires 1 hour for finetuning on Alpaca. LLaMA-Adapter v2 usually takes longer, depending on the size of the finetuning data.

adda1221 commented 1 year ago

> I tried to finetune llama_adapter_v2_multimodal on 1 A100 ... It takes about 100 hours to finish finetuning.

Hi, I am training this model as well. Do you read the images directly from URLs, or download them to local files first? Looking forward to your reply!
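One common way to avoid re-fetching images over the network on every epoch is to cache them to local files keyed by URL. This is only a minimal sketch of that idea, not part of the LLaMA-Adapter codebase; the function names and the hash-based cache layout are my own assumptions.

```python
import hashlib
import os
import urllib.request


def cached_image_path(url: str, cache_dir: str = "image_cache") -> str:
    """Map an image URL to a deterministic local file path.

    The same URL always yields the same path, so a second epoch
    (or a second worker) finds the file already on disk.
    """
    os.makedirs(cache_dir, exist_ok=True)
    # Keep the original extension if the URL has one; default to .jpg.
    ext = os.path.splitext(url.split("?")[0])[1] or ".jpg"
    name = hashlib.sha256(url.encode("utf-8")).hexdigest()[:16] + ext
    return os.path.join(cache_dir, name)


def fetch_image(url: str, cache_dir: str = "image_cache") -> str:
    """Download the image only if it is not already cached locally."""
    path = cached_image_path(url, cache_dir)
    if not os.path.exists(path):
        urllib.request.urlretrieve(url, path)
    return path
```

A dataset's `__getitem__` could then call `fetch_image(url)` and open the returned path, so only the first pass over the data pays the download cost.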