[Open] cissoidx opened this issue 1 year ago
Hello, thanks for open-sourcing your great work! I tried to finetune llama_adapter_v2_multimodal on 1 A100, keeping all configurations unchanged and using the four datasets mentioned in the README. Finetuning takes about 100 hours, but the description says it only needs 1 hour to train. Can you tell me if this is normal?

cheers, xu

Hi @cissoidx, our llama-adapter v1 takes about 1 hour to finetune on Alpaca. llama-adapter v2 usually takes longer, depending on the size of the finetuning data.
Hi, I am training this model as well. Do you read images via URL, or download them to local files first? Looking forward to your reply!