Closed ZJULiHongxin closed 1 month ago
Thank you for open-sourcing this great work @Yuliang-Liu @MelosY
I tried to fine-tune Qwen-VL using finetune_ds_debug.sh and got this training info:
trainable params: 7848497728 || all params: 9708053824 || trainable%: 80.8452
Besides, args.use_lora is also set to False.
Is this large number of trainable parameters normal?
I would appreciate it if anyone could help me.
That's normal: our trainable parameters include the LLM, the resampler, and the LoRA layers in the visual encoder.
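For anyone who wants to verify the count themselves, here is a minimal sketch of how such a "trainable params || all params || trainable%" line is typically computed in PyTorch. The toy model and the frozen submodule below are illustrative stand-ins, not Qwen-VL's actual architecture:

```python
import torch.nn as nn

def count_trainable(model: nn.Module):
    """Return (trainable, total, percent) parameter counts, matching
    the 'trainable params || all params || trainable%' log format."""
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, total, 100.0 * trainable / total

# Hypothetical example: freeze one submodule, as a fine-tune that keeps
# part of the visual encoder frozen (training it only via LoRA) would.
model = nn.Sequential(nn.Linear(10, 10), nn.Linear(10, 10))
for p in model[0].parameters():
    p.requires_grad = False

trainable, total, pct = count_trainable(model)
print(f"trainable params: {trainable} || all params: {total} || trainable%: {pct:.4f}")
```

A trainable% well below 100 with use_lora=False is consistent with the setup described above: the full LLM and resampler are trained, while the visual encoder's own weights stay frozen and only its LoRA adapters contribute trainable parameters.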