Open kanxueli opened 1 year ago
I found that the above error no longer occurs, but the loss is always 0, when I change the value of `mm_use_im_patch_token` from False to True. I noticed that you asked me to make sure `mm_use_im_patch_token` is correctly set to False when using these projector weights to instruction-tune my LMM. I tried the above because I wanted the program to run smoothly. I would like to know why the training loss becomes 0 when these flags are set to True during fine-tuning. @haotian-liu
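Not an answer from the maintainers, but one common way a fine-tuning loss degenerates is when the label masking ends up covering (almost) all supervised tokens. A minimal sketch of how the usual `ignore_index` sentinel interacts with cross-entropy; the tensor shapes and the `IGNORE_INDEX` value are illustrative, not taken from LLaVA's actual code, and depending on the training loop a fully-masked batch may surface as NaN or be reported as 0:

```python
import torch
import torch.nn.functional as F

IGNORE_INDEX = -100  # the sentinel most HF-style training code uses

logits = torch.randn(1, 5, 10)  # (batch, seq_len, vocab) -- toy sizes
labels = torch.full((1, 5), IGNORE_INDEX, dtype=torch.long)
labels[0, 2] = 3                # exactly one supervised token

# One unmasked target: the loss is a normal positive number.
loss = F.cross_entropy(logits.view(-1, 10), labels.view(-1),
                       ignore_index=IGNORE_INDEX)
print(loss.item() > 0)          # True

# Every target masked: mean reduction divides by zero tokens -> NaN,
# which some logging/accumulation code then displays as 0.
all_masked = torch.full((1, 5), IGNORE_INDEX, dtype=torch.long)
loss2 = F.cross_entropy(logits.view(-1, 10), all_masked.view(-1),
                        ignore_index=IGNORE_INDEX)
print(torch.isnan(loss2).item())  # True
```

So if flipping `mm_use_im_patch_token` changes how the image tokens are inserted and masked in the conversation template, it is worth checking whether any non-ignored label tokens survive preprocessing.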
I also hit this "CUDA error: device-side assert triggered" error when fine-tuning LLaMA-2 on a V100. How can I solve it?
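Not from the maintainers, but a frequent cause of this particular assert when special image tokens are added is a token id that falls outside the embedding table (e.g. tokens were added to the tokenizer but `model.resize_token_embeddings(...)` was never called). A minimal sketch, with hypothetical sizes; running on CPU turns the opaque device-side assert into a readable IndexError:

```python
import torch
import torch.nn as nn

# Hypothetical values for illustration: in practice compare
# len(tokenizer) against model.get_input_embeddings().num_embeddings.
vocab_size = 32000       # embedding rows before resizing
new_token_id = 32000     # id of a newly added token, e.g. <im_patch>

embed = nn.Embedding(vocab_size, 8)

try:
    embed(torch.tensor([new_token_id]))
except IndexError:
    # On GPU this same bug is reported only as
    # "CUDA error: device-side assert triggered".
    print("token id out of range; resize the embedding table, e.g. "
          "model.resize_token_embeddings(len(tokenizer))")
```

Setting `CUDA_LAUNCH_BLOCKING=1` (or doing one forward pass on CPU) usually pinpoints which lookup is out of range.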
Describe the issue
Issue: When I run finetune_qlora.sh on a V100 to fine-tune LLaMA-2, I get a CUDA error. Do you know how to solve it? Thanks a lot. @haotian-liu Command:
Log: