Hi, I am very happy to have found a repo that can be used to fine-tune BLIP-2 quickly.
While using it (with the LLaVA instruction data), I ran into an issue.
I only have a V100, and I get the following error when running. Where should I change bf16 to fp16?
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.
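For context: the V100 (compute capability 7.0) has no bfloat16 support, so any `bf16=True` training argument or `torch_dtype=torch.bfloat16` load in the repo's config would need to become `fp16=True` / `torch.float16`. As a minimal sketch (the function name and tuple format here are illustrative, not from the repo), the dtype could even be picked automatically from the GPU's compute capability:

```python
# Hypothetical helper: choose a mixed-precision mode from the GPU's
# compute capability. bfloat16 needs Ampere (8.0) or newer; older GPUs
# such as the V100 (7.0) must fall back to float16.
def pick_precision(compute_capability: tuple) -> str:
    """Return "bf16" on Ampere+ GPUs, otherwise "fp16"."""
    major, _minor = compute_capability
    return "bf16" if major >= 8 else "fp16"

print(pick_precision((7, 0)))  # V100  -> fp16
print(pick_precision((8, 0)))  # A100  -> bf16
```

With PyTorch installed, the real capability tuple can be obtained via `torch.cuda.get_device_capability()`.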
Thanks.