RulinShao / VL-Instruct

Code for vision-language instruction tuning. Currently supports BLIP2-T5 and BLIP2-Vicuna.

change bf16 to fp16 #1

palchenli opened this issue 1 year ago

palchenli commented 1 year ago

Hi, I am very happy to find a repo that can be used to fine-tune BLIP2 quickly. While using it (with the LLaVA instruction data), I ran into an issue. I only have a V100, and the model raises the following error when running. Where should I change bf16 to fp16?

```
RuntimeError: Current CUDA Device does not support bfloat16. Please switch dtype to float16.
```

Thanks.

RulinShao commented 1 year ago

Hi, thanks for your interest! You can convert both the model and the input data to fp16:

```python
model = model.to(torch.float16)
image = image.to(torch.float16)
```
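
A minimal, self-contained sketch of the workaround (a tiny `nn.Linear` stands in for the BLIP2 model here, which in practice you would load through this repo's scripts):

```python
import torch
import torch.nn as nn

# Tiny stand-in for the BLIP2 model; load the real model via this repo in practice.
model = nn.Linear(4, 2)

assert torch.cuda.is_available(), "fp16 example assumes a CUDA device (e.g. V100)"
model = model.cuda().to(torch.float16)  # cast all weights to fp16 (V100 has no bf16)

# Inputs must match the model dtype, otherwise the forward pass raises a dtype error.
image = torch.randn(1, 4, device="cuda", dtype=torch.float16)

with torch.no_grad():
    out = model(image)
print(out.dtype)  # torch.float16
```

Alternatively, wrapping the forward pass in `torch.autocast(device_type="cuda", dtype=torch.float16)` lets PyTorch cast activations automatically, which avoids converting every input tensor by hand.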