X-PLUG / mPLUG-Owl

mPLUG-Owl: The Powerful Multi-modal Large Language Model Family
https://www.modelscope.cn/studios/damo/mPLUG-Owl
MIT License

ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0 #81

Open YuchenLiu98 opened 1 year ago

YuchenLiu98 commented 1 year ago

ValueError: Your setup doesn't support bf16/gpu. You need torch>=1.10, using Ampere GPU with cuda>=11.0

How can I solve this problem? When I set the bf16 flag to False, another issue occurs: `RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'`. Thanks.
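For context on the second error: truncated-normal weight initialization (`nn.init.trunc_normal_`) calls `erfinv` internally, and in the PyTorch versions in use at the time that op has no CPU kernel for float16. A minimal reproduction sketch (the exact failure depends on the installed PyTorch version):

```python
import torch

# float32 on CPU works: erfinv is what trunc_normal_ init relies on internally.
print(torch.erfinv(torch.zeros(2, dtype=torch.float32)))

# float16 on CPU may raise:
#   RuntimeError: "erfinv_vml_cpu" not implemented for 'Half'
# (behaviour varies by PyTorch version, so we guard it here).
try:
    torch.erfinv(torch.zeros(2, dtype=torch.float16))
except RuntimeError as err:
    print("half-precision erfinv on CPU failed:", err)
```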

MAGAer13 commented 1 year ago

You can initialize the model on CPU with float32. After initialization, convert the model to half precision and then put it on the GPU.
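A minimal sketch of that workaround, using a plain `nn.Linear` as a stand-in for the mPLUG-Owl model (the real code would load the pretrained model here instead; the pattern is the same):

```python
import torch
import torch.nn as nn

# Stand-in for the real model; parameters are created on CPU in float32,
# so the erfinv-based weight init runs in a supported dtype.
model = nn.Linear(8, 8)

model = model.half()               # cast weights/buffers to float16 on CPU
if torch.cuda.is_available():      # move to GPU only if one is present
    model = model.cuda()

print(next(model.parameters()).dtype)   # torch.float16
```

Initializing in float32 first avoids the missing half-precision CPU kernel; the cast to float16 happens only after the weights already exist.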

ljwdust commented 1 year ago

Setting bf16 to False and low_cpu_mem_usage=True may work.

wang9danzuishuai commented 1 year ago

@YuchenLiu98 Hi, Liu. Sorry to bother you. I've run into the same error. After initializing the model on the CPU with float32, converting it to half precision, and putting it on the GPU, a GPU out-of-memory error occurred. Have you solved this problem? Please let me know, thanks!

shaswati1 commented 10 months ago

@MAGAer13, I set bf16 to False and low_cpu_mem_usage to True but I'm still getting the error `"erfinv_vml_cpu" not implemented for 'Half'`. Can you please help?