OpenBMB / VisCPM

[ICLR'24 spotlight] Chinese-English bilingual multimodal large model series (chat and paint), based on the CPM foundation model

Hello, when running demo_chat.py I get CUDA out of memory. My setup is four 8 GB GPUs; what is the problem and how do I fix it? #38

Closed abandonnnnn closed 10 months ago

abandonnnnn commented 10 months ago

```
(viscpm) zzz@zzz:~/yz/AllVscodes/VisCPM-main$ python demo_chat.py
use CUDA_MEMORY_CPMBEE_MAX=1g to limit cpmbee cuda memory cost
/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/bminf/wrapper.py:57: UserWarning: quantization is set to true but torch.nn.Linear is not found in your model.
  warnings.warn("quantization is set to true but torch.nn.Linear is not found in your model.")
Traceback (most recent call last):
  File "/home/zzz/yz/AllVscodes/VisCPM-main/demo_chat.py", line 11, in <module>
    viscpm_chat = VisCPMChat(model_path, image_safety_checker=False)
  File "/home/zzz/yz/AllVscodes/VisCPM-main/VisCPM/viscpm_chat.py", line 72, in __init__
    self.vlu_cpmbee.vpm.to(self.device)
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 989, in to
    return self._apply(convert)
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  [Previous line repeated 4 more times]
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/home/zzz/anaconda3/envs/viscpm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 7.93 GiB total capacity; 1.91 GiB already allocated; 42.31 MiB free; 1.91 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

JamesHujy commented 10 months ago

Hello, you can try setting the environment variable `export CUDA_MEMORY_CPMBEE_MAX=1g`.
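
For reference, a minimal sketch of applying this from inside Python instead of the shell; the variable must be set before the model is constructed, and the import and checkpoint path below are assumptions based on the traceback. (Note the traceback shows the module being moved onto a single device, GPU 0, so memory from the other three cards is not pooled.)

```python
import os

# Cap CPM-Bee's CUDA memory usage; must be set before VisCPM touches the GPU.
os.environ['CUDA_MEMORY_CPMBEE_MAX'] = '1g'

from VisCPM import VisCPMChat  # import path assumed from VisCPM/viscpm_chat.py in the traceback

model_path = '/path/to/viscpm_chat_checkpoint.pt'  # placeholder checkpoint path
viscpm_chat = VisCPMChat(model_path, image_safety_checker=False)
```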

JamesHujy commented 10 months ago

If you don't have enough GPU memory, you can also try our latest MiniCPM-V, whose total parameter count is only 2.8B.
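
For completeness, MiniCPM-V is distributed via the Hugging Face Hub, so loading it might look like the sketch below; the repo id `openbmb/MiniCPM-V` and the `trust_remote_code` usage are assumptions, so check the model card for the authoritative example.

```python
from transformers import AutoModel, AutoTokenizer

# Repo id assumed; see the MiniCPM-V model card for exact usage.
model_id = 'openbmb/MiniCPM-V'

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
# trust_remote_code=True because the model's custom code ships with the repo.
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)
model = model.to('cuda').eval()
```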