QwenLM / Qwen2-VL

Qwen2-VL is the multimodal large language model series developed by the Qwen team at Alibaba Cloud.
Apache License 2.0

Demo web_demo_mm.py: out of GPU memory during inference #86

Open Guangming92 opened 2 months ago

Guangming92 commented 2 months ago

Environment: 4090*4, python=3.10.12, Ubuntu. After sending the prompt 请描述图片内容 ("please describe the image content"), the following error is raised:

```
User: 请描述图片内容
Exception in thread Thread-7 (generate):
Traceback (most recent call last):
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/generation/utils.py", line 2015, in generate
    result = self._sample(
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/generation/utils.py", line 2961, in _sample
    outputs = self(**model_inputs, return_dict=True)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1580, in forward
    image_embeds = self.visual(pixel_values, grid_thw=image_grid_thw).to(inputs_embeds.device)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 1026, in forward
    hidden_states = blk(hidden_states, cu_seqlens=cu_seqlens, rotary_pos_emb=rotary_pos_emb)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 377, in forward
    hidden_states = hidden_states + self.attn(
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1562, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/accelerate/hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/user/anaconda3/envs/ms-swift/lib/python3.10/site-packages/transformers/models/qwen2_vl/modeling_qwen2_vl.py", line 350, in forward
    attn_output = F.scaled_dot_product_attention(q, k, v, attention_mask, dropout_p=0.0)
torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.20 GiB. GPU 0 has a total capacity of 23.64 GiB of which 2.85 GiB is free. Including non-PyTorch memory, this process has 20.78 GiB memory in use. Of the allocated memory 20.25 GiB is allocated by PyTorch, and 84.37 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
```

What is causing this, and what parameters do I need to add? Thank you for your help.

ShuaiBai623 commented 2 months ago

Try FlashAttention-2 (`attn_implementation="flash_attention_2"`) and reduce `max_pixels`.
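A minimal sketch of both suggestions with the transformers API (the `Qwen/Qwen2-VL-7B-Instruct` checkpoint is an assumption; swap in your own, and note FlashAttention-2 requires `pip install flash-attn`):

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor

# FlashAttention-2 avoids materializing the full attention matrix that
# F.scaled_dot_product_attention tried to allocate (12.20 GiB above).
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",          # assumed checkpoint; use yours
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
    device_map="auto",
)

# Capping max_pixels bounds the number of visual tokens per image, which
# shrinks the vision encoder's sequence length and its attention memory.
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    min_pixels=256 * 28 * 28,
    max_pixels=1280 * 28 * 28,            # lower this further if still OOM
)
```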

wandouqiang commented 2 months ago

1. Reduce `max_pixels`.
2. Clear `task_history` when you predict, e.g. call `task_history.clear()` inside the `predict` function, as sketched below. This worked for me when I was inputting multi-round dialogues.
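A minimal sketch of the history-clearing idea (hypothetical names; the real `predict` in web_demo_mm.py has a different signature and does the actual generation). Clearing accumulated turns keeps the prompt, and thus the re-encoded visual tokens, from growing round after round:

```python
# Hypothetical stand-in for the demo's predict function, illustrating the
# pattern only; plug the real message building and model.generate back in.
task_history = []

def predict(query: str, task_history: list) -> str:
    task_history.append((query, None))
    # ... build messages from task_history and run model.generate ...
    response = "<generated reply>"        # placeholder for the model output
    task_history[-1] = (query, response)
    # Dropping past turns means the next request is encoded as single-turn,
    # so earlier images are never re-embedded: bounded memory, at the cost
    # of losing multi-turn context.
    task_history.clear()
    return response
```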