THUDM / CogVLM2

GPT4V-level open-source multi-modal model based on Llama3-8B
Apache License 2.0

Can the OpenAI API demo support multi-GPU inference? #61

Closed ZXP8080 closed 3 months ago

ZXP8080 commented 3 months ago

Can the OpenAI API demo support multi-GPU inference?

zRzRzRzRzRzRzR commented 3 months ago

It can, once you change the model-loading section: just copy the model-loading code from the cli multi-GPU demo (cli_demo_multi_gpus.py) into it.

stay-leave commented 3 months ago

```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
low_cpu_mem_usage was None, now set to True since model is quantized.
Traceback (most recent call last):
  File "/home/liuhaodong/CogVLM2/basic_demo/cli_demo_multi_gpus.py", line 53, in <module>
    model = load_checkpoint_and_dispatch(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/accelerate/big_modeling.py", line 607, in load_checkpoint_and_dispatch
    load_checkpoint_in_model(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 1736, in load_checkpoint_in_model
    set_module_tensor_to_device(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/accelerate/utils/modeling.py", line 424, in set_module_tensor_to_device
    new_value = param_cls(new_value, requires_grad=old_value.requires_grad).to(device)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 324, in to
    return self._quantize(device)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/bitsandbytes/nn/modules.py", line 289, in _quantize
    w_4bit, quant_state = bnb.functional.quantize_4bit(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/bitsandbytes/functional.py", line 1234, in quantize_4bit
    raise ValueError(f"Blockwise quantization only supports 16/32-bit floats, but got {A.dtype}")
ValueError: Blockwise quantization only supports 16/32-bit floats, but got torch.uint8
```

Hello, this is the error I get. How can I fix it? Single-GPU inference works fine.

stay-leave commented 3 months ago

(quotes the traceback and question from the previous comment)

The int4 version is what raises that error. The unquantized model loads successfully, but inference then fails with:

```
Traceback (most recent call last):
  File "/home/liuhaodong/CogVLM2/basic_demo/cli_demo_multi_gpus.py", line 116, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/transformers/generation/utils.py", line 1758, in generate
    result = self._sample(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/transformers/generation/utils.py", line 2397, in _sample
    outputs = self(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/accelerate/hooks.py", line 166, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 649, in forward
    outputs = self.model(
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 390, in forward
    images_features = self.encode_images(images)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/modeling_cogvlm.py", line 362, in encode_images
    images_features = self.vision(images)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/visual.py", line 130, in forward
    x = self.transformer(x)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/visual.py", line 94, in forward
    hidden_states = layer_module(hidden_states)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "/home/liuhaodong/python_venv/cogvlm/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "/home/liuhaodong/.cache/huggingface/modules/transformers_modules/cogvlm2-llama3-chinese-chat-19B/visual.py", line 83, in forward
    output = mlp_input + mlp_output
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:2 and cuda:3!
```