THUDM / CogVLM2

GPT4V-level open-source multi-modal model based on Llama3-8B
Apache License 2.0

cogvlm2-llama3-chinese-chat-19B-int4: CLI demo raises an error on output #145

Closed tectal closed 2 months ago

tectal commented 2 months ago

System Info

win10

Who can help?

No response

Information

Reproduction

1. Run cli_demo.py in PyCharm and test an image; it fails with the error below.

Expected behavior

```
C:\Users\jxny02\anaconda3\envs\ygf_swift\python.exe E:/ygf/swift/uav/Cogvlm2-19b-chat.py --quant 4
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
    PyTorch 2.3.1+cu121 with CUDA 1201 (you have 2.3.0+cu118)
    Python  3.11.9 (you have 3.11.9)
  Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
  Memory-efficient attention, SwiGLU, sparse and more won't be available.
  Set XFORMERS_MORE_DETAILS=1 for more details
Unused kwargs: ['_load_in_4bit', '_load_in_8bit', 'quant_method']. These kwargs are not used in <class 'transformers.utils.quantization_config.BitsAndBytesConfig'>.
C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\quantizers\auto.py:167: UserWarning: You passed `quantization_config` or equivalent parameters to `from_pretrained` but the model you're loading already has a `quantization_config` attribute. The `quantization_config` from the model will be used.
  warnings.warn(warning_msg)
image path >>>>> E:/ygf/swift/1.jpg
Human:描述图片
C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\bitsandbytes\nn\modules.py:426: UserWarning: Input type into Linear4bit is torch.float16, but bnb_4bit_compute_dtype=torch.float32 (default). This will lead to slow inference or training speed.
  warnings.warn(
Traceback (most recent call last):
  File "E:\ygf\swift\uav\Cogvlm2-19b-chat.py", line 105, in <module>
    outputs = model.generate(**inputs, **gen_kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\generation\utils.py", line 1758, in generate
    result = self._sample(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\transformers\generation\utils.py", line 2397, in _sample
    outputs = self(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\modeling_cogvlm.py", line 620, in forward
    outputs = self.model(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\modeling_cogvlm.py", line 389, in forward
    images_features = self.encode_images(images)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\modeling_cogvlm.py", line 361, in encode_images
    images_features = self.vision(images)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\visual.py", line 130, in forward
    x = self.transformer(x)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\visual.py", line 94, in forward
    hidden_states = layer_module(hidden_states)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\visual.py", line 79, in forward
    attention_output = self.input_layernorm(self.attention(attention_input))
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1532, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\torch\nn\modules\module.py", line 1541, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\accelerate\hooks.py", line 169, in new_forward
    output = module._old_forward(*args, **kwargs)
  File "C:\Users\jxny02\.cache\huggingface\modules\transformers_modules\Cogvlm2-19b-chat\visual.py", line 40, in forward
    out = xops.memory_efficient_attention(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\xformers\ops\fmha\__init__.py", line 276, in memory_efficient_attention
    return _memory_efficient_attention(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\xformers\ops\fmha\__init__.py", line 395, in _memory_efficient_attention
    return _memory_efficient_attention_forward(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\xformers\ops\fmha\__init__.py", line 414, in _memory_efficient_attention_forward
    op = _dispatch_fw(inp, False)
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 119, in _dispatch_fw
    return _run_priority_list(
  File "C:\Users\jxny02\anaconda3\envs\ygf_swift\Lib\site-packages\xformers\ops\fmha\dispatch.py", line 55, in _run_priority_list
    raise NotImplementedError(msg)
NotImplementedError: No operator found for `memory_efficient_attention_forward` with inputs:
     query       : shape=(1, 9217, 16, 112) (torch.float16)
     key         : shape=(1, 9217, 16, 112) (torch.float16)
     value       : shape=(1, 9217, 16, 112) (torch.float16)
     attn_bias   : <class 'NoneType'>
     p           : 0.0
`decoderF` is not supported because:
    xFormers wasn't build with CUDA support
    attn_bias type is <class 'NoneType'>
    operator wasn't built - see `python -m xformers.info` for more info
`flshattF@0.0.0` is not supported because:
    xFormers wasn't build with CUDA support
    requires device with capability > (8, 0) but your GPU has capability (7, 0) (too old)
    operator wasn't built - see `python -m xformers.info` for more info
`cutlassF` is not supported because:
    xFormers wasn't build with CUDA support
    operator wasn't built - see `python -m xformers.info` for more info
`smallkF` is not supported because:
    max(query.shape[-1] != value.shape[-1]) > 32
    xFormers wasn't build with CUDA support
    dtype=torch.float16 (supported: {torch.float32})
    has custom scale
    operator wasn't built - see `python -m xformers.info` for more info
    unsupported embed per head: 112

Process finished with exit code 1
```
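The traceback dies in visual.py at the call to `xops.memory_efficient_attention`. One possible workaround (an assumption on my part, not a fix proposed in this thread) is to fall back to PyTorch's built-in `torch.nn.functional.scaled_dot_product_attention`, which needs neither xformers nor an Ampere-class GPU. `attention_fallback` below is a hypothetical helper that mirrors xformers' tensor layout:

```python
# Hypothetical fallback for visual.py's attention, not the repository's
# official fix: replace xformers' memory-efficient attention with PyTorch's
# built-in scaled_dot_product_attention (available since torch 2.0), which
# runs on Windows and on pre-Ampere GPUs.
import torch
import torch.nn.functional as F

def attention_fallback(q, k, v):
    # q, k, v: (batch, seq_len, num_heads, head_dim) -- the layout xformers
    # expects; F.scaled_dot_product_attention wants (batch, num_heads,
    # seq_len, head_dim), so transpose in and back out.
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v)
    return out.transpose(1, 2)  # restore xformers' layout

# Smoke test with the head layout from the traceback (16 heads, head_dim 112).
q = torch.randn(1, 8, 16, 112)
out = attention_fallback(q, q, q)
print(out.shape)  # torch.Size([1, 8, 16, 112])
```

If you patch the cached `visual.py` this way, the attention math is equivalent, though without xformers' memory savings for very long sequences.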

zRzRzRzRzRzRzR commented 2 months ago

You need a Linux machine to install xformers properly.
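Note that the dispatch log also shows a second blocker that reinstalling on Linux would not remove: `flshattF` requires compute capability (8, 0) or newer, while this GPU reports (7, 0). You can confirm your GPU's capability with plain PyTorch, no xformers needed:

```python
import torch

# Report the CUDA compute capability that xformers' flash-attention kernels
# gate on; (8, 0) or higher is required, the GPU in this issue reported (7, 0).
if torch.cuda.is_available():
    print("compute capability:", torch.cuda.get_device_capability(0))
else:
    print("no CUDA device visible to PyTorch")
```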