THUDM / VisualGLM-6B

Chinese and English multimodal conversational language model
Apache License 2.0

web_demo_hf.py error: RuntimeError: GET was unable to find an engine to execute this computation #3

Closed. MingJiaAn closed this issue 1 year ago.

MingJiaAn commented 1 year ago

After uploading an image, running the demo fails with the following error:

Traceback (most recent call last):
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/gradio/routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/gradio/blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/gradio/blocks.py", line 1035, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/gradio/utils.py", line 491, in async_iteration
    return next(iterator)
  File "/mnt/amj/VisualGLM-6B/web_demo_hf.py", line 63, in predict
    for response, history in model.stream_chat(tokenizer, image_path, input, history, max_length=max_length, top_p=top_p,
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/visualglm-6b/modeling_chatglm.py", line 1439, in stream_chat
    for outputs in self.stream_generate(**inputs, **gen_kwargs):
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/visualglm-6b/modeling_chatglm.py", line 1291, in stream_generate
    outputs = self(
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/visualglm-6b/modeling_chatglm.py", line 1462, in forward
    image_embeds = self.image_encoder(images)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/visualglm-6b/visual.py", line 69, in forward
    enc = self.vit(image)[0]
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/root/.cache/huggingface/modules/transformers_modules/visualglm-6b/visual.py", line 28, in forward
    return super().forward(input_ids=input_ids, position_ids=None, attention_mask=attention_mask, image=image)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/sat/model/base_model.py", line 144, in forward
    return self.transformer(*args, **kwargs)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/sat/model/transformer.py", line 451, in forward
    hidden_states = self.hooks['word_embedding_forward'](input_ids, output_cross_layer=output_cross_layer, **kw_args)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/sat/model/official/vit_model.py", line 55, in word_embedding_forward
    embeddings = self.proj(images)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/mnt/amj/conda/envs/lora/lib/python3.9/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: GET was unable to find an engine to execute this computation

Sleepychord commented 1 year ago

It looks like this is caused by a missing half-precision (fp16) convolution kernel. Could you share your runtime environment and any other details?
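
To gather the requested environment details, a short sketch along these lines (assuming a CUDA-enabled PyTorch install with at least one visible GPU) prints the versions that usually matter for this error:

```python
# Minimal environment report; assumes PyTorch was installed with CUDA support.
import torch

print("torch:", torch.__version__)
print("cuda runtime:", torch.version.cuda)
print("cudnn:", torch.backends.cudnn.version())
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "no CUDA device")
```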

MingJiaAn commented 1 year ago

> It looks like this is caused by a missing half-precision (fp16) convolution kernel. Could you share your runtime environment and any other details?

Could you tell me what the correct environment is? For example, the exact versions of the required packages.

Sleepychord commented 1 year ago

Take a look at this answer: https://github.com/microsoft/TaskMatrix/issues/283#issuecomment-1497346164. It seems that downgrading PyTorch to 1.13 is enough to fix it.
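
If you go the downgrade route, a quick check like the following confirms the interpreter actually picked up the 1.13 build before rerunning web_demo_hf.py. The exact pip command and CUDA tag in the comment are assumptions and should be matched to your local CUDA driver:

```python
# Assumed downgrade command (adjust the CUDA tag to your driver), e.g.:
#   pip install "torch==1.13.1+cu117" --extra-index-url https://download.pytorch.org/whl/cu117
import torch

assert torch.__version__.startswith("1.13"), torch.__version__
print("torch", torch.__version__, "built for CUDA", torch.version.cuda)
```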