THUDM / GLM-4

GLM-4 series: Open Multilingual Multimodal Chat LMs
Apache License 2.0

Uncaught exception: Traceback (most recent call last) when running the glm4v model in the web demo #504

Closed 949418761 closed 1 week ago

949418761 commented 2 months ago

System Info / 系統信息

```
Uncaught exception: Traceback (most recent call last):
  File "D:\Big_model\ChatGLM\GLM-4-main\composite_demo\src\main.py", line 288, in main
    for response, chat_history in client.generate_stream(
  File "D:\Big_model\ChatGLM\GLM-4-main\composite_demo\src\clients\hf.py", line 57, in generate_stream
    for token_text in streamer:
  File "C:\Miniconda3\envs\VL_model\lib\site-packages\transformers\generation\streamers.py", line 223, in __next__
    value = self.text_queue.get(timeout=self.timeout)
  File "C:\Miniconda3\envs\VL_model\lib\queue.py", line 179, in get
    raise Empty
_queue.Empty
```

Hello, I hit this error while running the glm4v-9b model; my GPU is an RTX 4090 with 24 GB. The error is returned when I send an image together with text.
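The final frames of the traceback show what is going on: `TextIteratorStreamer.__next__` calls `self.text_queue.get(timeout=self.timeout)`, and `queue.Queue.get` raises `Empty` when no token arrives within the timeout, typically because generation is slower than the streamer's configured timeout. A minimal stdlib sketch of that mechanism (the `slow_producer` name and the timings are illustrative, not from the demo code):

```python
import queue
import threading
import time

def slow_producer(q: queue.Queue) -> None:
    # Simulate a generation step that takes longer than the consumer's timeout.
    time.sleep(0.5)
    q.put("token")

q: queue.Queue = queue.Queue()
threading.Thread(target=slow_producer, args=(q,), daemon=True).start()

try:
    # Mirrors the failing call: self.text_queue.get(timeout=self.timeout)
    q.get(timeout=0.1)  # times out before the producer delivers anything
except queue.Empty:
    print("queue.Empty: no token arrived within the timeout")

# Waiting long enough (analogous to passing a larger `timeout` when
# constructing transformers' TextIteratorStreamer) lets the token arrive.
print(q.get(timeout=2.0))
```

This suggests checking whether the first generation step (model load, image preprocessing) simply exceeds the streamer timeout on your setup.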

Who can help? / 谁可以帮助到您?

No response

Information / 问题信息

Reproduction / 复现过程

Send an image together with text in the web demo (glm4v-9b, RTX 4090 24 GB); the same traceback as shown under System Info is raised.

Expected behavior / 期待表现

The model should stream a response to the combined image-and-text input instead of raising the exception above.

zhipuch commented 2 months ago

README: https://github.com/THUDM/GLM-4/blob/main/composite_demo/README.md. I reproduced the multimodal demo and, after configuring it according to the README, did not encounter this problem.

(Screenshot: 2024-08-23 18:08:36)