chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama
Apache License 2.0

[BUG] Chat returns Error when using the chatyuan model: has no attribute 'stream_chat' #282

Closed cocomany closed 1 year ago

cocomany commented 1 year ago

Problem Description: When using the chatyuan model, the chat fails with an Error and reports AttributeError: 'T5ForConditionalGeneration' object has no attribute 'stream_chat'.

Steps to Reproduce

  1. Select chatyuan in the model configuration
  2. Wait until the log prints "模型已成功重新加载,可以开始对话,或从右侧选择模式后开始对话" (model reloaded successfully; you can start a conversation, or pick a mode on the right first)
  3. Select the chat mode
  4. Enter a question and click Submit

Expected Result: The conversation continues and further input is accepted.

Actual Result: "Error" is displayed below the chat box.

Environment Information

Additional Information — error log:

```
模型已成功重新加载,可以开始对话,或从右侧选择模式后开始对话
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gradio/routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.8/dist-packages/gradio/blocks.py", line 1035, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.8/dist-packages/anyio/to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/dist-packages/anyio/_backends/_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "/usr/local/lib/python3.8/dist-packages/gradio/utils.py", line 491, in async_iteration
    return next(iterator)
  File "webui.py", line 52, in get_answer
    for resp, history in local_doc_qa.llm._call(query, history,
  File "/chatGLM/models/chatglm_llm.py", line 65, in _call
    for inum, (stream_resp, _) in enumerate(self.model.stream_chat(
  File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1614, in __getattr__
    raise AttributeError("'{}' object has no attribute '{}'".format(
AttributeError: 'T5ForConditionalGeneration' object has no attribute 'stream_chat'
```
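The traceback shows `chatglm_llm.py` calling `self.model.stream_chat(...)`. `stream_chat` is a custom method defined on ChatGLM's remote-code model class, not part of the standard Hugging Face API; `T5ForConditionalGeneration` (which backs ChatYuan) only exposes methods like `generate`, so attribute lookup fails inside `torch.nn.Module.__getattr__`. A minimal sketch with hypothetical stub classes (not the project's actual code) illustrates the mismatch and the check the caller could make:

```python
# Hypothetical stubs illustrating why the call fails: ChatGLM's model class
# defines a custom `stream_chat` generator, while a standard seq2seq model
# such as T5ForConditionalGeneration only offers one-shot generation.

class ChatGLMModelStub:
    def stream_chat(self, query, history=None):
        # ChatGLM yields (partial_reply, updated_history) tuples incrementally
        history = (history or []) + [(query, "partial reply")]
        yield "partial reply", history

class T5ModelStub:
    def generate(self, input_ids):
        # Standard Hugging Face models return the full output in one call
        return ["full reply"]

def supports_streaming(model):
    # The guard the calling code could use before attempting to stream
    return hasattr(model, "stream_chat")

print(supports_streaming(ChatGLMModelStub()))  # True
print(supports_streaming(T5ModelStub()))       # False
```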

xx-zhang commented 1 year ago

T5 does not support stream_chat. Just set streaming = False in the config; that should fix it.
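Beyond flipping the config flag, the calling code could also guard on whether the loaded model actually implements `stream_chat` and fall back to a blocking call otherwise. A hypothetical sketch (the names loosely follow `_call` in `chatglm_llm.py`, but this is an illustration, not the repo's code):

```python
# Hypothetical defensive wrapper: stream only when the model implements
# `stream_chat`; otherwise fall back to a single non-streaming response,
# which is what T5-based models like ChatYuan require.

def call_model(model, query, history, streaming=True):
    if streaming and hasattr(model, "stream_chat"):
        # ChatGLM-style incremental decoding
        for resp, new_history in model.stream_chat(query, history):
            yield resp, new_history
    else:
        # Non-streaming path: one full response (assumed `chat` method)
        resp = model.chat(query, history)
        yield resp, history + [(query, resp)]

class NonStreamingModel:
    def chat(self, query, history):
        return "full answer"

out = list(call_model(NonStreamingModel(), "hi", [], streaming=True))
print(out[0][0])  # full answer
```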