chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain

Calling the official Zhipu API: streaming output does not work #3793

Closed GLY-123 closed 6 months ago

GLY-123 commented 6 months ago

When calling the official Zhipu API, streaming output does not work.

Actual Result: with `stream` set to `false` in the request data, Q&A works normally (just without streaming); with `stream` set to `true`, the following error is raised:

```
连接成功*
2024-04-17 07:57:58 | INFO | httpx | HTTP Request: POST https://open.bigmodel.cn/api/paas/v4/chat/completions "HTTP/1.1 200 "
请求成功*****: <Response [200 ]>
response*****: <Response [200 ]>
2024-04-17 07:58:01,034 - utils.py[line:38] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "/guo/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
    await fn
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 385, in acall
    raise e
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 553, in agenerate_prompt
    return await self.agenerate(
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 513, in agenerate
    raise exceptions[0]
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 616, in _agenerate_with_cache
    return await self._agenerate(
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_core/language_models/chat_models.py", line 86, in agenerate_from_stream
    async for chunk in stream:
  File "/data/anaconda3/envs/langchain/lib/python3.8/site-packages/langchain_community/chat_models/openai.py", line 493, in _astream
    if len(chunk["choices"]) == 0:
TypeError: object of type 'NoneType' has no len()
```
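The last frame of the traceback is the informative one: `if len(chunk["choices"]) == 0:` fails because the chunk arrived with its `choices` field set to `None`. The tiny illustration below reproduces the exact error message; the chunk shape is assumed from the traceback, not captured from the Zhipu API.

```python
# Illustration only: a chunk whose "choices" field is None, as implied by the
# last traceback frame, makes the len() check raise the same TypeError.
chunk = {"choices": None}
try:
    if len(chunk["choices"]) == 0:  # same check as langchain_community's _astream
        pass
except TypeError as e:
    print(e)  # object of type 'NoneType' has no len()
```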

Environment Information: Langchain-Chatchat 0.2.10, deployed from source.

dosubot[bot] commented 6 months ago

The error `TypeError: object of type 'NoneType' has no len()` indicates that the streaming code expects an object that can be measured with `len()`, but is receiving `None` instead. In this traceback the failing check is `if len(chunk["choices"]) == 0:` in `langchain_community/chat_models/openai.py`, meaning the stream yields chunks whose `choices` field is `None`. This typically happens when the data expected for streaming is not properly initialized, or when a function that is expected to return data returns `None`.
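One pragmatic mitigation is to skip chunks whose `choices` field is missing or empty before inspecting them. The sketch below is illustrative and assumes an OpenAI-style chunk dictionary; it is not a patch taken from langchain_community.

```python
# Hedged sketch: filter out malformed stream chunks instead of calling len() on None.
from typing import Any, Dict, Iterable, Iterator


def iter_valid_chunks(stream: Iterable[Dict[str, Any]]) -> Iterator[Dict[str, Any]]:
    for chunk in stream:
        choices = chunk.get("choices")
        if not choices:  # covers both None and an empty list
            continue     # skip keep-alive or malformed chunks
        yield chunk


if __name__ == "__main__":
    fake_stream = [
        {"choices": None},                               # would crash the len() check
        {"choices": [{"delta": {"content": "Hello"}}]},  # normal-looking chunk
        {"choices": []},                                 # empty chunk
    ]
    for chunk in iter_valid_chunks(fake_stream):
        print(chunk["choices"][0]["delta"]["content"])   # prints: Hello
```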

To address this issue:

  1. Check Data Initialization: Ensure any data used for streaming is correctly initialized and not None when passed to the streaming process.
  2. Review Function Returns: Make sure all functions in the streaming process return the correct data types, especially those expected to return objects that support len().
  3. Implement Error Handling: Proper error handling can prevent passing None values inadvertently during the streaming process.
  4. Verify Streaming Configuration: Ensure the streaming setup, including any relevant parameters such as `stream`, is correctly configured (see the request sketch after this list).
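As an example of point 4, the sketch below toggles streaming against a local Langchain-Chatchat 0.2.x deployment. The `/chat/chat` path, port 7861, and the model name `zhipu-api` are assumptions based on a default 0.2.x setup and may differ in your installation.

```python
# Hedged sketch: toggle the stream flag against an assumed local /chat/chat endpoint.
# Endpoint path, port, and model name are assumptions, not verified values.
import httpx

payload = {
    "query": "Hello",
    "stream": True,           # set to False to get one non-streamed answer
    "model_name": "zhipu-api",
}

with httpx.stream("POST", "http://127.0.0.1:7861/chat/chat", json=payload, timeout=60) as resp:
    for line in resp.iter_lines():
        if line:
            print(line)       # with stream=True the server should emit incremental chunks
```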

The stream parameter in the Zhipu API controls whether the output is streamed (True) or returned as a single response (False). When streaming is enabled (True), the API streams each token as it's generated, allowing for real-time updates. The wrap_done function in server/utils.py is designed to execute an asynchronous task and signal its completion or any exceptions, facilitating synchronization in asynchronous operations.
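Based on the traceback, `wrap_done` follows a common asyncio pattern: await the wrapped coroutine, log any exception, and always set an `asyncio.Event` so the caller knows the task has finished. The sketch below illustrates that pattern; it is a simplified reconstruction, not the project's exact code.

```python
# Simplified sketch of the wrap_done pattern seen in server/utils.py: run an
# awaitable, report any exception, and signal completion via an asyncio.Event.
import asyncio
import logging
from typing import Awaitable

logger = logging.getLogger(__name__)


async def wrap_done(fn: Awaitable, event: asyncio.Event) -> None:
    try:
        await fn  # run the wrapped coroutine, e.g. a chain's acall
    except Exception as e:
        logger.error(f"{type(e).__name__}: {e}")  # the TypeError above is logged here
    finally:
        event.set()  # let the waiting caller (e.g. an SSE generator) stop cleanly
```

This also explains why the log shows the error coming from `utils.py[line:38]`: the exception raised inside the streaming chain is caught and logged by this wrapper, and the client simply sees the stream end.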

For a precise solution, closely review the implementation details related to data initialization, function return values, and error handling in your streaming process.


zRzRzRzRzRzRzR commented 6 months ago

Yes, this has not been implemented; it is a known bug, and it will not be fixed in 0.2.x.