Closed pulikeyati closed 6 months ago
Hey @pulikeyati, back again I see! 😄 Let's dive into this digital conundrum together. 🛠️
Based on the information you provided and the similar issues I found in the Langchain-Chatchat repository, the `TypeError: object of type 'NoneType' has no len()` error occurs when an operation that expects a collection (a list, string, or dictionary) receives a `None` value instead. This typically happens when the knowledge base is empty or not properly initialized.
Here are some steps you can take to resolve this issue:
1. **Check knowledge base initialization:** Ensure the knowledge base is correctly initialized before it is used. An empty knowledge base should be initialized to an empty list or another appropriate data structure rather than `None`.
2. **Review code for knowledge base usage:** Examine the code to identify where the knowledge base is accessed or modified, and guard every operation against the case where the knowledge base might be `None`.
3. **Update knowledge base handling:** If the knowledge base can legitimately be empty or `None` in certain scenarios, handle those cases gracefully by adding checks before any operation that assumes the knowledge base has content.
4. **Debugging and logging:** Add logging around the places where the knowledge base is accessed to confirm its state (`None` vs. an initialized empty data structure). This helps pinpoint where the error occurs.
For example, in the `agenerate_from_stream` function where the error seems to occur, you could add a check before the operation that fails:

```python
if stream is not None:
    # existing code
else:
    # handle the case where stream is None
```
This would prevent the `TypeError` from occurring when `stream` is `None`.
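As a standalone illustration of this guard pattern (the `describe_kb` helper below is hypothetical, not actual Langchain-Chatchat code), the same None-check looks like this:

```python
def describe_kb(docs):
    """Hypothetical helper: report the size of a knowledge-base collection."""
    # Guard against None before calling len(); an uninitialized
    # knowledge base is treated as empty instead of crashing.
    if docs is None:
        return "knowledge base not initialized (0 entries)"
    return f"knowledge base has {len(docs)} entries"

print(describe_kb(None))            # the guard catches the None case
print(describe_kb(["a.md", "b.md"]))  # normal path uses len() safely
```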
In addition, based on the similar issues I found, you might want to try the following solutions:
1. Ensure `running_models` is always an iterable by defaulting to an empty list when `api.list_running_models()` returns `None`. This should resolve the `TypeError` encountered. Here is the suggested code change:

```python
running_models = list(api.list_running_models() or [])
```
2. Add error logging to the `list_running_models()` function to diagnose why it might be returning `None`. For example:

```python
def list_running_models():
    # ... existing code ...
    if response.status_code != 200:
        logger.error(f"Failed to list running models: {response.content}")
        return None
    # ... existing code ...
```
This will log an error message whenever the function fails to list the running models, which could help in understanding the root cause of the issue.
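The `or []` defaulting idiom from the first suggestion can be seen in isolation; `fake_list_running_models` below is a stand-in I am using for `api.list_running_models()`, which can return `None` when the request to the controller fails:

```python
def fake_list_running_models(ok):
    # Stand-in for api.list_running_models(); returns None on failure,
    # which is the condition that triggers the TypeError.
    return ["Qwen-1_8B-Chat"] if ok else None

# `x or []` replaces None (and any falsy value) with an empty list,
# so list() and later iteration/len() are always safe.
running_models = list(fake_list_running_models(False) or [])
print(running_models)  # empty list instead of a crash

running_models = list(fake_list_running_models(True) or [])
print(running_models)
```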
Please try these solutions and let me know if they help resolve the issue. If the problem persists, please provide more information such as the specific code snippets where the error occurs, and I'll be glad to assist you further.
This issue has been marked as stale because it has had no activity for more than 30 days.
chatglm3-6b with CUDA reports that CUDA doesn't work, even though everything I have is the latest version. Qwen-1_8B-Chat can start and the knowledge base works, but as soon as I ask the LLM a question the answer comes back blank.
```
==============================Langchain-Chatchat Configuration==============================
OS: Windows-10-10.0.19045-SP0
Python version: 3.11.8 | packaged by Anaconda, Inc. | (main, Feb 26 2024, 21:34:05) [MSC v.1916 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
Currently started LLM models: ['Qwen-1_8B-Chat', 'zhipu-api', 'openai-api'] @ cpu
{'device': 'cpu', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'Qwen/Qwen-1_8B-Chat', 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: bge-large-zh @ cpu
==============================Langchain-Chatchat Configuration==============================

2024-04-03 13:09:50,427 - startup.py[line:655] - INFO: Starting services:
2024-04-03 13:09:50,427 - startup.py[line:656] - INFO: To view llm_api logs, go to E:\Langchain-Chatchat-0.2.10\logs
E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core_api\deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related 0.2.x functionality will be deprecated
  warn_deprecated(
2024-04-03 13:09:58 | INFO | model_worker | Register to controller
2024-04-03 13:09:59 | ERROR | stderr | INFO:     Started server process [13652]
2024-04-03 13:09:59 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-03 13:09:59 | ERROR | stderr | INFO:     Application startup complete.
2024-04-03 13:09:59 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-03 13:10:05 | INFO | model_worker | Loading the model ['Qwen-1_8B-Chat'] on worker cce8aec6 ...
2024-04-03 13:10:56 | WARNING | transformers_modules.Qwen.Qwen-1_8B-Chat.1d0f68de57b88cfde81f3c3e537f24464d889081.modeling_qwen | Try importing flash-attention for faster inference...
2024-04-03 13:10:56 | WARNING | transformers_modules.Qwen.Qwen-1_8B-Chat.1d0f68de57b88cfde81f3c3e537f24464d889081.modeling_qwen | Warning: import flash_attn rotary fail, please install FlashAttention rotary to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/rotary
2024-04-03 13:10:56 | WARNING | transformers_modules.Qwen.Qwen-1_8B-Chat.1d0f68de57b88cfde81f3c3e537f24464d889081.modeling_qwen | Warning: import flash_attn rms_norm fail, please install FlashAttention layer_norm to get higher efficiency https://github.com/Dao-AILab/flash-attention/tree/main/csrc/layer_norm
2024-04-03 13:10:56 | WARNING | transformers_modules.Qwen.Qwen-1_8B-Chat.1d0f68de57b88cfde81f3c3e537f24464d889081.modeling_qwen | Warning: import flash_attn fail, please install FlashAttention to get higher efficiency https://github.com/Dao-AILab/flash-attention
Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 2/2 [00:04<00:00, 2.47s/it]
2024-04-03 13:11:01 | ERROR | stderr |
2024-04-03 13:11:31 | INFO | model_worker | Register to controller
INFO:     Started server process [15972]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)
```
```
E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core_api\deprecation.py:117: LangChainDeprecationWarning: The class `langchain_community.chat_models.openai.ChatOpenAI` was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run `pip install -U langchain-openai` and import as `from langchain_openai import ChatOpenAI`.
  warn_deprecated(
2024-04-03 13:13:00 | INFO | stdout | INFO:     127.0.0.1:9664 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-04-03 13:13:00,089 - _client.py[line:1758] - INFO: HTTP Request: POST http://127.0.0.1:20000/v1/chat/completions "HTTP/1.1 200 OK"
2024-04-03 13:13:00 | INFO | httpx | HTTP Request: POST http://127.0.0.1:20002/worker_generate_stream "HTTP/1.1 200 OK"
2024-04-03 13:13:01,818 - utils.py[line:38] - ERROR: object of type 'NoneType' has no len()
Traceback (most recent call last):
  File "E:\Langchain-Chatchat-0.2.10\server\utils.py", line 36, in wrap_done
    await fn
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain\chains\base.py", line 385, in acall
    raise e
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain\chains\base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain\chains\llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain\chains\llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_community\chat_models\openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_core\language_models\chat_models.py", line 87, in agenerate_from_stream
    async for chunk in stream:
  File "E:\Langchain-Chatchat-0.2.10\anaconda3\envs\chatchat\Lib\site-packages\langchain_community\chat_models\openai.py", line 493, in _astream
    if len(chunk["choices"]) == 0:
```
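The traceback ends at `if len(chunk["choices"]) == 0:`, which suggests the model worker streamed a chunk whose `choices` field was `None`. One defensive pattern, sketched here with a simulated stream (the functions below are hypothetical illustrations, not the actual langchain internals), is to filter out malformed chunks before anything downstream calls `len()` on them:

```python
import asyncio

async def worker_stream():
    # Simulated worker output: a malformed chunk whose "choices" is None
    # (the condition that triggers the len() TypeError above),
    # followed by a well-formed chunk.
    yield {"choices": None}
    yield {"choices": [{"delta": {"content": "hello"}}]}

async def skip_bad_chunks(stream):
    # Drop chunks whose "choices" is missing, None, or empty so that
    # len(chunk["choices"]) is never evaluated on None downstream.
    async for chunk in stream:
        if not chunk.get("choices"):
            continue
        yield chunk

async def collect():
    return [c async for c in skip_bad_chunks(worker_stream())]

chunks = asyncio.run(collect())
print(chunks)  # only the well-formed chunk survives
```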
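For the chatglm3-6b "CUDA not available" symptom, it can help to first confirm whether the installed PyTorch build can see a GPU at all. A minimal diagnostic, assuming PyTorch may or may not be installed (`cuda_report` is a hypothetical helper, not part of Langchain-Chatchat):

```python
def cuda_report():
    """Return a short string describing whether PyTorch can see a CUDA device."""
    try:
        import torch
    except ImportError:
        return "torch is not installed"
    if not torch.cuda.is_available():
        # Common causes: CPU-only torch build, driver too old,
        # or a CUDA/torch version mismatch.
        return "CUDA not visible to torch"
    return f"CUDA OK: {torch.cuda.get_device_name(0)}"

print(cuda_report())
```

If this prints that CUDA is not visible, the problem is in the environment (torch build or driver), not in Langchain-Chatchat's model config.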