Closed · CyberXie closed this issue 4 months ago
python startup.py -a
==============================Langchain-Chatchat Configuration==============================
OS: Linux-3.10.0-1160.el7.x86_64-x86_64-with-glibc2.27
Python version: 3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35

Current text splitter: ChineseRecursiveTextSplitter
LLM models being started: ['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cuda
{'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': '/opt/env/models/chatglm3-6b', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================
2024-06-04 16:34:23,814 - startup.py[line:655] - INFO: Starting services:
2024-06-04 16:34:23,814 - startup.py[line:656] - INFO: To view the llm_api logs, go to /opt/env/langchain/logs
/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The model-startup feature will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related 0.2.x features will be deprecated
  warn_deprecated(
2024-06-04 16:34:29 | INFO | model_worker | Register to controller
Process api_worker - zhipu-api:
Traceback (most recent call last):
  File "/opt/module/moniconda3/envs/langchain/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/opt/module/moniconda3/envs/langchain/lib/python3.11/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/env/langchain/startup.py", line 389, in run_model_worker
    app = create_model_worker_app(log_level=log_level, **kwargs)
  File "/opt/env/langchain/startup.py", line 100, in create_model_worker_app
    worker = worker_class(model_names=args.model_names,
  File "/opt/env/langchain/server/model_workers/zhipu.py", line 56, in __init__
    super().__init__(**kwargs)
  File "/opt/env/langchain/server/model_workers/base.py", line 124, in __init__
    self.init_heart_beat()
  File "/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/fastchat/serve/base_model_worker.py", line 79, in init_heart_beat
    self.register_to_controller()
  File "/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/fastchat/serve/base_model_worker.py", line 97, in register_to_controller
    assert r.status_code == 200
AssertionError
2024-06-04 16:34:29 | ERROR | stderr | INFO: Started server process [8818]
2024-06-04 16:34:29 | ERROR | stderr | INFO: Waiting for application startup.
2024-06-04 16:34:29 | ERROR | stderr | INFO: Application startup complete.
2024-06-04 16:34:29 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-06-04 16:34:30 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 82540daf ...
2024-06-04 16:34:30 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-06-04 16:34:30 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-06-04 16:34:30 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
Loading checkpoint shards: 100%|█████████████████████████████████| 7/7 [00:11<00:00, 1.61s/it]
2024-06-04 16:34:42 | ERROR | stderr |
2024-06-04 16:34:51 | INFO | model_worker | Register to controller
2024-06-04 16:34:51 | ERROR | stderr | Process model_worker - chatglm3-6b:
2024-06-04 16:34:51 | ERROR | stderr | Traceback (most recent call last):
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/module/moniconda3/envs/langchain/lib/python3.11/multiprocessing/process.py", line 314, in _bootstrap
2024-06-04 16:34:51 | ERROR | stderr |     self.run()
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/module/moniconda3/envs/langchain/lib/python3.11/multiprocessing/process.py", line 108, in run
2024-06-04 16:34:51 | ERROR | stderr |     self._target(*self._args, **self._kwargs)
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/env/langchain/startup.py", line 389, in run_model_worker
2024-06-04 16:34:51 | ERROR | stderr |     app = create_model_worker_app(log_level=log_level, **kwargs)
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/env/langchain/startup.py", line 217, in create_model_worker_app
2024-06-04 16:34:51 | ERROR | stderr |     worker = ModelWorker(
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/fastchat/serve/model_worker.py", line 102, in __init__
2024-06-04 16:34:51 | ERROR | stderr |     self.init_heart_beat()
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/fastchat/serve/base_model_worker.py", line 79, in init_heart_beat
2024-06-04 16:34:51 | ERROR | stderr |     self.register_to_controller()
2024-06-04 16:34:51 | ERROR | stderr |   File "/opt/module/moniconda3/envs/langchain/lib/python3.11/site-packages/fastchat/serve/base_model_worker.py", line 97, in register_to_controller
2024-06-04 16:34:51 | ERROR | stderr |     assert r.status_code == 200
2024-06-04 16:34:51 | ERROR | stderr | AssertionError
I'm running into the same problem. Does anyone have a solution?
Check the zhipu-api section of your config file.
Please see the FAQ in the wiki.
This error means the local model worker process failed to register with the fastchat controller. There are usually two causes: 1. A system-wide proxy is enabled; disable it. 2. DEFAULT_BIND_HOST is set to '0.0.0.0'; change it to '127.0.0.1' or the machine's actual IP. Updating to the latest code also resolves this.
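For reference, in the 0.2.x releases the bind address lives in `configs/server_config.py`. A minimal sketch of the suggested change (the exact surrounding code may differ between releases, so treat this as illustrative, not the definitive file contents):

```python
# configs/server_config.py (Langchain-Chatchat 0.2.x) -- sketch of the fix.

# Before: binding everything to 0.0.0.0 can break the worker's registration
# request to the controller on some hosts, which surfaces as the
# "assert r.status_code == 200" AssertionError in the logs above.
# DEFAULT_BIND_HOST = "0.0.0.0"

# After: bind to the loopback address (or this machine's actual LAN IP):
DEFAULT_BIND_HOST = "127.0.0.1"
```

If the services must stay reachable from other machines, use the host's real IP instead of 127.0.0.1; the point is that the workers need an address at which they can actually reach the controller.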