chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0
31.43k stars 5.48k forks

Unable to start the webui #2802

Closed: TigerHH6866 closed this issue 8 months ago

TigerHH6866 commented 8 months ago

There are no error messages at all, but the webui won't open. The machine is on AutoDL.

(my-env) root@autodl-container-ede4119248-6b0b6df3:~/autodl-tmp/Langchain-Chatchat# python startup.py --all-webui
2024-01-26 13:45:24,432 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:27,774 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto

==============================Langchain-Chatchat Configuration==============================
操作系统:Linux-5.4.0-107-generic-x86_64-with-glibc2.35.
python版本:3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
项目版本:v0.2.9
langchain版本:0.0.354. fastchat版本:0.2.35

当前使用的分词器:ChineseRecursiveTextSplitter
2024-01-26 13:45:29,012 - utils.py[line:525] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
当前启动的LLM模型:['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cuda
{'device': 'cuda', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'chatglm3-6b', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'chatglm_turbo', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'model_name': 'gpt-3.5-turbo', 'online_api': True, 'openai_proxy': '', 'port': 20002}
2024-01-26 13:45:29,013 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
当前Embbedings模型: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================

2024-01-26 13:45:29,013 - startup.py[line:651] - INFO: 正在启动服务:
2024-01-26 13:45:29,013 - startup.py[line:652] - INFO: 如需查看 llm_api 日志,请前往 /root/autodl-tmp/Langchain-Chatchat/logs
2024-01-26 13:45:31,180 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:32 | ERROR | stderr | INFO: Started server process [43482]
2024-01-26 13:45:32 | ERROR | stderr | INFO: Waiting for application startup.
2024-01-26 13:45:32 | ERROR | stderr | INFO: Application startup complete.
2024-01-26 13:45:32 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:20001 (Press CTRL+C to quit)
2024-01-26 13:45:34,820 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:34,853 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:34,909 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:36 | INFO | model_worker | Register to controller
2024-01-26 13:45:36 | INFO | controller | Register a new worker: http://127.0.0.1:21001
2024-01-26 13:45:36 | INFO | controller | Register done: http://127.0.0.1:21001, {'model_names': ['zhipu-api'], 'speed': 1, 'queue_length': 0}
2024-01-26 13:45:36 | INFO | stdout | INFO: 127.0.0.1:57410 - "POST /register_worker HTTP/1.1" 200 OK
2024-01-26 13:45:36 | ERROR | stderr | INFO: Started server process [43517]
2024-01-26 13:45:36 | ERROR | stderr | INFO: Waiting for application startup.
2024-01-26 13:45:36 | ERROR | stderr | INFO: Application startup complete.
2024-01-26 13:45:36 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:10001 (Press CTRL+C to quit)
INFO: Started server process [43523]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:21001 (Press CTRL+C to quit)
2024-01-26 13:45:36 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker ef3cd7c8 ...
Loading checkpoint shards: 100%|██████████████████████████████████████████████████████████████████| 7/7 [00:08<00:00, 1.21s/it]
2024-01-26 13:45:45 | ERROR | stderr |
2024-01-26 13:45:52 | INFO | model_worker | Register to controller
2024-01-26 13:45:52 | INFO | controller | Register a new worker: http://127.0.0.1:20002
2024-01-26 13:45:52 | INFO | controller | Register done: http://127.0.0.1:20002, {'model_names': ['chatglm3-6b'], 'speed': 1, 'queue_length': 0}
2024-01-26 13:45:52 | INFO | stdout | INFO: 127.0.0.1:57442 - "POST /register_worker HTTP/1.1" 200 OK
2024-01-26 13:45:52 | ERROR | stderr | INFO: Started server process [43522]
2024-01-26 13:45:52 | ERROR | stderr | INFO: Waiting for application startup.
2024-01-26 13:45:52 | ERROR | stderr | INFO: Application startup complete.
2024-01-26 13:45:52 | ERROR | stderr | INFO: Uvicorn running on http://127.0.0.1:20002 (Press CTRL+C to quit)
2024-01-26 13:45:55,125 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:59,853 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:59,854 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:59,854 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
2024-01-26 13:45:59,854 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
INFO: Started server process [43658]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)
2024-01-26 13:46:02,595 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto

==============================Langchain-Chatchat Configuration==============================
操作系统:Linux-5.4.0-107-generic-x86_64-with-glibc2.35.
python版本:3.11.7 (main, Dec 15 2023, 18:12:31) [GCC 11.2.0]
项目版本:v0.2.9
langchain版本:0.0.354. fastchat版本:0.2.35

当前使用的分词器:ChineseRecursiveTextSplitter
2024-01-26 13:46:04,007 - utils.py[line:525] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
当前启动的LLM模型:['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cuda
{'device': 'cuda', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'chatglm3-6b', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'chatglm_turbo', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'model_name': 'gpt-3.5-turbo', 'online_api': True, 'openai_proxy': '', 'port': 20002}
2024-01-26 13:46:04,008 - utils.py[line:543] - WARNING: device not in ['cuda', 'mps', 'cpu','xpu'], device = auto
当前Embbedings模型: bge-large-zh @ cuda

服务端运行信息:
    OpenAI API Server: http://127.0.0.1:10001/v1
    Chatchat API Server: http://127.0.0.1:7861
    Chatchat WEBUI Server: http://127.0.0.1:6006
==============================Langchain-Chatchat Configuration==============================

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

You can now view your Streamlit app in your browser.

URL: http://127.0.0.1:6006

2024-01-26 13:46:21 | INFO | model_worker | Send heart beat. Models: ['zhipu-api']. Semaphore: Semaphore(value=5, locked=False). call_ct: 0. worker_id: b34f70ae.
2024-01-26 13:46:21 | INFO | controller | Receive heart beat. http://127.0.0.1:21001
2024-01-26 13:46:21 | INFO | stdout | INFO: 127.0.0.1:57488 - "POST /receive_heart_beat HTTP/1.1" 200 OK

[Screenshot attachment: 微信图片_20240126135158]

dosubot[bot] commented 8 months ago

🤖

Hey @TigerHH6866! Good to see you again in the world of Langchain-Chatchat. Hope your code is behaving better than a cat chasing a laser pointer today! 😺

Based on the information you've provided, there seem to be a couple of issues at play.

Firstly, the command you're using to launch the webui might be incorrect. Instead of python startup.py --all-webui, try python webui_allinone.py, which is the command used to start the webui in the Langchain-Chatchat project. You can find more information about this in the server/webui_allinone_stale.py file.

Secondly, the warning "device not in ['cuda', 'mps', 'cpu','xpu'], device = auto" indicates that the configured device value is not recognized: it should be one of 'cuda', 'mps', 'cpu', or 'xpu'. If you're trying to use a GPU, make sure your system has a compatible GPU and the necessary drivers installed.
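As a minimal sketch of what pinning the device looks like (assuming the configs/model_config.py layout of the v0.2.x releases; verify the exact variable names against your own copy), replacing "auto" with an explicit device should silence this warning:

# configs/model_config.py -- excerpt; variable names assumed from v0.2.x, adjust to your checkout
LLM_DEVICE = "cuda"        # one of "cuda", "mps", "cpu", "xpu"; "auto" triggers the warning above
EMBEDDING_DEVICE = "cuda"  # keep consistent with LLM_DEVICE on a GPU machine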

If you're still facing issues after trying these solutions, it would be helpful to have more detailed error messages. You can find these in the logs, which are located at the path specified by the LOG_PATH variable in the script.
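In this run the startup banner already prints that path (/root/autodl-tmp/Langchain-Chatchat/logs). As a small convenience sketch (the path is copied from the banner above and is an assumption for any other setup), you can list the most recently modified files there to know which log to open first:

from pathlib import Path

# Path taken from the startup banner above; adjust if your checkout lives elsewhere.
log_dir = Path("/root/autodl-tmp/Langchain-Chatchat/logs")

# Print the five most recently modified files, newest last.
for f in sorted(log_dir.iterdir(), key=lambda p: p.stat().st_mtime)[-5:]:
    print(f.name, f.stat().st_size, "bytes")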

Regarding the system requirements for running the Langchain-Chatchat webui, the Python version, Langchain version, and fastchat version are not explicitly mentioned in the project documentation. However, based on the code and libraries used, it can be inferred that Python 3.6 or higher is required. The versions of Langchain and Fastchat can be obtained by calling langchain.__version__ and fastchat.__version__ respectively.
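For example, to confirm the installed versions programmatically (note that fastchat is distributed on PyPI as fschat but imported as fastchat):

import langchain
import fastchat  # installed via the "fschat" distribution

# These should match the startup banner above: langchain 0.0.354, fastchat 0.2.35.
print("langchain:", langchain.__version__)
print("fastchat:", fastchat.__version__)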

I hope this helps! If you have any other questions or need further clarification, feel free to ask.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

zRzRzRzRzRzRzR commented 8 months ago

If it's on AutoDL, it's very likely a problem with the host machine.
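One plausible AutoDL-specific explanation along those lines (my assumption, not confirmed in this thread): the webui binds to 127.0.0.1:6006 inside the container, so the logs above are consistent with a healthy server that is simply unreachable from a browser on another machine. It would only be accessible through AutoDL's built-in port-6006 proxy (the "自定义服务" / custom service entry in the console) or through an SSH tunnel such as ssh -CNg -L 6006:127.0.0.1:6006 root@<instance-address> -p <instance-ssh-port> (placeholders to be filled in from the instance's connection details), after which http://127.0.0.1:6006 opens in a local browser.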