chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based LLM RAG and Agent application built with Langchain and language models such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] Cannot access the web page #2406

Closed CaffeineOddity closed 10 months ago

CaffeineOddity commented 11 months ago

chatchat$ sudo python startup.py -a

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-43.46-custom-generic-x86_64-with-glibc2.35
Python version: 3.9.5 (default, Dec 16 2023, 01:54:16) [GCC 11.4.0]
Project version: v0.2.8
langchain version: 0.0.344
fastchat version: 0.2.34

Current text splitter: ChineseRecursiveTextSplitter
LLM models being started: ['chatglm3-6b', 'openai-api'] @ cuda
{'device': 'cuda', 'gpus': '4,5,6,7', 'host': '0.0.0.0', 'infer_turbo': False, 'max_gpu_memory': '5GiB', 'model_path': '/data1/ai_models/huggingface/hub/models--THUDM--chatglm3-6b/blobs', 'model_path_exists': True, 'num_gpus': 4, 'port': 20002}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'gpus': '4,5,6,7', 'host': '0.0.0.0', 'infer_turbo': False, 'max_gpu_memory': '5GiB', 'model_name': 'gpt-3.5-turbo', 'num_gpus': 4, 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: bge-large-zh @ cuda
==============================Langchain-Chatchat Configuration==============================

2023-12-19 10:13:27,219 - startup.py[line:650] - INFO: Starting services:
2023-12-19 10:13:27,219 - startup.py[line:651] - INFO: To view the llm_api logs, see /data1/chatchat/chatchat/logs
2023-12-19 10:13:30 | ERROR | stderr | INFO: Started server process [34214]
2023-12-19 10:13:30 | ERROR | stderr | INFO: Waiting for application startup.
2023-12-19 10:13:30 | ERROR | stderr | INFO: Application startup complete.
2023-12-19 10:13:30 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20001 (Press CTRL+C to quit)
2023-12-19 10:13:34 | ERROR | stderr | INFO: Started server process [34488]
2023-12-19 10:13:34 | ERROR | stderr | INFO: Waiting for application startup.
2023-12-19 10:13:34 | ERROR | stderr | INFO: Application startup complete.
2023-12-19 10:13:34 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2023-12-19 10:13:35 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker ea412628 ...
Loading checkpoint shards: 100%|██████████| 7/7 [00:15<00:00, 2.20s/it]
2023-12-19 10:13:51 | INFO | model_worker | Register to controller
2023-12-19 10:13:51 | INFO | controller | Register a new worker: http://127.0.0.1:20002
2023-12-19 10:13:51 | INFO | controller | Register done: http://127.0.0.1:20002, {'model_names': ['chatglm3-6b'], 'speed': 1, 'queue_length': 0}
2023-12-19 10:13:51 | INFO | stdout | INFO: 127.0.0.1:54238 - "POST /register_worker HTTP/1.1" 200 OK
2023-12-19 10:13:51 | ERROR | stderr | INFO: Started server process [34489]
2023-12-19 10:13:51 | ERROR | stderr | INFO: Waiting for application startup.
2023-12-19 10:13:51 | ERROR | stderr | INFO: Application startup complete.
2023-12-19 10:13:51 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20002 (Press CTRL+C to quit)
INFO: Started server process [35089]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.15.0-43.46-custom-generic-x86_64-with-glibc2.35
Python version: 3.9.5 (default, Dec 16 2023, 01:54:16) [GCC 11.4.0]
Project version: v0.2.8
langchain version: 0.0.344
fastchat version: 0.2.34

Current text splitter: ChineseRecursiveTextSplitter
LLM models being started: ['chatglm3-6b', 'openai-api'] @ cuda
{'device': 'cuda', 'gpus': '4,5,6,7', 'host': '0.0.0.0', 'infer_turbo': False, 'max_gpu_memory': '5GiB', 'model_path': '/data1/ai_models/huggingface/hub/models--THUDM--chatglm3-6b/blobs', 'model_path_exists': True, 'num_gpus': 4, 'port': 20002}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'gpus': '4,5,6,7', 'host': '0.0.0.0', 'infer_turbo': False, 'max_gpu_memory': '5GiB', 'model_name': 'gpt-3.5-turbo', 'num_gpus': 4, 'online_api': True, 'openai_proxy': '', 'port': 20002}
Current embeddings model: bge-large-zh @ cuda

Server endpoints:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

================================================================================
Environment: Ubuntu 22.04, Python 3.9.5

Deployed on a remote server. The web zone's domain points to the target machine: xxx.com/chatchat. After running python startup.py -a, I can see access records arriving:
2023-12-19 10:31:07 | INFO | stdout | INFO: 127.0.0.1:54290 - "POST /receive_heart_beat HTTP/1.1" 200 OK
2023-12-19 10:14:14 | INFO | stdout | INFO: 10.16.16.37:41306 - "GET /chatchat/ HTTP/1.1" 404 Not Found

How should I go about locating the problem?
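One way to narrow this down is to probe each service directly on the deployment host, bypassing the proxy entirely. A minimal sketch in Python, using only the standard library; the ports come from the startup log above, and the probe helper and URL list are illustrative, not part of the project:

```python
# Probe each Chatchat endpoint directly on the host, bypassing any
# reverse proxy, to see which layer is failing. Ports taken from the
# startup log: 8501 (WebUI), 7861 (API), 20000 (OpenAI-compatible API).
import urllib.error
import urllib.request


def probe(url, timeout=5):
    """Return the HTTP status code for url, or an 'unreachable' string."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as exc:
        return exc.code  # the server answered, just with a non-2xx status
    except (urllib.error.URLError, OSError) as exc:
        return f"unreachable: {exc}"


if __name__ == "__main__":
    for url in (
        "http://127.0.0.1:8501",             # Streamlit WebUI
        "http://127.0.0.1:7861/docs",        # Chatchat API (FastAPI docs page)
        "http://127.0.0.1:20000/v1/models",  # OpenAI-compatible API
    ):
        print(url, "->", probe(url))
```

If these all answer locally but the domain still returns 404, the fault is in the proxy layer rather than in startup.py.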

liunux4odoo commented 11 months ago

This is probably not a problem with the project itself; more likely the reverse proxy is misconfigured. Try accessing the service directly on the deployment machine, or from within the LAN, to test.
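For reference, the 404 on "GET /chatchat/" in the log above is what a subpath deployment looks like when the WebUI doesn't know about the subpath. A generic sketch, assuming nginx is the reverse proxy (the location path matches the reporter's xxx.com/chatchat; everything else is a placeholder, not a verified Chatchat configuration):

```nginx
# Hypothetical nginx location block for serving the Streamlit WebUI
# under the /chatchat/ subpath; upstream port 8501 from the log above.
location /chatchat/ {
    proxy_pass http://127.0.0.1:8501/;
    proxy_http_version 1.1;
    # Streamlit communicates over a WebSocket; without these upgrade
    # headers the page typically loads blank or errors out.
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
    proxy_read_timeout 86400;
}
```

When served under a subpath, Streamlit itself also has to be told about it via its server.baseUrlPath option; whether startup.py exposes a way to set that is not confirmed here, which is why direct access via http://<server-ip>:8501 is the quickest sanity check.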

Xingyuxingyu commented 10 months ago

How did you solve this in the end? How do you access a WebUI deployed on a remote server?

zRzRzRzRzRzRzR commented 10 months ago

If your server has a public network connection, just replace 127.0.0.1 with your LAN IP.

Xingyuxingyu commented 10 months ago

This is an automatic vacation reply from QQ Mail. Hello, your email has been received; I will reply to you as soon as possible.

liyiyiya commented 7 months ago

> If your server has a public network connection, just replace 127.0.0.1 with your LAN IP.

I switched it to 127.0.0.1 and it still doesn't work. Is there anything else I can try?
