Closed · knight-mao closed this issue 7 months ago
Here are some possible steps to resolve the problem you are seeing:

1. Check the port configuration: make sure the port you are trying to reach matches your service configuration. The Web UI address is http://127.0.0.1:8501, and the API server address is http://127.0.0.1:7861.
2. Check the server configuration: confirm that the DEFAULT_BIND_HOST, WEBUI_SERVER, and API_SERVER port settings in configs/server_config.py.example match your setup. If the services need to be reachable from other machines, set DEFAULT_BIND_HOST to "0.0.0.0".
3. Firewall and network configuration: make sure no firewall or network setting is blocking access to the service ports.
4. Review the startup logs: look for errors or warnings indicating that a service cannot be reached.
5. Cross-Origin Resource Sharing (CORS): if your application needs to call the API from a different origin, make sure CORS is enabled in the configuration.
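As a sketch of step 2, the relevant part of `configs/server_config.py` (copied from `server_config.py.example`) looks roughly like this in 0.2.x. The port values are the defaults mentioned above; treat the exact field layout as illustrative if your version differs:

```python
# configs/server_config.py (sketch; copy from server_config.py.example)

# Bind to loopback for purely local access; change to "0.0.0.0" if the
# services must accept connections from other machines on the network.
DEFAULT_BIND_HOST = "127.0.0.1"

# Streamlit Web UI front end.
WEBUI_SERVER = {
    "host": DEFAULT_BIND_HOST,
    "port": 8501,
}

# FastAPI back end that the Web UI talks to.
API_SERVER = {
    "host": DEFAULT_BIND_HOST,
    "port": 7861,
}
```

After editing, restart the services so the new host/port values take effect.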
If these steps do not solve the problem, you can look at how similar issues were resolved in the discussions and issue tracker, for example: verifying that CORS is configured correctly, checking the network setup, ruling out port conflicts, and reviewing client and server logs for errors or warnings.

I hope these suggestions help. If you need further assistance, please provide more error messages or logs so we can give more specific guidance.
To continue the conversation, mention @dosu.
@dosu
1. Port configuration checked: the addresses and ports are correct.
2. Server configuration checked: the settings are correct.
3. The firewall is completely disabled.
4. Startup logs: I can't see anything wrong in them; they are attached below.
5. There is no cross-origin access involved; everything is accessed locally.

openai_api_server:

```
2024-04-22 17:39:26 | ERROR | stderr | INFO:     Started server process [18040]
2024-04-22 17:39:26 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-22 17:39:26 | ERROR | stderr | INFO:     Application startup complete.
2024-04-22 17:39:26 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-22 19:40:16 | ERROR | stderr | INFO:     Started server process [18156]
2024-04-22 19:40:16 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-22 19:40:16 | ERROR | stderr | INFO:     Application startup complete.
2024-04-22 19:40:16 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-22 20:15:04 | ERROR | stderr | INFO:     Started server process [17756]
2024-04-22 20:15:04 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-22 20:15:04 | ERROR | stderr | INFO:     Application startup complete.
2024-04-22 20:15:04 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-22 20:18:05 | INFO  | stdout | INFO:     127.0.0.1:16938 - "GET / HTTP/1.1" 404 Not Found
2024-04-22 20:18:05 | INFO  | stdout | INFO:     127.0.0.1:16938 - "GET /favicon.ico HTTP/1.1" 404 Not Found
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Started server process [20448]
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Application startup complete.
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-22 20:28:13 | INFO  | stdout | INFO:     127.0.0.1:17191 - "GET / HTTP/1.1" 404 Not Found
2024-04-22 20:31:35 | INFO  | stdout | INFO:     127.0.0.1:17236 - "GET / HTTP/1.1" 404 Not Found
```

model_worker_0006e080.log:

```
2024-04-22 20:22:59 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 0006e080 ...
2024-04-22 20:22:59 | ERROR | stderr | Loading checkpoint shards:   0% | 0/7 [00:00<?, ?it/s]
2024-04-22 20:23:00 | ERROR | stderr | Loading checkpoint shards:  14% | 1/7 [00:01<00:06, 1.13s/it]
2024-04-22 20:23:01 | ERROR | stderr | Loading checkpoint shards:  29% | 2/7 [00:02<00:05, 1.18s/it]
2024-04-22 20:23:03 | ERROR | stderr | Loading checkpoint shards:  43% | 3/7 [00:03<00:04, 1.19s/it]
2024-04-22 20:23:03 | ERROR | stderr | Loading checkpoint shards:  57% | 4/7 [00:04<00:03, 1.07s/it]
2024-04-22 20:23:05 | ERROR | stderr | Loading checkpoint shards:  71% | 5/7 [00:05<00:02, 1.14s/it]
2024-04-22 20:23:06 | ERROR | stderr | Loading checkpoint shards:  86% | 6/7 [00:06<00:01, 1.18s/it]
2024-04-22 20:23:07 | ERROR | stderr | Loading checkpoint shards: 100% | 7/7 [00:07<00:00, 1.10s/it]
2024-04-22 20:23:11 | INFO | model_worker | Register to controller
```
So it's a problem with your VPN/proxy or its node: openai_api_server was never reached successfully.
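One way to test this theory before touching the VPN: when a system-wide proxy is active, Python client libraries may route even 127.0.0.1 requests through it unless the loopback address is excluded via NO_PROXY. A minimal standard-library sketch (the helper name `localhost_bypasses_proxy` is made up for illustration):

```python
import os
import urllib.request

def localhost_bypasses_proxy() -> bool:
    """Return True if requests to 127.0.0.1 would skip the system proxy.

    Hypothetical helper: inspects the proxy settings urllib sees and
    whether the loopback host is on the bypass list.
    """
    proxies = urllib.request.getproxies()  # env vars / OS proxy settings
    if not proxies:
        return True  # no proxy configured at all
    # proxy_bypass consults NO_PROXY (and the OS exception list on Windows)
    return bool(urllib.request.proxy_bypass("127.0.0.1"))

# One possible fix for a session: exclude loopback addresses from the
# proxy before starting the services.
os.environ.setdefault("NO_PROXY", "localhost,127.0.0.1")
```

If `localhost_bypasses_proxy()` is False while the proxy is on, local requests to the API server are being sent to the proxy node, which matches the symptom described here.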
Brother, did you ever solve this? I'm hitting the same problem. Any advice?
Please, somebody help!
> So it's a problem with your VPN/proxy or its node: openai_api_server was never reached successfully.

Hi, I turned off the VPN and also tried opening port 7861 in the Windows firewall, but the service still can't be reached. How can I fix this? Thanks.
Problem Description
The services start successfully, but neither the API nor the Web UI can be accessed.
Steps to Reproduce
Expected Result
The browser should display the Web UI or the API page.
Actual Result
Chatchat WEBUI Server (http://127.0.0.1:8501) shows: "127.0.0.1 refused to connect."
Chatchat API Server (http://127.0.0.1:7861) shows a blank page, and the conda console prints:

```
INFO:     127.0.0.1:17444 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO:     127.0.0.1:17444 - "GET /docs HTTP/1.1" 200 OK
```
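The two symptoms above can be told apart with a plain TCP probe: "refused to connect" on 8501 means nothing is listening there, while the 307 redirect followed by a 200 on /docs shows the API server on 7861 is actually up. A small standard-library sketch (hosts and ports are the ones from this report; `port_is_listening` is a made-up helper name):

```python
import socket

def port_is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Attempt a TCP connection; True if something accepts it."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, unreachable, ...
        return False

# Probe the three servers this setup is supposed to expose.
for port in (8501, 7861, 20000):
    state = "listening" if port_is_listening("127.0.0.1", port) else "not reachable"
    print(f"127.0.0.1:{port} -> {state}")
```

If 7861 and 20000 are listening but 8501 is not, the FastAPI back end is fine and only the Streamlit Web UI process failed to start, which narrows down where to look in the logs.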
Environment Information
==============================Langchain-Chatchat Configuration==============================
OS: Windows-10-10.0.19045-SP0
Python version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
Project version: v0.2.10
langchain version: 0.0.354
fastchat version: 0.2.35
Text splitter in use: ChineseRecursiveTextSplitter
LLM models started: ['chatglm3-6b', 'zhipu-api', 'openai-api'] @ cuda
{'device': 'cuda', 'host': '127.0.0.1', 'infer_turbo': False, 'model_path': 'E:\OneDrive\Coding\AI\Langchain-Chatchat\Embeding-LLM-models\chatglm3-6b', 'model_path_exists': True, 'port': 20002}
{'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'online_api': True, 'port': 21001, 'provider': 'ChatGLMWorker', 'version': 'glm-4', 'worker_class': <class 'server.model_workers.zhipu.ChatGLMWorker'>}
{'api_base_url': 'https://api.openai.com/v1', 'api_key': '', 'device': 'auto', 'host': '127.0.0.1', 'infer_turbo': False, 'model_name': 'gpt-4', 'online_api': True, 'openai_proxy': '', 'port': 20002}
Embeddings model: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================
```
2024-04-22 20:22:52,034 - startup.py[line:655] - INFO: Starting services:
2024-04-22 20:22:52,034 - startup.py[line:656] - INFO: To view the llm_api logs, go to E:\OneDrive\Coding\AI\Langchain-Chatchat\Langchain-Chatchat-0.2.10\logs
D:\Anaconda\envs\snowchat\Lib\site-packages\langchain_core_api\deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related 0.2.x functionality will be deprecated
  warn_deprecated(
2024-04-22 20:22:58 | INFO | model_worker | Register to controller
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Started server process [20448]
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Waiting for application startup.
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Application startup complete.
2024-04-22 20:22:58 | ERROR | stderr | INFO:     Uvicorn running on http://127.0.0.1:20000 (Press CTRL+C to quit)
2024-04-22 20:22:59 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 0006e080 ...
2024-04-22 20:22:59 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-04-22 20:22:59 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-04-22 20:22:59 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
Loading checkpoint shards: 100% | 7/7 [00:07<00:00, 1.10s/it]
2024-04-22 20:23:11 | INFO | model_worker | Register to controller
INFO:     Started server process [12420]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://127.0.0.1:7861 (Press CTRL+C to quit)
```
Server runtime information:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://127.0.0.1:8501
==============================Langchain-Chatchat Configuration==============================
Additional Information
I have reinstalled the services several times and nothing changes. Please help, guys, thank you!!!