chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a RAG and Agent application built on Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

openai.APIConnectionError: Connection error on Mac M1 #3540

Closed. whxleemdddd closed this issue 6 months ago.

whxleemdddd commented 8 months ago

OS: macOS-14.1.1-arm64-arm-64bit. Python: 3.10.13 (main, Sep 11 2023, 08:16:02) [Clang 14.0.6]. Project version: v0.2.10. langchain: 0.1.13. fastchat: 0.2.36.

Text splitter in use: ChineseRecursiveTextSplitter. Running LLM models: ['chatglm3-6b'] @ mps {'device': 'mps', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': '/Users/wanghongxi/Desktop/work/llm/models/chatglm3-6b', 'model_path_exists': True, 'port': 20002}. Embeddings model: bge-large-zh-v1.5 @ mps

Server endpoints:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

A new version of Streamlit is available.

See what's new at https://discuss.streamlit.io/c/announcements

Enter the following command to upgrade: $ pip install streamlit --upgrade

/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chat_models/__init__.py:31: LangChainDeprecationWarning: Importing chat models from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.chat_models import ChatOpenAI

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/llms/__init__.py:548: LangChainDeprecationWarning: Importing LLMs from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.llms import OpenAI

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/document_loaders/__init__.py:36: LangChainDeprecationWarning: Importing document loaders from langchain is deprecated. Importing from langchain will no longer be supported as of langchain==0.2.0. Please import from langchain-community instead:

from langchain_community.document_loaders import JSONLoader

To install langchain-community run pip install -U langchain-community.
  warnings.warn(
2024-03-27 16:31:56,516 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58186 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-27 16:31:56,517 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-27 16:31:56,643 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58186 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-27 16:31:56,645 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58186 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-27 16:31:56,650 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-03-27 16:32:09,328 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58203 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-27 16:32:09,329 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-03-27 16:32:09,348 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58203 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-03-27 16:32:09,349 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58203 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-03-27 16:32:09,358 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:58203 - "POST /chat/chat HTTP/1.1" 200 OK
2024-03-27 16:32:09,362 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
  warn_deprecated(
/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function acall was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use ainvoke instead.
  warn_deprecated(
2024-03-27 16:32:09,464 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 0.913845 seconds
2024-03-27 16:32:10,382 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 1.707381 seconds
2024-03-27 16:32:12,118 - utils.py[line:38] - ERROR: Connection error.
Traceback (most recent call last):
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
    yield
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_transports/default.py", line 371, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_async/connection.py", line 99, in handle_async_request
    raise exc
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_async/connection.py", line 76, in handle_async_request
    stream = await self._connect(request)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_async/connection.py", line 122, in _connect
    stream = await self._network_backend.connect_tcp(**kwargs)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_backends/auto.py", line 30, in connect_tcp
    return await self._backend.connect_tcp(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_backends/anyio.py", line 112, in connect_tcp
    with map_exceptions(exc_map):
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1435, in _request
    response = await self._client.send(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_client.py", line 1646, in send
    response = await self._send_handling_auth(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_client.py", line 1674, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_client.py", line 1711, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_client.py", line 1748, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_transports/default.py", line 370, in handle_async_request
    with map_httpcore_exceptions():
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: All connection attempts failed

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/wanghongxi/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
    await fn
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 154, in awarning_emitting_wrapper
    return await wrapped(*args, **kwargs)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chains/base.py", line 428, in acall
    return await self.ainvoke(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chains/base.py", line 212, in ainvoke
    raise e
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chains/base.py", line 203, in ainvoke
    await self._acall(inputs, run_manager=run_manager)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 556, in agenerate_prompt
    return await self.agenerate(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 516, in agenerate
    raise exceptions[0]
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 638, in _agenerate_with_cache
    result = await self._agenerate(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 526, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 84, in agenerate_from_stream
    async for chunk in stream:
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 490, in _astream
    async for chunk in await acompletion_with_retry(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 105, in acompletion_with_retry
    return await llm.async_client.create(**kwargs)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 1300, in create
    return await self._post(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1703, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1406, in request
    return await self._request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/Users/wanghongxi/miniconda3/envs/langchain-chatchat/lib/python3.10/site-packages/openai/_base_client.py", line 1469, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

2024-03-27 16:32:12,134 - utils.py[line:40] - ERROR: APIConnectionError: Caught exception: Connection error.

dosubot[bot] commented 8 months ago

Hey there, @whxleemdddd! Great to see you back 😊. How's everything going besides this pesky issue?

Based on the information you provided, the openai.APIConnectionError you're encountering could be due to several reasons. Here are the most likely causes:

  1. Network Issues: Ensure your machine has a stable internet connection. The error might indicate that your script cannot reach the OpenAI API servers, possibly due to network connectivity issues on your end.

  2. Incorrect API Keys or Configuration: The script includes an argument for --api-keys, which suggests it requires API keys to authenticate requests to an API server. If these keys are missing, incorrect, or have not been properly passed to the script, it would result in connection errors. Double-check that you have correctly set the API keys and that they are valid.

  3. Server Configuration Issues: The script is designed to launch several services (controller, worker, openai_api_server) and relies on specific ports and hostnames (localhost in this case). If there's an issue with the server configuration, such as ports being blocked or already in use, or if the services do not start up correctly, it could lead to connection errors when trying to communicate between the components.

  4. Local Environment Setup: Since you're using an M1 Mac, ensure that all dependencies and the Python environment are correctly set up for ARM64 architecture. Compatibility issues could lead to unexpected errors. Specifically, check that all Python packages and dependencies installed via pip or conda are compatible with ARM64.

  5. Firewall or Security Software: Sometimes, firewall settings or security software on your Mac might block outgoing connections to certain ports or domains. Check your firewall settings and ensure that your security software allows the script to make outbound connections to the OpenAI API servers.

  6. Incorrect Use of the --device Argument: You mentioned using an M1 Mac, but the script has a --device argument set to cuda by default. M1 Macs do not support CUDA because it's NVIDIA-specific. You should change this to mps (Apple's Metal Performance Shaders) for GPU acceleration on M1 chips or cpu if you're not using GPU acceleration. A minimal configuration sketch follows this list.
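
Following up on point 6: in Langchain-Chatchat 0.2.x the inference device is normally set in configs/model_config.py. The snippet below is only a sketch; the field names LLM_DEVICE and EMBEDDING_DEVICE follow the 0.2.x example config and should be verified against your local copy:

    # configs/model_config.py (sketch; verify against your own file)
    # On Apple Silicon there is no CUDA: "mps" uses Metal, "cpu" disables GPU.
    LLM_DEVICE = "mps"
    EMBEDDING_DEVICE = "mps"

Your startup banner above already shows 'device': 'mps', so if these values are correct the cause is more likely one of the connectivity issues listed earlier.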

To diagnose the issue further, check the logs located in the ./logs/ directory as mentioned in your script. These logs might contain more specific error messages that can help pinpoint the exact cause of the APIConnectionError.
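
Beyond reading the logs, you can probe the local OpenAI-compatible endpoint directly. This is a minimal sketch rather than project code: it assumes the address printed in your startup banner (http://127.0.0.1:20000/v1) and uses httpx, which your traceback shows is installed. If you launched the services with --api-keys, add the matching Authorization header:

    import httpx

    # GET /v1/models is a standard listing endpoint on OpenAI-compatible
    # servers such as FastChat's openai_api_server.
    try:
        r = httpx.get("http://127.0.0.1:20000/v1/models", timeout=5)
        print(r.status_code, r.text[:300])
    except httpx.ConnectError as e:
        # Mirrors the httpx.ConnectError in the traceback above: nothing is
        # listening on that host/port, or something is blocking the connection.
        print("OpenAI API server unreachable:", e)

If this probe also fails, the FastChat OpenAI server on port 20000 never came up (or is blocked), independently of the WebUI and the Chatchat API server.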

Regarding your question about the openai_proxy field in the model_config.py file, it is designed to specify a proxy server for routing requests to the OpenAI API. This can be useful in scenarios where direct access to the OpenAI API is restricted or when you want to route traffic through a specific network path for reasons such as monitoring, logging, or bypassing geo-restrictions. If you're using a proxy server, ensure that the openai_proxy field is correctly set.
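
One proxy pitfall worth checking: if http_proxy/https_proxy environment variables are set, httpx (with its default trust_env=True) can route even requests to 127.0.0.1 through the proxy, producing exactly this kind of connection error. A hedged sketch of the workaround, assuming you can set environment variables before the HTTP clients are created:

    import os

    # Exempt loopback traffic from any configured proxy. The no_proxy
    # variable is the de facto standard honored by httpx and requests;
    # adjust the names if your environment differs.
    os.environ["no_proxy"] = "127.0.0.1,localhost," + os.environ.get("no_proxy", "")

    # Shell equivalent, before running startup.py:
    #   export no_proxy="127.0.0.1,localhost"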

I also found some similar issues in the Langchain-Chatchat repository that might be helpful; you can refer to them for more information and potential solutions.

If none of the above solutions work, please provide more information about your network environment and the exact steps you're taking when the error occurs to help diagnose the issue.


WoNeiDieA commented 8 months ago

I've run into this too, just today, and I don't know what to do about it.

zRzRzRzRzRzRzR commented 8 months ago

Is this running locally? Most likely your model never actually started.
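
Since the logs above already show the FastChat controller answering on port 20001, a quick way to confirm whether a worker actually registered is to query it directly. A minimal sketch; the /list_models endpoint and port are taken from the logs in this thread, while the response shape is an assumption to verify:

    import httpx

    # Ask the FastChat controller which model workers have registered.
    r = httpx.post("http://127.0.0.1:20001/list_models", timeout=5)
    print(r.json())  # expected to include "chatglm3-6b" if the worker is up

An empty model list would confirm the diagnosis: the worker never started, so the OpenAI-compatible server has nothing to route /chat/completions to.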

zixiaotan21 commented 8 months ago

Running into the same problem. The embedding model is bge-large-zh-v1.5, and whether I choose zhipu-api or qwen-api as the LLM, I get an API connection error. The LLM never answers questions; presumably the API call is failing?

zixiaotan21 commented 8 months ago

It worked fine yesterday; no idea why it broke today.

whxleemdddd commented 8 months ago

@zRzRzRzRzRzRzR I switched to Qwen1.5-0.5B-Chat and the problem persists. Meanwhile chat-with-mlx, deployed locally on the same machine, runs without any problem.

zixiaotan21 commented 8 months ago

By "deploying chat-with-mlx locally", do you mean selecting chat-with-mlx as the LLM and running it on your own machine?

whxleemdddd commented 8 months ago

Yes. Qwen1.5-0.5B runs very smoothly on a Mac M1 with 16 GB of RAM; it's just a bit dim-witted.

tangrm commented 7 months ago

Same problem here: the LLM is chatglm3-6b and the embeddings model is bge-large-zh-v1.5. Startup succeeds, but chatting fails with the error below:

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.4.0-172-generic-x86_64-with-glibc2.35. Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]. Project version: v0.2.10. langchain: 0.0.354. fastchat: 0.2.35.

Text splitter in use: ChineseRecursiveTextSplitter. Running LLM models: ['chatglm3-6b'] @ cuda {'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': 'THUDM/chatglm3-6b', 'model_path_exists': True, 'port': 20002}. Embeddings model: bge-large-zh-v1.5 @ cuda
==============================Langchain-Chatchat Configuration==============================

2024-04-12 10:10:27,999 - startup.py[line:655] - INFO: Starting services:
2024-04-12 10:10:27,999 - startup.py[line:656] - INFO: To view the llm_api logs, go to /Langchain-Chatchat/logs
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: model startup will be rewritten in Langchain-Chatchat 0.3.x to support more modes and faster startup; the related functionality in 0.2.x will be deprecated
  warn_deprecated(
2024-04-12 10:10:33 | ERROR | stderr | INFO: Started server process [121]
2024-04-12 10:10:33 | ERROR | stderr | INFO: Waiting for application startup.
2024-04-12 10:10:33 | ERROR | stderr | INFO: Application startup complete.
2024-04-12 10:10:33 | ERROR | stderr | INFO: Uvicorn running on http://0.0.0.0:20000 (Press CTRL+C to quit)
2024-04-12 10:10:33 | INFO | model_worker | Loading the model ['chatglm3-6b'] on worker 6fc3abd0 ...
2024-04-12 10:10:34 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting eos_token is not supported, use the default one.
2024-04-12 10:10:34 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting pad_token is not supported, use the default one.
2024-04-12 10:10:34 | WARNING | transformers_modules.chatglm3-6b.tokenization_chatglm | Setting unk_token is not supported, use the default one.
Loading checkpoint shards:   0%|          | 0/7 [00:00<?, ?it/s]
Loading checkpoint shards: 100%|██████████| 7/7 [00:02<00:00, 2.64it/s]
2024-04-12 10:10:36 | ERROR | stderr |
2024-04-12 10:10:40 | INFO | model_worker | Register to controller
INFO: Started server process [215]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:7861 (Press CTRL+C to quit)

==============================Langchain-Chatchat Configuration==============================
OS: Linux-5.4.0-172-generic-x86_64-with-glibc2.35. Python: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]. Project version: v0.2.10. langchain: 0.0.354. fastchat: 0.2.35.

Text splitter in use: ChineseRecursiveTextSplitter. Running LLM models: ['chatglm3-6b'] @ cuda {'device': 'cuda', 'host': '0.0.0.0', 'infer_turbo': False, 'model_path': 'THUDM/chatglm3-6b', 'model_path_exists': True, 'port': 20002}. Embeddings model: bge-large-zh-v1.5 @ cuda

Server endpoints:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501
==============================Langchain-Chatchat Configuration==============================

Collecting usage statistics. To deactivate, set browser.gatherUsageStats to False.

You can now view your Streamlit app in your browser.

URL: http://0.0.0.0:8501

2024-04-12 10:10:59,933 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:59486 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-12 10:10:59,937 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-12 10:11:00,170 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:59486 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-12 10:11:00,173 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:59486 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-12 10:11:00,193 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
2024-04-12 10:11:03,276 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46378 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-12 10:11:03,280 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
2024-04-12 10:11:03,408 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:20001/list_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46378 - "POST /llm_model/list_running_models HTTP/1.1" 200 OK
2024-04-12 10:11:03,411 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_running_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46378 - "POST /llm_model/list_config_models HTTP/1.1" 200 OK
2024-04-12 10:11:03,430 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/llm_model/list_config_models "HTTP/1.1 200 OK"
INFO: 127.0.0.1:46378 - "POST /chat/chat HTTP/1.1" 200 OK
/usr/local/lib/python3.10/dist-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The class langchain_community.chat_models.openai.ChatOpenAI was deprecated in langchain-community 0.0.10 and will be removed in 0.2.0. An updated version of the class exists in the langchain-openai package and should be used instead. To use it run pip install -U langchain-openai and import as from langchain_openai import ChatOpenAI.
  warn_deprecated(
2024-04-12 10:11:03,771 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/chat "HTTP/1.1 200 OK"
2024-04-12 10:11:03,794 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 0.947264 seconds
2024-04-12 10:11:04,753 - _base_client.py[line:1524] - INFO: Retrying request to /chat/completions in 1.831595 seconds
2024-04-12 10:11:06,598 - utils.py[line:38] - ERROR: Connection error.
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 67, in map_httpcore_exceptions
    yield
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 371, in handle_async_request
    resp = await self._pool.handle_async_request(req)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 216, in handle_async_request
    raise exc from None
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection_pool.py", line 196, in handle_async_request
    response = await connection.handle_async_request(
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/http_proxy.py", line 207, in handle_async_request
    return await self._connection.handle_async_request(proxy_request)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/connection.py", line 101, in handle_async_request
    return await self._connection.handle_async_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/http11.py", line 143, in handle_async_request
    raise exc
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/http11.py", line 113, in handle_async_request
    ) = await self._receive_response_headers(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/http11.py", line 186, in _receive_response_headers
    event = await self._receive_event(timeout=timeout)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_async/http11.py", line 224, in _receive_event
    data = await self._network_stream.read(
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_backends/anyio.py", line 32, in read
    with map_exceptions(exc_map):
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ReadError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1435, in _request
    response = await self._client.send(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1646, in send
    response = await self._send_handling_auth(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1674, in _send_handling_auth
    response = await self._send_handling_redirects(
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1711, in _send_handling_redirects
    response = await self._send_single_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_client.py", line 1748, in _send_single_request
    response = await transport.handle_async_request(request)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 370, in handle_async_request
    with map_httpcore_exceptions():
  File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/usr/local/lib/python3.10/dist-packages/httpx/_transports/default.py", line 84, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ReadError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Langchain-Chatchat/server/utils.py", line 36, in wrap_done
    await fn
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 385, in acall
    raise e
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py", line 379, in acall
    await self._acall(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 275, in _acall
    response = await self.agenerate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/dist-packages/langchain/chains/llm.py", line 142, in agenerate
    return await self.llm.agenerate_prompt(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 554, in agenerate_prompt
    return await self.agenerate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 514, in agenerate
    raise exceptions[0]
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 617, in _agenerate_with_cache
    return await self._agenerate(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 522, in _agenerate
    return await agenerate_from_stream(stream_iter)
  File "/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/chat_models.py", line 87, in agenerate_from_stream
    async for chunk in stream:
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 488, in _astream
    async for chunk in await acompletion_with_retry(
  File "/usr/local/lib/python3.10/dist-packages/langchain_community/chat_models/openai.py", line 105, in acompletion_with_retry
    return await llm.async_client.create(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/openai/resources/chat/completions.py", line 1300, in create
    return await self._post(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1703, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1406, in request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1459, in _request
    return await self._retry_request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1528, in _retry_request
    return await self._request(
  File "/usr/local/lib/python3.10/dist-packages/openai/_base_client.py", line 1469, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

2024-04-12 10:11:06,601 - utils.py[line:40] - ERROR: APIConnectionError: Caught exception: Connection error.

github-actions[bot] commented 6 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.

github-actions[bot] commented 6 months ago

This issue has been automatically closed because it had no activity for more than 14 days after being marked as stale.

shymaggie commented 2 months ago

default llm model glm4-chat is not found in available llms, using qwen:7b instead
INFO: 127.0.0.1:57650 - "POST /chat/chat/completions HTTP/1.1" 200 OK
2024-09-14 10:45:32,067 httpx 22262 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2024-09-14 10:45:32,067 openai._base_client 22253 INFO Retrying request to /chat/completions in 0.807033 seconds
2024-09-14 10:45:32,879 openai._base_client 22253 INFO Retrying request to /chat/completions in 1.548385 seconds
2024-09-14 10:45:34.432 | ERROR | chatchat.server.api_server.openai_routes:generator:105 - openai request error: Connection error.

I'm running into this too...