chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0

API communication error: peer closed connection without sending complete mess #2493

Closed weipeng008005 closed 8 months ago

weipeng008005 commented 8 months ago


Started with python startup.py -a. LLM chat works normally, but knowledge base chat reports an API communication error.

Runtime information:
OS: Linux-5.4.0-150-generic-x86_64-with-glibc2.27
Python version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:43:09) [GCC 12.3.0]
Project version: v0.2.8
langchain version: 0.0.344; fastchat version: 0.2.34

Text splitter in use: ChineseRecursiveTextSplitter
LLM model started: ['Qwen-14B-Chat'] @ cuda
{'device': 'cuda',
'gpus': '0',
'host': '0.0.0.0',
'infer_turbo': False,
'model_path': '/home/gpu/ai/projects/Qwen/models/qwen-14b-chat',
'model_path_exists': True,
'num_gpus': 1,
'port': 20003}
Embeddings model in use: nlp_gte_sentence-embedding_chinese-large @ cuda

Server runtime information:
OpenAI API Server: http://127.0.0.1:20000/v1
Chatchat API Server: http://127.0.0.1:7861
Chatchat WEBUI Server: http://0.0.0.0:8501

Error log:

2023-12-28 09:16:02,440 - _client.py[line:1027] - INFO: HTTP Request: POST http://127.0.0.1:7861/chat/knowledge_base_chat "HTTP/1.1 200 OK"
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/protocols/http/httptools_impl.py", line 426, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
    return await self.app(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/applications.py", line 122, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 184, in __call__
    raise exc
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/errors.py", line 162, in __call__
    await self.app(scope, receive, _send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
    raise exc
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
    await self.app(scope, receive, sender)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 718, in __call__
    await route.handle(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 276, in handle
    await self.app(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/routing.py", line 69, in app
    await response(scope, receive, send)
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/responses.py", line 270, in __call__
    async with anyio.create_task_group() as task_group:
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/responses.py", line 273, in wrap
    await func()
  File "/home/gpu/ai/anaconda3/envs/langchain/lib/python3.11/site-packages/starlette/responses.py", line 262, in stream_response
    async for chunk in self.body_iterator:
  File "/home/gpu/ai/projects/Langchain-Chatchat/server/chat/knowledge_base_chat.py", line 81, in knowledge_base_chat_iterator
    input_msg = History(role="user", content=prompt_template).to_msg_template(False)
  File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for History
content
  none is not an allowed value (type=type_error.none.not_allowed)
2023-12-28 09:16:02,450 - utils.py[line:188] - ERROR: RemoteProtocolError: API communication error: peer closed connection without sending complete message body (incomplete chunked read)
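Note that the traceback pinpoints a server-side cause upstream of the connection error: `prompt_template` is `None` when `History(role="user", content=prompt_template)` is constructed, and pydantic rejects `None` for the required `content` field. A minimal sketch of the same failure (a simplified stand-in for the project's `History` model, not the real class; inferred from this traceback, not confirmed by the maintainers):

```python
from pydantic import BaseModel, ValidationError


class History(BaseModel):
    """Simplified stand-in for the History model: both fields are required strings."""
    role: str
    content: str


def build_msg(prompt_template):
    """Return a History message, or None when validation fails (the traceback's failure mode)."""
    try:
        return History(role="user", content=prompt_template)
    except ValidationError:
        return None


print(build_msg("{{ input }}") is not None)  # prints True: a real template string validates
print(build_msg(None) is None)               # prints True: None is rejected, as in the log
```

So a useful first check is why the prompt template lookup returned `None` (for example, a prompt name missing from the configuration) rather than the network path.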

dosubot[bot] commented 8 months ago

🤖

Hello, @weipeng008005! I'm Dosu, a bot that can help you fix bugs, answer questions, and become a contributor. While you wait for a human maintainer, I'll do my best to help. Let's solve this together!

Based on the error log you provided, this problem likely comes from an exception raised while handling the API request. Specifically, this error usually means the server closed the connection before sending the complete response. In the context of Langchain-Chatchat v0.2.8, possible causes include:

  1. Incorrect OpenAI API key or base URL: openai.api_key and openai.api_base are set from the config dict. If these values are wrong, the API call to openai.ChatCompletion.acreate(**data) may fail.

  2. The OpenAI API server is closing the connection: this can happen for various reasons, such as rate limiting, server-side errors, or network problems.

  3. An exception was raised during the API call: the openai.ChatCompletion.acreate(**data) call is wrapped in a try-except block. If an exception is raised during this call, it is logged and the connection is closed.

To debug this, you can:

  • Check the OpenAI API key and base URL: make sure these values are correct and the server is reachable.
  • Check the logs: any exception raised during the API call should be logged; look for error messages.
  • Check the network: make sure no network issue is causing the connection to close prematurely.

Without more information it is hard to say exactly why this error occurs, but these steps should help you start debugging.

I also found some similar resolved issues in the Langchain-Chatchat issue tracker that may help:

  1. [BUG] Concise description of the issue: after installation, LLM Q&A works but the knowledge base cannot answer questions or upload files; API communication error: peer closed connection without sending complete message body (incomplete chunked read); how to find the server port for mapping

  2. [BUG] API communication error: peer closed connection without sending complete message body (incomplete chunked read)

  3. [BUG] dev v0.2.8 API communication error

I hope this information helps. If you have any other questions or need further assistance, please let me know.



This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.

weipeng008005 commented 8 months ago


I looked through the links, but I still don't see a solution.

dosubot[bot] commented 8 months ago

🤖

Based on your problem description and related information I found in the Langchain-Chatchat repository, this issue may occur because, while communicating with the OpenAI API, the connection is closed early for some reason, so the complete message body is never received. This could be a network problem or a problem with the OpenAI API server.

In that case, you can try the following:

  1. Check your network connection and make sure your server can reach the OpenAI API.

  2. Check that your OpenAI API key is correct and that you have enough request quota.

  3. Add error handling and retry logic around the OpenAI API call. For example, if the connection is closed while receiving the message body, try re-establishing the connection and resending the request.

  4. If the problem persists, it may be on the OpenAI API server side; you can try contacting OpenAI support.
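Step 1 above can be scripted as a quick reachability probe. A sketch, assuming the OpenAI-compatible server from this thread's startup log (http://127.0.0.1:20000/v1) exposes the standard GET /models listing; the helper names are hypothetical:

```python
import urllib.request


def models_url(api_base: str) -> str:
    """Build the /models listing endpoint from an OpenAI-style API base URL."""
    return api_base.rstrip("/") + "/models"


def api_reachable(api_base: str, timeout: float = 5.0) -> bool:
    """Return True if the API server answers its /models endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(models_url(api_base), timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # covers URLError, ConnectionRefusedError, timeouts
        return False
```

For example, `api_reachable("http://127.0.0.1:20000/v1")` returning False while the services are running would point at a network or port-mapping problem rather than the request payload.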

In your code, you could add error handling and retry logic in the openai_chat function, for example:

import asyncio

async def openai_chat(msg: OpenAiChatMsgIn):
    # ... existing code ...

    async def get_response(msg):
        data = msg.dict()

        for _ in range(3):  # retry up to 3 times
            try:
                response = await openai.ChatCompletion.acreate(**data)
                # ... existing code ...
                break  # successful, leave the retry loop
            except Exception as e:
                err_msg = f"Error getting ChatCompletion: {e}"  # don't shadow the msg parameter
                logger.error(f'{e.__class__.__name__}: {err_msg}',
                             exc_info=e if log_verbose else None)
                if "peer closed connection" in str(e):
                    await asyncio.sleep(1)  # wait 1 second before retrying
                else:
                    break  # not a connection error, stop retrying

    # ... existing code ...

This code adds error handling and retry logic around the OpenAI API call: if the connection is closed while the message body is being received, it waits one second and resends the request, retrying up to three times.

This solution is based on related information I found in the Langchain-Chatchat repository and my understanding of the OpenAI API. If you run into any problems implementing it, or it does not solve your issue, feel free to ask and I will do my best to help further.


CaffeineOddity commented 8 months ago

The vector database was not created successfully. Check whether the faiss files for the corresponding embedding model exist under the vector_store directory.
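That check can be scripted. A sketch, assuming FAISS's default index.faiss filename; the helper name and the exact vector_store path layout are assumptions to adapt to your installation:

```python
from pathlib import Path


def has_faiss_index(vector_store_dir: str) -> bool:
    """Return True if a FAISS index file exists anywhere under the given directory."""
    root = Path(vector_store_dir)
    return root.is_dir() and any(root.rglob("index.faiss"))
```

For example, `has_faiss_index("knowledge_base/samples/vector_store")` should be True after a successful knowledge base build; False means the embedding or index step failed before the chat request was ever made.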

zRzRzRzRzRzRzR commented 8 months ago

If creation succeeded, the log will show the creation time.