chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-base RAG and Agent application built on Langchain and LLMs such as ChatGLM, Qwen, and Llama
Apache License 2.0

[BUG] Asking the same question against two knowledge bases: one answers, the other fails with ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without #3462

Closed: 15927566740 closed this issue 4 months ago

15927566740 commented 6 months ago

Problem Description: The same question was asked against two knowledge bases. One answers normally; the other fails with ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)


Environment Information

dosubot[bot] commented 6 months ago

Hello, @15927566740! Good to see you again. I hope your day is going well.

Based on the information you provided, the error "RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)" usually indicates a problem in the HTTP communication between the client and the server: the server may have closed the connection prematurely, or chunked transfer encoding may be misconfigured on one side. This could be related to your specific Langchain-Chatchat settings, the Docker environment, or the interaction with the models and embeddings in use.

Some function or method handling API communication in the Langchain-Chatchat codebase presumably raises this error, but the provided context does not directly mention this particular error or its immediate cause, so the exact cause is hard to pin down.

The general approach to this class of error is to check the server logs for more detailed messages, check whether the software versions in use have known issues, make sure both the HTTP server and the client are configured correctly for chunked transfer encoding, and update or adjust settings based on what you find. Without specific information tying the error to a known issue or configuration detail, no precise solution can be given.

In the Langchain-Chatchat issue tracker I found several similar issues, though none of them directly mentions this particular error or its immediate cause. The solutions suggested there include checking the network connection, making sure the server can reach the OpenAI API, verifying that the OpenAI API key is correct, and confirming that you have enough request quota. If the connection is closed while the message body is being received, try re-establishing the connection and re-sending the request (see the sketch below). If the problem persists, it may be an issue on the OpenAI API server side, and you may need to contact OpenAI support.
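A minimal retry sketch for that "re-establish and re-send" suggestion, assuming the caller talks to the API over httpx (the client whose RemoteProtocolError appears in the log); the URL and payload here are placeholders, not the project's actual endpoint contract:

import httpx

def post_with_retry(url: str, payload: dict, retries: int = 3) -> httpx.Response:
    # Re-send the request when the peer closes the connection mid-body.
    for attempt in range(1, retries + 1):
        try:
            with httpx.Client(timeout=60.0) as client:
                return client.post(url, json=payload)
        except httpx.RemoteProtocolError:
            # "peer closed connection without sending complete message body":
            # the server dropped the stream; retry unless attempts are exhausted.
            if attempt == retries:
                raise

Note that retrying only papers over the symptom; if the server side fails deterministically (as the traceback later in this thread shows), every attempt will fail the same way.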

However, it was later clarified that the user was not using an OpenAI model, and the actual fix turned out to be related to a proxy problem. The user did not share the specifics of how the proxy problem was resolved.

I also found some similar closed issues in the Langchain-Chatchat tracker. Suggested solutions there include updating to version 0.2.7 or the dev branch, where the problem does not reproduce; pulling the latest 0.2.4 release and redeploying, which also made the problem go away; and checking whether the duckduckgo package is installed, which can be fixed with pip install duckduckgo-search.

The above is based on what I found in the Langchain-Chatchat issue tracker. If you need more help, or have more information to share, I'd be glad to assist further.

15927566740 commented 6 months ago


Forgot to paste the error earlier; here is the output:

2024-03-21 16:29:14,407 - faiss_cache.py[line:92] - INFO: loading vector store in '331节点知识库/vector_store/bge-large-zh-v1.5' from disk.
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 412, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/root/miniconda3/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/applications.py", line 119, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 762, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 782, in app
    await route.handle(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 297, in handle
    await self.app(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 77, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/routing.py", line 75, in app
    await response(scope, receive, send)
  File "/root/miniconda3/lib/python3.10/site-packages/sse_starlette/sse.py", line 255, in __call__
    async with anyio.create_task_group() as task_group:
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 597, in __aexit__
    raise exceptions[0]
  File "/root/miniconda3/lib/python3.10/site-packages/sse_starlette/sse.py", line 258, in wrap
    await func()
  File "/root/miniconda3/lib/python3.10/site-packages/sse_starlette/sse.py", line 245, in stream_response
    async for data in self.body_iterator:
  File "/Langchain-Chatchat/server/chat/knowledge_base_chat.py", line 81, in knowledge_base_chat_iterator
    docs = await run_in_threadpool(search_docs,
  File "/root/miniconda3/lib/python3.10/site-packages/starlette/concurrency.py", line 40, in run_in_threadpool
    return await anyio.to_thread.run_sync(func, *args)
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/root/miniconda3/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/Langchain-Chatchat/server/knowledge_base/kb_doc_api.py", line 38, in search_docs
    docs = kb.search_docs(query, top_k, score_threshold)
  File "/Langchain-Chatchat/server/knowledge_base/kb_service/base.py", line 182, in search_docs
    docs = self.do_search(query, top_k, score_threshold)
  File "/Langchain-Chatchat/server/knowledge_base/kb_service/faiss_kb_service.py", line 67, in do_search
    with self.load_vector_store().acquire() as vs:
  File "/Langchain-Chatchat/server/knowledge_base/kb_service/faiss_kb_service.py", line 28, in load_vector_store
    return kb_faiss_pool.load_vector_store(kb_name=self.kb_name,
  File "/Langchain-Chatchat/server/knowledge_base/kb_cache/faiss_cache.py", line 97, in load_vector_store
    vector_store = FAISS.load_local(vs_path, embeddings, normalize_L2=True,distance_strategy="METRIC_INNER_PRODUCT")
  File "/root/miniconda3/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 1116, in load_local
    docstore, index_to_docstore_id = pickle.load(f)
EOFError: Ran out of input
2024-03-21 16:29:15,321 - utils.py[line:197] - ERROR: RemoteProtocolError: API通信遇到错误:peer closed connection without sending complete message body (incomplete chunked read)

dosubot[bot] commented 6 months ago

Based on the error log you provided, it seems like the error is occurring when the application is trying to load a vector store from disk. The specific error message "EOFError: Ran out of input" typically indicates that the Python pickle module is trying to unpickle an empty file or a file that was not completely written.

Here are some steps you can take to troubleshoot this issue:

  1. Check the Vector Store Files: The error occurs when the application is trying to load a vector store from the '331节点知识库/vector_store/bge-large-zh-v1.5' directory. Check that the files in this directory are complete and not corrupted. You can try to manually load the files with the pickle module in a Python script to see if they are readable (see the sketch after this list).

  2. Check Disk Space: Ensure that your system has enough disk space. If the disk is full, it might result in incomplete files being written, which could cause this error.

  3. Check for Concurrent Writes: If multiple processes or threads are trying to write to the same file at the same time, it could result in an incomplete file. Make sure that any operation that writes to the vector store files is properly synchronized.

  4. Recreate the Vector Store: If the vector store files are indeed corrupted or incomplete, you might need to recreate them. You can do this by re-running the process that initially creates the vector store files.
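A minimal integrity check along the lines of step 1, assuming the on-disk layout that langchain's FAISS.save_local produces (an index.faiss plus an index.pkl); the path is the one from your log, and the unpickling mirrors what the traceback shows FAISS.load_local doing:

import os
import pickle

vs_path = "331节点知识库/vector_store/bge-large-zh-v1.5"

# An empty or truncated file is the usual cause of "EOFError: Ran out of input".
for name in ("index.faiss", "index.pkl"):
    path = os.path.join(vs_path, name)
    size = os.path.getsize(path) if os.path.exists(path) else -1
    print(f"{name}: {size} bytes")  # -1 means the file is missing

# Unpickle the docstore the same way FAISS.load_local does (see the traceback).
with open(os.path.join(vs_path, "index.pkl"), "rb") as f:
    try:
        docstore, index_to_docstore_id = pickle.load(f)
        print(f"index.pkl OK: {len(index_to_docstore_id)} entries")
    except EOFError:
        print("index.pkl is empty or truncated; recreate this knowledge base")

If this reports 0 bytes or hits the EOFError, the store for that knowledge base was never written completely, which would explain why one knowledge base answers and the other fails on the same question.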

In the Langchain-Chatchat codebase, the faiss_cache.py script is responsible for loading the vector store from disk; the relevant code is in its load_vector_store function:

def load_vector_store(self, kb_name: str, embeddings: str) -> ThreadSafeFaiss:
    vs_path = self.get_vs_path(kb_name, embeddings)
    logger.info(f"loading vector store in '{vs_path}' from disk.")
    # FAISS.load_local unpickles index.pkl under vs_path; an empty or
    # truncated file raises EOFError at this point.
    vector_store = FAISS.load_local(vs_path, embeddings, normalize_L2=True,
                                    distance_strategy="METRIC_INNER_PRODUCT")
    return ThreadSafeFaiss(vector_store)

This function tries to load a vector store from the specified path using the FAISS.load_local function. If the file at the specified path is empty or incomplete, it would raise an EOFError when trying to unpickle the file.
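For reference, unpickling an empty byte stream reproduces exactly this error, which is why an empty index.pkl is the prime suspect:

import io
import pickle

try:
    # An empty stream stands in for an empty index.pkl on disk.
    pickle.load(io.BytesIO(b""))
except EOFError as e:
    print(e)  # prints: Ran out of input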

You can find more information about this in the faiss_cache.py script in the Langchain-Chatchat codebase.

If the above steps do not resolve the issue, please provide more details about how you are creating the vector store files and any additional error messages or logs you might have.
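If you do need to recreate the store outside the app (step 4), here is a rebuild sketch using the standard langchain_community FAISS APIs; the embedding model name and document contents are assumptions, so substitute your actual source files and the embedding model configured in Langchain-Chatchat:

from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.documents import Document

# Assumed to correspond to the 'bge-large-zh-v1.5' seen in the log path.
embeddings = HuggingFaceEmbeddings(model_name="BAAI/bge-large-zh-v1.5")

# Placeholder documents; re-split your real knowledge-base files here.
docs = [Document(page_content="example content", metadata={"source": "example.txt"})]

# normalize_L2=True mirrors the load_local call in faiss_cache.py.
vs = FAISS.from_documents(docs, embeddings, normalize_L2=True)
vs.save_local("331节点知识库/vector_store/bge-large-zh-v1.5")  # writes index.faiss + index.pkl

Rebuilding through Langchain-Chatchat's own knowledge-base management (re-uploading the documents for the broken knowledge base) achieves the same thing and keeps the app's metadata consistent.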

github-actions[bot] commented 5 months ago

This issue has been marked as stale because it has had no activity for more than 30 days.