chatchat-space / Langchain-Chatchat

Langchain-Chatchat(原Langchain-ChatGLM)基于 Langchain 与 ChatGLM, Qwen 与 Llama 等语言模型的 RAG 与 Agent 应用 | Langchain-Chatchat (formerly langchain-ChatGLM), local knowledge based LLM (like ChatGLM, Qwen and Llama) RAG and Agent app with langchain
Apache License 2.0

httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read) still occurs in version 0.3.1 #4542

Closed · nilin1998 closed this issue 1 week ago

nilin1998 commented 1 month ago

The model currently selected is glm4-chat. An error is reported during chat; the detailed error is:

INFO:     127.0.0.1:62988 - "GET /tools HTTP/1.1" 200 OK
2024-07-16 16:39:32,138 httpx        21692 INFO     HTTP Request: GET http://127.0.0.1:7861/tools "HTTP/1.1 200 OK"
OpenAIChatInput(
    user=None,
    extra_headers=None,
    extra_query=None,
    extra_json=None,
    timeout=None,
    messages=[{'content': '你好', 'role': 'user'}],
    model='glm4-chat',
    frequency_penalty=None,
    function_call=None,
    functions=None,
    logit_bias=None,
    logprobs=None,
    max_tokens=None,
    n=None,
    presence_penalty=None,
    response_format=None,
    seed=None,
    stop=None,
    stream=True,
    temperature=0.7,
    tool_choice=None,
    tools=None,
    top_logprobs=None,
    top_p=None,
    metadata=None,
    chat_model_config={
        'preprocess_model': {
            'glm4-chat': {
                'model': '',
                'temperature': 0.05,
                'max_tokens': 4096,
                'history_len': 10,
                'prompt_name': 'default',
                'callbacks': False
            }
        },
        'llm_model': {'glm4-chat': {}},
        'action_model': {
            'glm4-chat': {
                'model': 'glm4-chat',
                'temperature': 0.01,
                'max_tokens': 4096,
                'history_len': 10,
                'prompt_name': 'ChatGLM3',
                'callbacks': True
            }
        },
        'postprocess_model': {
            'glm4-chat': {
                'model': '',
                'temperature': 0.01,
                'max_tokens': 4096,
                'history_len': 10,
                'prompt_name': 'default',
                'callbacks': True
            }
        },
        'image_model': {'sd-turbo': {'model': 'sd-turbo', 'size': '256*256'}}
    },
    conversation_id='59ea4e03359f4f29839f46400a6cb806',
    tool_input={},
    upload_image=None
)
INFO:     127.0.0.1:62993 - "POST /chat/chat/completions HTTP/1.1" 200 OK
2024-07-16 16:39:33,176 httpx        21692 INFO     HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2024-07-16 16:39:33,226 httpx        19264 INFO     HTTP Request: POST http://127.0.0.1:9997/v1/chat/completions "HTTP/1.1 200 OK"
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 269, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 258, in wrap
    await func()
  File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 215, in listen_for_disconnect
    message = await receive()
              ^^^^^^^^^^^^^^^
  File "C:\Anaconda\envs\new\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 524, in receive
    await self.message_event.wait()
  File "C:\Anaconda\envs\new\Lib\asyncio\locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 1ee92801690

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "C:\Anaconda\envs\new\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 396, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "C:\Anaconda\envs\new\Lib\site-packages\uvicorn\middleware\proxy_headers.py", line 70, in __call__
  |     return await self.app(scope, receive, send)
  |            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  |   File "C:\Anaconda\envs\new\Lib\site-packages\fastapi\applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\applications.py", line 123, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\middleware\errors.py", line 186, in __call__
  |     raise exc
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\middleware\errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\middleware\exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\routing.py", line 758, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\routing.py", line 778, in app
  |     await route.handle(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\routing.py", line 299, in handle
  |     await self.app(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\routing.py", line 79, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\starlette\routing.py", line 77, in app
  |     await response(scope, receive, send)
  |   File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 255, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "C:\Anaconda\envs\new\Lib\site-packages\anyio\_backends\_asyncio.py", line 680, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 258, in wrap
    |     await func()
    |   File "C:\Anaconda\envs\new\Lib\site-packages\sse_starlette\sse.py", line 245, in stream_response
    |     async for data in self.body_iterator:
    |   File "C:\Anaconda\envs\new\Lib\site-packages\chatchat\server\api_server\openai_routes.py", line 87, in generator
    |     async for chunk in await method(**params):
    |   File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 147, in __aiter__
    |     async for item in self._iterator:
    |   File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 174, in __stream__
    |     raise APIError(
    | openai.APIError: An error occurred during streaming
    +------------------------------------
2024-07-16 16:39:33.292 Uncaught app exception
Traceback (most recent call last):
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_transports\default.py", line 69, in map_httpcore_exceptions
    yield
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_transports\default.py", line 113, in __iter__
    for part in self._httpcore_stream:
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\connection_pool.py", line 367, in __iter__
    raise exc from None
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\connection_pool.py", line 363, in __iter__
    for part in self._stream:
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\http11.py", line 349, in __iter__
    raise exc
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\http11.py", line 341, in __iter__
    for chunk in self._connection._receive_response_body(**kwargs):
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\http11.py", line 210, in _receive_response_body
    event = self._receive_event(timeout=timeout)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_sync\http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
  File "C:\Anaconda\envs\new\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Anaconda\envs\new\Lib\site-packages\httpcore\_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Anaconda\envs\new\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script        
    exec(code, module.__dict__)
  File "C:\Anaconda\envs\new\Lib\site-packages\chatchat\webui.py", line 73, in <module>
    dialogue_page(api=api, is_lite=is_lite)
  File "C:\Anaconda\envs\new\Lib\site-packages\chatchat\webui_pages\dialogue\dialogue.py", line 426, in dialogue_page
    for d in client.chat.completions.create(
  File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 46, in __iter__
    for item in self._iterator:
  File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 58, in __stream__
    for sse in iterator:
  File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 50, in _iter_events
    yield from self._decoder.iter_bytes(self.response.iter_bytes())
  File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 280, in iter_bytes
    for chunk in self._iter_chunks(iterator):
  File "C:\Anaconda\envs\new\Lib\site-packages\openai\_streaming.py", line 291, in _iter_chunks
    for chunk in iterator:
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_client.py", line 126, in __iter__
    for chunk in self._stream:
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_transports\default.py", line 112, in __iter__
    with map_httpcore_exceptions():
  File "C:\Anaconda\envs\new\Lib\contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "C:\Anaconda\envs\new\Lib\site-packages\httpx\_transports\default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)
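
For context, the failing call on the web UI side (dialogue.py line 426 in the traceback) is an OpenAI-compatible streaming request against the chatchat API server. A minimal sketch of that call pattern follows; the base_url and api_key values are assumptions inferred from the logged endpoint http://127.0.0.1:7861/chat/chat/completions, not the exact code chatchat uses.

# Sketch only: mirrors the streaming call pattern from dialogue.py.
# base_url/api_key are assumptions based on the endpoint logged above.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:7861/chat", api_key="EMPTY")

try:
    for chunk in client.chat.completions.create(
        model="glm4-chat",
        messages=[{"role": "user", "content": "你好"}],
        stream=True,
    ):
        delta = chunk.choices[0].delta.content if chunk.choices else None
        print(delta or "", end="")
except Exception as exc:
    # httpx.RemoteProtocolError surfaces here when the backend closes the
    # connection before the chunked response body is complete.
    print(f"\nstream aborted: {exc}")
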
Adolph3671 commented 1 month ago

Same here.

Lxx2047734741 commented 1 month ago

How can this be solved?

HuoShengLiangIT commented 1 month ago

I ran into this problem too. With the math calculator tool selected, computing 1+1=? works, but local knowledge base Q&A throws the error above. I tried Python 3.8, 3.10, and 3.11, and all three versions have this problem.

nickzzw commented 1 month ago

Same problem, with exactly the same error.

liunux4odoo commented 1 month ago

Please troubleshoot with the following steps (a command sketch follows the list):

  1. Update langchain-chatchat to version 0.3.1.1
  2. Delete the *.yaml config files in the data directory and run chatchat init again
  3. If that still doesn't help, try switching to a qwen model; glm4-chat is known to be somewhat unstable in matching Agent capabilities.
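
Roughly, steps 1 and 2 correspond to the commands in the sketch below (using the CHATCHAT_ROOT environment variable as the data directory location is an assumption and may differ on your setup):

# Sketch of the troubleshooting steps above; CHATCHAT_ROOT pointing at the
# chatchat data directory is an assumption for illustration.
import glob
import os
import subprocess
import sys

data_dir = os.environ.get("CHATCHAT_ROOT", os.getcwd())

# 1. update langchain-chatchat to 0.3.1.1
subprocess.run(
    [sys.executable, "-m", "pip", "install", "-U", "langchain-chatchat==0.3.1.1"],
    check=True,
)

# 2. delete the *.yaml config files in the data directory, then re-initialize
for path in glob.glob(os.path.join(data_dir, "*.yaml")):
    os.remove(path)
subprocess.run(["chatchat", "init"], check=True)
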
madog1983 commented 1 month ago

Please troubleshoot with the following steps:

  1. Update langchain-chatchat to version 0.3.1.1
  2. Delete the *.yaml config files in the data directory and run chatchat init again
  3. If that still doesn't help, try switching to a qwen model; glm4-chat is known to be somewhat unstable in matching Agent capabilities.

I did all of that; the problem persists.

madog1983 commented 1 month ago

xinference reports the following error:

2024-07-19 01:55:48,242 xinference.api.restful_api 23736 ERROR    Chat completion stream got an error: [address=127.0.0.1:55254, pid=27020] 1 validation error for CreateCompletion
tool_choice
  extra fields not permitted (type=value_error.extra)
Traceback (most recent call last):
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\api\restful_api.py", line 1574, in stream_results
    iterator = await model.chat(
               ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\backends\context.py", line 231, in send
    return self._process_result_message(result)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\backends\context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\backends\pool.py", line 656, in send
    result = await self._run_coro(message.message_id, coro)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\backends\pool.py", line 367, in _run_coro
    return await coro
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\api.py", line 384, in __on_receive__
    return await super().__on_receive__(message)  # type: ignore
    ^^^^^^^^^^^^^^^^^
  File "xoscar\\core.pyx", line 558, in __on_receive__
    raise ex
  File "xoscar\\core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
    ^^^^^^^^^^^^^^^^^
  File "xoscar\\core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
    ^^^^^^^^^^^^^^^^^
  File "xoscar\\core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    result = await result
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\core\utils.py", line 45, in wrapped
    ret = await func(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\core\model.py", line 90, in wrapped_func
    ret = await fn(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xoscar\api.py", line 462, in _wrapper
    r = await func(self, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\core\model.py", line 505, in chat
    response = await self._call_wrapper(
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\core\model.py", line 114, in _async_wrapper
    return await fn(*args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\core\model.py", line 394, in _call_wrapper
    ret = await asyncio.to_thread(fn, *args, **kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\asyncio\threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
      ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\concurrent\futures\thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\model\llm\ggml\llamacpp.py", line 317, in chat
    generate_config = self._sanitize_generate_config(generate_config)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\model\llm\ggml\llamacpp.py", line 295, in _sanitize_generate_config
    generate_config = super()._sanitize_generate_config(generate_config)
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\xinference\model\llm\ggml\llamacpp.py", line 101, in _sanitize_generate_config
    **CreateCompletionLlamaCpp(**generate_config).dict()
    ^^^^^^^^^^^^^^^^^
  File "d:\CondaEnv\xinfernce\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
    ^^^^^^^^^^^^^^^^^
pydantic.v1.error_wrappers.ValidationError: [address=127.0.0.1:55254, pid=27020] 1 validation error for CreateCompletion
tool_choice
  extra fields not permitted (type=value_error.extra)
2024-07-19 01:56:59,483 xinference.api.restful_api 23736 ERROR    Chat completion stream got an error: [address=127.0.0.1:55254, pid=27020] 1 validation error for CreateCompletion
tool_choice
  extra fields not permitted (type=value_error.extra)
liunux4odoo commented 1 month ago

Judging from the error message, llamacpp does not support the tool_choice parameter.
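
To illustrate the mechanism (an illustrative sketch, not xinference's actual model definition): the traceback shows generate_config being validated against a pydantic v1 CreateCompletion model that forbids unknown fields, so an extra tool_choice key fails validation before any tokens are streamed, which is consistent with the incomplete chunked read seen on the client side.

# Illustrative stand-in for xinference's CreateCompletion model, showing why
# an unexpected tool_choice key raises "extra fields not permitted".
from pydantic.v1 import BaseModel, Extra  # pydantic 2's v1 compat layer, as in the traceback


class CreateCompletionDemo(BaseModel):
    temperature: float = 0.7
    max_tokens: int = 256

    class Config:
        extra = Extra.forbid  # unknown keys raise a ValidationError


try:
    CreateCompletionDemo(temperature=0.01, max_tokens=4096, tool_choice=None)
except Exception as exc:
    print(exc)  # -> tool_choice: extra fields not permitted (type=value_error.extra)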

Lxx2047734741 commented 1 month ago

May I ask, what would be the cause of this error of mine?

RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

Traceback:
File "/usr/local/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 600, in _run_script
    exec(code, module.__dict__)
File "/usr/local/lib/python3.10/site-packages/chatchat/webui.py", line 73, in <module>
    dialogue_page(api=api, is_lite=is_lite)
File "/usr/local/lib/python3.10/site-packages/chatchat/webui_pages/dialogue/dialogue.py", line 426, in dialogue_page
    for d in client.chat.completions.create(
File "/usr/local/lib/python3.10/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:
File "/usr/local/lib/python3.10/site-packages/openai/_streaming.py", line 58, in __stream__
    for sse in iterator:
File "/usr/local/lib/python3.10/site-packages/openai/_streaming.py", line 50, in _iter_events
    yield from self._decoder.iter_bytes(self.response.iter_bytes())
File "/usr/local/lib/python3.10/site-packages/openai/_streaming.py", line 280, in iter_bytes
    for chunk in self._iter_chunks(iterator):
File "/usr/local/lib/python3.10/site-packages/openai/_streaming.py", line 291, in _iter_chunks
    for chunk in iterator:
File "/usr/local/lib/python3.10/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():
File "/usr/local/lib/python3.10/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:
File "/usr/local/lib/python3.10/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():
File "/usr/local/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
File "/usr/local/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc

kingke0620 commented 1 month ago

Same here. In my case the problem shows up in the web search scenario and the agent-plus-knowledge-base scenario.

misaka100001 commented 1 month ago

I ran into this problem as well. Has it been solved?

github-actions[bot] commented 2 weeks ago

This issue has been marked as stale because it has had no activity for more than 30 days.