chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama
Apache License 2.0

Agent chat errors out: peer closed connection without sending complete message body (incomplete chunked read) #4453

Closed ELvis168 closed 4 months ago

ELvis168 commented 4 months ago

Problem Description: when trying an agent chat, the request fails with: peer closed connection without sending complete message body (incomplete chunked read)

import requests

base_url = "http://127.0.0.1:7861/chat"

# Fetch the registered tools and pass them along with the chat request.
tools = list(requests.get("http://127.0.0.1:7861/tools").json()["data"])

data = {
    "model": "chatglm3",
    "messages": [
        {"role": "user", "content": "37+48=?"},
    ],
    "stream": True,
    "temperature": 0.7,
    "tools": tools,
}

# Stream the agent chat completion and print raw chunks as they arrive.
response = requests.post(f"{base_url}/chat/completions", json=data, stream=True)
for line in response.iter_content(None, decode_unicode=True):
    print(line)
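For reference, a variant of the loop above that decodes each SSE frame instead of printing raw text. It assumes OpenAI-style "data: {...}" framing (which matches the chunks quoted later in this thread) and a "data: [DONE]" terminator, which is an assumption rather than documented behavior:

import json

response = requests.post(f"{base_url}/chat/completions", json=data, stream=True)
for raw in response.iter_lines(decode_unicode=True):
    if not raw or not raw.startswith("data: "):
        continue  # skip blank keep-alive lines between frames
    payload = raw[len("data: "):]
    if payload.strip() == "[DONE]":  # assumed terminator
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0].get("delta", {})
    print(delta.get("content", ""), end="", flush=True)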

Steps to Reproduce: call the agent endpoint as described in the API docs.

Expected Result: a normal streamed response.

Actual Result:

Entering new AgentExecutor chain...
user=None extra_headers=None extra_query=None extra_json=None timeout=None messages=[{'content': 'Answer the following questions as best as you can. You have access to the following tools:\n{\n "name": "calculate",\n "description": " Useful to answer questions about simple calculations. translate user question to a math expression that can be evaluated by numexpr. ",\n "parameters": {\n "text": {\n "description": "a math expression",\n "type": "string"\n }\n }\n}', 'role': 'system'}, {'content': "Let's start! Human:37+48=?\n\n[]", 'role': 'user'}] model='chatglm3' frequency_penalty=None function_call=None functions=None logit_bias=None logprobs=None max_tokens=4096 n=1 presence_penalty=None response_format=None seed=None stop=['<|observation|>'] stream=True temperature=0.9 tool_choice=None tools=None top_logprobs=None top_p=None
INFO: 127.0.0.1:35956 - "POST /v1/chat/completions HTTP/1.1" 200 OK
2024-07-09 20:36:40,350 httpx 53634 INFO HTTP Request: POST http://127.0.0.1:7861/v1/chat/completions "HTTP/1.1 200 OK"
2024-07-09 20:36:40,365 httpx 53634 INFO HTTP Request: POST http://127.0.0.1:9997/v1/chat/completions "HTTP/1.1 200 OK"
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 269, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    await func()
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
    message = await receive()
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 524, in receive
    await self.message_event.wait()
  File "/root/anaconda3/envs/langchain3/lib/python3.11/asyncio/locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7fcebba3e390

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
  |     result = await app(  # type: ignore[func-returns-value]
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
  |     return await self.app(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/fastapi/applications.py", line 1054, in __call__
  |     await super().__call__(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/applications.py", line 123, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/middleware/errors.py", line 186, in __call__
  |     raise exc
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/middleware/errors.py", line 164, in __call__
  |     await self.app(scope, receive, _send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/middleware/cors.py", line 83, in __call__
  |     await self.app(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
  |     await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/routing.py", line 758, in __call__
  |     await self.middleware_stack(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/routing.py", line 778, in app
  |     await route.handle(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/routing.py", line 299, in handle
  |     await self.app(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/routing.py", line 79, in app
  |     await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
  |     raise exc
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
  |     await app(scope, receive, sender)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/starlette/routing.py", line 77, in app
  |     await response(scope, receive, send)
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    |     await func()
    |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/sse_starlette/sse.py", line 245, in stream_response
    |     async for data in self.body_iterator:
    |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/chatchat/server/api_server/openai_routes.py", line 84, in generator
    |     async for chunk in await method(**params):
    |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 147, in __aiter__
    |     async for item in self._iterator:
    |   File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 174, in __stream__
    |     raise APIError(
    | openai.APIError: An error occurred during streaming
    +------------------------------------
2024-07-09 20:36:40,748 root 53634 ERROR peer closed connection without sending complete message body (incomplete chunked read)
Traceback (most recent call last):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_transports/default.py", line 254, in __aiter__
    async for part in self._httpcore_stream:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 367, in __aiter__
    raise exc from None
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/connection_pool.py", line 363, in __aiter__
    async for part in self._stream:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 349, in __aiter__
    raise exc
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 341, in __aiter__
    async for chunk in self._connection._receive_response_body(**kwargs):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 210, in _receive_response_body
    event = await self._receive_event(timeout=timeout)
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_async/http11.py", line 220, in _receive_event
    with map_exceptions({h11.RemoteProtocolError: RemoteProtocolError}):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/chatchat/server/utils.py", line 46, in wrap_done
    await fn
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2536, in ainvoke
    input = await step.ainvoke(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/chains/base.py", line 212, in ainvoke
    raise e
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/chains/base.py", line 203, in ainvoke
    await self._acall(inputs, run_manager=run_manager)
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1481, in _acall
    next_step_output = await self._atake_next_step(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1275, in _atake_next_step
    [
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1275, in <listcomp>
    [
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1303, in _aiter_next_step
    output = await self.agent.aplan(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain/agents/agent.py", line 436, in aplan
    async for chunk in self.runnable.astream(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2900, in astream
    async for chunk in self.atransform(input_aiter(), config, **kwargs):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2883, in atransform
    async for chunk in self._atransform_stream_with_config(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1980, in _atransform_stream_with_config
    chunk: Output = await asyncio.create_task(  # type: ignore[call-arg]
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2853, in _atransform
    async for output in final_pipeline:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1316, in atransform
    async for ichunk in input:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4748, in atransform
    async for item in self.bound.atransform(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1334, in atransform
    async for output in self.astream(final, config, **kwargs):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 319, in astream
    raise e
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 297, in astream
    async for chunk in self._astream(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 485, in _astream
    async for chunk in await self.async_client.create(
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 147, in __aiter__
    async for item in self._iterator:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 160, in __stream__
    async for sse in iterator:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 151, in _iter_events
    async for sse in self._decoder.aiter_bytes(self.response.aiter_bytes()):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 302, in aiter_bytes
    async for chunk in self._aiter_chunks(iterator):
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/openai/_streaming.py", line 313, in _aiter_chunks
    async for chunk in iterator:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_models.py", line 929, in aiter_bytes
    async for raw_bytes in self.aiter_raw():
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_models.py", line 987, in aiter_raw
    async for raw_stream_bytes in self.stream:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_client.py", line 149, in __aiter__
    async for chunk in self._stream:
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_transports/default.py", line 253, in __aiter__
    with map_httpcore_exceptions():
  File "/root/anaconda3/envs/langchain3/lib/python3.11/contextlib.py", line 155, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/root/anaconda3/envs/langchain3/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

2024-07-09 20:36:40,755 root 53634 ERROR RemoteProtocolError: Caught exception: peer closed connection without sending complete message body (incomplete chunked read)
Traceback (most recent call last):
  [the same httpcore traceback as above repeats here, ending in httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)]

The above exception was the direct cause of the following exception:

[The same traceback as above, from wrap_done in chatchat/server/utils.py down to httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read), repeats here.]

Environment Information

ELvis168 commented 4 months ago

The returned chunks differ from the documented sample:

data: {"id": "chate409d33e-fbe9-4290-a8bf-c9d948bdff0d", "object": "chat.completion.chunk", "model": "chatglm3", "created": 1720529812, "status": 1, "message_type": 1, "message_id": null, "is_ref": false, "choices": [{"delta": {"content": "", "tool_calls": []}, "role": "assistant"}]}

data: {"id": "chat32887d5b-85c5-47bb-b3c9-e0d52d288eed", "object": "chat.completion.chunk", "model": "chatglm3", "created": 1720529818, "status": 8, "message_type": 1, "message_id": null, "is_ref": false, "choices": [{"delta": {"content": "peer closed connection without sending complete message body (incomplete chunked read)", "tool_calls": []}, "role": "assistant"}]}
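Note that the error arrives in-band: the HTTP response is still 200, and the error text is carried in an ordinary-looking chunk's delta.content. A small sketch of how a client could surface it, under the assumption (taken only from the two chunks above, not from documented semantics) that "status": 8 marks an error chunk:

import json

def delta_text(raw_line: str) -> str:
    """Decode one 'data: {...}' frame; raise if it carries an in-band error."""
    chunk = json.loads(raw_line[len("data: "):])
    text = chunk["choices"][0]["delta"].get("content", "")
    if chunk.get("status") == 8:  # assumption: 8 marked the error chunk above
        raise RuntimeError(f"server-side stream error: {text}")
    return text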

liunux4odoo commented 4 months ago

Model configuration error.

ELvis168 commented 4 months ago

> Model configuration error.

Could you help check which setting is wrong?

{
  "DEFAULT_LLM_MODEL": "chatglm3",
  "DEFAULT_EMBEDDING_MODEL": "bge-large-zh-v1.5",
  "Agent_MODEL": "chatglm3",
  "HISTORY_LEN": 3,
  "MAX_TOKENS": null,
  "TEMPERATURE": 0.7,
  "SUPPORT_AGENT_MODELS": ["chatglm3-6b", "openai-api", "Qwen-14B-Chat", "Qwen-7B-Chat", "qwen-turbo"],
  "LLM_MODEL_CONFIG": {
    "preprocess_model": {
      "chatglm3": {"temperature": 0.05, "max_tokens": 4096, "history_len": 100, "prompt_name": "default", "callbacks": false}
    },
    "llm_model": {
      "chatglm3": {"temperature": 0.9, "max_tokens": 4096, "history_len": 10, "prompt_name": "default", "callbacks": true}
    },
    "action_model": {
      "chatglm3": {"temperature": 0.01, "max_tokens": 4096, "prompt_name": "ChatGLM3", "callbacks": true}
    },
    "postprocess_model": {
      "chatglm3": {"temperature": 0.01, "max_tokens": 4096, "prompt_name": "default", "callbacks": true}
    },
    "image_model": {
      "sd-turbo": {"size": "256*256"}
    }
  },
  "MODEL_PLATFORMS": [
    {
      "platform_name": "xinference",
      "platform_type": "xinference",
      "api_base_url": "http://127.0.0.1:9997/v1",
      "api_key": "EMPT",
      "api_concurrencies": 5,
      "llm_models": ["chatglm3"],
      "embed_models": ["bge-large-zh-v1.5"],
      "image_models": [],
      "reranking_models": [],
      "speech2text_models": [],
      "tts_models": []
    }
  ],
  "TOOL_CONFIG": {
    "search_local_knowledgebase": {
      "use": false,
      "top_k": 3,
      "score_threshold": 1.0,
      "conclude_prompt": {
        "with_result": "<指令>根据已知信息,简洁和专业的来回答问题。如果无法从中得到答案,请说 \"根据已知信息无法回答该问题\",不允许在答案中添加编造成分,答案请使用中文。 </指令>\n<已知信息>{{ context }}</已知信息>\n<问题>{{ question }}</问题>\n",
        "without_result": "请你根据我的提问回答我的问题:\n{{ question }}\n请注意,你必须在回答结束后强调,你的回答是根据你的经验回答而不是参考资料回答的。\n"
      }
    },
    "search_internet": {
      "use": false,
      "search_engine_name": "bing",
      "search_engine_config": {
        "bing": {"result_len": 3, "bing_search_url": "https://api.bing.microsoft.com/v7.0/search", "bing_key": ""},
        "metaphor": {"result_len": 3, "metaphor_api_key": "", "split_result": false, "chunk_size": 500, "chunk_overlap": 0},
        "duckduckgo": {"result_len": 3}
      },
      "top_k": 10,
      "verbose": "Origin",
      "conclude_prompt": "<指令>这是搜索到的互联网信息,请你根据这些信息进行提取并有调理,简洁的回答问题。如果无法从中得到答案,请说 “无法搜索到能回答问题的内容”。 </指令>\n<已知信息>{{ context }}</已知信息>\n<问题>\n{{ question }}\n</问题>\n"
    },
    "arxiv": {"use": false},
    "shell": {"use": false},
    "weather_check": {"use": false, "apikey": "S8vrB4U-c5mvAMiK"},
    "search_youtube": {"use": false},
    "wolfram": {"use": false, "appid": ""},
    "calculate": {"use": false},
    "vqa_processor": {"use": false, "model_path": "your model path", "tokenizer_path": "your tokenizer path", "device": "cuda:1"},
    "aqa_processor": {"use": false, "model_path": "your model path", "tokenizer_path": "your tokenizer path", "device": "cuda:2"},
    "text2images": {"use": false},
    "text2sql": {"use": false, "sqlalchemy_connect_str": "mysql+pymysql://用户名:密码@主机地址/数据库名称e", "read_only": false, "top_k": 50, "return_intermediate_steps": true, "table_names": [], "table_comments": {}}
  },
  "class_name": "ConfigModel"
}
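For what it's worth, one mismatch is visible in the posted config itself: both DEFAULT_LLM_MODEL and Agent_MODEL are "chatglm3", while SUPPORT_AGENT_MODELS only lists "chatglm3-6b". Whether that is related to the error above is an assumption, not something the maintainers confirmed; a minimal check over the posted values:

# Values copied from the config above; the check itself is hypothetical.
config = {
    "DEFAULT_LLM_MODEL": "chatglm3",
    "Agent_MODEL": "chatglm3",
    "SUPPORT_AGENT_MODELS": [
        "chatglm3-6b", "openai-api", "Qwen-14B-Chat", "Qwen-7B-Chat", "qwen-turbo",
    ],
}
for key in ("DEFAULT_LLM_MODEL", "Agent_MODEL"):
    if config[key] not in config["SUPPORT_AGENT_MODELS"]:
        print(f"{key}={config[key]!r} is not in SUPPORT_AGENT_MODELS")
# Both keys are flagged: "chatglm3" does not appear in the supported-agent list.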

ELvis168 commented 4 months ago

xinference error:

2024-07-10 11:10:35,271 xinference.api.restful_api 49223 ERROR Chat completion stream got an error: [address=0.0.0.0:45149, pid=49880] not enough values to unpack (expected 2, got 1)
Traceback (most recent call last):
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xinference/api/restful_api.py", line 1476, in stream_results
    async for item in iterator:
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/api.py", line 340, in __anext__
    return await self._actor_ref.__xoscar_next__(self._uid)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/backends/context.py", line 227, in send
    return self._process_result_message(result)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/backends/context.py", line 102, in _process_result_message
    raise message.as_instanceof_cause()
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/backends/pool.py", line 659, in send
    result = await self._run_coro(message.message_id, coro)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/backends/pool.py", line 370, in _run_coro
    return await coro
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/api.py", line 384, in on_receive
    return await super().on_receive(message)  # type: ignore
  File "xoscar/core.pyx", line 558, in on_receive
    raise ex
  File "xoscar/core.pyx", line 520, in xoscar.core._BaseActor.__on_receive__
    async with self._lock:
  File "xoscar/core.pyx", line 521, in xoscar.core._BaseActor.__on_receive__
    with debug_async_timeout('actor_lock_timeout',
  File "xoscar/core.pyx", line 526, in xoscar.core._BaseActor.__on_receive__
    result = await result
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/api.py", line 431, in __xoscar_next__
    raise e
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/api.py", line 417, in __xoscar_next__
    r = await asyncio.to_thread(_wrapper, gen)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/asyncio/threads.py", line 25, in to_thread
    return await loop.run_in_executor(None, func_call)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xoscar/api.py", line 402, in _wrapper
    return next(_gen)
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xinference/core/model.py", line 300, in _to_json_generator
    for v in gen:
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xinference/model/llm/utils.py", line 544, in _to_chat_completion_chunks
    for i, chunk in enumerate(chunks):
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/xinference/model/llm/pytorch/chatglm.py", line 172, in _stream_generator
    for chunk_text, _ in self._model.stream_chat(
  File "/root/anaconda3/envs/xinference2/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 35, in generator_context
    response = gen.send(None)
  File "/root/.cache/huggingface/modules/transformers_modules/chatglm3-pytorch-6b/modeling_chatglm.py", line 1077, in stream_chat
    response, new_history = self.process_response(response, history)
  File "/root/.cache/huggingface/modules/transformers_modules/chatglm3-pytorch-6b/modeling_chatglm.py", line 1003, in process_response
    metadata, content = response.split("\n", maxsplit=1)
ValueError: [address=0.0.0.0:45149, pid=49880] not enough values to unpack (expected 2, got 1)

ELvis168 commented 4 months ago

> Model configuration error.

If I leave "启用Agent" (Enable Agent) unchecked and select a single tool, it works (a rough API-level equivalent is sketched below).
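The comment above describes the WebUI; mapping it to the API is an assumption. A hypothetical single-tool variant of the reproduction request, reusing base_url, tools, and data from the original report:

# Hypothetical: send only the calculate tool instead of the full tool list.
single_tool = [t for t in tools if t.get("name") == "calculate"]
data_single = {**data, "tools": single_tool}
response = requests.post(f"{base_url}/chat/completions", json=data_single, stream=True)
for line in response.iter_content(None, decode_unicode=True):
    print(line)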

ASan1527 commented 4 months ago

> xinference error: Chat completion stream got an error: [address=0.0.0.0:45149, pid=49880] not enough values to unpack (expected 2, got 1) [quotes the full traceback posted above]

Bro, I'm hitting exactly the same error. How did you solve it?

ELvis168 commented 4 months ago

> xinference error: Chat completion stream got an error: [address=0.0.0.0:45149, pid=49880] not enough values to unpack (expected 2, got 1) [quotes the full traceback posted above]

> Bro, I'm hitting exactly the same error. How did you solve it?

I didn't solve it; I switched to a different model, qwen.