chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly Langchain-ChatGLM): RAG and Agent applications built with Langchain and local LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

[BUG] The xinference model starts fine, but answering on the chatchat page raises an error #4332

Closed · wll0307 closed this 2 months ago

wll0307 commented 3 months ago

2024-06-26 11:20:02,023 httpx 28454 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 200 OK"
2024-06-26 11:20:02,028 model_providers.bootstrap_web.openai_bootstrap_web 28416 INFO Received chat completion request: {'function_call': None, 'functions': None, 'max_tokens': 256, 'messages': [{'content': '你好、', 'role': <Role.USER: 'user'>}], 'model': 'glm4-chat', 'n': 1, 'stop': None, 'stream': True, 'temperature': 0.75, 'tool_choice': None, 'tools': None, 'top_k': None, 'top_p': 0.75}
2024-06-26 11:20:02,029 model_providers.bootstrap_web.openai_bootstrap_web 28416 ERROR Error while creating chat completion: [xinference] Error: 'server_url'
2024-06-26 11:20:02,030 uvicorn.access 28416 INFO 127.0.0.1:50100 - "POST /xinference/v1/chat/completions HTTP/1.1" 500
2024-06-26 11:20:02,030 httpx 28437 INFO HTTP Request: POST http://127.0.0.1:20000/xinference/v1/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-06-26 11:20:02,031 openai._base_client 28437 INFO Retrying request to /chat/completions in 0.876547 seconds

[the identical request, ERROR "[xinference] Error: 'server_url'", and HTTP 500 repeat at 11:20:02,914 and 11:20:04,783]

ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/sse_starlette/sse.py", line 269, in __call__
    await wrap(partial(self.listen_for_disconnect, receive))
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/sse_starlette/sse.py", line 258, in wrap
    await func()
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/sse_starlette/sse.py", line 215, in listen_for_disconnect
    message = await receive()
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/uvicorn/protocols/http/h11_impl.py", line 524, in receive
    await self.message_event.wait()
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/asyncio/locks.py", line 213, in wait
    await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 7f70e4962b90

During handling of the above exception, another exception occurred:

  + Exception Group Traceback (most recent call last):
  |   [uvicorn / starlette middleware frames elided]
  |   File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/sse_starlette/sse.py", line 255, in __call__
  |     async with anyio.create_task_group() as task_group:
  |   File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 680, in __aexit__
  |     raise BaseExceptionGroup(
  | ExceptionGroup: unhandled errors in a TaskGroup (1 sub-exception)
  +-+---------------- 1 ----------------
    | Traceback (most recent call last):
    |   [sse_starlette / openai retry frames elided]
    |   File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_base_client.py", line 1599, in _request
    |     raise self._make_status_error_from_response(err.response) from None
    | openai.InternalServerError: Error code: 500 - {'detail': "[xinference] Error: 'server_url'"}
    +------------------------------------

2024-06-26 11:20:04.797 Uncaught app exception
Traceback (most recent call last):
  [httpx / httpcore streaming frames elided]
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 589, in _run_script
    exec(code, module.__dict__)
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/chatchat/webui.py", line 71, in <module>
    dialogue_page(api=api, is_lite=is_lite)
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/chatchat/webui_pages/dialogue/dialogue.py", line 304, in dialogue_page
    for d in client.chat.completions.create(
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_streaming.py", line 46, in __iter__
    for item in self._iterator:
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_streaming.py", line 58, in __stream__
    for sse in iterator:
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_streaming.py", line 50, in _iter_events
    yield from self._decoder.iter_bytes(self.response.iter_bytes())
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_streaming.py", line 280, in iter_bytes
    for chunk in self._iter_chunks(iterator):
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/openai/_streaming.py", line 291, in _iter_chunks
    for chunk in iterator:
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpx/_models.py", line 829, in iter_bytes
    for raw_bytes in self.iter_raw():
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpx/_models.py", line 883, in iter_raw
    for raw_stream_bytes in self.stream:
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpx/_client.py", line 126, in __iter__
    for chunk in self._stream:
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpx/_transports/default.py", line 112, in __iter__
    with map_httpcore_exceptions():
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/contextlib.py", line 158, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/root/autodl-tmp/conda/envs/chatchat/lib/python3.11/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.RemoteProtocolError: peer closed connection without sending complete message body (incomplete chunked read)

Blasy77777 commented 3 months ago

Try changing DEFAULT_LLM_MODEL to the model you actually started.
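For reference, a minimal sketch of what that could look like, assuming the 0.3.x layout where the default model is set in a model settings file (the exact file name and location vary by chatchat version):

    # model_settings.yaml -- hedged sketch; name/location varies by version.
    # DEFAULT_LLM_MODEL must name the model actually being served,
    # e.g. "glm4-chat" from the log above.
    DEFAULT_LLM_MODEL: glm4-chat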

y-xie-stu commented 3 months ago

The model UID of the model launched in xinference has to match the model_uid in the model_providers.yaml file.
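That also lines up with the "[xinference] Error: 'server_url'" line in the log: the xinference provider entry needs a reachable server_url alongside a matching model_uid. A hedged sketch of the relevant section (the exact schema of model_providers.yaml may differ between versions, so adjust to your file):

    xinference:
      model_credential:
        - model: 'glm4-chat'          # the model name chatchat requests
          model_type: 'llm'
          model_credentials:
            server_url: 'http://127.0.0.1:9997'  # xinference endpoint; if absent, "[xinference] Error: 'server_url'"
            model_uid: 'glm4-chat'    # the UID reported by `xinference list`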

ricky4ya commented 3 months ago

I noticed that passing 'tool_choice': None seems to make the model error out; the same request works fine once tool_choice is removed.

ricky4ya commented 3 months ago

At lib/python3.10/site-packages/chatchat/server/api_server/chat_routes.py:177, add:

    # Drop tool_choice entirely rather than forwarding an explicit None
    if body.tool_choice is None:
        del body.tool_choice
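A more general variant of the same idea (a sketch only, with hypothetical names, not the project's actual code) is to strip every None-valued optional field before forwarding the request, e.g. with Pydantic v2's exclude_none:

    from typing import Optional
    from pydantic import BaseModel

    class ChatBody(BaseModel):
        # Illustrative subset of the real request schema
        model: str
        tool_choice: Optional[str] = None
        tools: Optional[list] = None

    body = ChatBody(model="glm4-chat")
    # exclude_none omits tool_choice/tools entirely, so the backend
    # never receives explicit nulls it cannot handle
    params = body.model_dump(exclude_none=True)
    assert "tool_choice" not in params
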
sk0829 commented 3 months ago

The model UID of the model launched in xinference has to match the model_uid in the model_providers.yaml file.

Where can I find this model_providers.yaml file?

binxuan98 commented 3 months ago

Deleting the environment and recreating it from scratch fixed it for me.

ASan1527 commented 2 months ago

(quoting the full log from the original post)

Has the OP solved this?

liunux4odoo commented 2 months ago

Version 0.3.1 has been released. It reworks the configuration mechanism so that changing config options no longer requires restarting the server; please update and try again.
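(For anyone hitting this later: upgrading is presumably just a pip reinstall of the package, e.g. pip install -U langchain-chatchat for a pip-based install, followed by re-checking the provider settings discussed above.)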