chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly langchain-ChatGLM): local-knowledge-based LLM (e.g., ChatGLM, Qwen, and Llama) RAG and Agent app built with Langchain
Apache License 2.0

langchain-chatchat 0.3: weather query fails after selecting the Weather Query tool #4286

Closed: cqray1990 closed this issue 1 week ago

cqray1990 commented 1 month ago

Environment Information

With langchain-chatchat 0.3, I selected the Weather Query tool and then entered "北京天气怎么样?" ("What's the weather like in Beijing?") in the dialogue. The following error appeared:

model_providers.yaml:

    #openai:
    #  model_credential:
    #    - model: 'gpt-3.5-turbo'
    #      model_type: 'llm'
    #      model_credentials:
    #        openai_api_key: 'sk-'
    #        openai_organization: ''
    #        openai_api_base: ''
    #    - model: 'gpt-4'
    #      model_type: 'llm'
    #      model_credentials:
    #        openai_api_key: 'sk-'
    #        openai_organization: ''
    #        openai_api_base: ''
    #
    #  provider_credential:
    #    openai_api_key: 'sk-'
    #    openai_organization: ''
    #    openai_api_base: ''

    xinference:
      model_credential:
        - model: 'glm4-chat'
          model_type: 'llm'
          model_credentials:
            server_url: 'http://127.0.0.1:9997/'
            model_uid: 'glm4-chat'
    #    - model: 'qwen1.5-chat'
    #      model_type: 'llm'
    #      model_credentials:
    #        server_url: 'http://127.0.0.1:9997/'
    #        model_uid: 'qwen1.5-chat'
        - model: 'bge-large-zh-v1.5'
          model_type: 'text-embedding'
          model_credentials:
            server_url: 'http://127.0.0.1:9997/'
            model_uid: 'bge-large-zh-v1.5'

    #zhipuai:
    #  provider_credential:
    #    api_key: 'd4fa0690b6dfa205204cae2e12aa6fb6.1'

    #ollama:
    #  model_credential:
    #    - model: 'llama3'
    #      model_type: 'llm'
    #      model_credentials:
    #        base_url: 'http://172.21.192.1:11434'
    #        mode: 'completion'

INFO:     127.0.0.1:57742 - "POST /chat/chat/completions HTTP/1.1" 500 Internal Server Error
ERROR:    Exception in ASGI application
Traceback (most recent call last):
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py", line 411, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 69, in __call__
    return await self.app(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/fastapi/routing.py", line 299, in app
    raise e
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/fastapi/routing.py", line 294, in app
    raw_response = await run_endpoint_function(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/chatchat/server/api_server/chat_routes.py", line 96, in chat_completions
    tool_result = await tool.ainvoke(tool_input)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/tools.py", line 723, in ainvoke
    return await run_in_executor(config, self.invoke, input, config, **kwargs)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 514, in run_in_executor
    return await asyncio.get_running_loop().run_in_executor(
  File "/anaconda3/envs/llama_factory/lib/python3.10/concurrent/futures/thread.py", line 58, in run
    result = self.fn(*self.args, **self.kwargs)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/tools.py", line 260, in invoke
    return self.run(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/tools.py", line 452, in run
    raise e
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/tools.py", line 409, in run
    context.run(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/langchain_core/tools.py", line 750, in _run
    else self.func(*args, **kwargs)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/chatchat/server/agent/tools_factory/weather_check.py", line 26, in weather_check
    raise Exception(
Exception: Failed to retrieve weather: 403
2024-06-23 16:46:55,682 httpx 921791 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-06-23 16:46:55.683 Uncaught app exception
Traceback (most recent call last):
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 589, in _run_script
    exec(code, module.__dict__)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/chatchat/webui.py", line 71, in <module>
    dialogue_page(api=api, is_lite=is_lite)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/chatchat/webui_pages/dialogue/dialogue.py", line 304, in dialogue_page
    for d in client.chat.completions.create(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_utils/_utils.py", line 275, in wrapper
    return func(*args, **kwargs)
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 667, in create
    return self._post(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 1208, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 897, in request
    return self._request(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 973, in _request
    return self._retry_request(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 973, in _request
    return self._retry_request(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 1021, in _retry_request
    return self._request(
  File "/anaconda3/envs/llama_factory/lib/python3.10/site-packages/openai/_base_client.py", line 988, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.InternalServerError: Internal Server Error

Is the weather API not accessible?

url = f"https://api.seniverse.com/v3/weather/now.json?key={api_key}&location={city}&language=zh-Hans&unit=c"

anaconda3/envs/llama_factory/lib/python3.10/site-packages/chatchat/server/agent/tools_factory/weather_check.py", line 26, in weather_check
    raise Exception(
Exception: Failed to retrieve weather: 403
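One way to answer that is to call the Seniverse endpoint directly, outside Chatchat. A minimal sketch (the key and city below are illustrative placeholders, not values from this thread):

    # Standalone check of the Seniverse "now" endpoint used by weather_check.
    # Replace YOUR_SENIVERSE_KEY with a private key from seniverse.com.
    import requests

    api_key = "YOUR_SENIVERSE_KEY"  # hypothetical placeholder
    city = "beijing"
    url = (
        "https://api.seniverse.com/v3/weather/now.json"
        f"?key={api_key}&location={city}&language=zh-Hans&unit=c"
    )
    resp = requests.get(url, timeout=10)
    print(resp.status_code)  # 403 usually means a missing or invalid key
    print(resp.text)

If this script also returns 403, the problem is the key itself rather than Chatchat.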

MartinH04 commented 4 weeks ago

Exception: Failed to retrieve weather: 403. You never provided the weather service's {api_key}.
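That is consistent with the traceback: with an empty key the request still goes out, Seniverse rejects it with 403, and the tool raises. A minimal sketch of what weather_check.py likely does around line 26, reconstructed from the URL and exception message quoted above (not copied from the source):

    import requests

    def weather_check(city: str, api_key: str) -> dict:
        # Same Seniverse request shown earlier in this thread.
        url = (
            "https://api.seniverse.com/v3/weather/now.json"
            f"?key={api_key}&location={city}&language=zh-Hans&unit=c"
        )
        response = requests.get(url)
        if response.status_code != 200:
            # An empty or invalid key yields HTTP 403, which surfaces as
            # "Failed to retrieve weather: 403" in the traceback above.
            raise Exception(f"Failed to retrieve weather: {response.status_code}")
        return response.json()

So the fix is to supply a valid Seniverse API key in the weather tool's configuration before enabling it.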

lizhao-8202 commented 4 weeks ago

Environment Information

  • langchain-ChatGLM version/commit number: (e.g., v2.0.1 or commit 123456)
  • Docker deployment (yes/no): yes
  • Model used (ChatGLM2-6B / Qwen-7B, etc.): GLM4-chat
  • Embedding model used (moka-ai/m3e-base, etc.): moka-ai/m3e-base
  • Vector library used (faiss / milvus / pg_vector, etc.): faiss
  • Operating system and version:
  • Python version: 3.10
  • Other relevant environment information:

With langchain-chatchat 0.3, selecting the Weather Query tool and asking about Beijing's weather fails for me too, with the same model_providers.yaml configuration and an identical traceback ending in "Exception: Failed to retrieve weather: 403".

Bro, do you have WeChat? Haha, I'd like to talk this over with you. I'm a bit lost; I set mine up on version 0.3.

liunux4odoo commented 1 week ago

Version 0.3.1 has been released; it adds an Amap (Gaode) weather tool. We recommend upgrading and trying it.
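For pip installs, upgrading should be as simple as `pip install -U langchain-chatchat`. Note that the new Amap (Gaode) weather tool, like the Seniverse one, requires its own API key to be configured before queries will succeed.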