chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama.
Apache License 2.0

After setting api_base_url in model_settings.yaml to the OpenAI-compatible API, the embedding model is also requested from that URL and fails with a 404 #4657

Closed mudapakka closed 4 months ago

mudapakka commented 4 months ago

Problem Description: After changing api_base_url in model_settings.yaml to the OpenAI-compatible API endpoint, the embedding model is also requested from that URL and fails with a 404. How can I call the chat model through the API while using the local bge-large-zh-v1.5 model for embeddings?

Steps to Reproduce

  1. Run '...'
  2. Click '...'
  3. Scroll to '...'
  4. Problem occurs

Expected Result: describe the expected result.

Actual Result: describe the actual result.

Environment Information

Additional Information: add any other information related to the issue.

mudapakka commented 4 months ago

Calling the chat model currently works, and I can see the request usage on the provider side. The 404 only appears during retrieval. Details below:

```
2024-07-31 14:27:13,147 httpx 4626 INFO HTTP Request: POST https://api.deepseek.com/embeddings "HTTP/1.1 404 Not Found"
2024-07-31 14:27:13.148 | ERROR | chatchat.server.utils:check_embed_model:333 - failed to access embed model 'bge-large-zh-v1.5': Error code: 404
Traceback (most recent call last):
  File "/root/miniconda3/envs/chatchat/lib/python3.8/threading.py", line 890, in _bootstrap
    self._bootstrap_inner()
  File "/root/miniconda3/envs/chatchat/lib/python3.8/threading.py", line 932, in _bootstrap_inner
    self.run()
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 859, in run
    result = context.run(func, *args)
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/knowledge_base/kb_doc_api.py", line 70, in search_docs
    docs = kb.search_docs(query, top_k, score_threshold)
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/knowledge_base/kb_service/base.py", line 215, in search_docs
    if not self.check_embed_model(
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/knowledge_base/kb_service/base.py", line 84, in check_embed_model
    if not _check_embed_model(self.embed_model):
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/utils.py", line 330, in check_embed_model
    embeddings.embed_query("this is a test")
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/localai_embeddings.py", line 367, in embed_query
    embedding = self._embedding_func(text, engine=self.deployment)
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/localai_embeddings.py", line 288, in _embedding_func
    embed_with_retry(
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/localai_embeddings.py", line 108, in embed_with_retry
    return _embed_with_retry(**kwargs)
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 336, in wrapped_f
    return copy(f, *args, **kw)
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 475, in __call__
    do = self.iter(retry_state=retry_state)
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 376, in iter
    result = action(retry_state)
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 418, in exc_check
    raise retry_exc.reraise()
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 185, in reraise
    raise self.last_attempt.result()
  File "/root/miniconda3/envs/chatchat/lib/python3.8/concurrent/futures/_base.py", line 437, in result
    return self.__get_result()
  File "/root/miniconda3/envs/chatchat/lib/python3.8/concurrent/futures/_base.py", line 389, in __get_result
    raise self._exception
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/tenacity/__init__.py", line 478, in __call__
    result = fn(*args, **kwargs)
  File "/root/Langchain-Chatchat/libs/chatchat-server/chatchat/server/localai_embeddings.py", line 105, in _embed_with_retry
    response = embeddings.client.create(**kwargs)
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/openai/resources/embeddings.py", line 114, in create
    return self._post(
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/root/miniconda3/envs/chatchat/lib/python3.8/site-packages/openai/_base_client.py", line 1046, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.NotFoundError: Error code: 404
2024-07-31 14:27:13.200 | ERROR | chatchat.server.knowledge_base.kb_service.base:check_embed_model:85 - could not search docs because failed to access embed model.
```

The retry state in the log (attempt #6, kwargs `{'input': ['this is a test'], 'model': 'bge-large-zh-v1.5', 'timeout': None, 'extra_headers': None}`) confirms the embedding probe is being sent to the DeepSeek base URL.
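For context on why this 404 happens: the OpenAI-compatible client derives every endpoint from a single base URL, so when chat and embedding models share one platform entry, the embeddings probe is sent to the same host as the chat model. A minimal illustration with hypothetical values (not the project's actual code):

```python
# Minimal illustration (hypothetical values): one base URL serves every
# endpoint of an OpenAI-compatible client. DeepSeek serves chat completions
# but not embeddings, so the embeddings probe gets a 404.
api_base_url = "https://api.deepseek.com"

chat_endpoint = api_base_url + "/chat/completions"  # served -> LLM calls succeed
embed_endpoint = api_base_url + "/embeddings"       # not served -> 404 Not Found

print(embed_endpoint)  # the same URL that fails in the log above
```

This is why the chat model works while every retrieval request fails: the fix is to give the embedding model its own platform entry with its own api_base_url.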

SuMiaoALi commented 3 months ago

Remove the OpenAI embedding entry from your model platform configuration, and point the embedding model at the address of your local model-serving platform instead, e.g. xinference.
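A sketch of what the split configuration might look like. The field names assume the 0.3.x `MODEL_PLATFORMS` schema and xinference's default port; the platform names, key, and model lists are placeholders to adapt to your deployment:

```yaml
# model_settings.yaml (hypothetical sketch, adjust to your installed version)
DEFAULT_LLM_MODEL: deepseek-chat
DEFAULT_EMBEDDING_MODEL: bge-large-zh-v1.5

MODEL_PLATFORMS:
  - platform_name: deepseek          # remote API for the chat model only
    platform_type: openai
    api_base_url: https://api.deepseek.com/v1
    api_key: sk-xxx                  # your API key
    llm_models:
      - deepseek-chat
    embed_models: []                 # no embeddings from this platform

  - platform_name: xinference        # local platform for embeddings
    platform_type: xinference
    api_base_url: http://127.0.0.1:9997/v1
    api_key: EMPTY
    llm_models: []
    embed_models:
      - bge-large-zh-v1.5            # served locally by xinference
```

With the embedding model listed only under the local platform, `check_embed_model` resolves bge-large-zh-v1.5 to the xinference URL instead of the remote API, and the 404 during retrieval goes away.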