chatchat-space / Langchain-Chatchat

Langchain-Chatchat (formerly langchain-ChatGLM): a local-knowledge-based RAG and Agent application built with Langchain and LLMs such as ChatGLM, Qwen, and Llama
Apache License 2.0

启动ollama+chatchat #4372

Closed · Dhaizei closed this 4 months ago

Dhaizei commented 4 months ago

Steps:

1. Install ollama, then run `ollama serve` and `ollama run qwen:0.5b`.
2. Install chatchat and change the configuration:

```
chatchat-config model --set_model_platforms '[{
  "platform_name": "ollama",
  "platform_type": "ollama",
  "api_base_url": "http://127.0.0.1:11434/v1",
  "api_key": "EMPT",
  "api_concurrencies": 5,
  "llm_models": ["qweb:0.5b"],
  "embed_models": ["milkey/m3e"],
  "image_models": [],
  "reranking_models": [],
  "speech2text_models": [],
  "tts_models": []
}]'
```

Then start chatchat:

```
chatchat -a
```

The error:

```
INFO: 127.0.0.1:37268 - "POST /chat/chat/completions HTTP/1.1" 500 Internal Server Error
2024-07-02 11:27:03,902 httpx 25139 INFO HTTP Request: POST http://127.0.0.1:7861/chat/chat/completions "HTTP/1.1 500 Internal Server Error"
2024-07-02 11:27:03,902 openai._base_client 25139 INFO Retrying request to /chat/completions in 0.946417 seconds
ERROR: Exception in ASGI application
Traceback (most recent call last):
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/uvicorn/protocols/http/h11_impl.py", line 396, in run_asgi
    result = await app(  # type: ignore[func-returns-value]
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
    return await self.app(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/applications.py", line 1054, in __call__
    await super().__call__(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/applications.py", line 123, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/errors.py", line 186, in __call__
    raise exc
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/errors.py", line 164, in __call__
    await self.app(scope, receive, _send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/cors.py", line 83, in __call__
    await self.app(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/middleware/exceptions.py", line 62, in __call__
    await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 758, in __call__
    await self.middleware_stack(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 778, in app
    await route.handle(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 299, in handle
    await self.app(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 79, in app
    await wrap_app_handling_exceptions(app, request)(scope, receive, send)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
    raise exc
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
    await app(scope, receive, sender)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/starlette/routing.py", line 74, in app
    response = await func(request)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/routing.py", line 299, in app
    raise e
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/routing.py", line 294, in app
    raw_response = await run_endpoint_function(
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
    return await dependant.call(**values)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/chatchat/server/api_server/chat_routes.py", line 57, in chat_completions
    client = get_OpenAIClient(model_name=body.model, is_async=True)
  File "/root/miniconda3/envs/langchain/lib/python3.10/site-packages/chatchat/server/utils.py", line 295, in get_OpenAIClient
    assert platform_info, f"cannot find configured platform: {platform_name}"
AssertionError: cannot find configured platform: None
INFO: 127.0.0.1:37276 - "POST /chat/chat/completions HTTP/1.1" 500 Internal Server Error
```

xcl1231 commented 4 months ago

I deployed on a server and access it from a LAN host through VS Code port forwarding, and I hit the same problem as you. Testing on the server itself with the following command works fine, though. Could it be that chatchat's request to the framework API relays through some other port that I haven't forwarded?

```
curl -X 'POST' \
  'http://127.0.0.1:7861/chat/chat/completions' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
    "model": "qwen2-7b-chat",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "What is the largest animal?"}
    ]
  }'
```

xcl1231 commented 4 months ago

Solved it with `chatchat-config model --default_llm_model qwen2-7b-chat`. The configured default model has to match the name of the model that was actually started.
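For reference, the same request in Python through the openai client, pointed at chatchat's endpoint from the curl above (a sketch; host, port, and model name are the ones from this thread, and the api_key value is an arbitrary placeholder):

```python
# Sketch: hit chatchat's /chat/chat/completions route via the openai client.
# base_url ends in /chat so the client's "/chat/completions" suffix lands on the right path.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:7861/chat", api_key="EMPTY")

resp = client.chat.completions.create(
    model="qwen2-7b-chat",  # must match the configured default_llm_model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is the largest animal?"},
    ],
)
print(resp.choices[0].message.content)
```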

Qi0716 commented 4 months ago

On Windows, running `chatchat-kb -r` to initialize the knowledge base throws an error. Has anyone managed to solve this?

```
(langchain) D:\other\Langchain-Chatchat-master>chatchat-kb -r
recreating all vector stores
C:\Users\30759\.conda\envs\langchain\lib\site-packages\langchain\_api\module_import.py:87: LangChainDeprecationWarning: Importing GuardrailsOutputParser from langchain.output_parsers is deprecated. Please replace the import with the following:
from langchain_community.output_parsers.rail_parser import GuardrailsOutputParser
  warnings.warn(
2024-07-03 09:28:17,524 - utils.py[line:260] - ERROR: failed to create Embeddings for model: bge-large-zh-v1.5.
Traceback (most recent call last):
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\utils.py", line 258, in get_Embeddings
    return LocalAIEmbeddings(**params)
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LocalAIEmbeddings
__root__
  Did not find openai_api_key, please add an environment variable OPENAI_API_KEY which contains it, or pass openai_api_key as a named parameter. (type=value_error)
2024-07-03 09:28:17,537 - faiss_cache.py[line:140] - ERROR: 'NoneType' object has no attribute 'embed_documents'
Traceback (most recent call last):
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_cache\faiss_cache.py", line 126, in load_vector_store
    vector_store = self.new_vector_store(
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_cache\faiss_cache.py", line 63, in new_vector_store
    vector_store = FAISS.from_documents([doc], embeddings, normalize_L2=True)
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\langchain_core\vectorstores.py", line 550, in from_documents
    return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\langchain_community\vectorstores\faiss.py", line 930, in from_texts
    embeddings = embedding.embed_documents(texts)
AttributeError: 'NoneType' object has no attribute 'embed_documents'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\init_database.py", line 129, in main
    folder2db(
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\migrate.py", line 157, in folder2db
    kb.create_kb()
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_service\base.py", line 102, in create_kb
    self.do_create_kb()
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_service\faiss_kb_service.py", line 57, in do_create_kb
    self.load_vector_store()
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_service\faiss_kb_service.py", line 32, in load_vector_store
    return kb_faiss_pool.load_vector_store(
  File "C:\Users\30759\.conda\envs\langchain\lib\site-packages\chatchat\server\knowledge_base\kb_cache\faiss_cache.py", line 141, in load_vector_store
    raise RuntimeError(f"向量库 {kb_name} 加载失败。")
RuntimeError: 向量库 111 加载失败。
2024-07-03 09:28:17,580 - init_database.py[line:151] - WARNING: Caught KeyboardInterrupt! Setting stop event...
```
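The root cause here is the first error: chatchat builds a `LocalAIEmbeddings` for `bge-large-zh-v1.5`, langchain's `LocalAIEmbeddings` raises the pydantic ValidationError because no `openai_api_key` is available, `get_Embeddings` therefore returns `None`, and FAISS later crashes calling `embed_documents` on `None`. A minimal sketch of a construction that passes validation (the endpoint URL is a placeholder for wherever the embedding model is actually served; the key only needs to be non-empty):

```python
# Sketch: LocalAIEmbeddings needs openai_api_key to pass pydantic validation,
# even though a local server typically ignores the value.
from langchain_community.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    model="bge-large-zh-v1.5",
    openai_api_base="http://127.0.0.1:11434/v1",  # placeholder: your platform's api_base_url
    openai_api_key="EMPTY",  # omitting this raises the ValidationError seen above
)
print(embeddings.embed_query("测试")[:3])  # first few dimensions, if the server is up
```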

li775176364 commented 4 months ago

I ran into the same problem. In the new framework both the base LLM and the embedding model come from a third-party platform, but ollama is only running the base model, so the framework cannot find an embedding model and vector-store initialization fails.

Strangely though, searching around the web, many people were apparently able to run `ollama pull znbang/bge:large-zh-v1.5` directly, but that bge-large model is no longer on ollama. Did they remove it?

I haven't found a solution yet.

li775176364 commented 4 months ago

Found one on ollama: https://ollama.com/quentinz/bge-large-zh-v1.5:f16
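After `ollama pull quentinz/bge-large-zh-v1.5:f16` and adding that name to the platform's embed_models list, a quick sanity check could look like this (a sketch, assuming your ollama version exposes the OpenAI-compatible /v1/embeddings route; the model tag must match what `ollama list` shows):

```python
# Sketch: confirm ollama can serve embeddings for the pulled model
# before pointing chatchat's embed_models at it.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:11434/v1", api_key="EMPTY")

resp = client.embeddings.create(
    model="quentinz/bge-large-zh-v1.5:f16",
    input=["测试一下向量化"],
)
print(len(resp.data[0].embedding))  # embedding dimension (1024 for bge-large-zh)
```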