danswer-ai / danswer

Gen-AI Chat for Teams - Think ChatGPT if it had access to your team's unique knowledge.
https://docs.danswer.dev/

GPT assistants cannot use Ollama (Custom LLM Provider) LLM #1568

Open NDMAXX opened 1 month ago

NDMAXX commented 1 month ago

After I set up an Ollama source via Custom LLM Provider, both the Danswer and Paraphrase assistants work, but the GPT assistant fails like this: [image]

My Ollama setup: [image]

Logs for api_server:

api_server-1              | 06/05/2024 04:48:01 AM             utils.py 328 : Failed to get max tokens for LLM with name qwen:0.5b. Defaulting to 4096.
api_server-1              | Traceback (most recent call last):
api_server-1              |   File "/app/danswer/llm/utils.py", line 318, in get_llm_max_tokens
api_server-1              |     model_obj = model_map[model_name]
api_server-1              |                 ~~~~~~~~~^^^^^^^^^^^^
api_server-1              | KeyError: 'qwen:0.5b'
api_server-1              | INFO:     192.168.144.9:38372 - "GET /chat/max-selected-document-tokens?persona_id=0 HTTP/1.1" 200 OK
api_server-1              | 06/05/2024 04:48:02 AM      chat_backend.py 246 : Received new chat message: hi
api_server-1              | INFO:     192.168.144.9:38376 - "POST /chat/send-message HTTP/1.1" 200 OK
api_server-1              | 06/05/2024 04:48:02 AM             utils.py 328 : Failed to get max tokens for LLM with name qwen:0.5b. Defaulting to 4096.
api_server-1              | Traceback (most recent call last):
api_server-1              |   File "/app/danswer/llm/utils.py", line 318, in get_llm_max_tokens
api_server-1              |     model_obj = model_map[model_name]
api_server-1              |                 ~~~~~~~~~^^^^^^^^^^^^
api_server-1              | KeyError: 'qwen:0.5b'
api_server-1              | 06/05/2024 04:48:02 AM   process_message.py 494 : Tool 'run_search' not found
api_server-1              | Traceback (most recent call last):
api_server-1              |   File "/app/danswer/chat/process_message.py", line 460, in stream_chat_message_objects
api_server-1              |     for packet in answer.processed_streamed_output:
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 423, in processed_streamed_output
api_server-1              |     for processed_packet in _process_stream(output_generator):
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 389, in _process_stream
api_server-1              |     for message in stream:
api_server-1              |   File "/app/danswer/llm/answering/answer.py", line 278, in _raw_output_for_non_explicit_tool_calling_llms
api_server-1              |     raise RuntimeError(f"Tool '{self.force_use_tool.tool_name}' not found")
api_server-1              | RuntimeError: Tool 'run_search' not found
api_server-1              | 06/05/2024 04:48:02 AM            timing.py  74 : stream_chat_message took 0.014301776885986328 seconds
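
The first traceback in the log is benign on its own: `get_llm_max_tokens` looks the model name up in litellm's model map and, when a custom name like `qwen:0.5b` isn't there, falls back to a 4096-token default. A minimal sketch of that defensive pattern (the map contents and function body here are illustrative, not Danswer's actual code):

```python
# Unknown custom model names (e.g. "qwen:0.5b") are not in litellm's
# model-cost map, so a default context size is used instead of letting
# the KeyError propagate.
DEFAULT_MAX_TOKENS = 4096

# Illustrative stand-in for litellm's model map.
MODEL_MAP = {
    "gpt-3.5-turbo": {"max_tokens": 16385},
    "gpt-4": {"max_tokens": 8192},
}

def get_llm_max_tokens(model_name: str) -> int:
    """Return the model's context size, defaulting for unknown models."""
    try:
        return MODEL_MAP[model_name]["max_tokens"]
    except KeyError:
        # This branch is what produces the "Defaulting to 4096" log line.
        return DEFAULT_MAX_TOKENS
```

The real failure is the second traceback (`Tool 'run_search' not found`), which is why only the search-using GPT assistant breaks while the plain Danswer and Paraphrase assistants still work.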
lucashaha commented 1 month ago

I have the same problem, can anyone tell me how to solve it?

foreveryh commented 4 weeks ago

I encountered the same problem while using Ollama in Danswer but managed to resolve it. The issue stems from litellm having been upgraded upstream, while Danswer pins a fixed version that doesn't upgrade automatically.

To fix this issue, follow these steps:

  1. Locate the backend/requirements/default.txt file.
  2. Change litellm==1.37.7 to litellm==1.39.3.
  3. Ensure openai is set to version 1.30.5.

This should resolve the problem.
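
Steps 2 and 3 above amount to rewriting two `==` pins in the requirements file. A small sketch of that edit (the `bump_pin` helper and the sample `openai` pin are illustrative, not part of Danswer; in practice you would read and write `backend/requirements/default.txt` instead of the sample string, then rebuild the containers):

```python
import re

def bump_pin(requirements: str, package: str, new_version: str) -> str:
    """Replace an exact '==' pin for `package` with `new_version`."""
    pattern = re.compile(rf"^{re.escape(package)}==\S+$", flags=re.MULTILINE)
    return pattern.sub(f"{package}=={new_version}", requirements)

# Sample fragment standing in for backend/requirements/default.txt.
sample = "litellm==1.37.7\nopenai==1.14.0\nMako==1.2.4\n"
sample = bump_pin(sample, "litellm", "1.39.3")
sample = bump_pin(sample, "openai", "1.30.5")
print(sample)
```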

wenlong1234 commented 4 weeks ago

I have changed default.txt as:

    litellm==1.39.3
    llama-index==0.9.45
    Mako==1.2.4
    msal==1.26.0
    nltk==3.8.1
    Office365-REST-Python-Client==2.5.9
    oauthlib==3.2.2
    openai==1.30.5

and restarted Docker, but the problem still exists.

foreveryh commented 4 weeks ago

If the context you provide is too long, you might encounter this error. This is a known bug in Danswer, occurring, for example, when the uploaded PDF file exceeds 4096 tokens.

If you have updated litellm, first test if you can successfully add new local models. If adding models works, then try a simple conversation without providing too much context at once.
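
One way to keep a simple conversation under that limit is to cap the context before sending it. A rough sketch using a crude ~4-characters-per-token estimate (this is an assumption for illustration; Danswer actually counts tokens with the model's tokenizer, and `trim_context` is a hypothetical helper, not a Danswer function):

```python
# Cap context to an estimated token budget before sending it to the model.
def trim_context(text: str, max_tokens: int = 4096, chars_per_token: int = 4) -> str:
    """Truncate `text` so its estimated token count stays under budget."""
    budget_chars = max_tokens * chars_per_token
    return text if len(text) <= budget_chars else text[:budget_chars]
```

For real use, a proper tokenizer (e.g. tiktoken for OpenAI-compatible models) gives an exact count instead of this estimate.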

lucashaha commented 4 weeks ago

I solved the problem. After changing default.txt, you should rebuild the containers.

NDMAXX commented 4 weeks ago

> I encountered the same problem while using Ollama in Danswer but managed to resolve it. The issue stems from litellm having been upgraded upstream, while Danswer pins a fixed version that doesn't upgrade automatically.
>
> To fix this issue, follow these steps:
>
>   1. Locate the backend/requirements/default.txt file.
>   2. Change litellm==1.37.7 to litellm==1.39.3.
>   3. Ensure openai is set to version 1.30.5.
>
> This should resolve the problem.

Thanks, I have solved the problem. May I know when this version (or a more suitable one) will be fixed in the main pipeline?