nausher opened 5 months ago
I've been having the same issue. It was working great with Ollama for a while, until I updated; now I can't get past it asking for an API key.
@exsodus2 / @nausher what happens if you put in an API key?
@Weves, if I type in my OpenAI API key, it works. I guess the problem is that it seems to be ignoring my .env. I don't see a way to use the Ollama server I was using before I updated Danswer. Here's my .env:
```
WEB_DOMAIN=http://localhost:3000
GEN_AI_MODEL_PROVIDER=ollama_chat
GEN_AI_MODEL_VERSION=llama3:instruct
GEN_AI_API_ENDPOINT=http://host.docker.internal:11434
QA_TIMEOUT=240  # Set a longer timeout, running models on CPU can be slow
DISABLE_LLM_CHOOSE_SEARCH=True
DISABLE_LLM_CHUNK_FILTER=True
DISABLE_LLM_QUERY_REPHRASE=True
DISABLE_LLM_FILTER_EXTRACTION=True
AUTH_TYPE=basic
SESSION_EXPIRE_TIME_SECONDS=86400
VALID_EMAIL_DOMAINS=example.com,example.org  # will only allow users from these domains
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
```
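One quick way to check whether those values actually reach the backend is to print the environment inside the running container (a sketch, assuming the API server service is named api_server, matching the container name used later in this thread):

```
docker compose -f docker-compose.dev.yml -p danswer-stack exec api_server env | grep GEN_AI
```

If this prints nothing, the .env isn't being picked up at all.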
In my case, when I enter the OpenAI key, I get a red pop-up box at the bottom left that says "Not found". I've tried both OpenAI (1) user keys and (2) project keys; they both give me the same error. I've also tried both the LLM Options page and the pop-up on initial use.
Have you tried setting the local LLM as the default from the user interface?
I don't see the providers. Here is the screen I see. When attempting to add an OpenAI key, I get the error 'Not Found' as a red toast in the bottom-left corner.
@nausher I was just able to get it working via the menu by setting up a custom provider, using my Ollama url:port in the API Base field and putting in the model name (in my case "llama3:instruct").
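As a sanity check before entering that URL, it may help to confirm Ollama actually answers at that address; Ollama's /api/tags endpoint lists the models the server is serving:

```
# Run from the host machine; use the same host/port you put in the API Base field
curl http://localhost:11434/api/tags
```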
@exsodus2 - I don't see an option to set up a custom LLM provider.
A couple of questions, since I believe the .env is not being loaded correctly.
Is your .env file in the following location: /danswer/deployment/docker_compose/.env?
Also, after updating/adding the .env file, did you start Docker with the following command:

```
docker compose -f docker-compose.dev.yml -p danswer-stack up
```

Or did you do a full build and deploy:

```
docker compose -f docker-compose.dev.yml -p danswer-stack up -d --build --force-recreate
```
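Either way, one hedged check for whether compose is reading the .env at all is to render the resolved configuration (assuming the compose file interpolates the GEN_AI_* variables from .env, as Danswer's does):

```
docker compose -f docker-compose.dev.yml -p danswer-stack config | grep GEN_AI
```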
@nausher I also believe the .env isn't being loaded. The option to add a custom LLM is on the LLM tab, at the bottom. I always use `docker compose -f docker-compose.dev.yml -p danswer-stack up -d --pull always --force-recreate`.
I tried setting up a custom LLM provider after (1) pulling/building and force-restarting the containers 2-3 times and (2) adding my OpenAI keys.
However, when I try to add Ollama models (llama2, llama3, llama3:instruct), I receive the following error message: 'NoneType' object has no attribute 'request'.
Uploading the correct screenshot for the 2nd image. I had mistakenly entered the model info in the "Fast Model" field. I've entered it now in the "Model names" field, but I get a similar error.
@nausher in your screenshots I don't see the API Base being set. Could that be the issue?
@Weves - thanks for spotting that and chiming in! I noticed it too. But alas, no luck.
@nausher can you try running `docker logs danswer-stack-api_server-1 --tail 300` and posting the output here?
@nausher, I can replicate your error when not using a valid API Base address (I changed the port to a wrong one to test). I also got the same error after changing the address back to the correct one but closing Ollama. This leads me to believe your issue may be with your Ollama server itself (if you're sure the address you're pointing Danswer at is right).
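One way to separate the two cases is to test reachability from inside the API server container itself. A sketch using the container's Python interpreter (curl may not be installed in the image):

```
# Assumes the image has `python` on PATH; adjust the address to match your API Base
docker exec danswer-stack-api_server-1 python -c "import urllib.request; print(urllib.request.urlopen('http://host.docker.internal:11434/api/tags').read())"
```

If this fails, Danswer can't reach your Ollama server no matter what is configured in the UI.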
@exsodus2 you were right! It wasn't quite Ollama that had the issue; it was the API base address. I'm using the rancher-desktop flavor of Docker, so I had to change the API base address to http://host.rancher-desktop.internal:11434. Ollama is working now!
I posted a question and Danswer was surprisingly snappy and quoted the right local documents.
Now, if I could get my other issue & code change for indexing org files accepted, that would be the cherry on top: https://github.com/danswer-ai/danswer/issues/1415
I'd like to leave this issue open, since the .env file is still not being picked up, though it can likely be treated as a minor/downgraded issue for now.
Hi team, it seems I'm having a similar issue. After struggling for some time, I found on Google that the right address to use on Windows for accessing the host from Docker is http://docker.for.win.localhost:11434/. Even using that address, I keep getting the infamous 'NoneType' object has no attribute 'request' error during Danswer setup.
Here is the result of `docker logs danswer-stack-api_server-1 --tail 300`:
```
05/13/2024 07:58:27 PM utils.py 228 : Failed to call LLM with the following error: 'NoneType' object has no attribute 'request'
05/13/2024 07:58:27 PM utils.py 228 : Failed to call LLM with the following error: 'NoneType' object has no attribute 'request'
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
INFO: 172.20.0.9:54288 - "POST /admin/llm/test HTTP/1.1" 400 Bad Request
```
Am I missing something more?
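One more hedged check on Windows: hit Ollama's chat endpoint directly from the host, which exercises the same /api/chat path the ollama_chat provider uses (quotes escaped for cmd.exe; swap in a model name from `ollama list`):

```
curl http://localhost:11434/api/chat -d "{\"model\": \"llama3\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}], \"stream\": false}"
```

If that works from the host but Danswer still fails, the problem is container-to-host networking rather than Ollama itself.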
I managed to get it working using the official Ollama image (ollama/ollama) rather than litellm/ollama. Also (if you still haven't), try adding

```yaml
extra_hosts:
  - "host.docker.internal:host-gateway"
```

to the ollama service to allow the containers to communicate with each other.
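For context, a sketch of where such an entry sits in a compose file (the service layout here is illustrative, not Danswer's actual file):

```yaml
services:
  ollama:
    image: ollama/ollama
    extra_hosts:
      - "host.docker.internal:host-gateway"
```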
Would you be so kind as to summarize how to use Ollama instead of litellm? Is there a documentation section for that? Thanks in advance.
yeah there is: https://docs.danswer.dev/gen_ai_configs/ollama
Thanks again! Unfortunately I was unable to make it work using either the Ollama Windows installer or Docker; same error. Not sure how to move forward from here.
If you want to use Ollama, ignore the documentation's advice to use ollama_chat for GEN_AI_MODEL_PROVIDER; instead use GEN_AI_MODEL_PROVIDER=custom.
Then, in Danswer, create a custom LLM provider (see the sketch below).
Additionally, if working on Windows, go to Docker settings and enable host networking.
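For reference, a sketch of what the earlier .env would look like on this route (model name and endpoint copied from the config posted above; the custom provider value is this workaround's, not the documented one — adjust to your setup):

```
GEN_AI_MODEL_PROVIDER=custom
GEN_AI_MODEL_VERSION=llama3:instruct
GEN_AI_API_ENDPOINT=http://host.docker.internal:11434
```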
I have Danswer up and running on my Mac. It is indexing files, and I've also updated it to use the Ollama instance I have running locally. I used the configuration described at https://docs.danswer.dev/gen_ai_configs/ollama and created/updated a .env file in the docker_compose directory; in addition, I updated the Kubernetes YAML file for good measure.
I've also restarted the service a few times. The service still asks for an API key, and skipping it results in a non-working LLM chat.