rashadphz / farfalle

🔍 AI search engine - self-host with local or cloud LLMs
https://www.farfalle.dev/
Apache License 2.0

500: Model is at capacity. Please try again later. #18

Closed: wwjCMP closed this issue 1 month ago

wwjCMP commented 1 month ago

[screenshot]

rashadphz commented 1 month ago

Do you see "Ollama is running" when you visit http://localhost:11434/?
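
For reference, the same check can be run from a terminal; this is just a sketch and assumes Ollama is on its default port:

```bash
# Ollama's root endpoint returns a plain-text health message when the server is up
curl -s http://localhost:11434/
# Expected output: Ollama is running
```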

wwjCMP commented 1 month ago

I use a remote service at http://192.168.101.19:11434, and I don't see Ollama receiving any requests.

rashadphz commented 1 month ago

Is your `.env` set to `OLLAMA_HOST=http://192.168.101.19:11434`?

wwjCMP commented 1 month ago

> Is your `.env` set to `OLLAMA_HOST=http://192.168.101.19:11434`?

```yaml
services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile.dev
    restart: always
    ports:
      # (snippet truncated in the original comment)
```

arsaboo commented 1 month ago

Don't change the OLLAMA_HOST in compose. Instead, change it in the .env file.

This is mostly a Docker networking issue. You will have to experiment with Docker networks to give the container access to host networking. Try removing extra_hosts, since your Ollama is on a different device.
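
A rough way to test that networking point is to curl the remote Ollama host from inside the container; a sketch, assuming the service is named `backend` as in the compose snippet earlier and that curl is available in the image:

```bash
# Connectivity check from inside the backend container
# (service name "backend" taken from the compose snippet above;
#  assumes curl is installed in the image)
docker compose exec backend curl -s http://192.168.101.19:11434/
# "Ollama is running" means the container can reach the remote host;
# a timeout or connection error points to Docker networking or to
# Ollama's bind address on the server.
```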

wwjCMP commented 1 month ago

[screenshot]

wwjCMP commented 1 month ago

```yaml
services:
  backend:
    build:
      context: .
      dockerfile: ./src/backend/Dockerfile.dev
    restart: always
    ports:
      # (snippet truncated in the original comment)
```

wwjCMP commented 1 month ago

```env
TAVILY_API_KEY=t
OLLAMA_HOST=http://192.168.101.19:11434
```
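
One detail worth keeping in mind (not raised in the thread): Compose passes these values to the containers when they are created, so after editing the `.env` file the stack usually needs to be recreated, for example:

```bash
# Recreate the containers so the updated .env values are picked up
docker compose down
docker compose up -d --build
```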

rashadphz commented 1 month ago

I don't have access to http://192.168.101.19:11434, but can you confirm that Ollama is running on the server?
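
One way to check this from the machine running farfalle (a sketch, not part of the original reply):

```bash
# Hit the remote Ollama endpoint directly from the farfalle host
curl -s http://192.168.101.19:11434/
# If this does not print "Ollama is running", Ollama is likely only bound to
# 127.0.0.1 on the server (its default); setting OLLAMA_HOST=0.0.0.0 on the
# Ollama side exposes it on the network.
```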

P.S. Just to let you know, edit history is public on GitHub. You might want to disable your Tavily API key :)