Open reysic opened 3 months ago
Same here on Windows 10. A temporary fix could be to replace ollama:11434 with host.docker.internal:11434 in the config files.
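For illustration, a sketch of that kind of edit, assuming the Ollama endpoints live under an ollama: section in settings-docker.yaml with api_base / embedding_api_base keys (double-check the key names against your checkout):

```yaml
# settings-docker.yaml (sketch, not the exact upstream file)
ollama:
  # Point at the Ollama instance running on the Docker host instead of the
  # in-compose "ollama" service name.
  api_base: http://host.docker.internal:11434
  embedding_api_base: http://host.docker.internal:11434
```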
Can you try to disable autopull images in settings.yaml?
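If it helps, a minimal sketch of that change, assuming the flag is autopull_models under the ollama section of settings.yaml (verify the exact key name in your version):

```yaml
# settings.yaml (sketch; confirm the key name in your checkout)
ollama:
  # Don't try to pull models from inside the container at startup.
  autopull_models: false
```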
I am using Ollama 0.3.8 and getting the same issue. I also tried disabling autopull images, with no luck.
same issue here
=/tmp/ollama2036586951/runners
ollama_1 | time="2024-09-02T06:18:22Z" level=info msg="Configuration loaded from flags."
private-gpt-ollama_1 | 06:18:23.067 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
private-gpt-ollama_1 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-ollama_1 | 06:18:30.438 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-ollama_1 | 06:18:30.519 [INFO ] httpx - HTTP Request: GET http://ollama:11434/api/tags "HTTP/1.1 503 Service Unavailable"
private-gpt-ollama_1 | 06:18:30.520 [ERROR ] private_gpt.utils.ollama - Failed to connect to Ollama: Service Unavailable
Even when cloning the repo with the fix, I still get the same 403 error. @jaluma, is that to be expected?
@MandarUkrulkar check that you changed both PGPT_OLLAMA_API_BASE and PGPT_OLLAMA_EMBEDDING_API_BASE to use http://host.docker.internal:11434. You might also need to run ollama pull nomic-embed-text and ollama pull llama3.2 beforehand, because pulling the models from the container seems to time out.
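For reference, a sketch of what those overrides could look like in the environment section of the private-gpt-ollama service in docker-compose.yaml (service and variable names as used in this thread; adjust to your file):

```yaml
# docker-compose.yaml (sketch)
services:
  private-gpt-ollama:
    environment:
      # Point PrivateGPT at the Ollama instance running on the Docker host.
      - PGPT_OLLAMA_API_BASE=http://host.docker.internal:11434
      - PGPT_OLLAMA_EMBEDDING_API_BASE=http://host.docker.internal:11434
```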
Have you run OLLAMA_HOST=0.0.0.0 ollama serve? By default, Ollama refuses all connections except from localhost and returns status code 403. You should not need to modify these environment variables; everything is packed into the docker-compose.
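For anyone hitting the 403 with a host-side Ollama, the sequence looks roughly like this (the curl check just expects the usual "Ollama is running" reply):

```sh
# Make Ollama listen on all interfaces so containers can reach it
# (by default it only accepts localhost connections and answers 403 otherwise).
OLLAMA_HOST=0.0.0.0 ollama serve

# In another shell, confirm it is reachable:
curl http://localhost:11434/
# -> "Ollama is running"
```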
@jaluma thanks for the reply. Indeed I did not have OLLAMA_HOST=0.0.0.0 set. That resolves the 403.
In this thread there is also a 503, which seems to be because traefik is not ready. I added a simple healthcheck and a depends_on condition, and PrivateGPT works. My docker-compose modifications are below:
services:
  private-gpt-ollama:
    depends_on:
      ollama:
        condition: service_healthy
  ollama:
    image: traefik:v2.10
    healthcheck:
      test: ["CMD", "sh", "-c", "wget -q --spider http://ollama:11434 || exit 1"]
      interval: 10s
      retries: 3
      start_period: 5s
      timeout: 5s
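For context, depends_on with condition: service_healthy makes Compose wait until the ollama service's healthcheck passes before starting private-gpt-ollama, which is what prevents the startup 503.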
@meng-hui Thanks for sharing your modifications!!!! Can you open a PR with these changes so more users can avoid this error?
Pre-check
Description
Following the Quickstart documentation provided here for Ollama External API on macOS results in a 403 error in the PrivateGPT container when attempting to communicate with Ollama.
I've verified that Ollama is running locally by visiting http://localhost:11434/ and receiving the customary "Ollama is running".
Let me know if there's any additional info I can provide that would be helpful, thanks!
Steps to Reproduce
Expected Behavior
Successful access to Ollama locally installed on host from PrivateGPT
Actual Behavior
HTTP 403 error after running docker-compose --profile ollama-api up, followed by container exit
Environment
macOS 14.6.1, Ollama 0.3.6, ollama-api profile
Additional Information
No response
Version
0.6.2
Setup Checklist
NVIDIA GPU Setup Checklist
- GPU drivers installed (nvidia-smi to verify).
- Docker GPU access working (sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi to verify).