sid-mandati opened 9 months ago
I also have Ollama installed and running on my Mac at the time I execute `docker compose up`.
The quality of this software is below any acceptable level. After several tests I gave up.
The pull-model container is not intended to be long-running. It is just there to make sure that the model is pulled.
On a Mac, the value `http://host.docker.internal:11434` for `OLLAMA_BASE_URL` is correct.
I'm not sure why you're seeing this error, though. Ollama was either unable to pull `llama2:latest` or unable to verify what it pulled.
@sid-mandati it could be helpful to run `ollama pull llama2:latest` from the command line, just to see if there are any networking or bandwidth issues.
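The suggested check above can be scripted as a minimal sketch. Assumptions: the `ollama` CLI is on the PATH, and the model name matches the `LLM` value from the `.env` file:

```shell
#!/bin/sh
# Pull the model directly on the Mac, outside Docker, to rule out
# networking or bandwidth problems. "llama2:latest" is assumed to
# match the LLM value in the .env file.
MODEL="llama2:latest"

if command -v ollama > /dev/null; then
  # If this succeeds, the registry and network are fine, and the
  # problem is more likely Docker-to-host connectivity.
  ollama pull "$MODEL" && echo "pull ok" || echo "pull failed"
else
  echo "ollama CLI not found on PATH"
fi
```

If the pull succeeds here but still fails inside the pull-model container, that points at the container's connection to the host rather than at Ollama itself.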
Hello, thanks for your reply. When I ping host.docker.internal from my Mac, I get a reply. I was also able to verify that Ollama was running on my Mac by opening the Ollama base URL in my browser. I am guessing that Docker is not able to access Ollama running on my Mac. If you believe that could be the issue, is there a fix you would recommend?
You can get more information about what went wrong using `docker logs pull-model`. What does it show?
You have to add the following entry to `/etc/hosts`:
127.0.0.1 host.docker.internal
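Another option, if the containers themselves cannot resolve `host.docker.internal` (the usual situation on Linux, where Docker does not define that name by default), is to map it to the host gateway directly in the compose file via `extra_hosts` (supported since Docker 20.10). This is a sketch; the service name `pull-model` is an assumption about this project's docker-compose.yml:

```yaml
# Hypothetical compose override; "pull-model" is assumed to match the
# service name used in this project's docker-compose.yml.
services:
  pull-model:
    extra_hosts:
      # "host-gateway" is a special value Docker resolves to the host's
      # gateway IP, making host.docker.internal work inside the container.
      - "host.docker.internal:host-gateway"
```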
When I run `docker compose up`, I get the error below. Six out of the seven containers are running; only pull-model-1 fails. Ollama is already running on my Mac (M1). I have the following in my `.env` file:

LLM=llama2:latest
EMBEDDING_MODEL=sentence_transformer
OLLAMA_BASE_URL=http://host.docker.internal:11434
Error details:

pull-model-1 | pulling ollama model llama2 using http://host.docker.internal:11434
pull-model-1 exited with code 0
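The situation described above (Ollama reachable from the Mac's browser, but possibly not from inside Docker) can be probed from both sides with a small sketch. Assumptions: Ollama's default port 11434, `curl` on the host, and the `curlimages/curl` image for the in-container probe:

```shell
#!/bin/sh
# Two-sided connectivity check. Assumes Ollama listens on its default
# port 11434; the curlimages/curl image is an arbitrary choice for a
# container that ships curl.
HOST_URL="http://localhost:11434"
DOCKER_URL="http://host.docker.internal:11434"

# 1) Can the Mac itself reach Ollama?
if command -v curl > /dev/null && curl -sf "$HOST_URL/api/tags" > /dev/null; then
  echo "host: Ollama reachable at $HOST_URL"
else
  echo "host: Ollama NOT reachable at $HOST_URL"
fi

# 2) Can a container reach it through host.docker.internal?
if command -v docker > /dev/null; then
  if docker run --rm curlimages/curl:latest -sf "$DOCKER_URL/api/tags" > /dev/null; then
    echo "container: Ollama reachable at $DOCKER_URL"
  else
    echo "container: Ollama NOT reachable at $DOCKER_URL"
  fi
else
  echo "container check skipped: docker CLI not found"
fi
```

If step 1 succeeds but step 2 fails, the compose services cannot see the host's Ollama, which would match the pull-model failure described in this thread.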