I'm pretty sure this has nothing to do with private-gpt itself, but with networking shenanigans related to containers. I've tried to dockerize it according to the docs.
Is Ollama running from inside the container too? (localhost)
Is Ollama running from inside the container too? (localhost)
no
That's the issue. From the container, localhost IS the container, therefore it will never reach the Ollama instance.
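For illustration, a quick way to see this (assuming the app container is named private-gpt and has curl installed; both are placeholders):

```sh
# On the host, Ollama answers on localhost:11434
curl http://localhost:11434
# -> "Ollama is running"

# From inside the container, localhost is the container itself,
# so nothing is listening on port 11434 and the request fails
docker exec -it private-gpt curl http://localhost:11434
# -> curl: (7) Failed to connect to localhost port 11434: Connection refused
```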
It makes sense, but then again, I'm surprised that I need to change anything, given that I'm simply following the documentation.
Maybe I need to expose port 11434 to this container?
Maybe I need to expose port 11434 to this container?

No, that is pointless. What you need to do instead is to replace occurrences of localhost:11434 with <host-ip>:11434. This way, the container will be able to connect to it.
Not sure I understand where exactly. In the code itself?
private-gpt uses YAML files for its configuration. The ollama address to connect to is hardcoded in these configuration files. What you need to do is create your custom YAML file and use it in Docker (via a directory binding?). AFAIK, since there's no environment variable to change that Ollama URL, you have no other choice.
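A minimal sketch of that bind-mount idea, assuming the image reads its settings from /home/worker/app/settings-docker.yaml (the exact in-container path depends on the private-gpt image):

```yaml
services:
  private-gpt:
    image: privategpt
    volumes:
      # Mount a locally edited settings file over the one baked into the
      # image, with api_base pointing at <host-ip>:11434 instead of localhost
      - ./settings-docker.yaml:/home/worker/app/settings-docker.yaml
```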
OK... so localhost should be changed to what? 127.0.0.1? Something else?
I guess your machine has a local IP address? Then use it.
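For illustration, two common ways to find (or avoid hardcoding) the host's address, assuming a Linux host and the default Docker bridge network:

```sh
# The host's address as seen from the default bridge network
# (typically 172.17.0.1)
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'

# Or, on Docker 20.10+, give the container a stable alias for the host
docker run --add-host=host.docker.internal:host-gateway <image>
# then use http://host.docker.internal:11434 as the Ollama address
```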
I ended up changing it to the container name:
```yaml
ollama:
  llm_model: mistral
  embedding_model: nomic-embed-text
  api_base: http://privategpt-ollama-1:11434
  embedding_api_base: http://privategpt-ollama-1:11434
```
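That name resolves because docker-compose puts all services on a shared network, where each container is reachable by its service name (and by its default container name, <project>-<service>-1). A minimal sketch, assuming the two services are defined together:

```yaml
services:
  private-gpt:
    build: .
    depends_on:
      - ollama
  ollama:
    image: ollama/ollama
# On this shared network, http://ollama:11434 would also work as api_base
```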
Now I'm having a different issue:
```
ValueError: Ollama call failed with status code 404. Details: model 'nomic-embed-text' not found, try pulling it first
```
That's very good news. That means you succeeded in connecting to Ollama.
Now you need to read and hopefully tackle the next issue.
To fix it, just run: ollama pull nomic-embed-text
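If you want to confirm the model is available afterwards, ollama list shows what the instance has pulled:

```sh
ollama pull nomic-embed-text
ollama list   # nomic-embed-text should now appear in the output
```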
It worked!
So should I add it to Dockerfile.external as a step?
I have no knowledge of Dockerfile.external, but I'm very curious to learn how you fixed the issue. It would be nice if you could document it here too for future reference.
Actually, this Dockerfile belongs to the private-gpt image, so I'll need to figure this out somehow, but I will document it once I find a suitable solution.
Thanks a lot for your help!
@BenBatsir You can't add this line to Dockerfile.external, as it is something you need to run on the ollama container. In this situation, one way to fix it is to override the command of the ollama service in docker-compose and replace it with something like: ollama pull nomic-embed-text && ollama pull mistral && ollama serve.
Anyway, we will talk internally about this problem and try to find a solution to prevent further related problems.
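A minimal sketch of that docker-compose override, assuming a service named ollama; since ollama pull talks to the running server, this variant starts ollama serve in the background first:

```yaml
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    # Pull the required models at startup: the server must be running
    # before pull, so serve is backgrounded and waited on at the end
    entrypoint: ["/bin/sh", "-c"]
    command:
      - |
        ollama serve &
        sleep 2
        ollama pull nomic-embed-text
        ollama pull mistral
        wait
```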
It works perfectly when running it locally (without Docker), but when I tried to import a doc via the UI (inside a container), I'm getting the below: