omerbsezer opened this issue 10 months ago
The same issue :/
Hi

Edit 12 Feb 2024: These steps are suboptimal; scroll down in this conversation for the ideal way.

I ran into the same issue at first. It now seems fixed for me after executing the following steps:
```yaml
volumes:
  - ./local_data/:/home/worker/app/local_data
  - ./models/:/home/worker/app/models
```

```yaml
local:
  llm_hf_repo_id: ${PGPT_HF_REPO_ID:TheBloke/Mistral-7B-Instruct-v0.1-GGUF}
  llm_hf_model_file: ${PGPT_HF_MODEL_FILE:mistral-7b-instruct-v0.2.Q5_K_M.gguf} # the actual model you downloaded
  embedding_hf_model_name: ${PGPT_EMBEDDING_HF_MODEL_NAME:BAAI/bge-small-en-v1.5}
```

```yaml
environment:
  PORT: 8080
  PGPT_PROFILES: docker
  PGPT_MODE: local
```
Hope this helps, if it does, make sure to give a 👍
Stale issue
You should run `poetry run python scripts/setup` before `make run`.
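For context, the usual non-Docker sequence (assuming Poetry is installed and you are in the repo root) would be:

```shell
# downloads the LLM and embedding models referenced in your settings profile
poetry run python scripts/setup
# then start the server
make run
```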
Hey @Robinsane, trying your suggestion didn't fix building the image for me. Now I'm trying to `compose up`, but get told:

```
invalid interpolation format for services.private-gpt.local.llm_hf_repo_id.
You may need to escape any $ with another $.
```

Does your much larger brain hold any insights about this?
> Hey @Robinsane, trying your suggestion didn't fix it for building the image for me. Now I'm trying to compose up, but get told `invalid interpolation format for services.private-gpt.local.llm_hf_repo_id. You may need to escape any $ with another $.` Does your much larger brain hold any insights about this?
Brain not that big, no clue about your problem. I can however say that the steps I described above are suboptimal. The ideal way to do it is described by @imartinez at the top of the following PR: https://github.com/imartinez/privateGPT/pull/1445
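On the interpolation error above: Docker Compose only understands `${VAR}`, `${VAR:-default}` and `${VAR-default}` (a literal `$` is escaped as `$$`), while the `${VAR:default}` form used in privateGPT's settings files is resolved by privateGPT itself. So pasting a settings block such as `local:` directly into docker-compose.yaml makes Compose choke on the unfamiliar syntax. A rough sketch of that `${VAR:default}` resolution (illustrative only, not privateGPT's actual loader):

```python
import os
import re

# Matches "${NAME:default}" -- env var name before the colon, fallback after.
# Note this is NOT Docker Compose syntax; Compose writes defaults as "${NAME:-default}".
PLACEHOLDER = re.compile(r"\$\{(\w+):([^}]*)\}")

def resolve(value: str) -> str:
    """Substitute each placeholder with the env var if set, else its default."""
    return PLACEHOLDER.sub(lambda m: os.environ.get(m.group(1), m.group(2)), value)

print(resolve("${PGPT_HF_MODEL_FILE:mistral-7b-instruct-v0.2.Q5_K_M.gguf}"))
```

With `PGPT_HF_MODEL_FILE` unset, the default after the colon is returned; the equivalent Compose default would be written `${PGPT_HF_MODEL_FILE:-...}`.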
@Robinsane thanks lots for that pointer! I struggled to get it running exactly as imartinez described, so I changed

```shell
docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt
```

to

```shell
docker compose run --rm --entrypoint="/usr/bin/env python3 scripts/setup" private-gpt
```

because I got a permission error when using the original. Seems to have worked, as it's downloading the models right now.
Update: Well, I'll be damned, it worked, and pretty well at that. Even changing the models works! Now I just need to figure out how to get it to use the GPU...
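On the GPU question: a hypothetical compose override using Docker Compose's standard device-reservation syntax might look like the fragment below. It assumes the NVIDIA Container Toolkit is installed on the host; whether the privateGPT image itself is built with GPU-enabled inference (e.g. a CUDA build of llama-cpp) is a separate matter.

```yaml
# hypothetical docker-compose.override.yaml fragment
services:
  private-gpt:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```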
I installed the container using the docker compose file and the Dockerfile.openai file.
docker-compose.yaml

```yaml
services:
  openai:
    build:
      context: .
      dockerfile: Dockerfile.openai
    image: laurap28/hidrogeniogpt
    ports:
      - 8080:8080/tcp
    environment:
      PGPT_PROFILES: openai
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      OPENAI_API_BASE: https://api.openai.com/v1
      OPENAI_MODEL: gpt-4o-mini
      OPENAI_TEMPERATURE: 0.5
    volumes:
      - ./app:/app
```
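For what it's worth, a compose file like the one above would typically be brought up with something like the following (hypothetical invocation; `OPENAI_API_KEY` must already be set in the host shell, since the compose file interpolates it from the environment):

```shell
export OPENAI_API_KEY=...   # placeholder, use your real key
docker compose up --build -d
docker compose logs -f openai
```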
Output

When I run the image and the application startup completes, it doesn't initialize.
I created the image using Dockerfile.local and the docker-compose file. But when I run the image, it fails to start, so I ran it in interactive mode to see the problem. It cannot be initialized.
It would be better if the model and dependencies were downloaded automatically, and/or if there were documentation on how to run it with the container.
Run:

```shell
docker run -it privategpt-private-gpt:latest bash
```

Output: