zylon-ai / private-gpt

Interact with your documents using the power of GPT, 100% privately, no data leaks
https://privategpt.dev
Apache License 2.0
54.14k stars 7.29k forks

Optimizing the Dockerfile and/or the documentation on how to run with the container #1452

Open omerbsezer opened 10 months ago

omerbsezer commented 10 months ago

I built the image from Dockerfile.local using the docker-compose file. But when I run the image, the container fails to start, so I ran it in interactive mode to see the problem: it cannot initialize.

It would be better if the model and its dependencies were downloaded automatically, and/or if the documentation explained how to run the application in a container.

Run: `docker run -it privategpt-private-gpt:latest bash`

Output:

16:03:51.306 [INFO    ] private_gpt.settings.settings_loader - Starting application with profiles=['default']
There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.
16:04:02.044 [WARNING ]                matplotlib - Matplotlib created a temporary cache directory at /tmp/matplotlib-vs3jk8yh because the default path (/nonexistent/.config/matplotlib) is not a writable directory; it is highly recommended to set the MPLCONFIGDIR environment variable to a writable directory, in particular to speed up the import of Matplotlib and to better support multiprocessing.
16:04:03.289 [INFO    ]   matplotlib.font_manager - generated new fontManager
tokenizer_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.46k/1.46k [00:00<00:00, 8.99MB/s]
tokenizer.model: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 493k/493k [00:00<00:00, 11.4MB/s]
tokenizer.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.80M/1.80M [00:00<00:00, 6.04MB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 72.0/72.0 [00:00<00:00, 267kB/s]
16:04:09.004 [INFO    ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=local
Traceback (most recent call last):
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
    return self._context[key]
           ~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
    return self._context[key]
           ~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
    return self._context[key]
           ~~~~~~~~~~~~~^^^^^
KeyError: <class 'private_gpt.components.llm.llm_component.LLMComponent'>

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "/home/worker/app/private_gpt/__main__.py", line 5, in <module>
    from private_gpt.main import app
  File "/home/worker/app/private_gpt/main.py", line 11, in <module>
    app = create_app(global_injector)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/private_gpt/launcher.py", line 50, in create_app
    ui = root_injector.get(PrivateGptUi)
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
    provider_instance = scope_instance.get(interface, binding.provider)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
    instance = self._get_instance(key, provider, self.injector)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
    return provider.get(injector)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
    return injector.create_object(self._cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
    self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
    dependencies = self.args_to_inject(
                   ^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
    instance: Any = self.get(interface)
                    ^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
    provider_instance = scope_instance.get(interface, binding.provider)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
    instance = self._get_instance(key, provider, self.injector)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
    return provider.get(injector)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
    return injector.create_object(self._cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
    self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
    dependencies = self.args_to_inject(
                   ^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
    instance: Any = self.get(interface)
                    ^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
    provider_instance = scope_instance.get(interface, binding.provider)
                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
    return function(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
    instance = self._get_instance(key, provider, self.injector)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
    return provider.get(injector)
           ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
    return injector.create_object(self._cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
    self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
  File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1040, in call_with_injection
    return callable(*full_args, **dependencies)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/worker/app/private_gpt/components/llm/llm_component.py", line 38, in __init__
    self.llm = LlamaCPP(
               ^^^^^^^^^
  File "/home/worker/app/.venv/lib/python3.11/site-packages/llama_index/llms/llama_cpp.py", line 119, in __init__
    raise ValueError(
ValueError: Provided model path does not exist. Please check the path or provide a model_url to download.
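
The final ValueError shows the root cause: no GGUF model file exists at the expected path inside the container, and the earlier warnings show the container user has no writable home directory for caches. A hedged docker-compose sketch addressing both (the service name and mount paths are assumptions; the two cache variables are taken verbatim from the warnings above):

```yaml
services:
  private-gpt:
    environment:
      # Both variable names appear in the startup warnings above
      TRANSFORMERS_CACHE: /home/worker/app/.cache/huggingface
      MPLCONFIGDIR: /tmp/matplotlib
    volumes:
      # Mount a host folder that already contains a downloaded GGUF model
      - ./models/:/home/worker/app/models
      - ./local_data/:/home/worker/app/local_data
```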
FulgerX2007 commented 10 months ago

The same issue :/

Robinsane commented 10 months ago

Hi

Edit 12 Feb 2024: These steps are suboptimal, scroll down in this conversation for the ideal way.

I ran into the same issue at first. Now it seems fixed for me after executing following steps:

  1. Download a model from huggingface.co
  2. Place this model in the "models" folder and make sure to create a volume for this models folder:
    volumes:
      - ./local_data/:/home/worker/app/local_data
      - ./models/:/home/worker/app/models
  3. Adjust "settings-docker.yaml" with the filename of your model:
    local:
      llm_hf_repo_id: ${PGPT_HF_REPO_ID:TheBloke/Mistral-7B-Instruct-v0.1-GGUF}
      llm_hf_model_file: ${PGPT_HF_MODEL_FILE:mistral-7b-instruct-v0.2.Q5_K_M.gguf}  # The actual model file you downloaded
      embedding_hf_model_name: ${PGPT_EMBEDDING_HF_MODEL_NAME:BAAI/bge-small-en-v1.5}
  4. Make sure settings-docker.yaml is used, by setting the environment variable PGPT_PROFILES to "docker":
    environment:
      PORT: 8080
      PGPT_PROFILES: docker
      PGPT_MODE: local
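
Put together, steps 2-4 amount to a compose service roughly like this (the image name and port are placeholders; adjust them to whatever your build produced):

```yaml
services:
  private-gpt:
    image: privategpt-private-gpt:latest
    ports:
      - 8080:8080
    environment:
      PORT: 8080
      PGPT_PROFILES: docker
      PGPT_MODE: local
    volumes:
      - ./local_data/:/home/worker/app/local_data
      - ./models/:/home/worker/app/models
```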

Hope this helps, if it does, make sure to give a 👍

github-actions[bot] commented 9 months ago

Stale issue

Wadinsky commented 9 months ago

You should run `poetry run python scripts/setup` before `make run`
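
In the container context, the equivalent is to run that setup script while building the image, so the model ships inside it. A hedged Dockerfile fragment (the /home/worker/app layout and the .venv interpreter path are assumptions based on the traceback above):

```dockerfile
# Hypothetical addition to Dockerfile.local: download the LLM and embedding
# model at build time so the container does not start with an empty models dir.
WORKDIR /home/worker/app
RUN .venv/bin/python scripts/setup
```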

Apotrox commented 9 months ago

Hey @Robinsane, trying your suggestion didn't fix the image build for me. Now I'm trying to compose up, but I get told:

invalid interpolation format for services.private-gpt.local.llm_hf_repo_id.
You may need to escape any $ with another $.

Does your much larger brain hold any insights about this?

Robinsane commented 9 months ago

> Hey @Robinsane, trying your suggestion didn't fix it for building the image for me. Now im trying to compose up, but get told
>
> invalid interpolation format for services.private-gpt.local.llm_hf_repo_id.
> You may need to escape any $ with another $.
>
> Does your much larger brain hold any insights about this?

Brain not that big, no clue about your problem. I can however say that the steps I described above are suboptimal. The ideal way to do it is described by imartinez at the top of the following PR: https://github.com/imartinez/privateGPT/pull/1445
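
For reference on that interpolation error: Docker Compose only understands substitution forms such as ${VAR}, ${VAR:-default} and ${VAR:?error}. The ${VAR:default} form used in settings-docker.yaml is private-gpt's own substitution syntax, which is invalid inside a compose file, where a literal $ must be escaped as $$. A minimal illustration:

```yaml
# Valid Docker Compose interpolation (note the dash before the default):
environment:
  PGPT_HF_MODEL_FILE: ${PGPT_HF_MODEL_FILE:-mistral-7b-instruct-v0.1.Q4_K_M.gguf}
# Passing a literal ${VAR:default} string through compose requires $$:
#   SOME_VALUE: $${PGPT_HF_REPO_ID:TheBloke/Mistral-7B-Instruct-v0.1-GGUF}
```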

Apotrox commented 9 months ago

@Robinsane thanks a lot for that pointer! While I struggled to get it running as imartinez described, I changed

`docker compose run --rm --entrypoint="bash -c '[ -f scripts/setup ] && scripts/setup'" private-gpt`

to

`docker compose run --rm --entrypoint="/usr/bin/env python3 scripts/setup" private-gpt`

because I got a permission error when trying to use the original. That seems to have worked; it is downloading the models right now.

Update: Well, I'll be damned, it worked, and pretty well at that. Even changing the models works! Now I just need to figure out how to get it to use the GPU...
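
On the GPU question: Docker Compose can reserve NVIDIA GPUs for a service through the deploy section, assuming the NVIDIA Container Toolkit is installed on the host and the image was built with GPU-enabled llama-cpp (which the default Dockerfile does not guarantee):

```yaml
services:
  private-gpt:
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```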

lauraparra28 commented 1 month ago

I installed the container using the docker compose file below and the Dockerfile.openai file.

docker-compose.yaml

services:
  openai:
    build:
      context: .
      dockerfile: Dockerfile.openai
    image: laurap28/hidrogeniogpt
    ports:
      - 8080:8080/tcp
    environment:
      PGPT_PROFILES: openai
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      OPENAI_API_BASE: https://api.openai.com/v1
      OPENAI_MODEL: gpt-4o-mini
      OPENAI_TEMPERATURE: 0.5
    volumes:
      - ./app:/app


Output: [screenshot]

When I run the image and the application startup completes, it doesn't initialize:

[screenshot]
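
One thing to check with the openai profile: the environment variables above only take effect if the active settings file maps them. A sketch of what a settings-openai.yaml mapping can look like (the key names are assumptions; verify against the settings files shipped in the repository):

```yaml
llm:
  mode: openai
openai:
  api_base: ${OPENAI_API_BASE:https://api.openai.com/v1}
  api_key: ${OPENAI_API_KEY:}
  model: ${OPENAI_MODEL:gpt-4o-mini}
```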