stitionai / devika

Devika is an Agentic AI Software Engineer that can understand high-level human instructions, break them down into steps, research relevant information, and write code to achieve the given objective. Devika aims to be a competitive open-source alternative to Devin by Cognition AI.
MIT License

ollama model in docker compose #300

Open yongjer opened 7 months ago

yongjer commented 7 months ago

Describe the bug: the pulled ollama model cannot be found in the devika UI.

To Reproduce: Steps to reproduce the behavior:

  1. Run `docker compose up`
  2. Attach a shell to the ollama container in the compose stack
  3. Run `ollama pull gemma:2b`

Expected behavior: being able to choose the pulled ollama model in the UI.
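A quick way to verify that the pull actually registered with the ollama server, independent of the devika UI, is to query Ollama's model-listing endpoint (`GET /api/tags`). A minimal sketch, assuming the ollama container publishes port 11434 on localhost; adjust the base URL to however your compose file exposes the container:

```python
# Lists the models an Ollama server knows about via its REST API.
# The base URL is an assumption about the compose setup.
import requests

OLLAMA_BASE = "http://localhost:11434"

resp = requests.get(f"{OLLAMA_BASE}/api/tags", timeout=5)
resp.raise_for_status()
for model in resp.json().get("models", []):
    print(model["name"])  # should include "gemma:2b" after a successful pull
```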


cpAtor commented 7 months ago

I am seeing the same issue: nothing shows up under ollama when running the `docker compose up` command in the devika root directory.

Initially there were no models loaded in the ollama server when I first ran `docker compose up`, so I manually loaded the phi model by running `ollama pull phi` in the ollama container, followed by a `docker compose restart` in the devika root directory.

After this, I can confirm that the phi model is present in the ollama server, as shown in the attached screenshot.

Still, the option does not show up in the devika UI, as shown in the attached screenshot.
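A way to double-check what the backend container sees (as opposed to the ollama container itself) is to run the official `ollama` Python client from inside the backend container; a minimal sketch, assuming the Ollama service is reachable on the compose network under the hostname `ollama`:

```python
# Run inside the devika backend container to see whether it can reach
# the Ollama service over the compose network. The hostname "ollama"
# is an assumption; use your actual compose service name.
from ollama import Client

client = Client(host="http://ollama:11434")
print(client.list())  # should enumerate "phi" if the network path works
```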

Has anyone been able to run devika using docker?

Volko61 commented 7 months ago

Same issue on Linux, with deepseek-coder installed in ollama via `ollama run deepseek-coder`.

I even tried restarting the container and running ollama on the host machine; the models still don't show up.
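One thing worth noting here: inside a container, `localhost` refers to the container itself, so an Ollama instance running on the host machine is not reachable at `http://localhost:11434` from the backend. A minimal probe over the usual candidate addresses (all of them assumptions about common setups; on Linux, `host.docker.internal` additionally needs an `extra_hosts: ["host.docker.internal:host-gateway"]` entry in the compose file):

```python
# Probes candidate Ollama base URLs as seen from inside a container.
# The candidate list reflects common setups, not devika's actual config.
import requests

candidates = [
    "http://localhost:11434",             # Ollama inside this same container
    "http://ollama:11434",                # compose service named "ollama"
    "http://host.docker.internal:11434",  # Ollama on the host machine
]

for base in candidates:
    try:
        requests.get(f"{base}/api/tags", timeout=2).raise_for_status()
        print(f"reachable:   {base}")
    except requests.RequestException:
        print(f"unreachable: {base}")
```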

RexWzh commented 7 months ago

Same for me, using either docker or Python.

Edit: in my case, no model shows up in the frontend.

config.toml

❯ cat config.toml
[STORAGE]
SQLITE_DB = "db/devika.db"
SCREENSHOTS_DIR = "screenshots"
PDFS_DIR = "pdfs"
PROJECTS_DIR = "projects"
LOGS_DIR = "logs"
REPOS_DIR = "repos"

[API_KEYS]
BING = "39b..."
OPENAI = "sk-..."

[API_ENDPOINTS]
BING = "https://api.bing.microsoft.com/v7.0/search"
OLLAMA = "http://myollama:11434/v1"
OPENAI = "https://myendpoint/v1"

[LOGGING]
LOG_REST_API = "true"
LOG_PROMPTS = "false"
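One thing that stands out in this config, as an observation rather than a confirmed diagnosis: `/v1` is Ollama's OpenAI-compatible path, while the native endpoints used for model listing (such as `/api/tags`) live on the bare base URL. If the backend builds native API requests from the configured value, `http://myollama:11434/v1` would produce paths that don't exist, which would match the `Failed to list Ollama models` error in the Docker logs below. A minimal sketch of the difference (how devika joins paths is an assumption here, not verified):

```python
# Shows why a "/v1" suffix on the configured Ollama endpoint can break
# native API calls such as model listing.
import requests

base_plain = "http://myollama:11434"   # bare base URL: native API lives here
base_with_v1 = f"{base_plain}/v1"      # OpenAI-compatible path only

print(requests.get(f"{base_plain}/api/tags", timeout=5).status_code)    # expect 200
print(requests.get(f"{base_with_v1}/api/tags", timeout=5).status_code)  # expect 404
```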

logs

Python logs:

❯ python devika.py
24.04.09 23:39:18: root: INFO   : Initializing Devika...
24.04.09 23:39:18: root: INFO   : Initializing Prerequisites Jobs...
24.04.09 23:39:20: root: INFO   : Loading sentence-transformer BERT models...
No sentence-transformers model found with name sentence-transformers/all-MiniLM-L6-v2. Creating a new one with MEAN pooling.
24.04.09 23:39:28: root: INFO   : BERT model loaded successfully.
24.04.09 23:39:29: root: INFO   : Ollama available
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
        - Avoid using `tokenizers` before the fork if possible
        - Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
24.04.09 23:39:30: root: INFO   : Devika is up and running!

Docker logs:

❯ dlog
devika-backend-engine  | 24.04.09 15:40:57: root: ERROR  : Failed to list Ollama models:
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Booting up... This may take a few seconds
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Initializing Devika...
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Initializing Prerequisites Jobs...
devika-backend-engine  | 24.04.09 15:41:04: root: INFO   : Loading sentence-transformer BERT models...
devika-backend-engine  | 24.04.09 15:41:44: root: INFO   : BERT model loaded successfully.
devika-frontend-app    | $ vite dev --host
devika-frontend-app    | Forced re-optimization of dependencies
devika-frontend-app    |
devika-frontend-app    |   VITE v5.2.8  ready in 612 ms
devika-frontend-app    |
devika-backend-engine  |  * Serving Flask app 'devika'
devika-frontend-app    |   ➜  Local:   http://localhost:3000/
devika-backend-engine  |  * Debug mode: off
devika-frontend-app    |   ➜  Network: http://10.10.69.3:3000/
devika-frontend-app    | 3:38:40 PM [vite] ✨ new dependencies optimized: socket.io-client, xterm, xterm-addon-fit, tiktoken/lite
devika-frontend-app    | 3:38:40 PM [vite] ✨ optimized dependencies changed. reloading

screenshot: (attached image)

ARajgor commented 7 months ago

The docker files are not fully functional yet. You can try running without docker; if ollama is running on your system, devika should pick it up.

RexWzh commented 7 months ago

I think the primary challenge is with the backend services. They work when connecting locally via localhost, but can run into cross-origin problems when accessed through a domain name. Generally, the backend services should be designed to be reachable through stable endpoints, for example by routing backend connections through a prefix like /api.
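For the cross-origin part, one common Flask pattern is to mount the backend under an `/api` prefix with a blueprint and allow the frontend's origin via `flask-cors`. A minimal sketch of that pattern, not devika's actual code (the route name and payload are placeholders):

```python
# Minimal sketch: a Flask backend mounted under /api with CORS enabled,
# so a frontend served from another origin (or a reverse proxy that
# routes /api to the backend) can call it. Not devika's actual code.
from flask import Blueprint, Flask, jsonify
from flask_cors import CORS

api = Blueprint("api", __name__, url_prefix="/api")

@api.route("/models")
def models():
    # Placeholder payload; a real backend would query Ollama here.
    return jsonify({"models": ["gemma:2b", "phi"]})

app = Flask(__name__)
app.register_blueprint(api)
CORS(app)  # in production, restrict allowed origins to the frontend

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```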