Hely0n closed this issue 1 year ago.
I tried it with docker-compose now, and it still doesn't work, but with other errors.
This is very likely an environment issue. I know there are users who have had good luck with portainer. Please go to our discord server to ask for help: https://obico.io/discord/
I'm closing the issue now. We can reopen it if it turns out to be a bug or a feature request.
I know this has been closed, but I can confirm that setup via Portainer still fails in this way in 2024:

```
python: can't open file 'manage.py': [Errno 2] No such file or directory
```

I did discover that if I set up Obico-Server with Portainer, and then run docker-compose from the command line (overwriting/updating what Portainer created), I end up with a stack that can be managed via Portainer and this error goes away. I think there is a bug here; not sure whose it is. In any case, I wanted to document that this workaround worked for me; hopefully it helps someone else.
`docker compose up -d` behaves this way for me too. If it matters, I am building this on a Jetson Xavier NX.

`docker logs [obico-server-web container]` produces:

```
python: can't open file '//manage.py': [Errno 2] No such file or directory
```

`docker logs [obico-server-ml_api container]` produces:
```
[2024-05-17 18:59:26 +0000] [7] [INFO] Worker exiting (pid: 7)
[2024-05-17 18:59:26 +0000] [1] [ERROR] Worker (pid:7) exited with code 3
[2024-05-17 18:59:26 +0000] [1] [ERROR] Shutting down: Master
[2024-05-17 18:59:26 +0000] [1] [ERROR] Reason: Worker failed to boot.
[2024-05-17 19:00:27 +0000] [1] [INFO] Starting gunicorn 21.2.0
[2024-05-17 19:00:27 +0000] [1] [INFO] Listening at: http://0.0.0.0:3333 (1)
[2024-05-17 19:00:27 +0000] [1] [INFO] Using worker: sync
[2024-05-17 19:00:27 +0000] [7] [INFO] Booting worker with pid: 7
[2024-05-17 19:00:27 +0000] [7] [ERROR] Exception in worker process
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/arbiter.py", line 609, in spawn_worker
    worker.init_process()
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/base.py", line 134, in init_process
    self.load_wsgi()
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/workers/base.py", line 146, in load_wsgi
    self.wsgi = self.app.wsgi()
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/base.py", line 67, in wsgi
    self.callable = self.load()
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/wsgiapp.py", line 58, in load
    return self.load_wsgiapp()
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
    return util.import_app(self.app_uri)
  File "/usr/local/lib/python3.8/dist-packages/gunicorn/util.py", line 371, in import_app
    mod = importlib.import_module(module)
  File "/usr/lib/python3.8/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1014, in _gcd_import
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 973, in _find_and_load_unlocked
ModuleNotFoundError: No module named 'wsgi'
```
`docker logs [obico-server-tasks container]` produces:

```
Error: Invalid value for '-A' / '--app':
Unable to load celery application.
The module config was not found.

Usage: celery [OPTIONS] COMMAND [ARGS]...
Try 'celery --help' for help.
```
Given all the missing modules, it seems like these containers are not being built correctly?
Okay, after examining the `Dockerfile` and looking in the containers themselves, it's clear that the current `docker-compose.yml` file doesn't work. `manage.py` and `wsgi` are missing as a result of being in the incorrect working directory when the docker command is invoked. Adding the proper `working_dir` to the container's definition in `docker-compose.yml` fixed this. Additionally, I ran into an issue along the lines of
```
obico-server-web-1 |   File "/app/lib/gcode_metadata.py", line 10, in <module>
obico-server-web-1 |     from components.file_manager.metadata import *
obico-server-web-1 | ModuleNotFoundError: No module named 'components'
```
and this was because the `components` module is imported from a project called `moonraker`. Although I see the env variable being declared in the `Dockerfile`, it is missing from the container, hence `components` not being found. Explicitly setting this in the `docker-compose.yml` file made the error go away, but perhaps the devs should take a look at this and test it more thoroughly?
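To summarize the fix, here is a minimal sketch of just the two additions, per affected service; it assumes the backend source is mounted at `/app` and moonraker lives at `/moonraker/moonraker`, as in the stock obico-server images:

```yaml
# Sketch of the workaround only, not a full compose file.
services:
  web:
    working_dir: /app                       # so `python manage.py ...` and `wsgi` resolve
    environment:
      PYTHONPATH: '/moonraker/moonraker'    # so `components` imports resolve
```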
I can at least get to the login screen on `localhost:3334` now, but I'm not sure if I'm doing things terribly wrong and others have no problems building the docker containers, or if the docker files are out of date. Anyway, here's the full `docker-compose.yml` file with added `working_dir` and `PYTHONPATH`, in case anyone runs into similar issues and wants to try:
```yaml
version: '2.4'

x-web-defaults: &web-defaults
  restart: unless-stopped
  build:
    context: backend
    dockerfile: 'Dockerfile'
  volumes:
    - ./backend:/app
    - ./frontend:/frontend
  depends_on:
    - redis
  working_dir: /app
  environment:
    PYTHONPATH: '/moonraker/moonraker'
    OCTOPRINT_TUNNEL_PORT_RANGE: '0-0'
    EMAIL_HOST: '${EMAIL_HOST-}'
    EMAIL_HOST_USER: '${EMAIL_HOST_USER-}'
    EMAIL_HOST_PASSWORD: '${EMAIL_HOST_PASSWORD-}'
    EMAIL_PORT: '${EMAIL_PORT-587}'
    EMAIL_USE_TLS: '${EMAIL_USE_TLS-True}'
    DEFAULT_FROM_EMAIL: '${DEFAULT_FROM_EMAIL-changeme@example.com}'
    DEBUG: '${DEBUG-False}' # Don't set DEBUG to True unless you know what you are doing. Otherwise the static files will be cached in browser until hard-refresh
    ADMIN_IP_WHITELIST: '${ADMIN_IP_WHITELIST-}'
    SITE_USES_HTTPS: '${SITE_USES_HTTPS-False}'
    SITE_IS_PUBLIC: '${SITE_IS_PUBLIC-False}'
    CSRF_TRUSTED_ORIGINS: '${CSRF_TRUSTED_ORIGINS-}'
    SOCIAL_LOGIN: '${SOCIAL_LOGIN-False}'
    REDIS_URL: '${REDIS_URL-redis://redis:6379}'
    DATABASE_URL: '${DATABASE_URL-sqlite:////app/db.sqlite3}'
    INTERNAL_MEDIA_HOST: '${INTERNAL_MEDIA_HOST-http://web:3334}'
    ML_API_HOST: '${ML_API_HOST-http://ml_api:3333}'
    ACCOUNT_ALLOW_SIGN_UP: '${ACCOUNT_ALLOW_SIGN_UP-False}'
    WEBPACK_LOADER_ENABLED: '${WEBPACK_LOADER_ENABLED-False}'
    TELEGRAM_BOT_TOKEN: '${TELEGRAM_BOT_TOKEN-}'
    TWILIO_ACCOUNT_SID: '${TWILIO_ACCOUNT_SID-}'
    TWILIO_AUTH_TOKEN: '${TWILIO_AUTH_TOKEN-}'
    TWILIO_FROM_NUMBER: '${TWILIO_FROM_NUMBER-}'
    SENTRY_DSN: '${SENTRY_DSN-}'
    PUSHOVER_APP_TOKEN: '${PUSHOVER_APP_TOKEN-}'
    SLACK_CLIENT_ID: '${SLACK_CLIENT_ID-}'
    SLACK_CLIENT_SECRET: '${SLACK_CLIENT_SECRET-}'
    DJANGO_SECRET_KEY: '${DJANGO_SECRET_KEY-}'
    SYNDICATE: '${SYNDICATE-}'
    VERSION:

services:
  ml_api:
    hostname: ml_api
    restart: unless-stopped
    build:
      context: ml_api
    environment:
      DEBUG: 'True'
      FLASK_APP: 'server.py'
      # ML_API_TOKEN:
    tty: true
    working_dir: /app
    command: bash -c "gunicorn --bind 0.0.0.0:3333 --workers 1 wsgi"

  web:
    <<: *web-defaults
    hostname: web
    ports:
      - "3334:3334"
    depends_on:
      - ml_api
    command: sh -c 'python manage.py migrate && python manage.py collectstatic -v 2 --noinput && daphne -b 0.0.0.0 -p 3334 config.routing:application'

  tasks:
    <<: *web-defaults
    hostname: tasks
    working_dir: /app
    command: sh -c "celery -A config worker --beat -l info -c 2 -Q realtime,celery"

  redis:
    restart: unless-stopped
    image: redis:7.2-alpine
```
@0xAl3xH thank you for investigating this issue. However, "moonraker" should be installed on the base image already: https://github.com/TheSpaghettiDetective/obico-server/blob/release/backend/Dockerfile.base#L12
Can you further investigate why it was missing for you?
@kennethjiang yes, it was indeed installed on the base image, but the issue is there because the env variable `PYTHONPATH` was not set. I added this (`PYTHONPATH: '/moonraker/moonraker'`) to the docker compose file posted above and was able to get rid of the error. I saw that this env variable was set in `Dockerfile.base`, but when running the image, this was not the case.
I'd focus on figuring out why PYTHONPATH is missing for some self-hosted servers (like yours) but not others.
I created a minimal example where the env variable specified in the `Dockerfile` doesn't show up when using `docker compose up`, but exists if you build and run the container manually.

`Dockerfile`:

```dockerfile
FROM thespaghettidetective/web:base-1.15
ENV HELLO="HELLO"
```
`docker-compose.yml`:

```yaml
version: '2.4'

x-web-defaults: &web-defaults
  restart: unless-stopped
  build:
    dockerfile: 'Dockerfile'

services:
  test:
    <<: *web-defaults
    hostname: test2
    command: sh -c 'env'
```
When running `docker compose up` I get these env variables:

```
docker-test-test-1 | HOSTNAME=test2
docker-test-test-1 | SHLVL=1
docker-test-test-1 | HOME=/root
docker-test-test-1 | PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
docker-test-test-1 | PWD=/
```
The container built by `docker build .` and run interactively shows:

```
HOSTNAME=3c56109ab96e
PYTHON_VERSION=3.10.13
PWD=/app
HELLO=HELLO
PYTHON_SETUPTOOLS_VERSION=65.5.1
HOME=/root
LANG=C.UTF-8
...
PYTHONPATH=:/moonraker/moonraker
TERM=xterm
SHLVL=1
PYTHON_PIP_VERSION=23.0.1
...
PATH=/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
DEBIAN_FRONTEND=noninteractive
_=/usr/bin/env
```
It appears the image built by `docker compose` is not respecting the `ENV` declaration in the Dockerfile. I wonder if this is a problem with nvidia-docker on JetPack 4.x? My docker version on the Jetson is `Docker version 19.03.6, build 369ce74a3c`, in case others run into this issue. I was unable to reproduce the issue on my MacBook running docker version `20.10.21`... At any rate, setting the workdir and env appears to be a viable workaround.
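If you'd rather not edit the upstream `docker-compose.yml`, the same workaround can also be expressed as a `docker-compose.override.yml`, which Compose merges automatically when it sits next to the main file in the project directory. This is only a sketch of my fix in override form; the paths assume the stock obico-server layout (`/app` workdir, moonraker checkout at `/moonraker/moonraker`):

```yaml
# docker-compose.override.yml -- merged automatically by `docker compose up`.
services:
  web:
    working_dir: /app
    environment:
      PYTHONPATH: '/moonraker/moonraker'
  tasks:
    working_dir: /app
    environment:
      PYTHONPATH: '/moonraker/moonraker'
  ml_api:
    working_dir: /app
```

This keeps the upstream file pristine, so a `git pull` won't conflict with your local changes.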
**Describe the bug**
I composed the stack as described in https://www.obico.io/docs/server-guides/install/ (but with Portainer). The containers get created successfully, but the web container isn't reachable and the log says:

```
python: can't open file 'manage.py': [Errno 2] No such file or directory
```

I use Portainer to create the stack because my Docker runs on a QNAP NAS and I don't want to mess around with git, docker-compose, etc. there. But I think this shouldn't matter.

**To Reproduce**
Steps to reproduce the behavior:

**Hosting environment** (please complete the following information):