TheSpaghettiDetective / obico-server

Obico is a community-built, open-source smart 3D printing platform used by makers, enthusiasts, and tinkerers around the world.
https://obico.io
GNU Affero General Public License v3.0

DockerHub Images #560


alexyao2015 commented 2 years ago

I just saw #42, which published prebuilt images to Docker Hub; however, I'm seeing that many of these haven't been updated in quite some time. Additionally, I see there is a CircleCI workflow in the root directory that builds the images, but it doesn't appear to be running successfully. Are there plans to reactivate this or, possibly better, migrate it to GitHub Actions to deploy them? Also, is there a chance the docker-compose.yml files could be updated to pull from Docker Hub instead?

In relation to this, I don't see any releases or tags on the repository. Could these be used to indicate when the last stable version was released? They could also be used to tag the images on Docker Hub.

twistedgrim commented 2 years ago

Ahh, I see this too. Yep, it would be nice to get the docker images updated and/or some kind of release process.

I'll try a local build and see what's up.

kennethjiang commented 2 years ago

@alexyao2015 @twistedgrim The docker image is just a "base" that pulls in the system-native packages.

We release our server code continuously, so we can't rely on released docker images. Otherwise everyone would have to download GBs of images on every upgrade.

If what you are looking for is how to update your private TSD server, the steps are much simpler than updating docker images: https://github.com/TheSpaghettiDetective/TheSpaghettiDetective#upgrade-server

alexyao2015 commented 2 years ago

@kennethjiang You can still release docker images continuously without users having to download massive amounts of data. You just need a base image: Docker will recognize that the earlier layers are unchanged and will only download the changed layers.
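
To illustrate the layering point, a minimal sketch (hypothetical layout, not Obico's actual Dockerfile): the base and dependency layers stay cached between releases, so an upgrade only pulls the thin application layer at the end.

    # Sketch only: hypothetical layout, not the project's real Dockerfile.
    FROM python:3.7-slim
    # Dependency layer: rebuilt (and re-downloaded by users) only when requirements.txt changes.
    COPY requirements.txt /tmp/requirements.txt
    RUN pip install -r /tmp/requirements.txt
    # Application layer: changes on every release, but is only a few MB.
    COPY backend/ /app/
    WORKDIR /app
    CMD ["python", "manage.py", "runserver", "0.0.0.0:3334"]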

kennethjiang commented 2 years ago

@alexyao2015 The way docker does it is a lot less efficient than git. If you try to update the server using the link I provided, it'll take a few seconds. Plus, it already works.

What are the benefits for you if the updates are released as docker images?

Codel1417 commented 2 years ago

> What are the benefits for you if the updates are released as docker images?

It works on container-only OSes and supports auto-updating with Watchtower.

I would rather download large binaries than deal with any issues updating through git.

GitHub offers free Docker image hosting for public repos; combine that with GitHub Actions for auto-releasing on every push to master.
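
For illustration, a minimal GitHub Actions sketch of that idea (a hypothetical workflow; the build context and image name are assumptions, not this repo's actual layout):

    # Hypothetical .github/workflows/publish.yml sketch.
    name: Publish images
    on:
      push:
        branches: [master]
    jobs:
      web:
        runs-on: ubuntu-latest
        permissions:
          contents: read
          packages: write
        steps:
          - uses: actions/checkout@v3
          - uses: docker/setup-buildx-action@v2
          - uses: docker/login-action@v2
            with:
              registry: ghcr.io
              username: ${{ github.actor }}
              password: ${{ secrets.GITHUB_TOKEN }}
          - uses: docker/build-push-action@v3
            with:
              context: backend                      # assumption: the web image builds from ./backend
              push: true
              tags: ghcr.io/example/obico-web:latest  # hypothetical image name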

kennethjiang commented 2 years ago

@Codel1417 Now I understand: the benefit for you is automation. This is actually how we do it in our cloud. However, since most Obico users don't have such sophisticated DevOps tools, using Docker Hub images for releases will make it harder, not easier, for them to upgrade.

Maybe you can make the change in your own fork?

alexyao2015 commented 2 years ago

Normal users can simply use `docker-compose pull && docker-compose down && docker-compose up -d`. It's really not that hard to use Docker. Not to mention, it's not the user that needs the DevOps tooling; it's the repo.

Users also get a more consistent experience, since the container is static and the same for all users. See the section about immutability at https://cloud.google.com/architecture/best-practices-for-operating-containers.

kennethjiang commented 2 years ago

I think I see your point here. Can you come up with a PR so that we can evaluate how this change will impact the release/upgrade process?

ptsa commented 2 years ago

Support the use of docker hub

lettore commented 2 years ago

I put together a simple CI to build your images automatically; it's very simple. Just copy the backend and frontend folders into the container, then change the environment variable for the database location to point to another folder, not the main backend but /backend/db for example. This way I mount two volumes on the container: /backend/db for the database, and /backend/static_files/media for G-code uploads and time-lapses. I would suggest using tags on the repository for version tracking; right now it's impossible for me to know which version the image is.

kennethjiang commented 2 years ago

@lettore Great. Can you pull together what you did as a PR? Thanks!

lettore commented 2 years ago

Not really, as I don't know your workflow, but I can share the code I'm using on my GitLab instance. I made no modifications to the docker compose file, so it's just building the images as you already do now, copying the directories into the container, and pushing the images to the registry.

    - git clone -b release https://github.com/TheSpaghettiDetective/obico-server.git
    - cd obico-server
    - cp -r frontend backend/frontend
    - echo 'RUN mv -f /app/frontend /frontend' >> backend/Dockerfile
    - docker buildx bake -f docker-compose.yml web --set web.tags="repository/obico:tag" --push
    - docker buildx bake -f docker-compose.yml ml_api --set ml_api.tags="repository/obico-ml:tag" --push

Using docker buildx you can also build the same image for multiple archs like arm64 if you need. And this is a modified docker-compose.yml that works with my already-built images, so you can try how it works.

version: '3.8'

services:
  obico-web:
    image: lettore/obico:master
    hostname: obico-web
    depends_on:
      - redis    
    environment:
      EMAIL_HOST: mail.host
      EMAIL_HOST_USER: user@mail.host
      EMAIL_HOST_PASSWORD: password
      EMAIL_PORT: 587
      EMAIL_USE_TLS: 'True'
      DEFAULT_FROM_EMAIL: user@mail.host
      DEBUG: 'False'
      SITE_USES_HTTPS: 'True'
      SITE_IS_PUBLIC: 'False'
      SOCIAL_LOGIN: 'False'
      REDIS_URL: redis://obico-redis:6379
      DATABASE_URL: sqlite:////app/db/db.sqlite3
      INTERNAL_MEDIA_HOST: http://obico-web:3334
      ML_API_HOST: http://obico-ml_api:3333
      ACCOUNT_ALLOW_SIGN_UP: 'False'
      WEBPACK_LOADER_ENABLED: 'False'
      OCTOPRINT_TUNNEL_PORT_RANGE: 15853-15873
#      TELEGRAM_BOT_TOKEN: '${TELEGRAM_BOT_TOKEN-}'
#      TWILIO_ACCOUNT_SID: '${TWILIO_ACCOUNT_SID-}'
#      TWILIO_AUTH_TOKEN: '${TWILIO_AUTH_TOKEN-}'
#      TWILIO_FROM_NUMBER: '${TWILIO_FROM_NUMBER-}'
#      SENTRY_DSN: '${SENTRY_DSN-}'
#      PUSHOVER_APP_TOKEN: '${PUSHOVER_APP_TOKEN-}'
#      SLACK_CLIENT_ID: '${SLACK_CLIENT_ID-}'
#      SLACK_CLIENT_SECRET: '${SLACK_CLIENT_SECRET-}'
#      VERSION: 
      TZ: Europe/Rome
    networks:
      obico: 
    deploy:
      mode: replicated
      replicas: 1
      placement:
        max_replicas_per_node: 1
    volumes:
      - db:/app/db
      - media:/app/static_build/media   
    ports:
      - "3334:3334"
    command: sh -c 'python manage.py migrate && python manage.py collectstatic -v 2 --noinput && python manage.py runserver --nostatic --noreload 0.0.0.0:3334'

  obico-tasks:
    image: lettore/obico:master
    hostname: obico-tasks
    depends_on:
      - redis     
    environment:
      EMAIL_HOST: mail.host
      EMAIL_HOST_USER: user@mail.host
      EMAIL_HOST_PASSWORD: password
      EMAIL_PORT: 587
      EMAIL_USE_TLS: 'True'
      DEFAULT_FROM_EMAIL: user@mail.host
      DEBUG: 'False'
      SITE_USES_HTTPS: 'True'
      SITE_IS_PUBLIC: 'False'
      SOCIAL_LOGIN: 'False'
      REDIS_URL: redis://obico-redis:6379
      DATABASE_URL: sqlite:////app/db/db.sqlite3
      INTERNAL_MEDIA_HOST: http://obico-web:3334
      ML_API_HOST: http://obico-ml_api:3333
      ACCOUNT_ALLOW_SIGN_UP: 'False'
      WEBPACK_LOADER_ENABLED: 'False'
      OCTOPRINT_TUNNEL_PORT_RANGE: 15853-15873
#      TELEGRAM_BOT_TOKEN: '${TELEGRAM_BOT_TOKEN-}'
#      TWILIO_ACCOUNT_SID: '${TWILIO_ACCOUNT_SID-}'
#      TWILIO_AUTH_TOKEN: '${TWILIO_AUTH_TOKEN-}'
#      TWILIO_FROM_NUMBER: '${TWILIO_FROM_NUMBER-}'
#      SENTRY_DSN: '${SENTRY_DSN-}'
#      PUSHOVER_APP_TOKEN: '${PUSHOVER_APP_TOKEN-}'
#      SLACK_CLIENT_ID: '${SLACK_CLIENT_ID-}'
#      SLACK_CLIENT_SECRET: '${SLACK_CLIENT_SECRET-}'
#      VERSION: 
      TZ: Europe/Rome
    networks:
      obico: 
    deploy:
      mode: replicated
      replicas: 1
      placement:
        max_replicas_per_node: 1
    volumes:
      - db:/app/db
      - media:/app/static_build/media   
    command: sh -c "celery -A config worker --beat -l info -c 2 -Q realtime,celery"

  obico-ml_api:
    image: lettore/obico-ml:master
    hostname: obico-ml_api
    environment:
      DEBUG: 'True'
      FLASK_APP: 'server.py'    
      TZ: Europe/Rome
    networks:
      obico:
    deploy:
      mode: replicated
      replicas: 1
      placement:
        max_replicas_per_node: 1
    command: bash -c "gunicorn --bind 0.0.0.0:3333 --workers 1 wsgi"

  obico-redis:
    image: redis:5.0-alpine
    hostname: obico-redis
    environment:
      TZ: Europe/Rome
    networks:
      obico:
    deploy:
      mode: replicated
      replicas: 1
      placement:
        max_replicas_per_node: 1    

networks:
  obico: 

volumes:  
  db:
  media:

I'm not sure if the volumes should be mounted on both the web and tasks services, or if both need to have the environment variables declared, but it is working this way. As I didn't find any tags on GitHub, I tagged the images with the commit SHA just for my personal use, but as I said, it would be nice to have proper releases and tags for versioning.

toxuin commented 1 year ago

Hi! It is currently impossible (or at least very hard?) to run this inside anything other than local docker-compose, even though it is built on simple docker images.

The reason is that it relies on a pre-built docker image (thespaghettidetective/ml_api:base-1.1) that is built manually (please correct me if I'm wrong!), while the app images themselves are supposed to be built as part of docker compose up by the client that has the git repo cloned.

The web image seems to rely on a local copy of the whole repo, as it mounts the /frontend folder from the host.

My use case is rather odd, but it shows how inflexible this setup is: I'm trying to run this in a Kubernetes cluster, and I can't do so without building a complete separate CI setup just for myself and writing my own images based on the ones I build with said CI.

There is also no way to pin my deployment to a specific major version while allowing minor updates: the information from git does not propagate into the docker images.

Please consider publishing pre-built images that work without any git cloning, preferably automatically updated, with meaningful release tags.

Not having any images available makes Obico inaccessible not just for weirdos like me with a Kubernetes cluster, but also for anyone who wants to simply run this on their Ansible-controlled Raspberry-Pi-like SBC, or on systems like TrueNAS and Synology, etc.

alexyao2015 commented 1 year ago

I've found that a lot of 3D printing software is run this way, unfortunately. It could be a lot neater with version control and static images, but it seems like that is the current state of open source software ATM.

toxuin commented 1 year ago

I respectfully disagree.

For example, I'm running OctoPrint in k8s just fine, complete with backups, native ConfigMaps, and secret management. I also have Frigate running in the same fashion, and while it is not 3D printing software, it does some very similar things: camera stream capture, ML image analysis, a REST API, and notifications.

Yes, there exists some software that does not play well with containers. Why would it be fine to put Obico in that crappy bucket? It already has a separate service for ML, and it already relies on environment variables for configuration; it should be a breeze to make it work in all container runtimes, not just local-docker-compose-on-top-of-a-git-repo-you-cloned-6-months-ago.

gabe565 commented 1 year ago

I wanted to host Obico in Kubernetes, so I couldn't rely on local Docker builds. I wanted to let everybody know that I've created a GitHub repo that automatically builds Obico releases. The git tag is updated by Renovate, so when a new version is pushed to the release branch of this repo, my repo will build the new version within a few hours. I also implemented a simple fix so that /frontend does not need to be volume-bound. I plan to post my Kubernetes manifests to that repo soon; I just need to clean them up a little.

See gabe565/docker-obico for more info.

toxuin commented 1 year ago

Thank you, @gabe565 !

I'd like to see this repo have something that wouldn't force us to do... that ^ in order to just run the app 😄

gabe565 commented 1 year ago

> I'd like to see this repo have something that wouldn't force us to do... that ^ in order to just run the app 😄

I completely agree. This issue has been open for a while, though, and it was blocking me from deploying to Kubernetes so I decided to go ahead. I totally understand that there's more trust in an official image so feel free to fork my repo or just ignore it until there's an official image someday (hopefully) 😄

I also considered opening a PR that adds a GitHub Workflow to build this repo and push to ghcr, but I was unsure of the preferred way to handle the /frontend volume bind. @kennethjiang would you be opposed to it being copied into the container for the cloud builds?

toxuin commented 1 year ago

Would it also be possible to collect the static files at docker image build time then, while we're at it? 😄

gabe565 commented 1 year ago

Possibly! I don't use Django personally, but from what I've seen, the collectstatic command is sometimes used to inject dynamic content into the generated static files (think an API hostname, http/https URLs, an analytics API key, etc.). If that happens in Obico, then the files will have to be generated at runtime. Otherwise, I agree that it'd be really nice to generate them during the build. I'm investigating and testing in my cluster!
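
If it turns out that the collectstatic output doesn't depend on runtime settings, the change would roughly be one extra step in the backend image build, something like this fragment (a sketch, assuming the code is already copied to /app inside the image; not the project's actual Dockerfile):

    # Dockerfile fragment, sketch only: valid only if the collectstatic output
    # does not depend on runtime environment variables.
    WORKDIR /app
    # Pre-generate static assets at image build time instead of at container start.
    RUN python manage.py collectstatic -v 2 --noinput

The container command could then drop the collectstatic step and run just the migrate and runserver parts.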

lettore commented 1 year ago

I use Obico in Docker Swarm, so I faced the same problem. If you wish, I have pre-built images: https://hub.docker.com/repository/docker/lettore/obico and https://hub.docker.com/repository/docker/lettore/obico-ml. They are tagged with the version number of the OctoPrint plugin, as there are no tags on this repo.

tvories commented 1 year ago

@lettore do you have your dockerhub build stuff open source anywhere?

lettore commented 1 year ago

This is the build CI script on my GitLab server:

It just copies the frontend dir and modifies the original Dockerfile.

BongoEADGC6 commented 1 year ago

@gabe565 Any update on the k8s manifests?

gabe565 commented 1 year ago

@BongoEADGC6 Sorry, I've been slacking on setting up the Helm chart! It's been running in my cluster for a while now with no issues so I'll go ahead and set that up. I'll update you when it's ready.

tvories commented 1 year ago

@gabe565 I don't know what magic you are doing in your Kubernetes manifests, but I tried deploying Obico into my Kubernetes cluster and it choked pretty hard when trying to add a printer. The server comes up OK and allows me to create a user, but once I try to add a printer and connect it to my OctoPrint plugin, it errors so badly that it takes the node it's running on offline.

gabe565 commented 1 year ago

@tvories Whoa I haven't had any issues like that. I've added a printer and have done many prints with no issues! Both Obico and the ML API are pretty light so I'm not sure what would have caused an entire node to fail like that.

I'm also using bjw-s/app-template, so our deployments look pretty similar. The only things I see that don't look correct are INTERNAL_MEDIA_HOST and ML_API_HOST.
I had to dig into this repo's code to figure out exactly how those work, and it looks like when prints are ongoing, obico-server occasionally makes requests to ML_API_HOST. That request includes a URL parameter that ml-api uses to load the latest printer image. Basically, ML_API_HOST needs to be a hostname that the obico-server can use to talk to ml-api, and INTERNAL_MEDIA_HOST needs to be a hostname that ml-api can use to download files from obico-server.
In my case, I'm using separate Deployments for these so that I can scale the ml-api independently, but in your case it's running as a sidecar, so I believe they should be set like this:

INTERNAL_MEDIA_HOST: "http://localhost:3334"
ML_API_HOST: "http://localhost:3333"

But still, that shouldn't make anything error out, it probably just means it'll always say "Looking good" during prints. That's what mine did before I figured those out...

Were you able to get any logs from the failure?

Edit: Also, if anybody's curious about Helm chart progress: I'm close to pushing the first release! I just need to find the time to finalize everything.

tvories commented 1 year ago

@gabe565 the URLs I have appear to be working without changing to localhost, but I'll give that a shot.

Do you have the same startup commands that I have?

# web server
command:
  [
    "/bin/sh",
    "-c",
    "python manage.py migrate && python manage.py collectstatic -v 2 --noinput && python manage.py runserver --nostatic --noreload 0.0.0.0:3334",
  ]

# ml-api
command:
  - /bin/bash
  - -c
  - "gunicorn --bind 0.0.0.0:3333 --workers 1 wsgi"

Edit:

Just kidding, the issue was my redis URL:

  File "/usr/local/lib/python3.7/site-packages/rest_framework/serializers.py", line 517, in to_representation
    attribute = field.get_attribute(instance)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/fields.py", line 454, in get_attribute
    return get_attribute(instance, self.source_attrs)
  File "/usr/local/lib/python3.7/site-packages/rest_framework/fields.py", line 107, in get_attribute
    instance = instance()
  File "/app/app/models.py", line 225, in not_watching_reason
    if not self.actively_printing():
  File "/app/app/models.py", line 234, in actively_printing
    printer_cur_state = cache.printer_status_get(self.id, 'state')
  File "/app/lib/cache.py", line 70, in printer_status_get
    status = REDIS.get(prefix)
  File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 1264, in get
    return self.execute_command('GET', name)
  File "/usr/local/lib/python3.7/site-packages/redis/client.py", line 772, in execute_command
    connection = pool.get_connection(command_name, **options)
  File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 994, in get_connection
    connection.connect()
  File "/usr/local/lib/python3.7/site-packages/redis/connection.py", line 497, in connect
    raise ConnectionError(self._error_message(e))
redis.exceptions.ConnectionError: Error -3 connecting to redis:6379. Try again.
HTTP GET /api/v1/octo/verify/?code=297781 500 [7.46, 10.244.0.123:57194]

Setting REDIS_URL: "redis://localhost:6379" fixed my issue. Thanks for the hint!

gabe565 commented 1 year ago

@tvories Sort of. I'm using init containers to run the collectstatic and migrate commands. I'm about to push my Helm chart, then I can link to it as an example!
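
For anyone curious, a stripped-down sketch of that init-container pattern (hypothetical image name, assumed static-files mount path, environment variables omitted; this is not the actual chart template):

    # Sketch only: hypothetical image name and an assumed static-files path.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: obico-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: obico-server
      template:
        metadata:
          labels:
            app: obico-server
        spec:
          volumes:
            # Shared scratch volume so files collected in the init container
            # are visible to the web container.
            - name: static
              emptyDir: {}
          initContainers:
            - name: migrate
              image: ghcr.io/example/obico:latest   # hypothetical image
              command: ["python", "manage.py", "migrate"]
            - name: collectstatic
              image: ghcr.io/example/obico:latest   # hypothetical image
              command: ["python", "manage.py", "collectstatic", "-v", "2", "--noinput"]
              volumeMounts:
                - name: static
                  mountPath: /app/static_build      # assumed static output path
          containers:
            - name: web
              image: ghcr.io/example/obico:latest   # hypothetical image
              command: ["python", "manage.py", "runserver", "--nostatic", "--noreload", "0.0.0.0:3334"]
              ports:
                - containerPort: 3334
              volumeMounts:
                - name: static
                  mountPath: /app/static_build      # assumed static output path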

tvories commented 1 year ago

@gabe565 I edited my previous post. My issue was the redis URL. I have gotten much further now.

gabe565 commented 1 year ago

@tvories Awesome! Be careful when you're first using it. I still think those envs are wrong, so it'll just keep saying "Looking good". You'll have to check the ml-api logs to see if it failed to access the images.

gabe565 commented 1 year ago

For the people looking into deploying this to Kubernetes: I just released a Helm chart for Obico! See here for the docs.

It will deploy obico-server and obico-ml-api as separate Deployments, so you can autoscale the ML API. It will also deploy a Redis chart and configure REDIS_URL automatically. The INTERNAL_MEDIA_HOST and ML_API_HOST will be configured correctly out of the box. Make sure to enable the persistence settings, but it should deploy without too many variables in your values.yaml. Feel free to open an issue in that repo if you have any questions!

tvories commented 1 year ago

Very nice work @gabe565! I'm going to try it out after this next print. You should join the https://k8s-at-home.com/ discord server. You're already familiar with the bjw-s charts.

tehniemer commented 1 year ago

> Not really, as I don't know your workflow, but I can share the code I'm using on my GitLab instance. I made no modifications to the docker compose file, so it's just building the images as you already do now, copying the directories into the container, and pushing the images to the registry.
>
>     - git clone -b release https://github.com/TheSpaghettiDetective/obico-server.git
>     - cd obico-server
>     - cp -r frontend backend/frontend
>     - echo 'RUN mv -f /app/frontend /frontend' >> backend/Dockerfile
>     - docker buildx bake -f docker-compose.yml web --set web.tags="repository/obico:tag" --push
>     - docker buildx bake -f docker-compose.yml ml_api --set ml_api.tags="repository/obico-ml:tag" --push
>
> Using docker buildx you can also build the same image for multiple archs like arm64 if you need.

@lettore, or anyone else really, is there a way to do this using GitHub Actions on a fork to push the images to ghcr.io instead of Docker Hub?

n1ght-hunter commented 8 months ago

@tehniemer @gabe565 already has images auto-building on GitHub over at https://github.com/gabe565/docker-obico/tree/main

n1ght-hunter commented 8 months ago

I thought I would jump in here. This is a really cool project, but it majorly needs some infrastructure work. To give a bit of background, I run around 100 containers on my main server, including stuff like Frigate, Immich, Home Assistant, Firefly, Jellyfin, etc., and this is the first major open source project I have used that doesn't have proper versioned releases and auto-building. Many users now use Portainer to set up and run their compose files, which obico-server is currently very incompatible with.

It's very easy to auto-build nowadays with GitHub Actions and to use the GitHub image registry. There's already a very good working example of this over at @gabe565's repo. I'm sure many people, including me, would be happy to make a PR to move that build.yml over to the main repo and get an automated build process up and running. All that is really needed is for the maintainers to approve this.

nvtkaszpir commented 8 months ago

> We release our server code continuously, so we can't rely on released docker images. Otherwise everyone would have to download GBs of images on every upgrade.

That was a problem a few years ago. Nowadays, when we put AI models into containers, it's another story; I've been working in the AI/ML field, where 20GB containers are normal. The main trick is to split the image properly into easily replaceable layers, base image vs. app layers, using multistage builds.

When I started the obico docker-compose for the first time, I was surprised it did an insane amount of things live in the containers, such as fetching models (these should be baked into the image as a separate layer) or, worse, dynamically generating some post-processed files on start.
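
As a concrete sketch of the "bake the model in as its own layer" idea (the paths and model URL below are hypothetical; this is not the project's actual ml_api Dockerfile):

    # Sketch only: hypothetical paths and model URL.
    # Stage 1: fetch the model weights once, at image build time.
    FROM alpine:3.18 AS model
    RUN apk add --no-cache curl \
     && curl -L -o /model.weights https://example.com/model.weights

    # Stage 2: runtime image; the model sits in its own cacheable layer.
    FROM thespaghettidetective/ml_api:base-1.1
    # Model layer: users re-download it only when the model itself changes.
    COPY --from=model /model.weights /app/model/model.weights
    # App layer: thin, changes on every release.
    COPY ml_api/ /app/
    CMD ["gunicorn", "--bind", "0.0.0.0:3333", "--workers", "1", "wsgi"]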

Some suggestions:

The above should allow MUCH easier release control of the cloud services and make it easier to support other platforms (Kubernetes \o/).

If you need help, just ask me on Discord (KaszpiR) and I can help you with all of the above.

Wetzel402 commented 2 weeks ago

I would love to see official docker images also along with official docker-compose.yaml. I'm trying to set this up on a QNAP NAS which requires setup via compose.

kennethjiang commented 2 weeks ago

> I would love to see official docker images also along with official docker-compose.yaml. I'm trying to set this up on a QNAP NAS which requires setup via compose.

We do have an official docker-compose.yaml: https://github.com/TheSpaghettiDetective/obico-server/blob/release/docker-compose.yml. Actually, docker-compose is the only way officially supported by the Obico team, and it's well-documented. Maybe I misunderstood what you mean?

Wetzel402 commented 2 weeks ago

> I would love to see official docker images also along with official docker-compose.yaml. I'm trying to set this up on a QNAP NAS which requires setup via compose.
>
> We do have an official docker-compose.yaml: https://github.com/TheSpaghettiDetective/obico-server/blob/release/docker-compose.yml. Actually, docker-compose is the only way officially supported by the Obico team, and it's well-documented. Maybe I misunderstood what you mean?

Sorry, I should clarify. Most projects I've used have a docker compose that pulls in the image(s) from Docker Hub and spins up the container(s). In the case of Obico, the docker compose builds the image from source, which isn't supported by some devices like QNAP. I was able to get it working, but I had to work outside the box to get there. I:

* forked the project so I could edit the `docker-compose.yaml`

* used Portainer to create the stack from the Github repository

* attached a terminal to the container to create some missing folders

* finally uploaded static files for the Django server to my NAS and restarted the container.

Another solution would have been to use SSH but QNAP is limited on linux packages so that might not have worked properly.

I hope that all makes sense.

kennethjiang commented 2 weeks ago

> I would love to see official docker images also along with official docker-compose.yaml. I'm trying to set this up on a QNAP NAS which requires setup via compose.
>
> We do have an official docker-compose.yaml: https://github.com/TheSpaghettiDetective/obico-server/blob/release/docker-compose.yml. Actually, docker-compose is the only way officially supported by the Obico team, and it's well-documented. Maybe I misunderstood what you mean?
>
> Sorry, I should clarify. Most projects I've used have a docker compose that pulls in the image(s) from Docker Hub and spins up the container(s). In the case of Obico, the docker compose builds the image from source, which isn't supported by some devices like QNAP. I was able to get it working, but I had to work outside the box to get there. I:
>
> * forked the project so I could edit the `docker-compose.yaml`
> * used Portainer to create the stack from the Github repository
> * attached a terminal to the container to create some missing folders
> * finally uploaded static files for the Django server to my NAS and restarted the container.
>
> Another solution would have been to use SSH but QNAP is limited on linux packages so that might not have worked properly.
>
> I hope that all makes sense.

Gotcha. We actually built a base image and use docker compose to accomplish 2 things:

alexyao2015 commented 2 weeks ago

Usually you have a development environment and a CI/CD pipeline to automatically publish images on push. It seems like all 3D printing projects operate in this nonstandard way, similar to this repo. This isn't best practice, and I've seriously only ever seen this type of setup in 3D printing open source, for whatever reason.

alexyao2015 commented 2 weeks ago

Adding a bit more: you really don't need a base image; it just adds to the complexity here. For development, you could rely fully on the local docker build cache, which should be more than sufficient. For production, the pipeline should build a fully immutable image for consistent results on all systems, without requiring the user to check out the source code and build it on their own machine. The way it's done today is exactly how you get all the complexity of containers without taking advantage of their typical "build once, run everywhere" idea. Currently it's "build somewhere, build again, and run in some places".

kennethjiang commented 2 weeks ago

@alexyao2015 thank you for your suggestion. This open source project is owned and supported by the community. Feel free to use your expertise to contribute to the community by taking a stab at the problem.