horahoradev / PrometheusTube

Steal fire (videos) from the Gods (other video sites)
https://prometheus.tube
BSD 2-Clause "Simplified" License

Build error when running up.sh #5

salimfadhley opened this issue 9 months ago

salimfadhley commented 9 months ago

For context:

I'm trying to get this running on an entirely clean docker-in-docker system.

Steps to reproduce:


root@105f52fe9164:/hostroot/volume1/home/sal/software/PrometheusTube# ./up.sh

horahoradev commented 9 months ago

The mount looks fine to me; I'm not sure what we're missing here.

An alternative would be to manually run generate-compose.py on the host machine, but that's a pain.

I'm still working on usability issues, so maybe I'll try to repro later.

salimfadhley commented 9 months ago

Anything I can do to test this hypothesis?

Just to clarify - the system I'm running on is kind of odd. It's an Asustor NAS which provides a very bare-bones host OS. All I can really do is spin up Docker and then shell into a container running a more fully featured operating system. At the moment, all I mounted was a basic Ubuntu image with access to the Docker daemon and the root filesystem. I didn't remount devfs or anything fancy.

One really common use case for self-hosters is to just run stuff in Portainer. In that setup, all we can really do is copy a docker-compose file into a UI and run it, so the current script-based installation really limits how this thing can be deployed. It's also going to appeal only to self-hosters with a lot of time.

Is it possible that you could ship a pre-compiled docker-compose in the root of the project, that way people can copy it, change some variables and then quickly boot into the system?

horahoradev commented 9 months ago

Is Portainer the docker-in-docker mechanism you're referring to?

In this circumstance I probably could. Are you accessing the service from another location on your network, or is it on localhost?

Getting rid of the templated docker-compose will take some work. I want to make this easier to run, but it's tricky, of course.

maybe I can ship all of the services in a single container, and publish the image... hmm...

salimfadhley commented 9 months ago

Portainer is just a dockerized GUI for managing Docker. I'm not using it in this circumstance, but it's what I'd like to use. It's a very common way of self-hosting apps: you just paste a docker-compose file into the GUI and it runs it.

I'm running docker-in-docker on the actual host. Here's what I did:

The issue is that the host OS is really barebones. It includes the essential NAS stuff, some basic UNIX commands, Docker, and not a whole lot else, so no Python 3. I take advantage of the fact that it can run Docker.

> maybe I can ship all of the services in a single container, and publish the image... hmm...

Oh no! That would be a mess. Why not have a Dockerfile with multiple targets (supported for ages now), and then a docker-compose file that references each of those targets?

salimfadhley commented 9 months ago

Just to be clear, in your Dockerfile you can have:

FROM --platform=linux/amd64 python:${PYTHON_VERSION}-slim-bullseye AS python_stuff
# ... python build instructions

FROM golang:latest AS go_stuff
# ... go build instructions

And then in your compose file you can reference go_stuff and python_stuff as the build targets for the locally built images:

  services:
    python_service:
      platform: "linux/amd64"
      build:
        target: python_stuff
        context: .
        args:
          SOME_ENVIRONMENT_VARIABLE: 'Blah'

But it would be much better if people didn't need to build anything locally. If you have images already released on Docker Hub, then people who are not running in build-friendly environments (i.e. me) can just docker-compose up and pull down the latest released versions.
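For what it's worth, a pre-built variant of the compose snippet above would swap build: for image:. A minimal sketch, assuming hypothetical published image names (nothing like this is actually on Docker Hub yet):

```yaml
# Sketch only: these image names and tags are hypothetical placeholders,
# not images the project actually publishes.
services:
  python_service:
    platform: "linux/amd64"
    image: horahoradev/prometheustube-python:latest
  go_service:
    image: horahoradev/prometheustube-go:latest
```

With image: instead of build:, docker-compose up just pulls from the registry, so no local toolchain or build context is needed.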

horahoradev commented 9 months ago

Right, I'm proposing we have a single, multi-stage Docker image that anyone can run to get the whole service running. I will publish the image, and anyone can just run the finished product. No one needs to build from source, they just pull the single image. Obviously that complicates a few things, but it simplifies setup and, e.g., log aggregation. Setup would be a single command with fewer moving parts.

Env vars would be a little tricky; I'd need to move to .env files or something. I'll look into either that or simplifying the templating stuff tomorrow. I have to timebox my work on this project; there's too much to be done on setup for one day.
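For reference, Compose's built-in variable substitution already covers the .env route. A minimal sketch, assuming a hypothetical ORIGIN variable and image name (neither is necessarily what the project uses):

```yaml
# docker-compose.yml -- sketch; ORIGIN and the image name are assumptions,
# not the project's actual configuration
services:
  webapp:
    image: horahoradev/prometheustube:latest
    environment:
      # falls back to localhost when ORIGIN is unset in the environment
      ORIGIN: ${ORIGIN:-http://localhost:8080}
```

A plain-text .env file in the same directory (e.g. a line reading ORIGIN=https://prometheus.tube) is picked up by docker-compose automatically, so users would only ever edit that one file.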

horahoradev commented 9 months ago

I haven't articulated myself well here, but the problem really is:

  1. setup should be one command
  2. should work for all platforms
  3. should require minimal dependencies

and that's really hard, because this is a pretty heavy distributed system. Potential solutions:

  1. simplify the docker-compose templating stuff, ship a single compose file that accepts env var arguments for the origin
  2. ship some weird systemd-in-docker solution with a single published docker image, which has all the right defaults, and people can just pull down and run
  3. something else?

horahoradev commented 9 months ago

Give me a few days to rip things out and simplify the process; there's a lot going on. In the end, I should have a published docker-compose file in source control that people can just run. Tomorrow might be enough, we'll see.

salimfadhley commented 9 months ago

> I will publish the image, and anyone can just run the finished product. No one needs to build from source, they just pull the single image.

I don't think there's any benefit in having a "single image" for all of the containers that have to run. You can have as many targets as you want; plus, if you are dealing with compiled languages, you probably want to compile in a build image and then copy the executable output into a runtime image. The alternative is a very bloated image that ships all of the compiler and dev tooling.
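The build-image/runtime-image split described above looks roughly like this as a multi-stage Dockerfile. The package path and binary name here are hypothetical, not the project's actual layout:

```dockerfile
# Build stage: full Go toolchain, discarded after the build
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/server ./cmd/server    # hypothetical package path

# Runtime stage: ships only the compiled binary, no compiler or dev tooling
FROM debian:bookworm-slim
COPY --from=builder /out/server /usr/local/bin/server
ENTRYPOINT ["/usr/local/bin/server"]
```

Only the final stage ends up in the published image, which keeps it small.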

salimfadhley commented 9 months ago

> 2. ship some weird systemd-in-docker solution with a single published docker image, which has all the right defaults, and people can just pull down and run

I'm curious what special issues PrometheusTube might have that can't be handled by normal docker-compose machinery.

Most projects make things easy by shipping a docker-compose.yaml and Dockerfile in the root directory of the project. It's a given that you usually have to customize the project a bit, because ports and storage locations are always different. Some self-hosters might already have a database up and running and might not want to spin up an extra one.

I notice that you compile the docker-compose file from a template, so couldn't you just pre-compile a bunch of them as part of your GitHub Actions tooling? You'd have a developer docker-compose file and a typical-user docker-compose file. Anybody wanting something more complex can hand-edit or recompile themselves.
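A minimal sketch of that kind of job, assuming generate-compose.py can be invoked with no arguments (its real interface may well differ):

```yaml
# .github/workflows/generate-compose.yml -- sketch; the script invocation
# is an assumption, not the repo's actual tooling
name: Pre-compile compose files
on: [push]
jobs:
  generate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - run: python generate-compose.py        # assumed invocation
      - uses: actions/upload-artifact@v4
        with:
          name: compose-files
          path: docker-compose*.yml
```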

> simplify the docker-compose templating stuff, ship a single compose file that accepts env var arguments for the origin

This would be great. And that's a really "normal" way of using Docker Compose. If you don't want to customize the project all that much, a docker-compose file should be all you need.

salimfadhley commented 9 months ago

FYI, I've discovered a likely cause - user error:

When running docker-in-docker, bind mounts refer to paths on the host system, not in the inner system.
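To spell that out: when the inner container talks to the host's Docker daemon, every bind-mount path is resolved by that daemon against the host filesystem, so paths that only exist inside the inner container don't behave as expected. A sketch using the paths from the session above (some-image is a placeholder):

```shell
# Inside the inner Ubuntu container, the NAS root is visible under /hostroot:
ls /hostroot/volume1/home/sal/software/PrometheusTube    # works here

# But the shared daemon resolves -v paths on the HOST, where /hostroot
# doesn't exist, so this doesn't mount the files you expect:
docker run -v /hostroot/volume1/home/sal/software/PrometheusTube:/app some-image

# Using the path as the host daemon sees it does work:
docker run -v /volume1/home/sal/software/PrometheusTube:/app some-image
```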