mviereck / x11docker

Run GUI applications and desktops in docker and podman containers. Focus on security.
MIT License

orchestrating x11docker docker-compose? #227

Closed - twyeld closed 4 years ago

twyeld commented 4 years ago

I can see from other remarks that you are not a fan of docker-compose: "it is just syntactic sugar around the docker run command".

But I need to find a way to orchestrate multiple instances of the same x11docker container, and multiple interconnected containers attached to an x11docker container.

So far a simple xterm example works. File docker-compose.yml:

version: "2"
services:
  app:
    image: basic-xterm
    command: xterm
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
      - DISPLAY=$DISPLAY

to run: docker-compose up --scale app=5

I tried something similar for my x11docker container, without success:

version: "2"
services:
  app:
    image: [my_x11docker_image_with_unity3d_app]
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
      - DISPLAY=$DISPLAY
      - x11docker

How to load/wrap my app with x11docker?

twyeld commented 4 years ago

I found this method - I wonder if it could be used for x11docker?: https://github.com/eywalker/nvidia-docker-compose

mviereck commented 4 years ago

Sorry, currently I cannot help here. I am not familiar with docker-compose and do not have the time yet to investigate. I'll look at this again later, but cannot help immediately. A custom script that runs multiple x11docker instances could be a workaround.
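
A minimal sketch of such a wrapper script, assuming an image that runs with x11docker --hostdisplay --gpu as elsewhere in this thread (IMAGE and the instance count are placeholders):

#!/bin/bash
IMAGE=my-image                                 # placeholder image name
for i in 1 2 3 4 5; do                         # five instances as an example
  x11docker --hostdisplay --gpu "$IMAGE" &     # one x11docker session per instance
done
wait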

twyeld commented 4 years ago

I am just heading off to a meeting so I won't be online for a while - but just to let you know, I have had some success getting x11docker to work with docker-compose.

here is my d-c file so far:

version: "2"
services:
  app:
#    entrypoint: xterm
    image: nvidia-base-tw-game-app-xorg-gl-unity-autoexec-xterm
    volumes:
      - /tmp/.X11-unix:/tmp/.X11-unix
    environment:
#      - DISPLAY=$DISPLAY
      x11docker: "--hostdisplay"
      x11docker: "--gpu"
#    command: ./../../usr/bin/xterm

# to run:
# docker-compose up --scale app=5

this almost works - but I can't get the x11docker env to provide a display for the xterm yet

more later...

twyeld commented 4 years ago

ok - there doesn't seem to be any way to prefix a docker-compose script with x11docker

I have been looking into the x11docker bash script you provide on github - but it is so fully-featured it is difficult to work out how to extract just the --hostdisplay and --gpu functions for xorg

certainly you have done a better job than nvidia in isolating what is needed to add these functions to a running container (I give up trying to get nvidia-docker working as a substitute for x11docker!)

but for my purposes it is crucial I can use docker-compose

do you have any idea how to isolate just the X server (xorg) component of x11docker, so I can set it as an environment variable in docker-compose directly for hardware acceleration?

so far I can set the display environment explicitly for xterm but not for a unity3d game...

mviereck commented 4 years ago

do you have any idea how to isolate just the X server (xorg) component of x11docker, so I can set it as an environment variable in docker-compose directly for hardware acceleration?

That is a possible approach. You can use x11docker to get access credentials for Xorg:

$ x11docker --hostdisplay --gpu --showenv --quiet
 DISPLAY=:0.0 XAUTHORITY=/home/lauscher/.cache/x11docker/xonly-01435547051/share/Xauthority.client XSOCKET=/tmp/.X11-unix/X0 XDG_RUNTIME_DIR=/run/user/1000

From this output you need DISPLAY and XAUTHORITY. Example:

# read the one-line "DISPLAY=... XAUTHORITY=... XSOCKET=..." output into Xenv
read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
# unquoted on purpose: word splitting expands this to
# export DISPLAY=... XAUTHORITY=... XSOCKET=... XDG_RUNTIME_DIR=...
export $Xenv

In the dc-file:

volumes:
- /tmp/.X11-unix:/tmp/.X11-unix
- /dev/dri:/dev/dri
environment:
- DISPLAY=$DISPLAY
- XAUTHORITY=$XAUTHORITY

You also need to share the GPU device files as volumes: /dev/dri as noted above, plus all files matching /dev/nvidia*. You also need the nvidia driver in the image.

(Instead of sharing the GPU files and setting up the nvidia driver, you can use an nvidia-docker image as a base and somehow enable option --runtime=nvidia in the dc-file.)
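
A sketch of that route, untested here: compose file format 2.3 added a runtime: key, and it requires nvidia-docker2 on the host. my-nvidia-image is a placeholder name:

version: '2.3'
services:
  app:
    image: my-nvidia-image   # placeholder: built on an nvidia-docker base
    runtime: nvidia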

twyeld commented 4 years ago

ok this is what I get from x11docker --hostdisplay --gpu --showenv --quiet

DISPLAY=:1 XAUTHORITY=/home/twyeld3/.cache/x11docker/xonly-03297316377/share/Xauthority.client XSOCKET=/tmp/.X11-unix/X1 XDG_RUNTIME_DIR=

mviereck commented 4 years ago

You don't need this command output directly. Instead, run the example with read Xenv that reads these variables into the variable Xenv. Then export the variables with export $Xenv. Then run docker-compose.

I suggest one further change to the dc-file: instead of the volume - /tmp/.X11-unix:/tmp/.X11-unix, use - $XSOCKET:$XSOCKET

twyeld commented 4 years ago

oh I see - cache the Xenv ready for d-c to run

runtime_env.sh

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv

d-c file

version: '2'
services:
  app:
    image: nvidia-base-game-app-xorg-gl-unity
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - $XSOCKET:$XSOCKET
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY

almost:

WARNING: The XSOCKET variable is not set. Defaulting to a blank string.
Recreating autoexec-unity_app_1 ... error

ERROR: for autoexec-unity_app_1  Cannot create container for service app: b'invalid volume specification: \'.:.:rw\': invalid mount config for type "volume": invalid mount path: \'.\' mount path must be absolute'

ERROR: for app  Cannot create container for service app: b'invalid volume specification: \'.:.:rw\': invalid mount config for type "volume": invalid mount path: \'.\' mount path must be absolute'
ERROR: Encountered errors while bringing up the project.

mviereck commented 4 years ago

You are running export $Xenv inside a script. The export is lost after the script terminates. Run docker-compose within the same script.
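
For example, a sketch that assumes a regular docker-compose.yml sits next to the script:

#!/bin/bash
# export the X access variables, then start compose in the same shell
read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose up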

twyeld commented 4 years ago

ok - I got the following to run

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
exec < <(docker-compose up 
"version: '2'
services:
  app:
    image: nvidia-base-game-app-xorg-gl-unity
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - $XSOCKET:$XSOCKET
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY")

returns:

Recreating autoexec-unity_app_1 ... done
./runtime_env.sh: line 16: version: '2'
services:
  app:
    image: nvidia-base-game-app-xorg-gl-unity
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - /tmp/.X11-unix/X1:/tmp/.X11-unix/X1
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=:1
       - XAUTHORITY=/home/twyeld3/.cache/x11docker/xonly-08412020695/share/Xauthority.client: No such file or directory

and then it hangs...

mviereck commented 4 years ago

I doubt that your exec ... syntax is valid. Try creating a regular d-c file and running docker-compose without exec.

twyeld commented 4 years ago

yes - I am not sure about the syntax...

tried this:

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose up 
"version: '2'
services:
  app:
    image: nvidia-base-game-app-xorg-gl-unity
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - $XSOCKET:$XSOCKET
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY"

returns:

root@twyeld3: more_path_here#./runtime_env.sh
Starting yaml-testing_test_app_1 ... done
Attaching to yaml-testing_test_app_1
test_app_1  | Set current directory to /
test_app_1  | Found path: /../../home/app/b2c-w-graphs.x86_64
test_app_1  | Mono path[0] = '/../../home/app/b2c-w-graphs_Data/Managed'
test_app_1  | Mono config path = '/../../home/app/b2c-w-graphs_Data/MonoBleedingEdge/etc'
test_app_1  | Preloaded 'libgrpc_csharp_ext.x64.so'
test_app_1  | Unable to preload the following plugins:
test_app_1  |   ScreenSelector.so
test_app_1  | Logging to /root/.config/unity3d/Unity Technologies/Unity Environment/Player.log

then it hangs

the part where it says "Unable to preload the following plugins: ScreenSelector.so" shouldn't be a problem - regular x11docker runs show the same warning but still launch ok

mviereck commented 4 years ago

Try with a simpler image command, e.g. xterm.

twyeld commented 4 years ago

tried that - the weird thing is it is still caching the game image - even though I removed any dangling images...

the new image used in the script is xterm-temp (just a basic xterm launcher), but it returns this message (for a different/prior image):

root@twyeld3:# ./runtime_env3.sh
Starting yaml-testing_test_app_1 ... done
Attaching to yaml-testing_test_app_1
test_app_1  | Set current directory to /
test_app_1  | Found path: /../../home/app/b2c-w-graphs.x86_64
test_app_1  | Mono path[0] = '/../../home/app/b2c-w-graphs_Data/Managed'
test_app_1  | Mono config path = '/../../home/app/b2c-w-graphs_Data/MonoBleedingEdge/etc'
test_app_1  | Preloaded 'libgrpc_csharp_ext.x64.so'
test_app_1  | Unable to preload the following plugins:
test_app_1  |   ScreenSelector.so
test_app_1  | Logging to /root/.config/unity3d/Unity Technologies/Unity Environment/Player.log

and yes, I stopped all running containers... must be a Xenv thing?

twyeld commented 4 years ago

ok - figured out what was happening - when the script got to docker-compose up, it found a suitable docker-compose.yml file elsewhere in the directory and loaded that!

I have since moved the script out to a temp directory so it can't find any rogue .yml files - but clearly the syntax is still wrong - it doesn't step through the parameters for the d-c component...

how to execute the d-c component from inside the shell script?

mviereck commented 4 years ago

how to execute the d-c component from inside the shell script?

I can't tell how to run docker-compose in a script. I don't have it myself. But that should be possible and is not an x11docker issue so far.

twyeld commented 4 years ago

I have really painted myself into a corner - I left the orchestration stuff until last, thinking it would be pretty straightforward - I didn't realise docker-compose doesn't really support prefixing the run of a container with something like the x11docker env bash script

mviereck commented 4 years ago

Is it possible at all to run docker-compose in a script? I would assume so.

mviereck commented 4 years ago

x11docker just provides you the environment variables DISPLAY and XAUTHORITY, and the path to the X socket that must be shared ($XSOCKET).

For sure it is possible to provide this to docker-compose. But for handling docker-compose, rather ask someone who is familiar with it.

twyeld commented 4 years ago

ok - I think we are missing something - it appears to be running the d-c script component now (once I got it away from the other yml file)

here is the latest .sh

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose up
"version: '2'
services:
  app3:
     image: xterm-temp
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - $XSOCKET:$XSOCKET
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY"

and here is the trace

root@twyeld3:/home/twyeld3/temp# ./runtime_env3.sh
ERROR: 
        Can't find a suitable configuration file in this directory or any
        parent. Are you in the right directory?

        Supported filenames: docker-compose.yml, docker-compose.yaml

./runtime_env3.sh: line 16: version: '2'
services:
  app3:
     image: xterm-temp
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - /tmp/.X11-unix/X1:/tmp/.X11-unix/X1
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=:1
       - XAUTHORITY=/home/twyeld3/.cache/x11docker/xonly-40939285484/share/Xauthority.client: No such file or directory

$XSOCKET:$XSOCKET is being replaced and so is the path for $XAUTHORITY

it simply can't find Xauthority.client?

mviereck commented 4 years ago

It looks odd that you have the lines below docker-compose up within the script. What if you store them in a yml file?

twyeld commented 4 years ago

but we tried that - and doesn't that mean that once the script ends, the Xenv values are no longer available?

mviereck commented 4 years ago

In that case you ran docker-compose after the script had finished, so the exported variables were gone. Run it within the script. Something like this:

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose -f xy.yml up

xy.yml contains the parameters; docker-compose substitutes the $VARIABLES in it from the exported environment:


"version: '2'
services:
  app3:
     image: xterm-temp
    volumes:
#      - /tmp/.X11-unix:/tmp/.X11-unix
      - $XSOCKET:$XSOCKET
      - /dev/dri:/dev/dri
    environment:
#       - x11docker=--hostdisplay
#       - x11docker=--gpu
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY"
twyeld commented 4 years ago

yes - that's exactly what I did and it seems to be working - at least it launches an xterm - with coloured text.

next I will try to embed a game app to see if it will run...

twyeld commented 4 years ago

ok - same as before: "No protocol specified" when I try to run the nvidia-base-game-app-xorg-gl-unity-autoexec image

next I will try to log into the running xterm-temp container, copy a game app to it, and try to run it

that returned a few bad GL and GLX errors...

twyeld commented 4 years ago

this time I used the following dockerfile:

#xterm-temp-x11d-game

#FROM ubuntu:18.04
FROM x11docker/nvidia-base

RUN apt-get update && apt-get install -y xterm
RUN useradd -ms /bin/bash xterm
COPY dec2019-x86_64compile /home/app/
USER xterm
WORKDIR /home/xterm
CMD xterm

but I still get the same GL/GLX errors - shouldn't these be included in the x11docker/nvidia-base image?

maybe it can't access the nvidia drivers?

these are specified when running the normal way: x11docker --hostdisplay --gpu [image_name]

twyeld commented 4 years ago

when I run x11docker --hostdisplay --gpu xterm-temp-x11d-game everything works fine

so, clearly something is being left out in the exporting of the Xenv?

mviereck commented 4 years ago

that returned a few bad GL and GLX errors... maybe it can't access the nvidia drivers?

Now you need to share the device files /dev/dri and all of /dev/nvidia*.

docker run needs --device instead of --volume for device files. I'm not sure how to share them in a yml file.

twyeld commented 4 years ago

ok - now I am getting desperate - I need something for my boss in the morning.

I do want to crack this nut but am simply running out of time now

I think I will just use a bash script to launch multiple sessions of the same container on different port numbers using: x11docker --hostdisplay --gpu [my_image]

but the -p flag doesn't work for x11docker - so how to expose a port on start up?

mviereck commented 4 years ago

I do want to crack this nut but am simply running out of time now

I think you're close already. Add a devices: section (explained here):

devices:
  - /dev/dri:/dev/dri

and also add all files that appear with ls /dev/nvidia* to the devices: section.

but the -p flag doesn't work for x11docker - so how to expose a port on start up?

You can add custom docker run options. From x11docker --help:

  x11docker [OPTIONS] -- DOCKER_RUN_OPTIONS -- IMAGE [COMMAND [ARG1 ARG2 ...]]

Something like this should work:


 x11docker --hostdisplay --gpu -- -p 5000:5000 -- [image_name]
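
In a compose file, docker run's -p corresponds to a ports: section. One caveat for scaling (an assumption worth verifying): a fixed mapping like 5000:5000 can bind the host port only once, so scaled instances would collide. Listing only the container port lets Docker pick a free host port for each instance:

# sketch: publish container port 5000 on an ephemeral host port per instance
services:
  app3:
    ports:
      - "5000"
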
twyeld commented 4 years ago

you must get tired of people calling you a genius...!

here is the d-c file that worked:

version: '2'
services:
   app3:
     image: xterm-temp-x11d-game
     volumes:
       - $XSOCKET:$XSOCKET
     devices:
       - /dev/dri:/dev/dri
       - /dev/nvidia0
       - /dev/nvidiactl
       - /dev/nvidia-modeset
       - /dev/nvidia-uvm
       - /dev/nvidia-uvm-tools
       - /dev/vga_arbiter
     environment:
       - DISPLAY=$DISPLAY
       - XAUTHORITY=$XAUTHORITY

I just lifted the /dev/nvidia* entries from the output of a normal x11docker --hostdisplay run

and here is the .sh script with scaling - which is why I was asked to use compose in the first place

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose up --scale [name_used_in d-c file: app3]=[number of instances]
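
Filled in, that last line might look like this (a concrete sketch - app3 is the service name from the d-c file above, and 5 is just an example instance count):

docker-compose up --scale app3=5
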
twyeld commented 4 years ago

I need to sleep now - more in a few hours...

mviereck commented 4 years ago

Great that it works now!

you must get tired of people calling you a genius...!

:smile: Thanks! Can't hear that often enough ... ;-)

mviereck commented 4 years ago

Just a small note:

x11docker creates some cache files in ~/.cache/x11docker. Normally x11docker removes them when it terminates. But in this case it runs in the background and never terminates except on shutdown:

read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)

This can lead to cache files remaining and accumulating over time. It is not a big issue, but you should know for the full picture.

x11docker --cleanup removes dangling cache files and also terminates all currently running x11docker sessions.
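
For the full picture, a sketch of a launcher that tidies up afterwards - note that --cleanup ends all running x11docker sessions, not only the one started here:

#!/bin/bash
read Xenv < <(x11docker --hostdisplay --gpu --showenv --quiet)
export $Xenv
docker-compose up --scale app3=5   # blocks until the containers stop
x11docker --cleanup                # then remove cache files and end the X session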

mviereck commented 4 years ago

I just see that your working example is missing a volume entry for XAUTHORITY. It should look like:

     volumes:
       - $XSOCKET:$XSOCKET
       - $XAUTHORITY:$XAUTHORITY

Without this entry your setup should fail. There is one exception: you are using --hostdisplay and your host X server has a user entry in xhost. That is often the case, but you should not rely on it. I recommend adding the XAUTHORITY volume entry.
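
Put together, assembled only from your working example above:

version: '2'
services:
  app3:
    image: xterm-temp-x11d-game
    volumes:
      - $XSOCKET:$XSOCKET
      - $XAUTHORITY:$XAUTHORITY
    devices:
      - /dev/dri:/dev/dri
      - /dev/nvidia0
      - /dev/nvidiactl
      - /dev/nvidia-modeset
      - /dev/nvidia-uvm
      - /dev/nvidia-uvm-tools
      - /dev/vga_arbiter
    environment:
      - DISPLAY=$DISPLAY
      - XAUTHORITY=$XAUTHORITY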

I think we can close the ticket now.