wanjohiryan / Arc3dia

Self-Hosted Stadia: Play with your friends online from any device and at any time
https://nestri.io
GNU Affero General Public License v3.0

Running Linux-based games with neko-rooms #18

Closed: impromedia closed this issue 1 year ago

impromedia commented 1 year ago

I want to run an Ubuntu game packaged in an AppImage.

I've changed the file /etc/entrypoint.sh to point to the game:

    #!/bin/bash -e

    # Add VirtualGL directories to path
    export PATH="${PATH}:/opt/VirtualGL/bin"

    # Use VirtualGL to run the game with OpenGL if a GPU is available,
    # otherwise run the AppImage directly
    if [ -n "$(nvidia-smi --query-gpu=uuid --format=csv | sed -n 2p)" ]; then
        export VGL_DISPLAY="${VGL_DISPLAY:-egl}"
        export VGL_REFRESHRATE="$REFRESH"
        cd /games && vglrun +wm ./game.AppImage --appimage-extract-and-run
    else
        cd /games && ./game.AppImage --appimage-extract-and-run
    fi

It seems that the GPUs are not available inside the Docker container, even though I've installed nvidia-docker and the NVIDIA Container Toolkit. I'm using neko-rooms to instantiate the qwantify sessions. Is there a workaround to enable the GPUs in the container when it is started by neko-rooms?
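For a plain docker-compose service I would normally request the GPUs with something like the sketch below (standard Compose GPU reservation plus the NVIDIA container toolkit environment variables), but I don't know where the equivalent goes when neko-rooms creates the room containers itself:

```yaml
services:
  room:                                  # hypothetical service name
    # image and other qwantify settings omitted
    environment:
      NVIDIA_VISIBLE_DEVICES: all        # expose all GPUs via the NVIDIA container runtime
      NVIDIA_DRIVER_CAPABILITIES: all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```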

wanjohiryan commented 1 year ago

Hi @impromedia

I haven't looked at neko-rooms yet. But this seems interesting.

What happens when you ssh into the container(s) and run nvidia-smi (assuming you are using an NVIDIA GPU)?
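Something along these lines should do it, assuming the room container is already running (the container name is whatever neko-rooms assigned, so check docker ps first):

```bash
# Replace <room-container> with the name or ID reported by `docker ps`
docker exec -it <room-container> nvidia-smi
```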

impromedia commented 1 year ago

Thank you for your fast reply. If I run nvidia-smi inside the container, it works:

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 525.60.13    Driver Version: 525.60.13    CUDA Version: 12.0     |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |                               |                      |               MIG M. |
    |===============================+======================+======================|
    |   0  Tesla T4            On   | 00000000:0B:00.0 Off |                    0 |
    | N/A   36C    P8     9W /  70W |     70MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+
    |   1  Tesla T4            On   | 00000000:13:00.0 Off |                    0 |
    | N/A   35C    P8     9W /  70W |     25MiB / 15360MiB |      0%      Default |
    |                               |                      |                  N/A |
    +-------------------------------+----------------------+----------------------+

    +-----------------------------------------------------------------------------+
    | Processes:                                                                  |
    |  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
    |        ID   ID                                                   Usage      |
    +-----------------------------------------------------------------------------+

I've extracted the AppImage, and when I start it I get a "segmentation fault (core dumped)" error. The app needs a minimum of 8GB to run, so maybe that is the issue. How do I increase it (the host has 320 GB)?

impromedia commented 1 year ago

It looks like the app interface is allowed to use only 2GB of memory.

[image attachment]
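For reference, the limits can also be checked directly from inside the container with standard tools:

```bash
# Run inside the room container
df -h /dev/shm   # size of the shared-memory mount (Docker defaults to 64MB unless shm_size is set)
free -h          # total memory visible to the container
```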

wanjohiryan commented 1 year ago

There seems to be no issue with the containers accessing your Nvidia GPUs.

> It looks like the app interface is allowed to use only 2GB of memory.

Try changing the shared memory size (shm) to '8gb' in the docker-compose.yaml and see whether that helps:

shm_size: '8gb'
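As a rough sketch of where that goes (the service name here is hypothetical, since neko-rooms generates its own configuration per room):

```yaml
services:
  room:
    # ... existing room settings ...
    shm_size: '8gb'   # raise shared memory from Docker's 64MB default
```

After recreating the room, df -h /dev/shm inside the container should report the new size.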