Closed · quincarter closed this issue 7 months ago
Thanks for opening your first issue here! Be sure to follow the relevant issue templates, or risk having this issue marked as invalid.
Any reason you're using VAAPI vs NVENC, which is supported in Emby?
https://github.com/linuxserver/docker-emby?tab=readme-ov-file#nvidia
Readme doesn't say to map /dev/dri for Nvidia
I guess, to be fair, it doesn't say exactly what to map for my device(s), or exactly how to do it.
I have had this configured for a while and it was working in Emby. Can you share your configuration for Nvidia cards?
You don't need an explicit mapping for Nvidia cards; the toolkit/runtime handles that for you.
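(For anyone landing here later: a quick way to confirm the runtime handles the injection, with no `--device` flags at all, is the standard toolkit smoke test; `ubuntu` here is just a throwaway image.)

```
# nvidia-smi and the device nodes are injected by the runtime, not by the image
docker run --rm --runtime=nvidia --gpus all ubuntu nvidia-smi
```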
Or should I just comment out the cards mapping?
So why aren't any encoding options showing up here? I just spun up my container without the devices mapped
Do you get `nvidia-smi` from within the container?
To be clear, we don't do anything for Nvidia other than setting one env var (capabilities) in the image. Everything is handled by the Nvidia toolkit and the drivers installed on the host.
If they're installed correctly, the container is using the Nvidia runtime, and the other env var is set, it should work. But again, none of that is under our control.
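One way to sanity-check both of those conditions from the host (assuming the container is named `emby`):

```
# which runtime the container was started with
docker inspect --format '{{.HostConfig.Runtime}}' emby
# whether the NVIDIA env vars made it into the container
docker exec emby env | grep NVIDIA
```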
Emby displays all valid GPU options it detects in its GUI.
Actually, no I don't:
```
docker exec -it emby /bin/bash
root@de7ec57a98a0:/# nvidia-smi
Failed to initialize NVML: Unknown Error
```
So why would I not have `nvidia-smi` in the container?
Have you installed the nvidia container toolkit?
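(For reference, a minimal install sketch on Ubuntu, assuming the NVIDIA apt repository is already configured on the host:)

```
sudo apt-get update
sudo apt-get install -y nvidia-container-toolkit
```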
Yeah, multiple times I feel like.
```
nvidia-ctk -v
NVIDIA Container Toolkit CLI version 1.14.6
commit: 5605d191332dcfeea802c4497360d60a65c7887e
```
Also, my current `/etc/docker/daemon.json` and my `~/.config/docker/daemon.json` are identical:
```json
{
  "default-runtime": "nvidia",
  "runtimes": {
    "nvidia": {
      "args": [],
      "path": "nvidia-container-runtime"
    }
  },
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```
This was the command I ran to configure the nvidia-ctk runtime:

```
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
```
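Worth noting: the Docker daemon normally reads `/etc/docker/daemon.json` (the `~/.config/docker` path applies to rootless Docker), and changes only take effect after a daemon restart. Running the configure step without `--config` targets the system-wide file:

```
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
```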
How did you install docker?
Note - it might be better for you to swing over to our Discord for easier help. GitHub issues are mainly for bugs, whereas this seems to be an issue with your host.
Sure. Which channel should I post in? I just joined.
Linking here for visibility in case someone finds this while searching - added a Discord post here: https://discord.com/channels/354974912613449730/1226289649530568725
I do see in the most recent diff comparing the current and previous versions that `NVIDIA_DRIVER_CAPABILITIES` is now being set by the Docker image instead of being specified in the docker compose file; you can see the change there. I currently set this to `NVIDIA_DRIVER_CAPABILITIES=all` in my docker compose. I know that in order to use `nvidia-smi` in the container, you need the `utility` option (see the Driver Capabilities docs). I am not sure if that is affecting the image at all, but it's worth mentioning here. Also, why would you not want `NVIDIA_DRIVER_CAPABILITIES=all`?
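For illustration, the compose-level override would look like this (`utility` is the capability that provides `nvidia-smi`; `video` covers NVENC/NVDEC):

```yaml
emby:
  environment:
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=all # or a narrower set like compute,video,utility
```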
Also, posted this in Discord too:
So I just followed this guide to see if I was truly affected by the bug listed there. And I am not:
```
$ docker run -d --rm --runtime=nvidia --gpus all \
    --device=/dev/nvidia-uvm \
    --device=/dev/nvidia-uvm-tools \
    --device=/dev/nvidia-modeset \
    --device=/dev/nvidiactl \
    --device=/dev/nvidia0 \
    --device=/dev/nvidia1 \
    nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04 bash -c "while [ true ]; do nvidia-smi -L; sleep 5; done"
Unable to find image 'nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04' locally
12.0.0-base-ubuntu20.04: Pulling from nvidia/cuda
96d54c3075c9: Pull complete
ac03447731ca: Pull complete
80e3d5e18f2e: Pull complete
b45dc012c6e2: Pull complete
5acbbc202509: Pull complete
Digest: sha256:dcea6188cf23600a396033b88132f86e295f35aa5ef8fee79187280ff6ecc81a
Status: Downloaded newer image for nvcr.io/nvidia/cuda:12.0.0-base-ubuntu20.04
88d8da4340c92cba1f0618e237ec859a2af3aa4e47325c6a364f620030ba252d

$ docker logs 88d8da4340c92cba1f0618e237ec859a2af3aa4e47325c6a364f620030ba252d
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)

$ sudo systemctl daemon-reload
$ docker logs 88d8da4340c92cba1f0618e237ec859a2af3aa4e47325c6a364f620030ba252d
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
GPU 0: NVIDIA GeForce GTX 1080 (UUID: GPU-e574f97e-40e5-79f0-9128-76c79d5d0c40)
GPU 1: NVIDIA GeForce GTX 1080 (UUID: GPU-4661d3ec-fd8d-9c8d-167d-b1e1502b0fc6)
```
So I don't know, maybe it's an issue with the linuxserver image? Pulling the NVIDIA image myself, it looks like it worked fine.
TL;DR: solved this in Discord, for anyone who comes across this in Google and wants to try this solution.
The container wouldn't initialize `nvidia-smi` or any cards associated with it. So mapping the Nvidia devices directly (as seen in the docker command above) is what ultimately solved it:
```
--device=/dev/nvidia-uvm \
--device=/dev/nvidia-uvm-tools \
--device=/dev/nvidia-modeset \
--device=/dev/nvidiactl \
--device=/dev/nvidia0 \
--device=/dev/nvidia1 \
```
In my docker compose it looks like this (thanks salty):
```yaml
emby:
  image: lscr.io/linuxserver/emby:latest
  container_name: emby
  environment:
    - PUID=${EMBY_UID}
    - PGID=${EMBY_GID}
    - NVIDIA_VISIBLE_DEVICES=all
    - NVIDIA_DRIVER_CAPABILITIES=all
    - TZ=${TIMEZONE}
  volumes:
    - ./.containers/Emby:/config # Configuration directory
    - ./images/logowhite.png:/app/emby/system/dashboard-ui/modules/logoscreensaver/logowhite.png
    - ./images/logowhite.png:/app/emby/system/dashboard-ui/modules/themes/logowhite.png
    - ./images/logodark.png:/app/emby/system/dashboard-ui/modules/themes/logodark.png
    - ${LOCAL_TV_PATH}:/media/Synology/Television # Media directory
    - ${LOCAL_MOVIES_PATH}:/media/Synology/Movies # Media directory
    - ${LOCAL_BACKUPS_PATH}:/media/Synology/Backups # Backups directory
    - /ssl/fullchain.pem:/ssl/fullchain.pem
    - /ssl/privkey.pem:/ssl/privkey.pem
    - /ssl/token:/ssl/token
  ports:
    - ${EMBY_HOST_PORT}:8096 # http port
    - ${EMBY_HOST_PORT_SSL}:8920 # ssl port
  runtime: nvidia
  restart: unless-stopped
  devices:
    - /dev/nvidia-uvm:/dev/nvidia-uvm # Added nvidia devices here
    - /dev/nvidia-uvm-tools:/dev/nvidia-uvm-tools # Added nvidia devices here
    - /dev/nvidia-modeset:/dev/nvidia-modeset # Added nvidia devices here
    - /dev/nvidiactl:/dev/nvidiactl # Added nvidia devices here
    - /dev/nvidia0:/dev/nvidia0 # Added nvidia devices here
    - /dev/nvidia1:/dev/nvidia1 # Added nvidia devices here (I have a 2nd GPU so I needed this too)
    - /dev/dri:/dev/dri # I added this per suggestion, but this is for VAAPI so I don't know if this actually works -- this was what was failing before
  profiles:
    - emby
```
(screenshot: `nvidia-smi` output in the container)
(screenshot: the transcoding section inside Emby)
I think this is solved per the suggestion above, so I am closing this now! Thanks for everyone's help! I am not sure why I had to add the devices manually in the docker compose, but that's what I had to do to get them to show up.
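If you want to verify the same fix on your end, recreate the container and check:

```
docker exec -it emby nvidia-smi
```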
Is there an existing issue for this?
Current Behavior
I'm on an amd64-based host system with Ubuntu 20.04. I have 2 GTX 1080 Ti cards in SLI. It looks like `nvidia-smi` registers them properly on the host, and the logs on the container look like the card permissions are okay. The actual error occurs when trying to initialize VAAPI within Emby itself (just looking at all the logs).
See below for all the info I have.
(screenshot: `nvidia-smi` output)
(screenshot: `/dev/dri` in the container)
(screenshot: container output stating permissions are okay)
(screenshot: the actual error I see from the Emby logs)
Expected Behavior
The video cards should mount(?) and VAAPI should initialize.
Steps To Reproduce
I have used nvidia-ctk to set the runtime for Docker, as shown below. (Not sure if I did that correctly, but I found quite a few references that looked like I was on the right track; I followed the steps.)
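This is the command I ran (same as shown earlier in the thread):

```
nvidia-ctk runtime configure --runtime=docker --config=$HOME/.config/docker/daemon.json
```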
Environment
CPU architecture
x86-64
Docker creation
Container logs