Closed Alex-Orsholits closed 2 years ago
We don't build the container in this case, there is nothing we can do about this. :(
Ah, I was afraid of this... Just one question: on container launch, how does jellyfin get the GPU capabilities? It seems that it is possible to explicitly state the required NVIDIA hooks when launching, for example, a Docker container with NVIDIA_DRIVER_CAPABILITIES=video,compute,utility
afaik the container should always start with the driver capabilities set. But even so afaik @stavros-k made sure it was also forced from our side on these cases.
Actually no, we only force removal of capabilities when no GPU is selected. If I'm not mistaken, iX injects those capabilities when a GPU is selected.
@meh301 You can verify the capabilities by opening a bash shell in jellyfin (3-dots > Shell) and running env (or env | grep NVIDIA for a shorter list).
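The env check above can be sketched as a small helper. This is illustrative only: the helper name and the sample values are assumptions, not output from the actual container; the variable names are the standard NVIDIA container-runtime ones.

```python
# Hypothetical helper: collect the NVIDIA container-runtime variables.
# NVIDIA_VISIBLE_DEVICES selects which GPUs the runtime exposes;
# NVIDIA_DRIVER_CAPABILITIES selects which driver libraries get mounted.
def nvidia_env(environ):
    return {k: v for k, v in environ.items() if k.startswith("NVIDIA_")}

# Sample environment for a container with full hooks requested (assumed values):
sample = {
    "PATH": "/usr/bin",
    "NVIDIA_VISIBLE_DEVICES": "all",
    "NVIDIA_DRIVER_CAPABILITIES": "video,compute,utility",
}
print(nvidia_env(sample))
```

If only NVIDIA_VISIBLE_DEVICES shows up, the capabilities variable was never injected.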
So we can be sure that we are not breaking anything here.
To be clear: even if they are not there, that is primarily the responsibility of the container creator.
Thank you for your replies, env shows a visible nvidia device but not much else
$ env | grep NVIDIA
NVIDIA_VISIBLE_DEVICES=GPU-b9c6b00b-95b2-0893-a633-772387351cf6
The issue is most probably due to the container itself sadly
We might want to override capabilities=all like k8s-at-home is doing in their containers, for all containers that get an nvidia GPU assigned... @stavros-k ?
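A minimal sketch of that override, assuming the container environment is modeled as a plain dict (the function name is hypothetical; the behavior mirrors the k8s-at-home approach of forcing full capabilities when a GPU is assigned):

```python
# Sketch: force NVIDIA_DRIVER_CAPABILITIES=all whenever an NVIDIA GPU is
# assigned to the container, so decode/encode libraries always get mounted.
def with_forced_capabilities(env, gpu_assigned):
    env = dict(env)  # do not mutate the caller's mapping
    if gpu_assigned:
        env["NVIDIA_DRIVER_CAPABILITIES"] = "all"
    return env

base = {"NVIDIA_VISIBLE_DEVICES": "all"}
print(with_forced_capabilities(base, gpu_assigned=True))
```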
Yes, I'll take a look at it in the next few days
@all-contributors please add @Alex-Orsholits for bugs
@Ornias1993
I've put up a pull request to add @Alex-Orsholits! :tada:
This issue is locked to prevent necro-posting on closed issues. Please create a new issue or contact staff on Discord if the problem persists
App Name
jellyfin
SCALE Version
22.02.0
App Version
10.7.7_9.0.43
Application Events
Application Logs
Application Configuration
I launched the jellyfin Docker container with a mostly stock configuration. Below are the only settings I changed (apart from adding additional app storage)
Custom Resource Limits
Describe the bug
When attempting to transcode content using the NVENC encoder, FFmpeg exits with error code 1. The jellyfin logs do not specify the actual reason for the failure, but running the logged command directly in the pod shell produces the error:
[h264 @ 0x55783247bf40] Cannot load libnvcuvid.so.1
[h264 @ 0x55783247bf40] Failed loading nvcuvid.
[h264 @ 0x55783247bf40] Failed setup for format cuda: hwaccel initialisation returned error.
Impossible to convert between the formats supported by the filter 'graph 0 input from stream 0:0' and the filter 'auto_scaler_0'
Error reinitializing filters!
Failed to inject frame into filter network: Function not implemented
Error while processing the decoded data for stream #0:0
Conversion failed!
I verified that the graphics card is passed to the pod and is available by running nvidia-smi in the pod shell:
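The "Cannot load libnvcuvid.so.1" failure above is consistent with the video capability never being requested: the runtime only mounts the NVDEC/NVENC libraries when it is, so nvidia-smi can work while nvcuvid is absent. A rough sketch of that check (the function name and the required-capability set are illustrative assumptions):

```python
# Sketch: which capabilities NVENC/NVDEC transcoding would still need, given
# a NVIDIA_DRIVER_CAPABILITIES string. An unset or incomplete string means
# the decode libraries (e.g. libnvcuvid.so.1) are never mounted into the pod.
REQUIRED = {"video", "compute", "utility"}

def missing_capabilities(capabilities):
    have = set(filter(None, (capabilities or "").split(",")))
    if "all" in have:
        return set()
    return REQUIRED - have

print(missing_capabilities("compute,utility"))  # "video" absent -> no nvcuvid
```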
To Reproduce
Expected Behavior
FFmpeg returns a success status and provides the user with a transcoded stream
Screenshots
Additional Context
My TrueNAS SCALE hardware is as follows:
The GPU is not isolated from the host OS and correctly shows up in both TrueNAS and Jellyfin container.
I've read and agree with the following